It's generally regarded as stupid to spend a bunch of time writing your own CMS. It's the website equivalent of spending hours fiddling with your vi config. Or the software equivalent of sorting your record collection in biographical order.
Heck, I know folks who work on WordPress. They are nice people and they are doing their hardest to work around the troubles of WordPress past and I don't want to slight them.
It's just that Rm worked the way I wanted it to and, since then, very little has come along that I really felt worked the way I wanted it to, so I guess I'll just keep going in my own direction.
Around 2004-2007, I was hacking on some stuff in Ruby and PostgreSQL and objects. I called it Rm, which is short for "the name which used to mean something, but it was stupid-sounding, so now it's just Rm."
I'd gotten a lot of really neat features partially done and deployed it to run 3 sites. At the same time, it was missing lots of stuff, and I'd accidentally but systematically managed to create barriers to progress.
I didn't use Rails, even though Rm was in Ruby. That was probably a smart move, but I also avoided using any other Ruby libraries or frameworks, such that the only dependencies were an also-ran XML library, the PostgreSQL driver, and ImageMagick. Granted, this was before the world moved to user-level, programming-language-specific package management tools.
I'd also managed to not write any unit tests or testing infrastructure whatsoever, such that the only way to test it was to literally hit every page on the site and make sure all of them worked, and even then I'd get unpleasant surprises.
Similarly, I'd built a mirror of the Ruby class hierarchy in code, because that's how you'd do things in C++. I could have avoided all of that: since I was using a dynamic and functional language, I could have replaced a bunch of complex code with iterators that walked the class hierarchy itself.
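The Ruby-era details aside, the idea generalizes to any language with runtime reflection. Here's a minimal sketch in JavaScript (all the names are illustrative, not from Rm): instead of keeping a hand-maintained copy of the hierarchy in sync with the real one, just walk the real one.

```javascript
// Walk an object's prototype chain via reflection instead of keeping
// a separate, hand-maintained mirror of the hierarchy.
function ancestors(obj) {
  const chain = [];
  let proto = Object.getPrototypeOf(obj);
  while (proto !== null) {
    // Record the constructor name for each link in the chain.
    chain.push(proto.constructor.name);
    proto = Object.getPrototypeOf(proto);
  }
  return chain;
}

class Node {}
class Page extends Node {}

console.log(ancestors(new Page())); // → ["Page", "Node", "Object"]
```

In Ruby, `Module#ancestors` gives you roughly this walk for free, which is exactly the kind of built-in the original code reimplemented by hand.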
It did a lot of XML transforms, which meant that the only reason it ran at all was the front-end cache, and it was extremely sensitive to the performance of the XML library. Heck, the first version of the formatting pipeline used actual XSLT before I realized that was just not going to work out at all.
Also, there were a bunch of quirks. You had to really understand everything about it to get stuff done, and it was fairly easy to get it into a state where you didn't observe any errors until you had broken the site and had to go back and manually fix things in the database.
So, I tried refactoring, lost interest, and by around 2009-2010, it was ponderous to do anything against it.
Some of the good highlights were:
Thus, my goal for the next version was:
I did build a decent set of unit tests and it avoided a lot of the flaws from the past.
I spent some time working on it, got it to the point where it could be used kind of like a wiki, and then realized that I'd spent so much time building a simpler but still complex infrastructure atop CouchDB and the services and everything that I'd forgotten what the front-end was supposed to look like.
After it was able to work more-or-less as a wiki, I stopped working on things for a while. I'd realized that maybe I was going in the wrong direction. Experience had shown me that no matter how much I thought the data model I was imposing on CouchDB would be OK, I couldn't necessarily trust my instincts about that.
I never got to the "Less XML" point, really.
Because I was using node.js, I was able to make it more async, such that it builds and sends as much of the response as it can before waiting for everything else to complete.
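The pattern looks roughly like this (a sketch, not the actual Rm3 code; `renderPage` and `loadBody` are made-up names): write the parts of the page that are already known, then await the slow pieces and flush them as they resolve.

```javascript
// Sketch of streaming a response in pieces: send what's ready now,
// then append the slow parts once they resolve. `res` is anything
// response-like with write()/end(), e.g. a node.js http.ServerResponse.
async function renderPage(res, loadBody) {
  // The page shell is known up front, so it goes out immediately.
  res.write('<html><head><title>Page</title></head><body>');
  // Meanwhile the slow part (say, a database or cache fetch) completes.
  const body = await loadBody();
  res.write(body);
  res.end('</body></html>');
}
```

With a real `http.ServerResponse`, the client starts receiving the page shell while the slow fetch is still in flight.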
Thus, my goals for the next version were:
I'd ignored the Rm install for far too long. Eventually, I wanted to teach myself Chef, and I really needed to migrate off of Ruby 1.8, so I did a fairly painful quick port of the original code over to a newer Ruby, because the operating system stopped shipping 1.8.
It pains me to point this out, but there were a small number of breaking changes from 1.8 to 1.9. That was doable. If I wanted to continue to 2.0, that was another set of breaking changes. Still probably reasonable. Whereas going from Python 2.x to 3.x is painful. There's a group I used to work with that shipped endless new versions of a product without keeping up with their Ruby or Rails upgrades, and now they've got a blob of hell.
I decided to start all over, but from the other direction: instead of building the data model for the very back end and then working forwards, I'd start with the front-end and then build out a backend from there.
I started out looking at Pure CSS. I then built a new version of the site, and a few pages, solely in HTML and stuff. And then I started turning it into a server with mock data, copying bits and pieces of Rm2 code as I did so.
Eventually, it all started to fit together.
There was about 2 years of serious weekend hackery involved in building it.
It's close enough to the old stuff that I was able to write a translator tool that takes the binary blob store and a database dump from Rm and outputs a backup in a format that Rm3 can load. I'd always been storing the history in Rm, but it wasn't ever visible. As it turns out, I was even able to import the history and tweak it a little bit, so I can see old revisions. The tool still ends up doing some substantial data model massaging, but at least I only have to do that once.
I didn't let myself try to actually import a real version of the site and thus be tempted to try to replace the Rm version of the site until it had a lot of really basic fundamental features that I'd not done before. Like a proper model for users and logins and rights. Or being able to work with the history.