April 6, 2004
Oops
Oops, Tabulas had its first significant downtime when my upgrade script took much longer than it should have, and I went to bed because I couldn't be bothered to wait for it to finish.
Tabulas has always had the ability to break your entry into two pieces, and each entry was stored in the database as two pieces: there were two columns in the primary entry_data table (entry and entry_ext). This is really bad practice.
Maybe 3% of users even know how to use the break function, and with both columns carrying FULLTEXT indexes, a lot of resources were being wasted. My main priority was cutting down on redundant and stupid setups in the database, and this was the biggest offender.
So the upgrade script took the data in entry_ext and merged it into the entry column. However, this took a lot longer than I expected (because the entry_data table is about 400 megs) ... and once that was done, I had to drop entry_ext. The drop alone took 2.5 hours.
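Roughly, the migration boiled down to something like this. This is just a sketch based on the description above: it assumes entry and entry_ext are both text columns on entry_data, and the '<break>' delimiter is a hypothetical stand-in for whatever marker Tabulas actually uses.

-- Fold the extended portion back into the main entry column so the
-- existing break point is preserved inside a single field.
-- (CONCAT would return NULL if entry_ext were NULL, hence the WHERE.)
UPDATE entry_data
SET entry = CONCAT(entry, '<break>', entry_ext)
WHERE entry_ext IS NOT NULL AND entry_ext != '';

-- Drop the now-redundant column (its FULLTEXT index goes with it).
-- On a ~400 meg table this rewrites the whole table, which is why it
-- can take hours.
ALTER TABLE entry_data DROP COLUMN entry_ext;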
This is just a lesson to me in how slow MySQL can be when it comes to large data sets. Learn to plan your database architecture right from the start, or you'll pay for it later.
But the good news is I've cut the database's disk usage by about 20 megs (a lot!) ... and I'll soon be able to support multiple entry snippets (so you can break more than once).
Unrelated: The lessons of Mogadishu applied to Iraq.
haiphong
If that's the case, how will this make it possible to support multiple snippets? (sorry, new to db stuff in general so this stuff is still fascinating to me lol)
roy
I mean, it's possible with the old set-up, but I would have to take the entry_ext data and then see if a break exists there; with the new system all the data exists in one field so I can check it.
haiphong
would you be willing to share the tabulas schema? i've been going through some really cool optimization techniques, reducing the number of functional dependencies and such. this db stuff is quite nifty especially when you can see a good working implementation.