Coming to automated testing in Django from the Zope and Plone world, I was pleased to find full support for all the testing machinery I'd become used to: regular Python unit tests and doctests. Of course, these being unit tests, they don't do any 'framework' management out of the box.
Unit tests are supposed to test your code, and just your code. However, once you're working inside a framework (be that Zope and Plone, Django, or anything else), testing how your code integrates with that framework is vital. Zope and Plone provide unittest.TestCase subclasses (ZopeTestCase and PloneTestCase respectively) with a lot of scaffolding for running integration tests. Part of that scaffolding is automatic transaction management, which hooks into Zope's transaction API to roll back the transaction after each test runs.
I wanted to do something similar for my Django test cases; I was finding 'state pollution' between test methods, since data created by one test method isn't automatically cleaned out before the next one runs.
Django's transaction handling is much simpler than Zope's: it cares only about the single database transaction belonging to the current request, and only when the transaction middleware (TransactionMiddleware) is installed. This means that we can pretty easily crib the code from that middleware and use it in a test case base class:
from django.db import transaction
UPDATE: Fixed an error in the call to the base class' tearDown() method, which caused open transactions to hang around and (among other things) prevented the test database being cleanly dropped at the end of the test run.
After this, you can simply derive your test fixture classes from TransactionalTestCase, and make sure that you call the base setUp() and tearDown() methods if you do need to override them to perform your own setup and teardown.
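For illustration, here's a framework-free sketch of the same rollback-per-test pattern, using the stdlib's sqlite3 as a stand-in for Django's database connection. In the real TransactionalTestCase, setUp() and tearDown() would call into Django's transaction API (enter_transaction_management(), managed(True), rollback() and leave_transaction_management(), cribbed from the middleware as described above); the sqlite3 version below is my own stand-in, not the actual Django code.

```python
import sqlite3
import unittest

# A single shared connection stands in for Django's database connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (name TEXT)")
conn.commit()


class TransactionalTestCase(unittest.TestCase):
    """Roll back the open transaction after each test, so data created
    by one test method never pollutes the next."""

    def tearDown(self):
        # In Django this would be transaction.rollback() followed by
        # transaction.leave_transaction_management().
        conn.rollback()


class ExampleTest(TransactionalTestCase):
    def test_a_insert_is_visible_within_the_test(self):
        conn.execute("INSERT INTO widgets VALUES ('sprocket')")
        count = conn.execute("SELECT COUNT(*) FROM widgets").fetchone()[0]
        self.assertEqual(count, 1)

    def test_b_insert_was_rolled_back(self):
        # Runs after test_a (unittest sorts by name); the insert above
        # has been discarded by tearDown().
        count = conn.execute("SELECT COUNT(*) FROM widgets").fetchone()[0]
        self.assertEqual(count, 0)
```

Because tearDown() always rolls back, the second test sees an empty table regardless of what the first one inserted: no state pollution.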
My next spare time (hah!) project will be to integrate Django's transaction management into repoze.tm (which is Zope's transaction management suitably WSGI-fied). This would let a Django application participate in transactions with other transaction-aware components, making integration at the WSGI layer much more straightforward.
(Thanks to Jan Lehnardt on the couchdb-user mailing list for apparently being psychic and posting a solution just as I tried to run Erlang.)
Yes, it's that big, bad old Leopard 10.5.3 update at it again. As well as breaking my Time Machine over AirDisk, it broke my Erlang shell.
I've got Erlang installed using MacPorts, so fortunately the solution was as simple as:
sudo port uninstall erlang
sudo port install erlang +universal
Erlang, back in business. Now all I have to do is learn it! (Good thing I did Haskell at uni - never thought I'd be saying that...)
I'll keep you updated how my Adventures in Erlang go. I may even have to add a new Erlang category to the blog.
My 40GB Windows XP VM had mysteriously grown to 50GB. I couldn't quite figure it out: 40GB disk, 1.5GB RAM, what more could it want to store?
Answer: I'd taken a VM snapshot prior to applying XP SP3.
Conceptually, a VMware snapshot is a point-in-time image of your VM. However, you'll notice that taking a snapshot doesn't double the amount of disk space your VM takes up. What actually appears to happen is that VMware starts appending the changes you make to your VM to a new 'differences' file within the VM package on disk, leaving your original VM file intact. If you ever revert to that snapshot, it can simply throw away the file containing the changes.
This also means that, as you change the contents of your VM, it will take up more and more disk space as VMware builds this 'differences' file. The solution is to discard the snapshot: select Discard Snapshot from the Virtual Machine menu. Be aware, though, that this operation can take a long time: VMware has to go through the differences file and apply each change to the original image, and if a lot of data has changed, that will take a while. Once the snapshot has been discarded, however, your VM will shrink back to its expected size.
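The mechanics can be sketched as a toy copy-on-write model in Python. To be clear, this is my own illustration of the apparent behaviour, not VMware's actual on-disk format:

```python
class SnapshottedDisk:
    """Toy model of the 'differences file' behaviour: after a snapshot,
    writes land in a delta overlay; reads prefer the delta; revert
    throws the delta away; discard folds it into the base image."""

    def __init__(self, blocks):
        self.base = dict(blocks)  # the original VM image
        self.delta = None         # the snapshot's 'differences' file

    def snapshot(self):
        self.delta = {}           # cheap: no data is copied

    def write(self, block, data):
        if self.delta is not None:
            self.delta[block] = data  # base image stays untouched
        else:
            self.base[block] = data

    def read(self, block):
        if self.delta is not None and block in self.delta:
            return self.delta[block]
        return self.base[block]

    def revert(self):
        self.delta = {}           # cheap: just drop the changes

    def discard_snapshot(self):
        # The slow path: every accumulated change is applied to the
        # original image, after which the delta (and the disk space it
        # occupies) can be reclaimed.
        if self.delta is not None:
            self.base.update(self.delta)
        self.delta = None
```

In this model the disk usage is base plus delta, which is exactly why the VM package swells the more you change after taking a snapshot, and why reverting is instant while discarding is slow.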
Leopard's incarnation of Mail.app is mostly lovely. However, when you load it up with tens of thousands of mail messages, it can get a little slow. The usual solution - Rebuild, from the Mailbox menu - wasn't doing it for me.
However, I found this gem of a tip which I wanted to link to in order to improve its Google rank - it took me too long to find it!
It goes without saying that you should back your data up before trying this.
In essence, however:
Shut down Mail
Open Terminal, and enter the following:
hornet:~ dan$ cd ~/Library/Mail
hornet:Mail dan$ sqlite3 Envelope\ Index
You'll then see the SQLite prompt appear. Enter 'vacuum subjects;' and press enter (SQLite of this vintage ignores the table-name argument, so a plain 'vacuum;' does the same thing):
SQLite version 3.4.0
Enter ".help" for instructions
sqlite> vacuum subjects;
You'll then have to wait a bit - don't panic, this is normal.
What's happening is that the SQLite database engine (used by Mail.app behind the scenes) is cleaning up data fragmentation and empty data pages within the database file itself. Doing this reduces the amount of disk activity required to read the database, improving performance.
Once you get your sqlite prompt back, simply quit:
sqlite> .quit
Fire up Mail.app again, and you should notice a significant speed improvement. Sweet.
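If you'd rather script this than type at the sqlite prompt, the same cleanup can be driven from Python's stdlib sqlite3 module. This is just a sketch (quit Mail first; the helper function and size reporting are my own additions):

```python
import os
import sqlite3


def vacuum(path):
    """VACUUM a SQLite database file in place, returning (before, after)
    file sizes in bytes so you can see how much space was reclaimed."""
    before = os.path.getsize(path)
    # isolation_level=None puts the connection in autocommit mode;
    # VACUUM can't run inside an open transaction.
    conn = sqlite3.connect(path, isolation_level=None)
    conn.execute("VACUUM")
    conn.close()
    return before, os.path.getsize(path)


# For Mail.app's envelope database (with Mail shut down):
# vacuum(os.path.expanduser("~/Library/Mail/Envelope Index"))
```

The VACUUM rewrites the whole database file, dropping free pages and defragmenting the rest, which is where the wait comes from on a big Envelope Index.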
Here's how to kill a base station so badly that it needs power cycling:
- Create a few big Time Machine backups on a USB disk attached to your Mac.
- Unmount the backup and hang the drive off the back of your AirPort Extreme.
- Configure the AirPort Extreme to share the disk as an AirDisk.
- Mount up the new AirDisk on your Mac. Note how you can browse the old backups. (They won't work as an AirDisk Time Machine backup, though; those are sparse disk images.)
- In a terminal window, su to root and go to the Backups.backupdb directory.
- rm -rf <machinename>, to try to remove the old Time Machine backup.
At this point, my Mac gets disconnected from the wireless network. A subsequent attempt to reconnect times out, and the base station then disappears completely. Yanking the power cord is the only way to fix it.
Don't do this to other people's base stations; it's mean.
(Hm - wonder if it's accessible if I attach via Ethernet? Might have to give that a go, in the spirit of inquiry...)