- Tarballs - not really recommended, but source in its raw form
- BZR branches - if you are doing development, or just *have* to be on the bleeding edge
- pypi - convenient, easy to install, updated with monthly release cycles
- .deb packages in the PPA - convenient, easy to install for Ubuntu, updated with monthly release cycles
Here are the problems:
Packages are fairly convenient to install, but at release time they take quite a bit of effort to update, rebuild, copy to all supported versions, and test. Because of this, if we have a new feature or important bug fix that we want to roll out before the next release, we have only two choices: 1. hot-fix it on the server and make very sure that we apply the same fix to trunk, or 2. fix it in trunk, test, make a lava-foo-20YY.MM-1 release, repackage, install, etc. Option 1 is a bit ugly, but fast. Option 2 is really the right thing to do, but very time consuming.
Another thing we would really like to do is have the ability to host multiple "instances", such as a production and a staging instance. Using packages, this isn't really possible. Using VMs is an option of course, but there are downsides and it would consume a lot of extra resources. Being able to deploy multiple instances is not only useful for production systems, but for development as well. If you are working on multiple branches and want to test them separately, it's nice to have an easy way to do that.
Finally, as we look for ways to make LAVA more scalable, one of the things we are looking at is celery. It is just one of several new libraries we need, and one issue is that there are no packages for it in the archive. Sure, we could build a package of it and keep it in our PPA, but then we would be maintaining that package in addition to all the other LAVA components. And there will surely be others besides celery.
As of yesterday, we are deploying LAVA in the Linaro Validation Lab using a more flexible approach. It involves Python virtual environments, with separate tables for each instance, and each instance running under its own userid. Zygmunt and Michael in particular did a lot of hacking on most of the components to make them instance-aware, and to create upstart jobs that can start/stop/restart components based on the instance ID. Instances can be assembled from a list of requirements that can pull from pypi, or even from bzr branches. There are also scripts (lp:lava-deploy-tool) to help with creating and setting up instances, and they even support backing up and restoring the data.
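To give a feel for the idea, here is a rough sketch of the kind of per-instance isolation described above. The instance name, directory layout, and requirements entries are made up for illustration; lava-deploy-tool automates its own layout, so treat this as a sketch of the concept rather than what the tool actually runs:

```shell
#!/bin/sh
# Each instance gets its own directory tree and its own virtualenv,
# so a production and a staging instance can coexist on one host.
# (Paths and names here are illustrative, not lava-deploy-tool's layout.)
INSTANCE=${1:-staging}
ROOT=${LAVA_ROOT:-/srv/lava}/instances/$INSTANCE

mkdir -p "$ROOT"
virtualenv "$ROOT/env"

# The requirements list can mix pypi releases with bzr branches.
cat > "$ROOT/requirements.txt" <<EOF
lava-server
bzr+lp:lava-dashboard
EOF

# Installing into the instance's own environment leaves the system
# Python, and every other instance, untouched. (Needs network access.)
"$ROOT/env/bin/pip" install -r "$ROOT/requirements.txt"
```

Because each instance is self-contained under its own root, tearing one down is just a matter of stopping its jobs and deleting the directory.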
So what will become of the packages? It was recently announced on the linaro-dev mailing list that we are phasing out packages, at least for the server components. We feel that the new deployment method offers greater flexibility, stable deployment support, easy ways to update to the latest code (or even your own branches), and many other benefits. Try it out and let us know what you think.