The Minecraft community seems to have an obsession with doing everything on YouTube. For some people that’s great, but as someone who is already a coder and just needs the environment-specific details of coding for Minecraft, I have a really hard time sitting through an hour of tutorial video just to get the four things I didn’t already know how to do. This is where a text-and-pictures version of a tutorial comes in handy: you can skim through it looking for the parts you still need, and easily skip the stuff you already know. A friend of mine was also recently trying to get into writing a plugin for Spigot, and every text-based resource I could find to point him at (as opposed to a YouTube video) was badly outdated, with instructions that were no longer relevant because Bukkit, Spigot, and Eclipse have all evolved since then. So I figured I’d write one, and here it is!
Today is the 10th anniversary of Ubuntu’s first release. This is slightly nostalgic for me, as I was employed by them at the time. I was actually the very first employee of MRS Virtual Development (the legal name of the entity at the time), with Robert Collins (mentioned in the above-linked article) being the second. The first two things Mark wanted in his company were a good bug tracker and a good source control system. My volunteer involvement with Mozilla is actually how I came to be hired, as I’d been heavily involved with the Bugzilla bug tracking system at that point. Robert had been heavily involved with GNU Arch, an up-and-coming source code management system which was eventually forked by Canonical to become Bazaar.
The biggest thing I remember about my time working for Canonical (as the corporate entity eventually became known) was that I spent 2 weeks at a time in London approximately every 2 months. I spent almost a quarter of that year in London. This, of course, was pretty hard on me, being away from my family so much.
Although I thought the Ubuntu OS was a fantastic idea, and loved the way it was being built (and I still use it to this day on most of my computers), in the end I didn’t really fit in well with the other people working on it. Almost everyone else Mark had hired came from a Debian background, which I had had almost zero involvement with prior to this experience, and the culture was very different from anything I’d ever dealt with before. I much preferred the culture among the volunteer community at Mozilla. Fortunately, Firefox was released about 3 weeks later, and the Mozilla Foundation suddenly had money as a result. I left Canonical and was hired by Mozilla the day before Firefox 1.0 was released, and I am still at Mozilla today. This means I will also be celebrating my 10th anniversary working at Mozilla in just a few weeks.
It’s been a few years since I’ve posted anything here… Mozilla IT has a blog now, and most of my work-related posts have been going there. Most of my personal stuff has been going on Facebook or Twitter. Just the way things go as technology evolves, I suppose.
Just this side of heaven is a place called Rainbow Bridge.
When an animal dies that has been especially close to someone here, that pet goes to Rainbow Bridge. There are meadows and hills for all of our special friends so they can run and play together. There is plenty of food, water and sunshine, and our friends are warm and comfortable. All the animals who had been ill and old are restored to health and vigor. Those who were hurt or maimed are made whole and strong again, just as we remember them in our dreams of days and times gone by. The animals are happy and content, except for one small thing; they each miss someone very special to them, who had to be left behind.
They all run and play together, but the day comes when one suddenly stops and looks into the distance. His bright eyes are intent. His eager body quivers. Suddenly he begins to run from the group, flying over the green grass, his legs carrying him faster and faster.
You have been spotted, and when you and your special friend finally meet, you cling together in joyous reunion, never to be parted again. The happy kisses rain upon your face; your hands again caress the beloved head, and you look once more into the trusting eyes of your pet, so long gone from your life but never absent from your heart.
I recently did up a diagram of how our Bugzilla site was set up, mostly for the benefit of other sysadmins trying to find the various pieces of it. Several folks expressed interest in sharing it with the community just to show an example of how we were set up. So I cleaned it up a little, and here it is:
At first glance it looks somewhat excessive just for a Bugzilla, but the Mozilla Project lives and dies by the content of this site: pretty much all work stops if it doesn’t work, so it’s one of our highest-priority sites to keep operating at all times for developer support. The actual hardware required to run the site at full capacity for the number of users we get hitting it is a little less than half of what’s shown in the diagram.
We have the entire site set up in two different datacenters (SJC1 is our San Jose datacenter, PHX1 is our Phoenix datacenter). Thanks to the load balancers taking care of the cross-datacenter connections for the master databases, it’s actually possible to run it from both sites concurrently to split the load. But because of the amount of traffic Bugzilla does to the master databases, and the latency in connection setup over that distance, it’s a little bit slow from whichever datacenter isn’t currently hosting the master, so we’ve been trying to keep DNS pointed at just one of them to keep it speedy.
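The latency penalty described above is easy to estimate with back-of-envelope arithmetic. The numbers below (round-trip times and queries per page load) are illustrative assumptions, not measurements from our setup:

```python
# Back-of-envelope estimate of the cross-datacenter penalty.
# All numbers here are illustrative assumptions, not measurements.

LOCAL_RTT_MS = 0.5        # same-datacenter round trip (assumed)
REMOTE_RTT_MS = 20.0      # SJC1 <-> PHX1 round trip (assumed)
QUERIES_PER_PAGE = 50     # master-DB queries for one page load (assumed)

def added_delay_ms(queries, local_rtt, remote_rtt):
    """Extra wall-clock time per page when every master-DB query
    has to cross datacenters instead of staying local."""
    return queries * (remote_rtt - local_rtt)

print(added_delay_ms(QUERIES_PER_PAGE, LOCAL_RTT_MS, REMOTE_RTT_MS))  # 975.0
```

Even a modest per-query round trip adds up to nearly a second of pure latency per page under these assumptions, which is why serving from the datacenter that doesn’t host the master feels noticeably slow.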
This still works great as a hot failover, though, which got tested in action this last Sunday when we had a system board failure on the master database server in Phoenix. Failing the entire site over to San Jose took only minutes, and the tech from HP showed up to swap the system board 4 hours later. The fun part was that I had only finished setting up this hot failover about a week prior, so the timing couldn’t have been any better for that system board failure. If it had happened any sooner we might have been down for a long time waiting for the server to get fixed.
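At its core, a hot-failover arrangement like this boils down to a health check plus a decision about which site should be active. The sketch below is a deliberately simplified illustration of that decision logic, with invented site names; the real setup did this with load balancers and DNS rather than application code:

```python
# Simplified sketch of hot-failover site selection.
# Site names are from the diagram; the health-check mapping is invented
# for illustration -- production used load balancers and DNS, not app code.

SITES = ["phx1", "sjc1"]  # preferred primary first, standby second

def pick_active(site_healthy, sites=SITES):
    """Return the first healthy site, preferring the primary.

    site_healthy: mapping of site name -> bool from a health check.
    Raises RuntimeError if no site is up.
    """
    for site in sites:
        if site_healthy.get(site):
            return site
    raise RuntimeError("no healthy site available")

# Normal operation: Phoenix is up, so traffic stays in PHX1.
print(pick_active({"phx1": True, "sjc1": True}))   # phx1
# Master database server fails in Phoenix: traffic fails over to SJC1.
print(pick_active({"phx1": False, "sjc1": True}))  # sjc1
```

The “took only minutes” part is exactly this switch: because both sites already hold a live copy of the data, failover is just repointing traffic rather than restoring anything.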
When everything is operational, we’re trying to keep it primarily hosted in Phoenix. As you can see in the diagram, the database servers in Phoenix are using solid-state disks for the database storage. The speed improvement from using these instead of traditional spinning disks when running large queries is just amazing. I haven’t done any actual timing to get hard numbers, but the difference is large enough that you can easily notice it just from using the site.