Posts from "November 2009"

Opower Labs

Locking screens on Macs with the keyboard

  • By Charles Koppelman
  • November 24, 2009

Desktop security is a big issue when you have access to massive amounts of data. As far as plain old terminal security goes, locking your computer while you’re away is a pretty basic rule.

We have lots of OSes around the office here – Ubuntu, Windows, and OS X. Of those, Ubuntu and Windows natively provide a way to lock your desktop with a keystroke (Win+L, or Ctrl+Alt+Del then Enter, on Windows; Ctrl+Alt+L on Ubuntu). Strangely, the Mac gives no easy way to lock your desktop from the keyboard.

We’ve been using Hot Corners to start the screen saver, but who wants to touch the mouse if you don’t have to (especially when you’re racing to grab a free lunch)?

Enter Quicksilver, everyone’s favorite application interface. (There are all sorts of things you can do with Quicksilver and its various plug-ins, but you can look that up yourself.) To set up a keyboard shortcut with Quicksilver to lock your screen (hat tip to Bryan Helmkamp):

  1. Create a symbolic link to your screen saver application:
    $ sudo ln -s /System/Library/Frameworks/ScreenSaver.framework/Versions/Current/Resources/ /Applications/Screen
  2. Quicksilver -> Triggers -> Custom Triggers
  3. Add a Hotkey
  4. Find ScreenSaverEngine by typing its name, then press Enter
  5. Double-click the cell in the Trigger column
  6. Choose your hotkey (I like Ctrl+Cmd+ since it’s almost Ctrl+Alt+Del)
  7. Close the window (you may also need to restart Quicksilver)

Now, in System Preferences -> Security -> General, set it to require a password immediately after the screen saver begins, and you’re set.

Sadly, you need to do this, or your screen saver will wake as soon as you release your Quicksilver shortcut. This happens even if you set Quicksilver to launch the screen saver “On Release”.

Anyway, after that, locking your screen is just a keystroke away.


When adding more threads makes it all slower

  • By Dave Copeland
  • November 16, 2009

I’ve been working on a new feature that requires analysis of each individual’s entire energy-use history. In other words, I have a process that will touch every single bit of data in our database. This should be a rare thing, so if it takes a while, it’s not that big of a deal. My initial implementation was on track to complete in…11 days.

My first thought was: there’s lots of blocking reading and writing from the database, so adding some threads should speed things up. While one thread was analyzing Bob’s energy data, another could be fetching Mary’s, while another could be updating Joe’s meta-data with the results. Or so I thought.
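The pipeline I had in mind can be sketched roughly like this. This is a hypothetical Python illustration with in-memory stand-ins for the database — none of these names or data come from the actual implementation:

```python
import queue
import threading

# Hypothetical sketch of the pipeline described in the post: one thread
# fetches each customer's energy data, another analyzes it, and a third
# writes results back. Dicts stand in for the real database.
CUSTOMERS = {"Bob": [3, 5, 4], "Mary": [2, 2, 6], "Joe": [7, 1, 1]}
results = {}

fetch_q = queue.Queue()
write_q = queue.Queue()

def fetcher():
    for name, usage in CUSTOMERS.items():
        fetch_q.put((name, usage))   # simulates a blocking DB read
    fetch_q.put(None)                # sentinel: no more customers

def analyzer():
    while (item := fetch_q.get()) is not None:
        name, usage = item
        write_q.put((name, sum(usage)))  # a stand-in for the "analysis"
    write_q.put(None)

def writer():
    while (item := write_q.get()) is not None:
        name, total = item
        results[name] = total        # simulates a blocking DB write

threads = [threading.Thread(target=f) for f in (fetcher, analyzer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {'Bob': 12, 'Mary': 10, 'Joe': 9}
```

On paper this overlaps the blocking reads and writes nicely; as the rest of the post explains, in practice it didn’t help.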

The more threads I added, the slower the entire thing became. It turned out that the fastest implementation was a single-threaded one. But why? It all has to do with the diminishing returns one gets from scaling out.

If you think of a task as having a serial component, which cannot be parallelized, plus work that can be done concurrently, we can analyze the return we get from increasing the number of available processors (threads, in my case). This is Amdahl’s Law, expressed by the following equation, where “x” is the number of processors or threads, “s” is the fraction of the overall task that must be serialized, and “y” is the speedup you will see from scaling.
Amdahl’s equation: y = 1 / (s + (1 − s) / x)

When you graph this, it’s pretty obvious that there are diminishing returns to adding more threads/processors (the graph below assumes that 90% of the overall job can be done concurrently). As we add threads, we get less and less of a gain in speed.
But there’s still a gain to get, so what happened to me?

Graph of Amdahl's Equation with 10% of our task serialized
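To make the diminishing returns concrete, here is a small sketch (not from the original post) that evaluates Amdahl’s formula at the graph’s assumption of a 10% serial fraction:

```python
def amdahl_speedup(x, s):
    """Amdahl's Law: speedup y for x threads when a fraction s
    of the task must run serially."""
    return 1.0 / (s + (1.0 - s) / x)

if __name__ == "__main__":
    s = 0.10  # 90% of the job is parallelizable, as in the graph
    for x in (1, 2, 4, 8, 16, 32):
        print(f"{x:2d} threads -> {amdahl_speedup(x, s):.2f}x speedup")
```

Going from 1 to 2 threads nearly doubles throughput, but going from 16 to 32 threads only moves the speedup from 6.40x to about 7.8x — and it can never exceed 1/s = 10x no matter how many threads you add.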

Amdahl’s law is actually pretty optimistic. It doesn’t account for the overhead required to synchronize the shared data. If we account for this with a new value “k” (the percentage penalty for maintaining consistency), we see that increasing our processors/threads eventually starts to hurt us (this equation is shown in red)!

Taking shared state synchronization into account
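The post’s exact penalized equation is in the image above; as a sketch, one simple way to model the effect is to add a consistency cost that grows with the thread count. The linear k·x term and the values s = 0.10 and k = 0.01 below are my illustrative assumptions, not the post’s numbers:

```python
def speedup_with_sync(x, s, k):
    """Amdahl-style speedup with an added synchronization penalty.
    Assumes a simple linear model: each of the x threads pays a
    consistency cost k (an illustrative assumption, not the post's
    exact equation)."""
    return 1.0 / (s + (1.0 - s) / x + k * x)

if __name__ == "__main__":
    s, k = 0.10, 0.01  # assumed values for illustration
    for x in (1, 2, 4, 8, 16, 32):
        print(f"{x:2d} threads -> {speedup_with_sync(x, s, k):.2f}x speedup")
```

With these numbers the curve peaks at around 8–10 threads and then falls, which matches the experience described here: past some point, adding threads makes the whole job slower.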

In my case, almost all of this synchronization happens inside the database; it turns out my process spends nearly all of its time accessing data there. So, I dialed it back down to a single thread, and we should be done early next week.
