
Synchronization Using Grand Central Dispatch

by Ed on September 1, 2010

To date, we’ve covered most of the basics of using GCD. This time we’ll get a bit fancier and use it to implement synchronization without traditional locks. Typically, to do synchronization, you use a mutex (lock) to guard a piece of data or code you want to ensure no one else is touching while you are. For this post, I’m going to assume you’re familiar with locking and will just jump into the good stuff.

Using Locks

Typically, you’d either use NSLock or @synchronized to protect a critical section. So if we had a simple function we might do:

- (void)setStatus:(Status)status {
    @synchronized(self) {
        _status = status;
    }
}

Or

- (void)setStatus:(Status)status {
    [_lock lock];
    _status = status;
    [_lock unlock];
}

Regardless of the syntax here, you’re taking a lock, setting your value, and unlocking afterward. That means two kernel traps every time you set your status. And at least last I knew, @synchronized adds additional overhead on top of this.

Using a FIFO Dispatch Queue

Using dispatch_sync, you can implement locks simply by putting your critical sections onto a dispatch queue. While it might seem like you’re just trading one mechanism for another, you actually get more using GCD. First, the dispatch mechanism is faster than a traditional lock. Second, if you desire, you can get concurrency at no cost to you, which may potentially pay off in large quantities depending on what you are doing. Even if it only saves you a tiny bit, it’s still better than the alternative, and for me it’s easier to use and work through in my head.

The basic principle here is that you use a FIFO queue to control access to your vital properties, etc. Since only one block can be running at a time, that serializes your access to your critical data. While to some this might seem like an odd way to go about synchronizing, it is slightly more efficient.
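The post never shows the queue itself being created. A minimal sketch (the queue label string is my own invention) would create a private serial queue once, say in init:

```
// A private serial (FIFO) queue used only to guard this object's state.
// Passing NULL for the attributes yields a serial queue, so blocks
// submitted to it run one at a time, in order.
_lockQueue = dispatch_queue_create("com.example.myobject.lock", NULL);
```

Since this post predates ARC, you’d balance this with dispatch_release(_lockQueue) in dealloc.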

Consider:

- (void)setStatus:(Status)status {
    dispatch_sync(_lockQueue, ^{
        _status = status;
    });
}

- (Status)status {
    __block Status result;

    dispatch_sync(_lockQueue, ^{
        result = _status;
    });
    return result;
}

So syntax-wise, it’s not much tougher than using @synchronized, though you have to use __block to get your result out.

As for speed, if I compare locking via @synchronized on self, @synchronized on another variable, NSLock, and dispatch queues, I get this (on a MacBook Pro):

[Shuttlecraft]~/src/locktest% ./locktest
@synchronized(self) took 1.832627 seconds
@synchronized(_stringValue) took 1.824402 seconds
NSLock took 1.201493 seconds
Dispatch Queue took 1.078460 seconds

As you see, @synchronized is the most expensive, followed by NSLock and then the dispatch queue method. This is over 10,000,000 iterations, so in real-world use they’re all pretty much equivalent.

So why use a dispatch queue? Well, some people find it simpler to conceive of running the critical sections on a queue. But the most compelling reason to consider it is to gain asynchrony, something you can’t do with a traditional lock. Well, not easily.

Asynchronous Setters

The trick to this is to use dispatch_async when setting, and dispatch_sync when getting. Changing our setter yields:

- (void)setStatus:(Status)status {
    dispatch_async(_lockQueue, ^{
        _status = status;
    });
}

Now when it runs the setter, it will actually do it in the background. If you were to immediately try to do a get, it would block and wait for the setter to finish, then run the getter code. But if you just did a set, or maybe 10 sets in a row, they can be running in the background while the rest of your code moves along happily. You’ll only truly need to block and wait if you happen to call the getter. That’s where you can get some serious gains.

But there is a cost. Here’s running the same lock test with dispatch_async:

[Shuttlecraft]~/src/locktest% ./locktest
@synchronized(self) took 1.766395 seconds
@synchronized(_stringValue) took 1.753744 seconds
NSLock took 1.169725 seconds
Dispatch Queue took 6.268607 seconds

Ouch. It’s actually quite a bit slower. This is because when you run the block async, it needs to copy the block, and that’s not exactly zero-cost. Of course, we’re still talking about 10 million iterations here, so in the big picture it’s still reasonable.

What this means, though, is that if you want to take advantage of asynchronous locks like this, the code inside the block should generally be worth deferring to another thread, such that the cost of the block copy is lost in the noise. A simple setter like the one shown here would be silly, but for something more complicated, now you’re talking.
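As a sketch of what “something more complicated” might look like, here is a hypothetical setter whose work is heavy enough to justify the async dispatch. StatusFromString and the raw-string property are inventions for illustration, and the memory management is manual retain/release, as was the norm when this post was written:

```
// Hypothetical example: the expensive parse is the part worth
// pushing off the calling thread; the block copy cost is noise.
- (void)setRawStatusString:(NSString *)raw {
    NSString *snapshot = [raw copy];   // immutable snapshot taken up front
    dispatch_async(_lockQueue, ^{
        // StatusFromString is an imagined expensive parsing routine.
        _status = StatusFromString(snapshot);
        [snapshot release];            // balance the copy (pre-ARC)
    });
}
```

The caller returns immediately; only a subsequent getter would block until this block has run.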

So ultimately, you get a locking system with less overhead, and the ability to gain concurrency. This is win-win in my book.

I’ve been using this type of ‘lock’ in a couple of places so far, and what I like best is that it seems to help me think about flow better. I don’t have to think “there’s two threads here so what’s what”. I just know that these things happen in sequence, and for whatever reason, it clarifies things for me. This means I can spend time worrying about the bigger picture of what my code does, and not trying to think of all types of edge cases. The other great thing is that the blocks retain my object automatically, so even if I released the object while the setter was running, I wouldn’t have to worry about accessing something that was no longer valid. It’s just simpler.

So OK… Win-Win-Win.


Dan (Yar) Rosenstark February 11, 2012 at 5:28 pm

Fascinating. So you run your setters async with the lock queue (which you made?) and your getters sync (using those __block variables, which I didn’t even know about). Can you use this from different NSThreads, or… do you have to start your non-threads using “dispatch_async”?

Reply

Ed April 24, 2013 at 2:12 pm

Yes, you can do it from different threads. That’s kind of the whole point ;-) Doesn’t matter if they’re threads created by dispatch or by NSThread.

Reply

Anner van Hardenbroek December 23, 2012 at 3:13 pm

You can make it fly by using a concurrent queue and using dispatch_barrier_async() to set the value.
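Anner’s suggestion would look something like this sketch (queue label mine; note that barriers only work on a concurrent queue you created yourself, and the concurrent-queue attribute arrived in the OS releases after this post was written). Readers run in parallel; the barrier write waits for in-flight readers and then runs alone:

```
// Concurrent queue: multiple reader blocks may run simultaneously.
_stateQueue = dispatch_queue_create("com.example.state",
                                    DISPATCH_QUEUE_CONCURRENT);

- (Status)status {
    __block Status result;
    dispatch_sync(_stateQueue, ^{          // readers can overlap
        result = _status;
    });
    return result;
}

- (void)setStatus:(Status)status {
    dispatch_barrier_async(_stateQueue, ^{ // drains readers, runs exclusively
        _status = status;
    });
}
```

This keeps the async-setter win from the article while letting uncontended reads proceed concurrently.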

Reply

Ramy Al Zuhouri April 10, 2013 at 9:33 am

I have a doubt about this getter:

- (Status)status {
    __block Status result;

    dispatch_sync(_lockQueue, ^{
        result = _status;
    });
    return result;
}

Shouldn’t this still cause a race condition? I mean that once result is assigned, you’re out of the block, and since result is effectively shared because it is declared with the __block specifier, its value may be changed at any time by another call of the getter on another thread. So for example thread A calls the getter, result gets set to 0x800, but the method still does not return. Then thread B calls the setter and changes _status to 0x700, then thread C calls the getter so that result is set to 0x700, and when A is re-scheduled, result has a different value.

Reply

Ed April 24, 2013 at 2:19 pm

You’re going to run into those types of larger-scope race conditions regardless, even if you use a standard lock. The point of the exercise was to show how you could do these types of things in a simple example. It really all depends on how this getter is used. But my presumption here is that if you were exposing a getter like this, stale values would not cause any harm.

A classic example is actually an invalidation flag. Yes, thread A might be asking for the value and it might be false, and then thread B sets it to true. Thread A might try to do something now that it ‘knows’ the object is still valid. But typically you defend against this by ensuring the rest of your object can deal with calls (generally by ignoring them) while you are invalid. This idiom is used all over Foundation.

Reply

Jonah Neugass June 12, 2013 at 10:43 am

I like this pattern, but aren’t you creating retain cycles in your blocks?

Reply

Ed June 12, 2013 at 12:09 pm

No. The block is being invoked and then gets released. ‘self’ is being retained until the block gets released, sure, but the object never holds onto the block so there’s no cycle.

Reply

Pablo Roca August 25, 2014 at 5:01 am

Hi Ed,

Thanks for this great article. Can you share the code you used for the benchmarks?

Reply
