`folly/Synchronized.h`
----------------------

`folly/Synchronized.h` introduces a simple abstraction for mutex-based
concurrency. It replaces convoluted, unwieldy, and just plain wrong code
with simple constructs that are easy to get right and difficult to get
wrong.
8 
### Motivation

Many of our multithreaded C++ programs use shared data structures
associated with locks. This follows the time-honored adage of
mutex-based concurrency control: "associate mutexes with data, not code".
Consider the following example:
15 
``` Cpp
class RequestHandler {
  ...
  RequestQueue requestQueue_;
  SharedMutex requestQueueMutex_;

  std::map<std::string, Endpoint> requestEndpoints_;
  SharedMutex requestEndpointsMutex_;

  HandlerState workState_;
  SharedMutex workStateMutex_;
  ...
};
```
31 
Whenever the code needs to read or write some of the protected
data, it acquires the mutex for reading or for reading and
writing. For example:
35 
``` Cpp
void RequestHandler::processRequest(const Request& request) {
  stop_watch<> watch;
  checkRequestValidity(request);
  SharedMutex::WriteHolder lock(requestQueueMutex_);
  requestQueue_.push_back(request);
  stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
  LOG(INFO) << "enqueued request ID " << request.getID();
}
```
46 
However, the correctness of the technique is entirely predicated on
convention. Developers manipulating these data members must take care
to explicitly acquire the correct lock for the data they wish to access.
There is no ostensible error for code that:

* manipulates a piece of data without acquiring its lock first
* acquires a different lock instead of the intended one
* acquires a lock in read mode but modifies the guarded data structure
* acquires a lock in read-write mode although it only has `const` access
  to the guarded data
57 
### Introduction to `folly/Synchronized.h`

The same code sample could be rewritten with `Synchronized`
as follows:

``` Cpp
class RequestHandler {
  ...
  Synchronized<RequestQueue> requestQueue_;
  Synchronized<std::map<std::string, Endpoint>> requestEndpoints_;
  Synchronized<HandlerState> workState_;
  ...
};

void RequestHandler::processRequest(const Request& request) {
  stop_watch<> watch;
  checkRequestValidity(request);
  requestQueue_.wlock()->push_back(request);
  stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
  LOG(INFO) << "enqueued request ID " << request.getID();
}
```
80 
The rewrite does at maximum efficiency what needs to be done:
it acquires the lock associated with the `RequestQueue` object, writes to
the queue, and releases the lock immediately thereafter.

On the face of it, that's not much to write home about, and not
an obvious improvement over the previous state of affairs. But
the features invisibly at work in the code above are as important
as those that are visible:

* Unlike before, the data and the mutex protecting it are
  inextricably encapsulated together.
* If you tried to use `requestQueue_` without acquiring the lock you
  wouldn't be able to; it is virtually impossible to access the queue
  without acquiring the correct lock.
* The lock is released immediately after the insert operation is
  performed, and is not held for operations that do not need it.
97 
If you need to perform several operations while holding the lock,
`Synchronized` provides several options for doing this.

The `wlock()` method (or `lock()` if you have a non-shared mutex type)
returns a `LockedPtr` object that can be stored in a variable. The lock
will be held for as long as this object exists, similar to a
`std::unique_lock`. This object can be used as if it were a pointer to
the underlying locked object:

``` Cpp
{
  auto lockedQueue = requestQueue_.wlock();
  lockedQueue->push_back(request1);
  lockedQueue->push_back(request2);
}
```

The `rlock()` function is similar to `wlock()`, but acquires a shared lock
rather than an exclusive lock.

We recommend explicitly opening a new nested scope whenever you store a
`LockedPtr` object, to help visibly delineate the critical section, and
to ensure that the `LockedPtr` is destroyed as soon as it is no longer
needed.
122 
Alternatively, `Synchronized` also provides mechanisms to run a function while
holding the lock. This makes it possible to use lambdas to define brief
critical sections:

``` Cpp
void RequestHandler::processRequest(const Request& request) {
  stop_watch<> watch;
  checkRequestValidity(request);
  requestQueue_.withWLock([&](auto& queue) {
    // withWLock() automatically holds the lock for the
    // duration of this lambda function
    queue.push_back(request);
  });
  stats_->addStatValue("requestEnqueueLatency", watch.elapsed());
  LOG(INFO) << "enqueued request ID " << request.getID();
}
```

One advantage of the `withWLock()` approach is that it forces a new
scope to be used for the critical section, making the critical section
more obvious in the code, and helping to encourage code that releases
the lock as soon as possible.
145 
### Template class `Synchronized<T>`

#### Template Parameters

`Synchronized` is a template with two parameters, the data type and a
mutex type: `Synchronized<T, Mutex>`.

If not specified, the mutex type defaults to `folly::SharedMutex`. However, any
mutex type supported by `folly::LockTraits` can be used instead.
`folly::LockTraits` can be specialized to support other custom mutex
types that it does not know about out of the box. See
`folly/LockTraitsBoost.h` for an example of how to support additional mutex
types.

`Synchronized` provides slightly different APIs when instantiated with a
shared mutex type or an upgrade mutex type than with a plain exclusive mutex.
If instantiated with either of the two mutex types above (either through
having a member called `lock_shared()` or specializing `LockTraits` as in
`folly/LockTraitsBoost.h`), the `Synchronized` object has corresponding
`wlock`, `rlock`, or `ulock` methods to acquire different lock types. When
using a shared or upgrade mutex type, these APIs ensure that callers make an
explicit choice to acquire a shared, exclusive, or upgrade lock and that
callers do not unintentionally lock the mutex in the incorrect mode. The
`rlock()` APIs only provide `const` access to the underlying data type,
ensuring that it cannot be modified when only holding a shared lock.
171 
#### Constructors

The default constructor default-initializes the data and its
associated mutex.

The copy constructor locks the source for reading and copies its
data into the target. (The target is not locked, as an object
under construction is only accessed by one thread.)

Finally, `Synchronized<T>` defines an explicit constructor that
takes an object of type `T` and copies it. For example:

``` Cpp
// Default constructed
Synchronized<map<string, int>> syncMap1;

// Copy constructed
Synchronized<map<string, int>> syncMap2(syncMap1);

// Initializing from an existing map
map<string, int> init;
init["world"] = 42;
Synchronized<map<string, int>> syncMap3(init);
EXPECT_EQ(syncMap3->size(), 1);
```
198 
#### Assignment, swap, and copying

The copy assignment operator copies the underlying source data
into a temporary with the source mutex locked, and then moves the
temporary into the destination data with the destination mutex
locked. This technique avoids the need to lock both mutexes at
the same time. Mutexes are not copied or moved.

The move assignment operator assumes the source object is a true
rvalue and does not lock the source mutex. It moves the source
data into the destination data with the destination mutex locked.
210 
`swap` acquires locks on both mutexes in increasing order of
object address, and then swaps the underlying data. This avoids
potential deadlock, which may otherwise happen should one thread
do `a = b` while another thread does `b = a`.
215 
The data copy assignment operator copies the parameter into the
destination data while the destination mutex is locked.

The data move assignment operator moves the parameter into the
destination data while the destination mutex is locked.

To get a copy of the guarded data, there are two methods
available: `void copy(T*)` and `T copy()`. The first copies data
to a provided target and the second returns a copy by value. Both
operations are done under a read lock. Example:
226 
``` Cpp
Synchronized<vector<string>> syncVec1, syncVec2;
vector<string> vec;

// Assign
syncVec1 = syncVec2;
// Assign straight from vector
syncVec1 = vec;

// Swap
syncVec1.swap(syncVec2);
// Swap with vector
syncVec1.swap(vec);

// Copy to given target
syncVec1.copy(&vec);
// Get a copy by value
auto copy = syncVec1.copy();
```
246 
#### `lock()`

If the mutex type used with `Synchronized` is a simple exclusive mutex
type (as opposed to a shared mutex), `Synchronized<T>` provides a
`lock()` method that returns a `LockedPtr<T>` to access the data while
holding the lock.

The `LockedPtr` object returned by `lock()` holds the lock for as long
as it exists. Whenever possible, prefer declaring a separate inner
scope for storing this variable, to make sure the `LockedPtr` is
destroyed as soon as the lock is no longer needed:

``` Cpp
void fun(Synchronized<vector<string>, std::mutex>& vec) {
  {
    auto locked = vec.lock();
    locked->push_back("hello");
    locked->push_back("world");
  }
  LOG(INFO) << "successfully added greeting";
}
```
269 
#### `wlock()` and `rlock()`

If the mutex type used with `Synchronized` is a shared mutex type,
`Synchronized<T>` provides a `wlock()` method that acquires an exclusive
lock, and an `rlock()` method that acquires a shared lock.

The `LockedPtr` returned by `rlock()` only provides `const` access to the
internal data, to ensure that it cannot be modified while only holding a
shared lock.

``` Cpp
int computeSum(const Synchronized<vector<int>>& vec) {
  int sum = 0;
  auto locked = vec.rlock();
  for (int n : *locked) {
    sum += n;
  }
  return sum;
}

void doubleValues(Synchronized<vector<int>>& vec) {
  auto locked = vec.wlock();
  for (int& n : *locked) {
    n *= 2;
  }
}
```
297 
This example brings us to a cautionary discussion. The `LockedPtr`
object returned by `lock()`, `wlock()`, or `rlock()` only holds the lock
as long as it exists. This object makes it difficult to access the data
without holding the lock, but not impossible. In particular you should
never store a raw pointer or reference to the internal data for longer
than the lifetime of the `LockedPtr` object.

For instance, if we had written the following code in the examples
above, it would have continued accessing the vector after the lock had
been released:

``` Cpp
// No. NO. NO!
for (int& n : *vec.wlock()) {
  n *= 2;
}
```
315 
The `vec.wlock()` return value is destroyed in this case as soon as the
internal range iterators are created. The range iterators point into
the vector's data, but the lock is released immediately, before the
loop body executes.

Needless to say, this is a crime punishable by long debugging nights.

Range-based for loops are slightly subtle about the lifetime of objects
used in the initializer statement. Most other problematic use cases are
a bit easier to spot than this, since the lifetime of the `LockedPtr` is
more explicitly visible.
327 
#### `withLock()`

As an alternative to the `lock()` API, `Synchronized` also provides a
`withLock()` method that executes a function or lambda expression while
holding the lock. The function receives a reference to the data as its
only argument.

This has a few benefits compared to `lock()`:

* The lambda expression requires its own nested scope, making critical
  sections more visible in the code. Callers can choose to define a new
  scope when using `lock()`, but it is not required. `withLock()` ensures
  that a new scope must always be defined.
* Because a new scope is required, `withLock()` also helps encourage
  users to release the lock as soon as possible. Because the critical
  section scope is easily visible in the code, it is harder to
  accidentally put extraneous code inside the critical section without
  realizing it.
* The separate lambda scope makes it more difficult to store raw
  pointers or references to the protected data and continue using those
  pointers outside the critical section.
350 
For example, `withLock()` makes the range-based for loop mistake from
above much harder to accidentally run into:

``` Cpp
vec.withLock([](auto& locked) {
  for (int& n : locked) {
    n *= 2;
  }
});
```

This code does not have the same problem as the counter-example with
`wlock()` above, since the lock is held for the duration of the loop.

When using `Synchronized` with a shared mutex type, it provides separate
`withWLock()` and `withRLock()` methods instead of `withLock()`.
367 
#### `ulock()` and `withULockPtr()`

`Synchronized` also supports upgrading and downgrading mutex lock levels as
long as the mutex type used to instantiate the `Synchronized` type has the
same interface as the mutex types in the C++ standard library, or if
`LockTraits` is specialized for the mutex type and the specialization is
visible. See below for an intro to upgrade mutexes.

An upgrade lock can be acquired as usual, either with the `ulock()` method or
the `withULockPtr()` method, as follows:

``` Cpp
{
  // only const access allowed to the underlying object when an upgrade lock
  // is acquired
  auto ulock = vec.ulock();
  auto newSize = ulock->size();
}

auto newSize = vec.withULockPtr([](auto ulock) {
  // only const access allowed to the underlying object when an upgrade lock
  // is acquired
  return ulock->size();
});
```
393 
An upgrade lock acquired via `ulock()` or `withULockPtr()` can be upgraded or
downgraded by calling any of the following methods on the `LockedPtr` proxy:

* `moveFromUpgradeToWrite()`
* `moveFromWriteToUpgrade()`
* `moveFromWriteToRead()`
* `moveFromUpgradeToRead()`

Calling these leaves the `LockedPtr` object on which the method was called in
an invalid `null` state and returns another `LockedPtr` proxy holding the
specified lock. The upgrade or downgrade is done atomically - the
`Synchronized` object is never in an unlocked state during the lock state
transition. For example:
407 
``` Cpp
auto ulock = obj.ulock();
if (ulock->needsUpdate()) {
  auto wlock = ulock.moveFromUpgradeToWrite();

  // ulock is now null

  wlock->updateObj();
}
```
418 
This "move" can also occur in the context of a `withULockPtr()`
(`withWLockPtr()` or `withRLockPtr()` work as well!) function, as follows:

``` Cpp
auto newSize = obj.withULockPtr([](auto ulock) {
  if (ulock->needsUpdate()) {

    // release upgrade lock and acquire write lock atomically
    auto wlock = ulock.moveFromUpgradeToWrite();
    // ulock is now null
    wlock->updateObj();

    // release write lock and acquire read lock atomically
    auto rlock = wlock.moveFromWriteToRead();
    // wlock is now null
    return rlock->newSize();

  } else {

    // release upgrade lock and acquire read lock atomically
    auto rlock = ulock.moveFromUpgradeToRead();
    // ulock is now null
    return rlock->newSize();
  }
});
```
445 
#### Intro to upgrade mutexes

An upgrade mutex is a shared mutex with an extra state called `upgrade` and an
atomic state transition from `upgrade` to `unique`. The `upgrade` state is more
powerful than the `shared` state but less powerful than the `unique` state.

An upgrade lock permits only const access to shared state for doing reads. It
does not permit mutable access to shared state for doing writes. Only a unique
lock permits mutable access for doing writes.

An upgrade lock may be held concurrently with any number of shared locks on the
same mutex. An upgrade lock is exclusive with other upgrade locks and unique
locks on the same mutex - only one upgrade lock or unique lock may be held at a
time.
460 
The upgrade mutex solves the problem of doing a read of shared state and then
optionally doing a write to shared state efficiently under contention. Consider
this scenario with a shared mutex:

``` Cpp
struct MyObject {
  bool isUpdateRequired() const;
  void doUpdate();
};

struct MyContainingObject {
  folly::Synchronized<MyObject> sync;

  void mightHappenConcurrently() {
    // first check
    if (!sync.rlock()->isUpdateRequired()) {
      return;
    }
    sync.withWLock([&](auto& state) {
      // second check
      if (!state.isUpdateRequired()) {
        return;
      }
      state.doUpdate();
    });
  }
};
```
489 
Here, the second `isUpdateRequired` check happens under a unique lock. This
means that the second check cannot be done concurrently with other threads doing
first `isUpdateRequired` checks under the shared lock, even though the second
check, like the first check, is read-only and requires only const access to the
shared state.

This may even introduce unnecessary blocking under contention. Since the default
mutex type, `folly::SharedMutex`, has write priority, the unique lock protecting
the second check may introduce unnecessary blocking to all the other threads
that are attempting to acquire a shared lock to protect the first check. This
problem is called reader starvation.
501 
One solution is to use a shared mutex type with read priority, such as
`folly::SharedMutexReadPriority`. That can introduce less blocking under
contention to the other threads attempting to acquire a shared lock to do the
first check. However, that may backfire and cause threads which are attempting
to acquire a unique lock (for the second check) to stall, waiting for a moment
in time when there are no shared locks held on the mutex, a moment in time that
may never even happen. This problem is called writer starvation.

Starvation is a tricky problem to solve in general. But we can partially
sidestep it in our case.
512 
An alternative solution is to use an upgrade lock for the second check. Threads
attempting to acquire an upgrade lock for the second check do not introduce
unnecessary blocking to all other threads that are attempting to acquire a
shared lock for the first check. Only after the second check passes, and the
upgrade lock transitions atomically from an upgrade lock to a unique lock, does
the unique lock introduce *necessary* blocking to the other threads attempting
to acquire a shared lock. With this solution, unlike the solution without the
upgrade lock, the second check may be done concurrently with all other first
checks rather than blocking or being blocked by them.

The example would then look like:
524 
``` Cpp
struct MyObject {
  bool isUpdateRequired() const;
  void doUpdate();
};

struct MyContainingObject {
  folly::Synchronized<MyObject> sync;

  void mightHappenConcurrently() {
    // first check
    if (!sync.rlock()->isUpdateRequired()) {
      return;
    }
    sync.withULockPtr([&](auto ulock) {
      // second check
      if (!ulock->isUpdateRequired()) {
        return;
      }
      auto wlock = ulock.moveFromUpgradeToWrite();
      wlock->doUpdate();
    });
  }
};
```
550 
Note: Some shared mutex implementations offer an atomic state transition from
`shared` to `unique` and some upgrade mutex implementations offer an atomic
state transition from `shared` to `upgrade`. These atomic state transitions are
dangerous, however, and can deadlock when done concurrently on the same mutex.
For example, if threads A and B both hold shared locks on a mutex and are both
attempting to transition atomically from shared to upgrade locks, the threads
are deadlocked. Likewise if they are both attempting to transition atomically
from shared to unique locks, or one is attempting to transition atomically from
shared to upgrade while the other is attempting to transition atomically from
shared to unique. Therefore, `LockTraits` does not expose either of these
dangerous atomic state transitions, even when the underlying mutex type supports
them. Likewise, `Synchronized`'s `LockedPtr` proxies do not expose these
dangerous atomic state transitions either.
564 
#### Timed Locking

When `Synchronized` is used with a mutex type that supports timed lock
acquisition, `lock()`, `wlock()`, and `rlock()` can all take an optional
`std::chrono::duration` argument. This argument specifies a timeout to
use for acquiring the lock. If the lock is not acquired before the
timeout expires, a null `LockedPtr` object will be returned. Callers
must explicitly check the return value before using it:

``` Cpp
void fun(Synchronized<vector<string>>& vec) {
  {
    auto locked = vec.wlock(10ms);
    if (!locked) {
      throw std::runtime_error("failed to acquire lock");
    }
    locked->push_back("hello");
    locked->push_back("world");
  }
  LOG(INFO) << "successfully added greeting";
}
```
587 
#### `unlock()` and `scopedUnlock()`

`Synchronized` is a good mechanism for enforcing scoped
synchronization, but it has the inherent limitation that it
requires the critical section to be, well, scoped. Sometimes the
code structure requires a fleeting "escape" from the iron fist of
synchronization, while still inside the critical section scope.

One common pattern is releasing the lock early on error code paths,
prior to logging an error message. The `LockedPtr` class provides an
`unlock()` method that makes this possible:

``` Cpp
Synchronized<map<int, string>> dic;
...
{
  auto locked = dic.rlock();
  auto iter = locked->find(0);
  if (iter == locked->end()) {
    locked.unlock(); // don't hold the lock while logging
    LOG(ERROR) << "key 0 not found";
    return false;
  }
  processValue(*iter);
}
LOG(INFO) << "succeeded";
```
615 
For more complex nested control flow scenarios, `scopedUnlock()` returns
an object that will release the lock for as long as it exists, and will
reacquire the lock when it goes out of scope.

``` Cpp
Synchronized<map<int, string>> dic;
...
{
  auto locked = dic.wlock();
  auto iter = locked->find(0);
  if (iter == locked->end()) {
    {
      auto unlocker = locked.scopedUnlock();
      LOG(INFO) << "Key 0 not found, inserting it.";
    }
    locked->emplace(0, "zero");
  } else {
    iter->second = "zero";
  }
}
```
638 
Clearly `scopedUnlock()` comes with specific caveats and
liabilities. You must assume that during the `scopedUnlock()`
section, other threads might have changed the protected structure
in arbitrary ways. In the example above, you cannot use the
iterator `iter` and you cannot assume that the key `0` is not in the
map; another thread might have inserted it while you were
bragging on `LOG(INFO)`.

Whenever a `LockedPtr` object has been unlocked, whether with `unlock()`
or `scopedUnlock()`, it will behave as if it is null. `isNull()` will
return true. Dereferencing an unlocked `LockedPtr` is not allowed and
will result in undefined behavior.
651 
#### `Synchronized` and `std::condition_variable`

When used with a `std::mutex`, `Synchronized` supports using a
`std::condition_variable` with its internal mutex. This allows a
`condition_variable` to be used to wait for a particular change to occur
in the internal data.

The `LockedPtr` returned by `Synchronized<T, std::mutex>::lock()` has a
`getUniqueLock()` method that returns a reference to a
`std::unique_lock<std::mutex>`, which can be given to the
`std::condition_variable`:

``` Cpp
Synchronized<vector<string>, std::mutex> vec;
std::condition_variable emptySignal;

// Assuming some other thread will put data on vec and signal
// emptySignal, we can then wait on it as follows:
auto locked = vec.lock();
emptySignal.wait(locked.getUniqueLock(),
                 [&] { return !locked->empty(); });
```
674 
### `acquireLocked()`

Sometimes locking just one object won't be able to cut the mustard. Consider a
function that needs to lock two `Synchronized` objects at the
same time - for example, to copy some data from one to the other.
At first sight, it looks like sequential `wlock()` calls will work just
fine:

``` Cpp
void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
  auto lockedA = a.wlock();
  auto lockedB = b.wlock();
  ... use lockedA and lockedB ...
}
```

This code compiles and may even run most of the time, but embeds
a deadly peril: if one thread calls `fun(x, y)` and another
thread calls `fun(y, x)`, then the two threads are liable to
deadlock, as each thread will be waiting for a lock the other
is holding. This issue is a classic that applies regardless of
the fact that the objects involved have the same type.
697 
This classic problem has a classic solution: all threads must
acquire locks in the same order. The actual order is not
important, just the fact that the order is the same in all
threads. Many libraries simply acquire mutexes in increasing
order of their address, which is what we'll do, too. The
`acquireLocked()` function takes care of all details of proper
locking of two objects and offering their innards. It returns a
`std::tuple` of `LockedPtr`s:

``` Cpp
void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
  auto ret = folly::acquireLocked(a, b);
  auto& lockedA = std::get<0>(ret);
  auto& lockedB = std::get<1>(ret);
  ... use lockedA and lockedB ...
}
```
715 
Note that C++17 introduces
[structured binding syntax](http://wg21.link/P0144r2),
which will make the returned tuple more convenient to use:

``` Cpp
void fun(Synchronized<vector<int>>& a, Synchronized<vector<int>>& b) {
  auto [lockedA, lockedB] = folly::acquireLocked(a, b);
  ... use lockedA and lockedB ...
}
```

An `acquireLockedPair()` function is also available, which returns a
`std::pair` instead of a `std::tuple`. This is more convenient to use
in many situations, until compiler support for structured bindings is
more widely available.
731 
### Synchronizing several data items with one mutex

The library is geared at protecting one object of a given type
with a mutex. However, sometimes we'd like to protect two or more
members with the same mutex. Consider for example a bidirectional
map, i.e. a map that holds an `int` to `string` mapping and also
the converse `string` to `int` mapping. The two maps would need
to be manipulated simultaneously. There are at least two designs
that come to mind.
741 
#### Using a nested `struct`

You can easily pack the needed data items in a little struct.
For example:

``` Cpp
class Server {
  struct BiMap {
    map<int, string> direct;
    map<string, int> inverse;
  };
  Synchronized<BiMap> bimap_;
  ...
};
...
bimap_.withLock([](auto& locked) {
  locked.direct[0] = "zero";
  locked.inverse["zero"] = 0;
});
```

With this code in tow, you get to use `bimap_` just like any other
`Synchronized` object, without much effort.
765 
#### Using `std::tuple`

If you won't stop short of using a spaceship-era approach,
`std::tuple` is there for you. The example above could be
rewritten for the same functionality like this:

``` Cpp
class Server {
  Synchronized<tuple<map<int, string>, map<string, int>>> bimap_;
  ...
};
...
bimap_.withLock([](auto& locked) {
  get<0>(locked)[0] = "zero";
  get<1>(locked)["zero"] = 0;
});
```

The code uses `std::get` with compile-time integers to access the
fields in the tuple. The relative advantages and disadvantages of
using a local struct vs. `std::tuple` are quite obvious - in the
first case you need to invest in the definition, in the second
case you need to put up with slightly more verbose and less clear
access syntax.
790 
### Summary

`Synchronized` and its supporting tools offer you a simple,
robust paradigm for mutual exclusion-based concurrency. Instead
of manually pairing data with the mutexes that protect it and
relying on convention to use them appropriately, you can benefit
from encapsulation and typechecking to offload a large part of that
task and to provide good guarantees.