1 
2 
3 Now that you have read [Primer](Primer.md) and learned how to write tests
4 using Google Test, it's time to learn some new tricks. This document
5 will show you more assertions as well as how to construct complex
6 failure messages, propagate fatal failures, reuse and speed up your
7 test fixtures, and use various flags with your tests.
8 
9 # More Assertions #
10 
11 This section covers some less frequently used, but still significant,
12 assertions.
13 
14 ## Explicit Success and Failure ##
15 
16 These three assertions do not actually test a value or expression. Instead,
17 they generate a success or failure directly. Like the macros that actually
perform a test, you may stream a custom failure message into them.
19 
20 | `SUCCEED();` |
21 |:-------------|
22 
23 Generates a success. This does NOT make the overall test succeed. A test is
24 considered successful only if none of its assertions fail during its execution.
25 
26 Note: `SUCCEED()` is purely documentary and currently doesn't generate any
27 user-visible output. However, we may add `SUCCEED()` messages to Google Test's
28 output in the future.
29 
30 | `FAIL();` | `ADD_FAILURE();` | `ADD_FAILURE_AT("`_file\_path_`", `_line\_number_`);` |
31 |:-----------|:-----------------|:------------------------------------------------------|
32 
33 `FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()` generate a nonfatal
34 failure. These are useful when control flow, rather than a Boolean expression,
determines the test's success or failure. For example, you might want to write
36 something like:
37 
38 ```
39 switch(expression) {
40  case 1: ... some checks ...
41  case 2: ... some other checks
42  ...
43  default: FAIL() << "We shouldn't get here.";
44 }
45 ```
46 
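For example, `ADD_FAILURE_AT()` is handy in test helpers that should attribute a failure to their call site. A minimal sketch (the helper and `ComputeSize()` are hypothetical):

```
// Reports a non-fatal failure attributed to the caller's file and line.
void ExpectPositive(int value, const char* file, int line) {
  if (value <= 0) {
    ADD_FAILURE_AT(file, line) << "Expected a positive value, got " << value;
  }
}

TEST(HelperTest, ReportsAtCallSite) {
  ExpectPositive(ComputeSize(), __FILE__, __LINE__);  // ComputeSize() is hypothetical.
}
```
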
47 Note: you can only use `FAIL()` in functions that return `void`. See the [Assertion Placement section](#assertion-placement) for more information.
48 
49 _Availability_: Linux, Windows, Mac.
50 
51 ## Exception Assertions ##
52 
53 These are for verifying that a piece of code throws (or does not
54 throw) an exception of the given type:
55 
56 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
57 |:--------------------|:-----------------------|:-------------|
58 | `ASSERT_THROW(`_statement_, _exception\_type_`);` | `EXPECT_THROW(`_statement_, _exception\_type_`);` | _statement_ throws an exception of the given type |
59 | `ASSERT_ANY_THROW(`_statement_`);` | `EXPECT_ANY_THROW(`_statement_`);` | _statement_ throws an exception of any type |
60 | `ASSERT_NO_THROW(`_statement_`);` | `EXPECT_NO_THROW(`_statement_`);` | _statement_ doesn't throw any exception |
61 
62 Examples:
63 
64 ```
65 ASSERT_THROW(Foo(5), bar_exception);
66 
67 EXPECT_NO_THROW({
68  int n = 5;
69  Bar(&n);
70 });
71 ```
72 
73 _Availability_: Linux, Windows, Mac; since version 1.1.0.
74 
75 ## Predicate Assertions for Better Error Messages ##
76 
77 Even though Google Test has a rich set of assertions, they can never be
complete, as it's impossible (and not a good idea) to anticipate all the scenarios
79 a user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()`
80 to check a complex expression, for lack of a better macro. This has the problem
81 of not showing you the values of the parts of the expression, making it hard to
82 understand what went wrong. As a workaround, some users choose to construct the
83 failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this
is awkward, especially when the expression has side-effects or is expensive to
85 evaluate.
86 
87 Google Test gives you three different options to solve this problem:
88 
89 ### Using an Existing Boolean Function ###
90 
91 If you already have a function or a functor that returns `bool` (or a type
92 that can be implicitly converted to `bool`), you can use it in a _predicate
93 assertion_ to get the function arguments printed for free:
94 
95 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
96 |:--------------------|:-----------------------|:-------------|
97 | `ASSERT_PRED1(`_pred1, val1_`);` | `EXPECT_PRED1(`_pred1, val1_`);` | _pred1(val1)_ returns true |
98 | `ASSERT_PRED2(`_pred2, val1, val2_`);` | `EXPECT_PRED2(`_pred2, val1, val2_`);` | _pred2(val1, val2)_ returns true |
99 | ... | ... | ... |
100 
101 In the above, _predn_ is an _n_-ary predicate function or functor, where
102 _val1_, _val2_, ..., and _valn_ are its arguments. The assertion succeeds
103 if the predicate returns `true` when applied to the given arguments, and fails
104 otherwise. When the assertion fails, it prints the value of each argument. In
105 either case, the arguments are evaluated exactly once.
106 
107 Here's an example. Given
108 
109 ```
110 // Returns true iff m and n have no common divisors except 1.
111 bool MutuallyPrime(int m, int n) { ... }
112 const int a = 3;
113 const int b = 4;
114 const int c = 10;
115 ```
116 
117 the assertion `EXPECT_PRED2(MutuallyPrime, a, b);` will succeed, while the
118 assertion `EXPECT_PRED2(MutuallyPrime, b, c);` will fail with the message
119 
120 <pre>
121 !MutuallyPrime(b, c) is false, where<br>
122 b is 4<br>
123 c is 10<br>
124 </pre>
125 
126 **Notes:**
127 
128  1. If you see a compiler error "no matching function to call" when using `ASSERT_PRED*` or `EXPECT_PRED*`, please see [this FAQ](FAQ.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert_predn-how-do-i-fix-it) for how to resolve it.
129  1. Currently we only provide predicate assertions of arity <= 5. If you need a higher-arity assertion, let us know.
130 
131 _Availability_: Linux, Windows, Mac
132 
133 ### Using a Function That Returns an AssertionResult ###
134 
135 While `EXPECT_PRED*()` and friends are handy for a quick job, the
136 syntax is not satisfactory: you have to use different macros for
137 different arities, and it feels more like Lisp than C++. The
138 `::testing::AssertionResult` class solves this problem.
139 
140 An `AssertionResult` object represents the result of an assertion
141 (whether it's a success or a failure, and an associated message). You
142 can create an `AssertionResult` using one of these factory
143 functions:
144 
145 ```
146 namespace testing {
147 
148 // Returns an AssertionResult object to indicate that an assertion has
149 // succeeded.
150 AssertionResult AssertionSuccess();
151 
152 // Returns an AssertionResult object to indicate that an assertion has
153 // failed.
154 AssertionResult AssertionFailure();
155 
156 }
157 ```
158 
159 You can then use the `<<` operator to stream messages to the
160 `AssertionResult` object.
161 
162 To provide more readable messages in Boolean assertions
163 (e.g. `EXPECT_TRUE()`), write a predicate function that returns
164 `AssertionResult` instead of `bool`. For example, if you define
165 `IsEven()` as:
166 
167 ```
168 ::testing::AssertionResult IsEven(int n) {
169  if ((n % 2) == 0)
170  return ::testing::AssertionSuccess();
171  else
172  return ::testing::AssertionFailure() << n << " is odd";
173 }
174 ```
175 
176 instead of:
177 
178 ```
179 bool IsEven(int n) {
180  return (n % 2) == 0;
181 }
182 ```
183 
184 the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:
185 
186 <pre>
187 Value of: IsEven(Fib(4))<br>
188 Actual: false (*3 is odd*)<br>
189 Expected: true<br>
190 </pre>
191 
192 instead of a more opaque
193 
194 <pre>
195 Value of: IsEven(Fib(4))<br>
196 Actual: false<br>
197 Expected: true<br>
198 </pre>
199 
200 If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE`
201 as well, and are fine with making the predicate slower in the success
202 case, you can supply a success message:
203 
204 ```
205 ::testing::AssertionResult IsEven(int n) {
206  if ((n % 2) == 0)
207  return ::testing::AssertionSuccess() << n << " is even";
208  else
209  return ::testing::AssertionFailure() << n << " is odd";
210 }
211 ```
212 
213 Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print
214 
215 <pre>
216 Value of: IsEven(Fib(6))<br>
217 Actual: true (8 is even)<br>
218 Expected: false<br>
219 </pre>
220 
221 _Availability_: Linux, Windows, Mac; since version 1.4.1.
222 
223 ### Using a Predicate-Formatter ###
224 
225 If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
226 `(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
227 predicate do not support streaming to `ostream`, you can instead use the
228 following _predicate-formatter assertions_ to _fully_ customize how the
229 message is formatted:
230 
231 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
232 |:--------------------|:-----------------------|:-------------|
233 | `ASSERT_PRED_FORMAT1(`_pred\_format1, val1_`);` | `EXPECT_PRED_FORMAT1(`_pred\_format1, val1_`);` | _pred\_format1(val1)_ is successful |
234 | `ASSERT_PRED_FORMAT2(`_pred\_format2, val1, val2_`);` | `EXPECT_PRED_FORMAT2(`_pred\_format2, val1, val2_`);` | _pred\_format2(val1, val2)_ is successful |
235 | `...` | `...` | `...` |
236 
237 The difference between this and the previous two groups of macros is that instead of
238 a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a _predicate-formatter_
239 (_pred\_formatn_), which is a function or functor with the signature:
240 
241 `::testing::AssertionResult PredicateFormattern(const char* `_expr1_`, const char* `_expr2_`, ... const char* `_exprn_`, T1 `_val1_`, T2 `_val2_`, ... Tn `_valn_`);`
242 
243 where _val1_, _val2_, ..., and _valn_ are the values of the predicate
244 arguments, and _expr1_, _expr2_, ..., and _exprn_ are the corresponding
245 expressions as they appear in the source code. The types `T1`, `T2`, ..., and
246 `Tn` can be either value types or reference types. For example, if an
247 argument has type `Foo`, you can declare it as either `Foo` or `const Foo&`,
248 whichever is appropriate.
249 
A predicate-formatter returns a `::testing::AssertionResult` object to indicate
whether the assertion has succeeded. The only way to create such an
object is to call one of the factory functions (`AssertionSuccess()` and
`AssertionFailure()`) described earlier.
253 
254 As an example, let's improve the failure message in the previous example, which uses `EXPECT_PRED2()`:
255 
256 ```
257 // Returns the smallest prime common divisor of m and n,
258 // or 1 when m and n are mutually prime.
259 int SmallestPrimeCommonDivisor(int m, int n) { ... }
260 
261 // A predicate-formatter for asserting that two integers are mutually prime.
262 ::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
263  const char* n_expr,
264  int m,
265  int n) {
266  if (MutuallyPrime(m, n))
267  return ::testing::AssertionSuccess();
268 
269  return ::testing::AssertionFailure()
270  << m_expr << " and " << n_expr << " (" << m << " and " << n
271  << ") are not mutually prime, " << "as they have a common divisor "
272  << SmallestPrimeCommonDivisor(m, n);
273 }
274 ```
275 
276 With this predicate-formatter, we can use
277 
278 ```
279 EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
280 ```
281 
282 to generate the message
283 
284 <pre>
285 b and c (4 and 10) are not mutually prime, as they have a common divisor 2.<br>
286 </pre>
287 
288 As you may have realized, many of the assertions we introduced earlier are
289 special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
290 indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.
291 
292 _Availability_: Linux, Windows, Mac.
293 
294 
295 ## Floating-Point Comparison ##
296 
297 Comparing floating-point numbers is tricky. Due to round-off errors, it is
very unlikely that two floating-point values will match exactly. Therefore,
`ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-point
values can span a wide range, no single fixed error bound works. It's better to
301 compare by a fixed relative error bound, except for values close to 0 due to
302 the loss of precision there.
303 
304 In general, for floating-point comparison to make sense, the user needs to
305 carefully choose the error bound. If they don't want or care to, comparing in
306 terms of Units in the Last Place (ULPs) is a good default, and Google Test
307 provides assertions to do this. Full details about ULPs are quite long; if you
308 want to learn more, see
309 [this article on float comparison](http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm).
310 
311 ### Floating-Point Macros ###
312 
313 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
314 |:--------------------|:-----------------------|:-------------|
315 | `ASSERT_FLOAT_EQ(`_val1, val2_`);` | `EXPECT_FLOAT_EQ(`_val1, val2_`);` | the two `float` values are almost equal |
316 | `ASSERT_DOUBLE_EQ(`_val1, val2_`);` | `EXPECT_DOUBLE_EQ(`_val1, val2_`);` | the two `double` values are almost equal |
317 
By "almost equal", we mean the two values are within 4 ULPs of each
other.
320 
321 The following assertions allow you to choose the acceptable error bound:
322 
323 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
324 |:--------------------|:-----------------------|:-------------|
| `ASSERT_NEAR(`_val1, val2, abs\_error_`);` | `EXPECT_NEAR(`_val1, val2, abs\_error_`);` | the difference between _val1_ and _val2_ doesn't exceed the given absolute error |
326 
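For instance, a sketch of how these macros might be used in a test (`FastSqrt()` and `FastSqrtF()` are hypothetical functions):

```
TEST(SquareRootTest, AlmostEqual) {
  // "Almost equal" means within 4 ULPs of the expected value.
  EXPECT_FLOAT_EQ(3.0f, FastSqrtF(9.0f));
  EXPECT_DOUBLE_EQ(3.0, FastSqrt(9.0));

  // Or pick an explicit absolute error bound instead.
  EXPECT_NEAR(3.0, FastSqrt(9.0), 1e-6);
}
```
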
327 _Availability_: Linux, Windows, Mac.
328 
329 ### Floating-Point Predicate-Format Functions ###
330 
331 Some floating-point operations are useful, but not that often used. In order
332 to avoid an explosion of new macros, we provide them as predicate-format
333 functions that can be used in predicate assertion macros (e.g.
334 `EXPECT_PRED_FORMAT2`, etc).
335 
336 ```
337 EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
338 EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
339 ```
340 
341 Verifies that _val1_ is less than, or almost equal to, _val2_. You can
342 replace `EXPECT_PRED_FORMAT2` in the above table with `ASSERT_PRED_FORMAT2`.
343 
344 _Availability_: Linux, Windows, Mac.
345 
346 ## Windows HRESULT assertions ##
347 
348 These assertions test for `HRESULT` success or failure.
349 
350 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
351 |:--------------------|:-----------------------|:-------------|
352 | `ASSERT_HRESULT_SUCCEEDED(`_expression_`);` | `EXPECT_HRESULT_SUCCEEDED(`_expression_`);` | _expression_ is a success `HRESULT` |
353 | `ASSERT_HRESULT_FAILED(`_expression_`);` | `EXPECT_HRESULT_FAILED(`_expression_`);` | _expression_ is a failure `HRESULT` |
354 
355 The generated output contains the human-readable error message
356 associated with the `HRESULT` code returned by _expression_.
357 
358 You might use them like this:
359 
360 ```
CComPtr<IShellDispatch2> shell;
362 ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
363 CComVariant empty;
364 ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
365 ```
366 
367 _Availability_: Windows.
368 
369 ## Type Assertions ##
370 
371 You can call the function
372 ```
373 ::testing::StaticAssertTypeEq<T1, T2>();
374 ```
375 to assert that types `T1` and `T2` are the same. The function does
376 nothing if the assertion is satisfied. If the types are different,
377 the function call will fail to compile, and the compiler error message
378 will likely (depending on the compiler) show you the actual values of
379 `T1` and `T2`. This is mainly useful inside template code.
380 
381 _Caveat:_ When used inside a member function of a class template or a
382 function template, `StaticAssertTypeEq<T1, T2>()` is effective _only if_
383 the function is instantiated. For example, given:
384 ```
385 template <typename T> class Foo {
386  public:
387  void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
388 };
389 ```
390 the code:
391 ```
392 void Test1() { Foo<bool> foo; }
393 ```
394 will _not_ generate a compiler error, as `Foo<bool>::Bar()` is never
395 actually instantiated. Instead, you need:
396 ```
397 void Test2() { Foo<bool> foo; foo.Bar(); }
398 ```
399 to cause a compiler error.
400 
401 _Availability:_ Linux, Windows, Mac; since version 1.3.0.
402 
403 ## Assertion Placement ##
404 
405 You can use assertions in any C++ function. In particular, it doesn't
406 have to be a method of the test fixture class. The one constraint is
407 that assertions that generate a fatal failure (`FAIL*` and `ASSERT_*`)
408 can only be used in void-returning functions. This is a consequence of
Google Test not using exceptions. If you place such an assertion in a
non-void function, you'll get a confusing compile error like
411 `"error: void value not ignored as it ought to be"`.
412 
413 If you need to use assertions in a function that returns non-void, one option
414 is to make the function return the value in an out parameter instead. For
415 example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
416 need to make sure that `*result` contains some sensible value even when the
417 function returns prematurely. As the function now returns `void`, you can use
418 any assertion inside of it.
419 
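A minimal sketch of this rewrite (the `Parse()` function and its behavior are hypothetical):

```
// T2 Foo(T1 x) rewritten as void Foo(T1 x, T2* result) so that fatal
// assertions may be used inside it.
void Parse(const char* input, int* result) {
  *result = 0;  // Keep *result sensible even if we return early.
  ASSERT_TRUE(input != NULL) << "input must not be NULL";
  *result = input[0] - '0';  // Parse a single digit (toy logic for the sketch).
}

TEST(ParserTest, ParsesDigit) {
  int n = 0;
  Parse("4", &n);
  EXPECT_EQ(4, n);
}
```
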
420 If changing the function's type is not an option, you should just use
421 assertions that generate non-fatal failures, such as `ADD_FAILURE*` and
422 `EXPECT_*`.
423 
424 _Note_: Constructors and destructors are not considered void-returning
425 functions, according to the C++ language specification, and so you may not use
426 fatal assertions in them. You'll get a compilation error if you try. A simple
427 workaround is to transfer the entire body of the constructor or destructor to a
428 private void-returning method. However, you should be aware that a fatal
429 assertion failure in a constructor does not terminate the current test, as your
430 intuition might suggest; it merely returns from the constructor early, possibly
431 leaving your object in a partially-constructed state. Likewise, a fatal
432 assertion failure in a destructor may leave your object in a
433 partially-destructed state. Use assertions carefully in these situations!
434 
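A sketch of the workaround described above (`WidgetTest` and `LoadTestData()` are hypothetical):

```
class WidgetTest : public ::testing::Test {
 protected:
  WidgetTest() { Init(); }

 private:
  // Fatal assertions compile here because Init() returns void. Note that a
  // failure only returns from Init(); the constructor still completes.
  void Init() {
    data_ = LoadTestData();
    ASSERT_FALSE(data_.empty()) << "no test data loaded";
  }

  std::vector<int> data_;
};
```
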
435 # Teaching Google Test How to Print Your Values #
436 
437 When a test assertion such as `EXPECT_EQ` fails, Google Test prints the
438 argument values to help you debug. It does this using a
439 user-extensible value printer.
440 
441 This printer knows how to print built-in C++ types, native arrays, STL
442 containers, and any type that supports the `<<` operator. For other
types, it prints the raw bytes in the value and hopes that you, the
user, can figure it out.
445 
446 As mentioned earlier, the printer is _extensible_. That means
447 you can teach it to do a better job at printing your particular type
than dumping the bytes. To do that, define `<<` for your type:
449 
450 ```
451 #include <iostream>
452 
453 namespace foo {
454 
455 class Bar { ... }; // We want Google Test to be able to print instances of this.
456 
457 // It's important that the << operator is defined in the SAME
458 // namespace that defines Bar. C++'s look-up rules rely on that.
459 ::std::ostream& operator<<(::std::ostream& os, const Bar& bar) {
460  return os << bar.DebugString(); // whatever needed to print bar to os
461 }
462 
463 } // namespace foo
464 ```
465 
466 Sometimes, this might not be an option: your team may consider it bad
467 style to have a `<<` operator for `Bar`, or `Bar` may already have a
468 `<<` operator that doesn't do what you want (and you cannot change
469 it). If so, you can instead define a `PrintTo()` function like this:
470 
471 ```
472 #include <iostream>
473 
474 namespace foo {
475 
476 class Bar { ... };
477 
478 // It's important that PrintTo() is defined in the SAME
479 // namespace that defines Bar. C++'s look-up rules rely on that.
480 void PrintTo(const Bar& bar, ::std::ostream* os) {
481  *os << bar.DebugString(); // whatever needed to print bar to os
482 }
483 
484 } // namespace foo
485 ```
486 
487 If you have defined both `<<` and `PrintTo()`, the latter will be used
as far as Google Test is concerned. This allows you to customize how the value
489 appears in Google Test's output without affecting code that relies on the
490 behavior of its `<<` operator.
491 
492 If you want to print a value `x` using Google Test's value printer
493 yourself, just call `::testing::PrintToString(`_x_`)`, which
494 returns an `std::string`:
495 
496 ```
497 vector<pair<Bar, int> > bar_ints = GetBarIntVector();
498 
499 EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
500  << "bar_ints = " << ::testing::PrintToString(bar_ints);
501 ```
502 
503 # Death Tests #
504 
505 In many applications, there are assertions that can cause application failure
506 if a condition is not met. These sanity checks, which ensure that the program
507 is in a known good state, are there to fail at the earliest possible time after
508 some program state is corrupted. If the assertion checks the wrong condition,
509 then the program may proceed in an erroneous state, which could lead to memory
510 corruption, security holes, or worse. Hence it is vitally important to test
511 that such assertion statements work as expected.
512 
Since these precondition checks cause the process to die, we call such tests
514 _death tests_. More generally, any test that checks that a program terminates
515 (except by throwing an exception) in an expected fashion is also a death test.
516 
517 Note that if a piece of code throws an exception, we don't consider it "death"
518 for the purpose of death tests, as the caller of the code could catch the exception
519 and avoid the crash. If you want to verify exceptions thrown by your code,
520 see [Exception Assertions](#exception-assertions).
521 
522 If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see [Catching Failures](#catching-failures).
523 
524 ## How to Write a Death Test ##
525 
526 Google Test has the following macros to support death tests:
527 
528 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
529 |:--------------------|:-----------------------|:-------------|
530 | `ASSERT_DEATH(`_statement, regex_`);` | `EXPECT_DEATH(`_statement, regex_`);` | _statement_ crashes with the given error |
531 | `ASSERT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | `EXPECT_DEATH_IF_SUPPORTED(`_statement, regex_`);` | if death tests are supported, verifies that _statement_ crashes with the given error; otherwise verifies nothing |
532 | `ASSERT_EXIT(`_statement, predicate, regex_`);` | `EXPECT_EXIT(`_statement, predicate, regex_`);` |_statement_ exits with the given error and its exit code matches _predicate_ |
533 
534 where _statement_ is a statement that is expected to cause the process to
535 die, _predicate_ is a function or function object that evaluates an integer
536 exit status, and _regex_ is a regular expression that the stderr output of
537 _statement_ is expected to match. Note that _statement_ can be _any valid
538 statement_ (including _compound statement_) and doesn't have to be an
539 expression.
540 
541 As usual, the `ASSERT` variants abort the current test function, while the
542 `EXPECT` variants do not.
543 
544 **Note:** We use the word "crash" here to mean that the process
545 terminates with a _non-zero_ exit status code. There are two
546 possibilities: either the process has called `exit()` or `_exit()`
547 with a non-zero value, or it may be killed by a signal.
548 
549 This means that if _statement_ terminates the process with a 0 exit
550 code, it is _not_ considered a crash by `EXPECT_DEATH`. Use
551 `EXPECT_EXIT` instead if this is the case, or if you want to restrict
552 the exit code more precisely.
553 
554 A predicate here must accept an `int` and return a `bool`. The death test
555 succeeds only if the predicate returns `true`. Google Test defines a few
556 predicates that handle the most common cases:
557 
558 ```
559 ::testing::ExitedWithCode(exit_code)
560 ```
561 
562 This expression is `true` if the program exited normally with the given exit
563 code.
564 
565 ```
566 ::testing::KilledBySignal(signal_number) // Not available on Windows.
567 ```
568 
569 This expression is `true` if the program was killed by the given signal.
570 
571 The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
572 that verifies the process' exit code is non-zero.
573 
574 Note that a death test only cares about three things:
575 
576  1. does _statement_ abort or exit the process?
577  1. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status satisfy _predicate_? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`) is the exit status non-zero? And
578  1. does the stderr output match _regex_?
579 
580 In particular, if _statement_ generates an `ASSERT_*` or `EXPECT_*` failure, it will **not** cause the death test to fail, as Google Test assertions don't abort the process.
581 
582 To write a death test, simply use one of the above macros inside your test
583 function. For example,
584 
585 ```
586 TEST(MyDeathTest, Foo) {
587  // This death test uses a compound statement.
588  ASSERT_DEATH({ int n = 5; Foo(&n); }, "Error on line .* of Foo()");
589 }
590 TEST(MyDeathTest, NormalExit) {
591  EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
592 }
593 TEST(MyDeathTest, KillMyself) {
594  EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL), "Sending myself unblockable signal");
595 }
596 ```
597 
598 verifies that:
599 
 * calling `Foo()` with `n = 5` causes the process to die with the given error message,
601  * calling `NormalExit()` causes the process to print `"Success"` to stderr and exit with exit code 0, and
602  * calling `KillMyself()` kills the process with signal `SIGKILL`.
603 
604 The test function body may contain other assertions and statements as well, if
605 necessary.
606 
_Important:_ We strongly recommend that you follow the convention of naming your
608 test case (not test) `*DeathTest` when it contains a death test, as
609 demonstrated in the above example. The `Death Tests And Threads` section below
610 explains why.
611 
612 If a test fixture class is shared by normal tests and death tests, you
613 can use typedef to introduce an alias for the fixture class and avoid
614 duplicating its code:
615 ```
616 class FooTest : public ::testing::Test { ... };
617 
618 typedef FooTest FooDeathTest;
619 
620 TEST_F(FooTest, DoesThis) {
621  // normal test
622 }
623 
624 TEST_F(FooDeathTest, DoesThat) {
625  // death test
626 }
627 ```
628 
629 _Availability:_ Linux, Windows (requires MSVC 8.0 or above), Cygwin, and Mac (the latter three are supported since v1.3.0). `(ASSERT|EXPECT)_DEATH_IF_SUPPORTED` are new in v1.4.0.
630 
631 ## Regular Expression Syntax ##
632 
633 On POSIX systems (e.g. Linux, Cygwin, and Mac), Google Test uses the
634 [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
635 syntax in death tests. To learn about this syntax, you may want to read this [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
636 
637 On Windows, Google Test uses its own simple regular expression
638 implementation. It lacks many features you can find in POSIX extended
639 regular expressions. For example, we don't support union (`"x|y"`),
640 grouping (`"(xy)"`), brackets (`"[xy]"`), and repetition count
(`"x{5,7}"`), among others. Below is what we do support (`A` denotes a
literal character, a period (`.`), or a single `\\` escape sequence; `x`
and `y` denote regular expressions):
644 
645 | `c` | matches any literal character `c` |
646 |:----|:----------------------------------|
647 | `\\d` | matches any decimal digit |
648 | `\\D` | matches any character that's not a decimal digit |
649 | `\\f` | matches `\f` |
650 | `\\n` | matches `\n` |
651 | `\\r` | matches `\r` |
652 | `\\s` | matches any ASCII whitespace, including `\n` |
653 | `\\S` | matches any character that's not a whitespace |
654 | `\\t` | matches `\t` |
655 | `\\v` | matches `\v` |
656 | `\\w` | matches any letter, `_`, or decimal digit |
657 | `\\W` | matches any character that `\\w` doesn't match |
| `\\c` | matches any literal character `c`, which must be a punctuation character |
659 | `\\.` | matches the `.` character |
660 | `.` | matches any single character except `\n` |
661 | `A?` | matches 0 or 1 occurrences of `A` |
662 | `A*` | matches 0 or many occurrences of `A` |
663 | `A+` | matches 1 or many occurrences of `A` |
664 | `^` | matches the beginning of a string (not that of each line) |
665 | `$` | matches the end of a string (not that of each line) |
666 | `xy` | matches `x` followed by `y` |
667 
668 To help you determine which capability is available on your system,
669 Google Test defines macro `GTEST_USES_POSIX_RE=1` when it uses POSIX
670 extended regular expressions, or `GTEST_USES_SIMPLE_RE=1` when it uses
671 the simple version. If you want your death tests to work in both
672 cases, you can either `#if` on these macros or use the more limited
673 syntax only.
674 
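For example, a hedged sketch of keeping a death test portable across the two regex implementations (`AllocateHugeBuffer()` is a hypothetical function):

```
#if GTEST_USES_POSIX_RE
// POSIX extended syntax: alternation is available.
const char kDeathPattern[] = "(Out of memory)|(Invalid argument)";
#else  // GTEST_USES_SIMPLE_RE
// The simple syntax has no '|', so settle for a single literal.
const char kDeathPattern[] = "Out of memory";
#endif

TEST(BufferDeathTest, DiesOnHugeAllocation) {
  EXPECT_DEATH(AllocateHugeBuffer(), kDeathPattern);
}
```
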
675 ## How It Works ##
676 
677 Under the hood, `ASSERT_EXIT()` spawns a new process and executes the
death test statement in that process. The details of how precisely
679 that happens depend on the platform and the variable
680 `::testing::GTEST_FLAG(death_test_style)` (which is initialized from the
681 command-line flag `--gtest_death_test_style`).
682 
 * On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the child, after which:
   * If the variable's value is `"fast"`, the death test statement is immediately executed.
   * If the variable's value is `"threadsafe"`, the child process re-executes the unit test binary just as it was originally invoked, but with some extra flags to cause just the single death test under consideration to be run.
 * On Windows, the child is spawned using the `CreateProcess()` API, and re-executes the binary to cause just the single death test under consideration to be run - much like the `threadsafe` mode on POSIX.
687 
688 Other values for the variable are illegal and will cause the death test to
689 fail. Currently, the flag's default value is `"fast"`. However, we reserve the
690 right to change it in the future. Therefore, your tests should not depend on
691 this.
692 
693 In either case, the parent process waits for the child process to complete, and checks that
694 
695  1. the child's exit status satisfies the predicate, and
696  1. the child's stderr matches the regular expression.
697 
698 If the death test statement runs to completion without dying, the child
699 process will nonetheless terminate, and the assertion fails.
700 
701 ## Death Tests And Threads ##
702 
703 The reason for the two death test styles has to do with thread safety. Due to
704 well-known problems with forking in the presence of threads, death tests should
705 be run in a single-threaded context. Sometimes, however, it isn't feasible to
706 arrange that kind of environment. For example, statically-initialized modules
707 may start threads before main is ever reached. Once threads have been created,
708 it may be difficult or impossible to clean them up.
709 
710 Google Test has three features intended to raise awareness of threading issues.
711 
712  1. A warning is emitted if multiple threads are running when a death test is encountered.
713  1. Test cases with a name ending in "DeathTest" are run before all other tests.
714  1. It uses `clone()` instead of `fork()` to spawn the child process on Linux (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely to cause the child to hang when the parent process has multiple threads.
715 
716 It's perfectly fine to create threads inside a death test statement; they are
717 executed in a separate process and cannot affect the parent.
718 
719 ## Death Test Styles ##
720 
721 The "threadsafe" death test style was introduced in order to help mitigate the
722 risks of testing in a possibly multithreaded environment. It trades increased
723 test execution time (potentially dramatically so) for improved thread safety.
724 We suggest using the faster, default "fast" style unless your test has specific
725 problems with it.
726 
727 You can choose a particular style of death tests by setting the flag
728 programmatically:
729 
730 ```
731 ::testing::FLAGS_gtest_death_test_style = "threadsafe";
732 ```
733 
734 You can do this in `main()` to set the style for all death tests in the
735 binary, or in individual tests. Recall that flags are saved before running each
736 test and restored afterwards, so you need not do that yourself. For example:
737 
738 ```
739 TEST(MyDeathTest, TestOne) {
740  ::testing::FLAGS_gtest_death_test_style = "threadsafe";
741  // This test is run in the "threadsafe" style:
742  ASSERT_DEATH(ThisShouldDie(), "");
743 }
744 
745 TEST(MyDeathTest, TestTwo) {
746  // This test is run in the "fast" style:
747  ASSERT_DEATH(ThisShouldDie(), "");
748 }
749 
750 int main(int argc, char** argv) {
751  ::testing::InitGoogleTest(&argc, argv);
752  ::testing::FLAGS_gtest_death_test_style = "fast";
753  return RUN_ALL_TESTS();
754 }
755 ```
756 
757 ## Caveats ##
758 
759 The _statement_ argument of `ASSERT_EXIT()` can be any valid C++ statement.
760 If it leaves the current function via a `return` statement or by throwing an exception,
761 the death test is considered to have failed. Some Google Test macros may return
762 from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid them in _statement_.
763 
764 Since _statement_ runs in the child process, any in-memory side effect (e.g.
765 modifying a variable, releasing memory, etc) it causes will _not_ be observable
766 in the parent process. In particular, if you release memory in a death test,
767 your program will fail the heap check as the parent process will never see the
768 memory reclaimed. To solve this problem, you can
769 
770  1. try not to free memory in a death test;
771  1. free the memory again in the parent process; or
772  1. do not use the heap checker in your program.
773 
774 Due to an implementation detail, you cannot place multiple death test
775 assertions on the same line; otherwise, compilation will fail with an unobvious
776 error message.
777 
778 Despite the improved thread safety afforded by the "threadsafe" style of death
779 test, thread problems such as deadlock are still possible in the presence of
780 handlers registered with `pthread_atfork(3)`.
781 
782 # Using Assertions in Sub-routines #
783 
784 ## Adding Traces to Assertions ##
785 
786 If a test sub-routine is called from several places, when an assertion
787 inside it fails, it can be hard to tell which invocation of the
788 sub-routine the failure is from. You can alleviate this problem using
789 extra logging or custom failure messages, but that usually clutters up
790 your tests. A better solution is to use the `SCOPED_TRACE` macro:
791 
792 | `SCOPED_TRACE(`_message_`);` |
793 |:-----------------------------|
794 
795 where _message_ can be anything streamable to `std::ostream`. This
796 macro will cause the current file name, line number, and the given
message to be added to every failure message. The effect will be
undone when control leaves the current lexical scope.
799 
800 For example,
801 
802 ```
10: void Sub1(int n) {
11:   EXPECT_EQ(1, Bar(n));
12:   EXPECT_EQ(2, Bar(n + 1));
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
817 ```
818 
819 could result in messages like these:
820 
821 ```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
  Trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
833 ```
834 
835 Without the trace, it would've been difficult to know which invocation
836 of `Sub1()` the two failures come from respectively. (You could add an
837 extra message to each assertion in `Sub1()` to indicate the value of
838 `n`, but that's tedious.)
839 
840 Some tips on using `SCOPED_TRACE`:
841 
842  1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the beginning of a sub-routine, instead of at each call site.
 1. When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` so that you can tell which iteration the failure is from (see the sketch after this list).
844  1. Sometimes the line number of the trace point is enough for identifying the particular invocation of a sub-routine. In this case, you don't have to choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
 1. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer scope. In this case, all active trace points will be included in the failure messages, in the reverse order in which they are encountered.
846  1. The trace dump is clickable in Emacs' compilation buffer - hit return on a line number and you'll be taken to that line in the source file!
847 
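As an illustration of tip 2, a minimal sketch that reuses `Sub1()` from the earlier example:

```
TEST(FooTest, HandlesRange) {
  for (int i = 0; i < 5; i++) {
    // Include the loop iterator so a failure names its iteration.
    SCOPED_TRACE(::testing::Message() << "i = " << i);
    Sub1(i);
  }
}
```
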
848 _Availability:_ Linux, Windows, Mac.
849 
850 ## Propagating Fatal Failures ##
851 
852 A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
853 when they fail they only abort the _current function_, not the entire test. For
854 example, the following test will segfault:
855 ```
856 void Subroutine() {
857  // Generates a fatal failure and aborts the current function.
858  ASSERT_EQ(1, 2);
859  // The following won't be executed.
860  ...
861 }
862 
863 TEST(FooTest, Bar) {
864  Subroutine();
865  // The intended behavior is for the fatal failure
866  // in Subroutine() to abort the entire test.
867  // The actual behavior: the function goes on after Subroutine() returns.
868  int* p = NULL;
869  *p = 3; // Segfault!
870 }
871 ```
872 
873 Since we don't use exceptions, it is technically impossible to
874 implement the intended behavior here. To alleviate this, Google Test
875 provides two solutions. You could use either the
876 `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions or the
877 `HasFatalFailure()` function. They are described in the following two
878 subsections.
879 
880 ### Asserting on Subroutines ###
881 
882 As shown above, if your test calls a subroutine that has an `ASSERT_*`
883 failure in it, the test will continue after the subroutine
884 returns. This may not be what you want.
885 
886 Often people want fatal failures to propagate like exceptions. For
887 that Google Test offers the following macros:
888 
889 | **Fatal assertion** | **Nonfatal assertion** | **Verifies** |
890 |:--------------------|:-----------------------|:-------------|
891 | `ASSERT_NO_FATAL_FAILURE(`_statement_`);` | `EXPECT_NO_FATAL_FAILURE(`_statement_`);` | _statement_ doesn't generate any new fatal failures in the current thread. |
892 
893 Only failures in the thread that executes the assertion are checked to
determine the result of this type of assertion. If _statement_
895 creates new threads, failures in these threads are ignored.
896 
897 Examples:
898 
899 ```
900 ASSERT_NO_FATAL_FAILURE(Foo());
901 
902 int i;
903 EXPECT_NO_FATAL_FAILURE({
904  i = Bar();
905 });
906 ```
907 
908 _Availability:_ Linux, Windows, Mac. Assertions from multiple threads
909 are currently not supported.
910 
911 ### Checking for Failures in the Current Test ###
912 
913 `HasFatalFailure()` in the `::testing::Test` class returns `true` if an
914 assertion in the current test has suffered a fatal failure. This
915 allows functions to catch fatal failures in a sub-routine and return
916 early.
917 
918 ```
919 class Test {
920  public:
921  ...
922  static bool HasFatalFailure();
923 };
924 ```
925 
926 The typical usage, which basically simulates the behavior of a thrown
927 exception, is:
928 
929 ```
930 TEST(FooTest, Bar) {
931  Subroutine();
932  // Aborts if Subroutine() had a fatal failure.
933  if (HasFatalFailure())
934  return;
935  // The following won't be executed.
936  ...
937 }
938 ```
939 
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
941 fixture, you must add the `::testing::Test::` prefix, as in:
942 
943 ```
944 if (::testing::Test::HasFatalFailure())
945  return;
946 ```
947 
948 Similarly, `HasNonfatalFailure()` returns `true` if the current test
949 has at least one non-fatal failure, and `HasFailure()` returns `true`
950 if the current test has at least one failure of either kind.
951 
952 _Availability:_ Linux, Windows, Mac. `HasNonfatalFailure()` and
953 `HasFailure()` are available since version 1.4.0.
954 
955 # Logging Additional Information #
956 
957 In your test code, you can call `RecordProperty("key", value)` to log
958 additional information, where `value` can be either a string or an `int`. The _last_ value recorded for a key will be emitted to the XML output
959 if you specify one. For example, the test
960 
961 ```
962 TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
963  RecordProperty("MaximumWidgets", ComputeMaxUsage());
964  RecordProperty("MinimumWidgets", ComputeMinUsage());
965 }
966 ```
967 
968 will output XML like this:
969 
970 ```
971 ...
972  <testcase name="MinAndMaxWidgets" status="run" time="6" classname="WidgetUsageTest"
973  MaximumWidgets="12"
974  MinimumWidgets="9" />
975 ...
976 ```
977 
978 _Note_:
979  * `RecordProperty()` is a static member of the `Test` class. Therefore it needs to be prefixed with `::testing::Test::` if used outside of the `TEST` body and the test fixture class.
980  * `key` must be a valid XML attribute name, and cannot conflict with the ones already used by Google Test (`name`, `status`, `time`, `classname`, `type_param`, and `value_param`).
 * Calling `RecordProperty()` outside of the lifespan of a test is allowed. If it's called outside of a test but between a test case's `SetUpTestCase()` and `TearDownTestCase()` methods, it will be attributed to the XML element for the test case (see the sketch after this list). If it's called outside of all test cases (e.g. in a test environment), it will be attributed to the top-level XML element.
982 
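A sketch of recording a property outside of a test body (`BarTest` is a hypothetical fixture):

```
class BarTest : public ::testing::Test {
 protected:
  static void SetUpTestCase() {
    // Outside a TEST body, so the static-member syntax is required; the
    // property is attributed to the test case's XML element.
    ::testing::Test::RecordProperty("SetUpHostName", "test-machine");
  }
};
```
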
983 _Availability_: Linux, Windows, Mac.
984 
985 # Sharing Resources Between Tests in the Same Test Case #
986 
987 
988 
989 Google Test creates a new test fixture object for each test in order to make
990 tests independent and easier to debug. However, sometimes tests use resources
991 that are expensive to set up, making the one-copy-per-test model prohibitively
992 expensive.
993 
994 If the tests don't change the resource, there's no harm in them sharing a
995 single resource copy. So, in addition to per-test set-up/tear-down, Google Test
996 also supports per-test-case set-up/tear-down. To use it:
997 
998  1. In your test fixture class (say `FooTest` ), define as `static` some member variables to hold the shared resources.
999  1. In the same test fixture class, define a `static void SetUpTestCase()` function (remember not to spell it as **`SetupTestCase`** with a small `u`!) to set up the shared resources and a `static void TearDownTestCase()` function to tear them down.
1000 
1001 That's it! Google Test automatically calls `SetUpTestCase()` before running the
1002 _first test_ in the `FooTest` test case (i.e. before creating the first
1003 `FooTest` object), and calls `TearDownTestCase()` after running the _last test_
1004 in it (i.e. after deleting the last `FooTest` object). In between, the tests
1005 can use the shared resources.
1006 
1007 Remember that the test order is undefined, so your code can't depend on a test
1008 preceding or following another. Also, the tests must either not modify the
1009 state of any shared resource, or, if they do modify the state, they must
1010 restore the state to its original value before passing control to the next
1011 test.
1012 
1013 Here's an example of per-test-case set-up and tear-down:
1014 ```
1015 class FooTest : public ::testing::Test {
1016  protected:
1017  // Per-test-case set-up.
1018  // Called before the first test in this test case.
1019  // Can be omitted if not needed.
1020  static void SetUpTestCase() {
1021  shared_resource_ = new ...;
1022  }
1023 
1024  // Per-test-case tear-down.
1025  // Called after the last test in this test case.
1026  // Can be omitted if not needed.
1027  static void TearDownTestCase() {
1028  delete shared_resource_;
1029  shared_resource_ = NULL;
1030  }
1031 
1032  // You can define per-test set-up and tear-down logic as usual.
1033  virtual void SetUp() { ... }
1034  virtual void TearDown() { ... }
1035 
1036  // Some expensive resource shared by all tests.
1037  static T* shared_resource_;
1038 };
1039 
1040 T* FooTest::shared_resource_ = NULL;
1041 
1042 TEST_F(FooTest, Test1) {
1043  ... you can refer to shared_resource here ...
1044 }
1045 TEST_F(FooTest, Test2) {
1046  ... you can refer to shared_resource here ...
1047 }
1048 ```
1049 
1050 _Availability:_ Linux, Windows, Mac.
1051 
1052 # Global Set-Up and Tear-Down #
1053 
1054 Just as you can do set-up and tear-down at the test level and the test case
1055 level, you can also do it at the test program level. Here's how.
1056 
1057 First, you subclass the `::testing::Environment` class to define a test
1058 environment, which knows how to set-up and tear-down:
1059 
1060 ```
1061 class Environment {
1062  public:
1063  virtual ~Environment() {}
1064  // Override this to define how to set up the environment.
1065  virtual void SetUp() {}
1066  // Override this to define how to tear down the environment.
1067  virtual void TearDown() {}
1068 };
1069 ```
1070 
1071 Then, you register an instance of your environment class with Google Test by
1072 calling the `::testing::AddGlobalTestEnvironment()` function:
1073 
1074 ```
1075 Environment* AddGlobalTestEnvironment(Environment* env);
1076 ```
1077 
1078 Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
the environment object, then runs the tests if there were no fatal failures, and
1080 finally calls `TearDown()` of the environment object.
1081 
1082 It's OK to register multiple environment objects. In this case, their `SetUp()`
1083 will be called in the order they are registered, and their `TearDown()` will be
1084 called in the reverse order.
1085 
1086 Note that Google Test takes ownership of the registered environment objects.
1087 Therefore **do not delete them** by yourself.
1088 
1089 You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is
1090 called, probably in `main()`. If you use `gtest_main`, you need to call
1091 this before `main()` starts for it to take effect. One way to do this is to
1092 define a global variable like this:
1093 
1094 ```
1095 ::testing::Environment* const foo_env = ::testing::AddGlobalTestEnvironment(new FooEnvironment);
1096 ```
1097 
However, we strongly recommend that you write your own `main()` and call
1099 `AddGlobalTestEnvironment()` there, as relying on initialization of global
1100 variables makes the code harder to read and may cause problems when you
1101 register multiple environments from different translation units and the
1102 environments have dependencies among them (remember that the compiler doesn't
1103 guarantee the order in which global variables from different translation units
1104 are initialized).
1105 
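A sketch of that recommended setup, reusing the `FooEnvironment` name from above:

```
class FooEnvironment : public ::testing::Environment {
 public:
  virtual void SetUp() { /* bring up the expensive global state */ }
  virtual void TearDown() { /* and tear it down again */ }
};

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Google Test takes ownership of the environment object; do not delete it.
  ::testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```
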
1106 _Availability:_ Linux, Windows, Mac.
1107 
1108 
1109 # Value Parameterized Tests #
1110 
1111 _Value-parameterized tests_ allow you to test your code with different
1112 parameters without writing multiple copies of the same test.
1113 
Suppose you write a test for your code and then realize that your code is affected by the presence of a Boolean command line flag.
1115 
1116 ```
1117 TEST(MyCodeTest, TestFoo) {
 // Code to test foo().
1119 }
1120 ```
1121 
1122 Usually people factor their test code into a function with a Boolean parameter in such situations. The function sets the flag, then executes the testing code.
1123 
1124 ```
1125 void TestFooHelper(bool flag_value) {
1126  flag = flag_value;
 // Code to test foo().
1128 }
1129 
1130 TEST(MyCodeTest, TestFoo) {
1131  TestFooHelper(false);
1132  TestFooHelper(true);
1133 }
1134 ```
1135 
But this setup has serious drawbacks. First, when a test assertion fails in your tests, it becomes unclear what value of the parameter caused it to fail. You can stream a clarifying message into your `EXPECT`/`ASSERT` statements, but you'll have to do it with all of them. Second, you have to add one such helper function per test. What if you have ten tests? Twenty? A hundred?
1137 
1138 Value-parameterized tests will let you write your test only once and then easily instantiate and run it with an arbitrary number of parameter values.
1139 
Here are some other situations when value-parameterized tests come in handy:
1141 
1142  * You want to test different implementations of an OO interface.
1143  * You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!
1144 
1145 ## How to Write Value-Parameterized Tests ##
1146 
1147 To write value-parameterized tests, first you should define a fixture
1148 class. It must be derived from both `::testing::Test` and
1149 `::testing::WithParamInterface<T>` (the latter is a pure interface),
1150 where `T` is the type of your parameter values. For convenience, you
1151 can just derive the fixture class from `::testing::TestWithParam<T>`,
1152 which itself is derived from both `::testing::Test` and
1153 `::testing::WithParamInterface<T>`. `T` can be any copyable type. If
it's a raw pointer, you are responsible for managing the lifespan of
the pointed-to values.
1156 
1157 ```
1158 class FooTest : public ::testing::TestWithParam<const char*> {
1159  // You can implement all the usual fixture class members here.
1160  // To access the test parameter, call GetParam() from class
1161  // TestWithParam<T>.
1162 };
1163 
1164 // Or, when you want to add parameters to a pre-existing fixture class:
1165 class BaseTest : public ::testing::Test {
1166  ...
1167 };
1168 class BarTest : public BaseTest,
1169  public ::testing::WithParamInterface<const char*> {
1170  ...
1171 };
1172 ```
1173 
1174 Then, use the `TEST_P` macro to define as many test patterns using
this fixture as you want. The `_P` suffix is for "parameterized" or
"pattern", whichever you prefer.
1177 
1178 ```
1179 TEST_P(FooTest, DoesBlah) {
1180  // Inside a test, access the test parameter with the GetParam() method
1181  // of the TestWithParam<T> class:
1182  EXPECT_TRUE(foo.Blah(GetParam()));
1183  ...
1184 }
1185 
1186 TEST_P(FooTest, HasBlahBlah) {
1187  ...
1188 }
1189 ```
1190 
1191 Finally, you can use `INSTANTIATE_TEST_CASE_P` to instantiate the test
1192 case with any set of parameters you want. Google Test defines a number of
1193 functions for generating test parameters. They return what we call
1194 (surprise!) _parameter generators_. Here is a summary of them,
1195 which are all in the `testing` namespace:
1196 
1197 | `Range(begin, end[, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
1198 |:----------------------------|:------------------------------------------------------------------------------------------------------------------|
1199 | `Values(v1, v2, ..., vN)` | Yields values `{v1, v2, ..., vN}`. |
1200 | `ValuesIn(container)` and `ValuesIn(begin, end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)`. `container`, `begin`, and `end` can be expressions whose values are determined at run time. |
1201 | `Bool()` | Yields sequence `{false, true}`. |
1202 | `Combine(g1, g2, ..., gN)` | Yields all combinations (the Cartesian product for the math savvy) of the values generated by the `N` generators. This is only available if your system provides the `<tr1/tuple>` header. If you are sure your system does, and Google Test disagrees, you can override it by defining `GTEST_HAS_TR1_TUPLE=1`. See comments in [include/gtest/internal/gtest-port.h](../include/gtest/internal/gtest-port.h) for more information. |
1203 
1204 For more details, see the comments at the definitions of these functions in the [source code](../include/gtest/gtest-param-test.h).
1205 
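For instance, a sketch using `Range()` with a hypothetical fixture `BazTest` whose parameter is an `int`:

```
class BazTest : public ::testing::TestWithParam<int> {};

TEST_P(BazTest, IsOdd) {
  EXPECT_EQ(1, GetParam() % 2);
}

// Instantiates BazTest with the values 1, 3, 5, 7, and 9 (10 is excluded).
INSTANTIATE_TEST_CASE_P(OddNumbers, BazTest, ::testing::Range(1, 10, 2));
```
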
1206 The following statement will instantiate tests from the `FooTest` test case
1207 each with parameter values `"meeny"`, `"miny"`, and `"moe"`.
1208 
1209 ```
1210 INSTANTIATE_TEST_CASE_P(InstantiationName,
1211  FooTest,
1212  ::testing::Values("meeny", "miny", "moe"));
1213 ```
1214 
1215 To distinguish different instances of the pattern (yes, you can
1216 instantiate it more than once), the first argument to
1217 `INSTANTIATE_TEST_CASE_P` is a prefix that will be added to the actual
1218 test case name. Remember to pick unique prefixes for different
1219 instantiations. The tests from the instantiation above will have these
1220 names:
1221 
1222  * `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
1223  * `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
1224  * `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
1225  * `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
1226  * `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
1227  * `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`
1228 
1229 You can use these names in [--gtest\_filter](#running-a-subset-of-the-tests).
1230 
1231 This statement will instantiate all tests from `FooTest` again, each
1232 with parameter values `"cat"` and `"dog"`:
1233 
1234 ```
1235 const char* pets[] = {"cat", "dog"};
1236 INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest,
1237  ::testing::ValuesIn(pets));
1238 ```
1239 
1240 The tests from the instantiation above will have these names:
1241 
1242  * `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
1243  * `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
1244  * `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
1245  * `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`
1246 
1247 Please note that `INSTANTIATE_TEST_CASE_P` will instantiate _all_
1248 tests in the given test case, whether their definitions come before or
1249 _after_ the `INSTANTIATE_TEST_CASE_P` statement.
1250 
1251 You can see
1252 [these](../samples/sample7_unittest.cc)
1253 [files](../samples/sample8_unittest.cc) for more examples.
1254 
1255 _Availability_: Linux, Windows (requires MSVC 8.0 or above), Mac; since version 1.2.0.
1256 
1257 ## Creating Value-Parameterized Abstract Tests ##
1258 
1259 In the above, we define and instantiate `FooTest` in the same source
1260 file. Sometimes you may want to define value-parameterized tests in a
1261 library and let other people instantiate them later. This pattern is
1262 known as <i>abstract tests</i>. As an example of its application, when you
1263 are designing an interface you can write a standard suite of abstract
1264 tests (perhaps using a factory function as the test parameter) that
1265 all implementations of the interface are expected to pass. When
1266 someone implements the interface, he can instantiate your suite to get
1267 all the interface-conformance tests for free.
1268 
1269 To define abstract tests, you should organize your code like this:
1270 
1271  1. Put the definition of the parameterized test fixture class (e.g. `FooTest`) in a header file, say `foo_param_test.h`. Think of this as _declaring_ your abstract tests.
1272  1. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes `foo_param_test.h`. Think of this as _implementing_ your abstract tests.
1273 
1274 Once they are defined, you can instantiate them by including
1275 `foo_param_test.h`, invoking `INSTANTIATE_TEST_CASE_P()`, and linking
1276 with `foo_param_test.cc`. You can instantiate the same abstract test
1277 case multiple times, possibly in different source files.
1278 
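A sketch of that organization, using the `FooTest` fixture from above:

```
// foo_param_test.h -- declares the abstract tests:
class FooTest : public ::testing::TestWithParam<const char*> { ... };

// foo_param_test.cc -- implements them:
TEST_P(FooTest, DoesBlah) { ... }

// A user's test file, which includes foo_param_test.h and links against
// foo_param_test.cc -- instantiates the suite:
INSTANTIATE_TEST_CASE_P(MyFoo, FooTest,
                        ::testing::Values("a", "b", "c"));
```
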
1279 # Typed Tests #
1280 
1281 Suppose you have multiple implementations of the same interface and
1282 want to make sure that all of them satisfy some common requirements.
1283 Or, you may have defined several types that are supposed to conform to
1284 the same "concept" and you want to verify it. In both cases, you want
1285 the same test logic repeated for different types.
1286 
1287 While you can write one `TEST` or `TEST_F` for each type you want to
1288 test (and you may even factor the test logic into a function template
1289 that you invoke from the `TEST`), it's tedious and doesn't scale:
1290 if you want _m_ tests over _n_ types, you'll end up writing _m\*n_
1291 `TEST`s.
1292 
1293 _Typed tests_ allow you to repeat the same test logic over a list of
1294 types. You only need to write the test logic once, although you must
1295 know the type list when writing typed tests. Here's how you do it:
1296 
1297 First, define a fixture class template. It should be parameterized
1298 by a type. Remember to derive it from `::testing::Test`:
1299 
1300 ```
1301 template <typename T>
1302 class FooTest : public ::testing::Test {
1303  public:
1304  ...
1305  typedef std::list<T> List;
1306  static T shared_;
1307  T value_;
1308 };
1309 ```
1310 
1311 Next, associate a list of types with the test case, which will be
1312 repeated for each type in the list:
1313 
1314 ```
1315 typedef ::testing::Types<char, int, unsigned int> MyTypes;
1316 TYPED_TEST_CASE(FooTest, MyTypes);
1317 ```
1318 
1319 The `typedef` is necessary for the `TYPED_TEST_CASE` macro to parse
1320 correctly. Otherwise the compiler will think that each comma in the
1321 type list introduces a new macro argument.
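
For instance, writing the type list inline, as in the hypothetical line below, would not compile, because the preprocessor would hand the macro four arguments instead of two:

```
// Does NOT compile: the commas inside Types<...> split the macro arguments.
TYPED_TEST_CASE(FooTest, ::testing::Types<char, int, unsigned int>);
```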
1322 
1323 Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test
1324 for this test case. You can repeat this as many times as you want:
1325 
1326 ```
1327 TYPED_TEST(FooTest, DoesBlah) {
1328  // Inside a test, refer to the special name TypeParam to get the type
1329  // parameter. Since we are inside a derived class template, C++ requires
1330  // us to visit the members of FooTest via 'this'.
1331  TypeParam n = this->value_;
1332 
1333  // To visit static members of the fixture, add the 'TestFixture::'
1334  // prefix.
1335  n += TestFixture::shared_;
1336 
1337  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
1338  // prefix. The 'typename' is required to satisfy the compiler.
1339  typename TestFixture::List values;
1340  values.push_back(n);
1341  ...
1342 }
1343 
1344 TYPED_TEST(FooTest, HasPropertyA) { ... }
1345 ```
1346 
1347 You can see `samples/sample6_unittest.cc` for a complete example.
1348 
1349 _Availability:_ Linux, Windows (requires MSVC 8.0 or above), Mac;
1350 since version 1.1.0.
1351 
1352 # Type-Parameterized Tests #
1353 
1354 _Type-parameterized tests_ are like typed tests, except that they
1355 don't require you to know the list of types ahead of time. Instead,
1356 you can define the test logic first and instantiate it with different
1357 type lists later. You can even instantiate it more than once in the
1358 same program.
1359 
1360 If you are designing an interface or concept, you can define a suite
1361 of type-parameterized tests to verify properties that any valid
1362 implementation of the interface/concept should have. Then, the author
of each implementation can just instantiate the test suite with their
1364 type to verify that it conforms to the requirements, without having to
1365 write similar tests repeatedly. Here's an example:
1366 
1367 First, define a fixture class template, as we did with typed tests:
1368 
1369 ```
1370 template <typename T>
1371 class FooTest : public ::testing::Test {
1372  ...
1373 };
1374 ```
1375 
1376 Next, declare that you will define a type-parameterized test case:
1377 
1378 ```
1379 TYPED_TEST_CASE_P(FooTest);
1380 ```
1381 
The `_P` suffix stands for "parameterized" or "pattern", whichever you
prefer to think of it as.
1384 
1385 Then, use `TYPED_TEST_P()` to define a type-parameterized test. You
1386 can repeat this as many times as you want:
1387 
1388 ```
1389 TYPED_TEST_P(FooTest, DoesBlah) {
1390  // Inside a test, refer to TypeParam to get the type parameter.
1391  TypeParam n = 0;
1392  ...
1393 }
1394 
1395 TYPED_TEST_P(FooTest, HasPropertyA) { ... }
1396 ```
1397 
1398 Now the tricky part: you need to register all test patterns using the
1399 `REGISTER_TYPED_TEST_CASE_P` macro before you can instantiate them.
1400 The first argument of the macro is the test case name; the rest are
1401 the names of the tests in this test case:
1402 
1403 ```
1404 REGISTER_TYPED_TEST_CASE_P(FooTest,
1405  DoesBlah, HasPropertyA);
1406 ```
1407 
1408 Finally, you are free to instantiate the pattern with the types you
1409 want. If you put the above code in a header file, you can `#include`
1410 it in multiple C++ source files and instantiate it multiple times.
1411 
1412 ```
1413 typedef ::testing::Types<char, int, unsigned int> MyTypes;
1414 INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes);
1415 ```
1416 
1417 To distinguish different instances of the pattern, the first argument
1418 to the `INSTANTIATE_TYPED_TEST_CASE_P` macro is a prefix that will be
1419 added to the actual test case name. Remember to pick unique prefixes
1420 for different instances.
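
For instance, assuming the `FooTest` pattern defined above, a second instantiation over floating-point types (a made-up example) only needs its own prefix:

```
typedef ::testing::Types<float, double> FloatingPointTypes;
INSTANTIATE_TYPED_TEST_CASE_P(FloatingPoint, FooTest, FloatingPointTypes);
```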
1421 
1422 In the special case where the type list contains only one type, you
1423 can write that type directly without `::testing::Types<...>`, like this:
1424 
1425 ```
1426 INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
1427 ```
1428 
1429 You can see `samples/sample6_unittest.cc` for a complete example.
1430 
1431 _Availability:_ Linux, Windows (requires MSVC 8.0 or above), Mac;
1432 since version 1.1.0.
1433 
1434 # Testing Private Code #
1435 
1436 If you change your software's internal implementation, your tests should not
1437 break as long as the change is not observable by users. Therefore, per the
1438 _black-box testing principle_, most of the time you should test your code
1439 through its public interfaces.
1440 
1441 If you still find yourself needing to test internal implementation code,
1442 consider if there's a better design that wouldn't require you to do so. If you
1443 absolutely have to test non-public interface code though, you can. There are
1444 two cases to consider:
1445 
1446  * Static functions (_not_ the same as static member functions!) or unnamed namespaces, and
1447  * Private or protected class members
1448 
1449 ## Static Functions ##
1450 
1451 Both static functions and definitions/declarations in an unnamed namespace are
1452 only visible within the same translation unit. To test them, you can `#include`
1453 the entire `.cc` file being tested in your `*_test.cc` file. (`#include`ing `.cc`
1454 files is not a good way to reuse code - you should not do this in production
1455 code!)
1456 
1457 However, a better approach is to move the private code into the
1458 `foo::internal` namespace, where `foo` is the namespace your project normally
1459 uses, and put the private declarations in a `*-internal.h` file. Your
1460 production `.cc` files and your tests are allowed to include this internal
1461 header, but your clients are not. This way, you can fully test your internal
1462 implementation without leaking it to your clients.
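
A rough sketch of this layout, with hypothetical names, might be:

```
// foo-internal.h -- included by foo.cc and foo_test.cc, but never by clients.
namespace foo {
namespace internal {

// An implementation detail that we still want to test directly
// (defined in foo.cc).
int CountWords(const char* text);

}  // namespace internal
}  // namespace foo

// foo_test.cc
#include "foo-internal.h"
#include "gtest/gtest.h"

TEST(CountWordsTest, EmptyStringHasNoWords) {
  EXPECT_EQ(0, foo::internal::CountWords(""));
}
```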
1463 
1464 ## Private Class Members ##
1465 
1466 Private class members are only accessible from within the class or by friends.
1467 To access a class' private members, you can declare your test fixture as a
1468 friend to the class and define accessors in your fixture. Tests using the
1469 fixture can then access the private members of your production class via the
1470 accessors in the fixture. Note that even though your fixture is a friend to
1471 your production class, your tests are not automatically friends to it, as they
1472 are technically defined in sub-classes of the fixture.
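
Here is a rough sketch of that pattern (the `Queue` class and its `size_` member are invented for illustration):

```
#include "gtest/gtest.h"

class Queue {
 public:
  Queue() : size_(0) {}
  void Enqueue(int /* value */) { ++size_; }

 private:
  friend class QueueTest;  // The fixture, not the tests, gets access.
  int size_;
};

class QueueTest : public ::testing::Test {
 protected:
  // Accessor for the private member; tests call this instead of touching
  // Queue's internals directly, since they are not friends of Queue.
  static int GetSize(const Queue& q) { return q.size_; }
};

TEST_F(QueueTest, EnqueueIncrementsSize) {
  Queue q;
  q.Enqueue(42);
  EXPECT_EQ(1, GetSize(q));
}
```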
1473 
1474 Another way to test private members is to refactor them into an implementation
1475 class, which is then declared in a `*-internal.h` file. Your clients aren't
allowed to include this header, but your tests can. This technique is called
the Pimpl (Private Implementation) idiom.
1478 
1479 Or, you can declare an individual test as a friend of your class by adding this
1480 line in the class body:
1481 
1482 ```
1483 FRIEND_TEST(TestCaseName, TestName);
1484 ```
1485 
1486 For example,
1487 ```
1488 // foo.h
#include "gtest/gtest_prod.h"  // Defines FRIEND_TEST.

class Foo {
1493  ...
1494  private:
1495  FRIEND_TEST(FooTest, BarReturnsZeroOnNull);
1496  int Bar(void* x);
1497 };
1498 
1499 // foo_test.cc
1500 ...
1501 TEST(FooTest, BarReturnsZeroOnNull) {
1502  Foo foo;
1503  EXPECT_EQ(0, foo.Bar(NULL));
1504  // Uses Foo's private member Bar().
1505 }
1506 ```
1507 
1508 Pay special attention when your class is defined in a namespace, as you should
1509 define your test fixtures and tests in the same namespace if you want them to
1510 be friends of your class. For example, if the code to be tested looks like:
1511 
1512 ```
1513 namespace my_namespace {
1514 
1515 class Foo {
1516  friend class FooTest;
1517  FRIEND_TEST(FooTest, Bar);
1518  FRIEND_TEST(FooTest, Baz);
1519  ...
1520  definition of the class Foo
1521  ...
1522 };
1523 
1524 } // namespace my_namespace
1525 ```
1526 
1527 Your test code should be something like:
1528 
1529 ```
1530 namespace my_namespace {
1531 class FooTest : public ::testing::Test {
1532  protected:
1533  ...
1534 };
1535 
1536 TEST_F(FooTest, Bar) { ... }
1537 TEST_F(FooTest, Baz) { ... }
1538 
1539 } // namespace my_namespace
1540 ```
1541 
1542 # Catching Failures #
1543 
1544 If you are building a testing utility on top of Google Test, you'll
1545 want to test your utility. What framework would you use to test it?
1546 Google Test, of course.
1547 
1548 The challenge is to verify that your testing utility reports failures
1549 correctly. In frameworks that report a failure by throwing an
1550 exception, you could catch the exception and assert on it. But Google
1551 Test doesn't use exceptions, so how do we test that a piece of code
1552 generates an expected failure?
1553 
1554 `"gtest/gtest-spi.h"` contains some constructs to do this. After
1555 `#include`ing this header, you can use
1556 
1557 | `EXPECT_FATAL_FAILURE(`_statement, substring_`);` |
1558 |:--------------------------------------------------|
1559 
1560 to assert that _statement_ generates a fatal (e.g. `ASSERT_*`) failure
1561 whose message contains the given _substring_, or use
1562 
1563 | `EXPECT_NONFATAL_FAILURE(`_statement, substring_`);` |
1564 |:-----------------------------------------------------|
1565 
1566 if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
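
For example, a test for a custom assertion helper might look roughly like this (the `VerifyPositive` helper is hypothetical):

```
#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

// The utility under test: reports a nonfatal failure for non-positive values.
void VerifyPositive(int n) {
  if (n <= 0)
    ADD_FAILURE() << "value is not positive";
}

TEST(VerifyPositiveTest, FailsForZero) {
  EXPECT_NONFATAL_FAILURE(VerifyPositive(0), "not positive");
}
```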
1567 
1568 For technical reasons, there are some caveats:
1569 
1570  1. You cannot stream a failure message to either macro.
1571  1. _statement_ in `EXPECT_FATAL_FAILURE()` cannot reference local non-static variables or non-static members of `this` object.
1572  1. _statement_ in `EXPECT_FATAL_FAILURE()` cannot return a value.
1573 
1574 _Note:_ Google Test is designed with threads in mind. Once the
1575 synchronization primitives in `"gtest/internal/gtest-port.h"` have
1576 been implemented, Google Test will become thread-safe, meaning that
1577 you can then use assertions in multiple threads concurrently. Before
1578 that, however, Google Test only supports single-threaded usage. Once
1579 thread-safe, `EXPECT_FATAL_FAILURE()` and `EXPECT_NONFATAL_FAILURE()`
1580 will capture failures in the current thread only. If _statement_
1581 creates new threads, failures in these threads will be ignored. If
1582 you want to capture failures from all threads instead, you should use
1583 the following macros:
1584 
1585 | `EXPECT_FATAL_FAILURE_ON_ALL_THREADS(`_statement, substring_`);` |
1586 |:-----------------------------------------------------------------|
1587 | `EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(`_statement, substring_`);` |
1588 
1589 # Getting the Current Test's Name #
1590 
1591 Sometimes a function may need to know the name of the currently running test.
1592 For example, you may be using the `SetUp()` method of your test fixture to set
1593 the golden file name based on which test is running. The `::testing::TestInfo`
1594 class has this information:
1595 
1596 ```
1597 namespace testing {
1598 
1599 class TestInfo {
1600  public:
1601  // Returns the test case name and the test name, respectively.
1602  //
1603  // Do NOT delete or free the return value - it's managed by the
1604  // TestInfo class.
1605  const char* test_case_name() const;
1606  const char* name() const;
1607 };
1608 
1609 } // namespace testing
1610 ```
1611 
1612 
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the `UnitTest` singleton object:
1615 
1616 ```
1617 // Gets information about the currently running test.
1618 // Do NOT delete the returned object - it's managed by the UnitTest class.
1619 const ::testing::TestInfo* const test_info =
1620  ::testing::UnitTest::GetInstance()->current_test_info();
1621 printf("We are in test %s of test case %s.\n",
1622  test_info->name(), test_info->test_case_name());
1623 ```
1624 
1625 `current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test case name in `SetUpTestCase()`,
`TearDownTestCase()` (where you know the test case name implicitly), or
1628 functions called from them.
1629 
1630 _Availability:_ Linux, Windows, Mac.
1631 
1632 # Extending Google Test by Handling Test Events #
1633 
1634 Google Test provides an <b>event listener API</b> to let you receive
1635 notifications about the progress of a test program and test
1636 failures. The events you can listen to include the start and end of
1637 the test program, a test case, or a test method, among others. You may
1638 use this API to augment or replace the standard console output,
1639 replace the XML output, or provide a completely different form of
1640 output, such as a GUI or a database. You can also use test events as
1641 checkpoints to implement a resource leak checker, for example.
1642 
1643 _Availability:_ Linux, Windows, Mac; since v1.4.0.
1644 
1645 ## Defining Event Listeners ##
1646 
To define an event listener, you subclass either
1648 [testing::TestEventListener](../include/gtest/gtest.h#L991)
1649 or [testing::EmptyTestEventListener](../include/gtest/gtest.h#L1044).
The former is an (abstract) interface, where <i>each pure virtual method
can be overridden to handle a test event</i> (for example, when a test
starts, the `OnTestStart()` method will be called). The latter provides
an empty implementation of all methods in the interface, so that a
subclass only needs to override the methods it cares about.
1655 
1656 When an event is fired, its context is passed to the handler function
1657 as an argument. The following argument types are used:
1658  * [UnitTest](../include/gtest/gtest.h#L1151) reflects the state of the entire test program,
1659  * [TestCase](../include/gtest/gtest.h#L778) has information about a test case, which can contain one or more tests,
1660  * [TestInfo](../include/gtest/gtest.h#L644) contains the state of a test, and
1661  * [TestPartResult](../include/gtest/gtest-test-part.h#L47) represents the result of a test assertion.
1662 
1663 An event handler function can examine the argument it receives to find
1664 out interesting information about the event and the test program's
1665 state. Here's an example:
1666 
1667 ```
class MinimalistPrinter : public ::testing::EmptyTestEventListener {
  // Called before a test starts.
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s starting.\n",
           test_info.test_case_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCEED() invocation.
  virtual void OnTestPartResult(
      const ::testing::TestPartResult& test_part_result) {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
    printf("*** Test %s.%s ending.\n",
           test_info.test_case_name(), test_info.name());
  }
};
1691 ```
1692 
1693 ## Using Event Listeners ##
1694 
1695 To use the event listener you have defined, add an instance of it to
1696 the Google Test event listener list (represented by class
1697 [TestEventListeners](../include/gtest/gtest.h#L1064)
1698 - note the "s" at the end of the name) in your
1699 `main()` function, before calling `RUN_ALL_TESTS()`:
1700 ```
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end. Google Test takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
1710 ```
1711 
1712 There's only one problem: the default test result printer is still in
1713 effect, so its output will mingle with the output from your minimalist
1714 printer. To suppress the default printer, just release it from the
1715 event listener list and delete it. You can do so by adding one line:
1716 ```
1717  ...
1718  delete listeners.Release(listeners.default_result_printer());
1719  listeners.Append(new MinimalistPrinter);
1720  return RUN_ALL_TESTS();
1721 ```
1722 
1723 Now, sit back and enjoy a completely different output from your
1724 tests. For more details, you can read this
1725 [sample](../samples/sample9_unittest.cc).
1726 
1727 You may append more than one listener to the list. When an `On*Start()`
1728 or `OnTestPartResult()` event is fired, the listeners will receive it in
1729 the order they appear in the list (since new listeners are added to
1730 the end of the list, the default text printer and the default XML
1731 generator will receive the event first). An `On*End()` event will be
1732 received by the listeners in the _reverse_ order. This allows output by
1733 listeners added later to be framed by output from listeners added
1734 earlier.
1735 
1736 ## Generating Failures in Listeners ##
1737 
1738 You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`,
`FAIL()`, etc.) when processing an event. There are some restrictions:
1740 
1741  1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will cause `OnTestPartResult()` to be called recursively).
1742  1. A listener that handles `OnTestPartResult()` is not allowed to generate any failure.
1743 
1744 When you add listeners to the listener list, you should put listeners
1745 that handle `OnTestPartResult()` _before_ listeners that can generate
1746 failures. This ensures that failures generated by the latter are
1747 attributed to the right test by the former.
1748 
1749 We have a sample of failure-raising listener
1750 [here](../samples/sample10_unittest.cc).
1751 
1752 # Running Test Programs: Advanced Options #
1753 
1754 Google Test test programs are ordinary executables. Once built, you can run
1755 them directly and affect their behavior via the following environment variables
1756 and/or command line flags. For the flags to work, your programs must call
1757 `::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.
1758 
1759 To see a list of supported flags and their usage, please run your test
1760 program with the `--help` flag. You can also use `-h`, `-?`, or `/?`
for short. This feature was added in version 1.3.0.
1762 
1763 If an option is specified both by an environment variable and by a
1764 flag, the latter takes precedence. Most of the options can also be
1765 set/read in code: to access the value of command line flag
1766 `--gtest_foo`, write `::testing::GTEST_FLAG(foo)`. A common pattern is
1767 to set the value of a flag before calling `::testing::InitGoogleTest()`
1768 to change the default value of the flag:
1769 ```
1770 int main(int argc, char** argv) {
1771  // Disables elapsed time by default.
1772  ::testing::GTEST_FLAG(print_time) = false;
1773 
1774  // This allows the user to override the flag on the command line.
1775  ::testing::InitGoogleTest(&argc, argv);
1776 
1777  return RUN_ALL_TESTS();
1778 }
1779 ```
1780 
1781 ## Selecting Tests ##
1782 
1783 This section shows various options for choosing which tests to run.
1784 
1785 ### Listing Test Names ###
1786 
1787 Sometimes it is necessary to list the available tests in a program before
1788 running them so that a filter may be applied if needed. Including the flag
1789 `--gtest_list_tests` overrides all other flags and lists tests in the following
1790 format:
1791 ```
1792 TestCase1.
1793  TestName1
1794  TestName2
1795 TestCase2.
1796  TestName
1797 ```
1798 
1799 None of the tests listed are actually run if the flag is provided. There is no
1800 corresponding environment variable for this flag.
1801 
1802 _Availability:_ Linux, Windows, Mac.
1803 
1804 ### Running a Subset of the Tests ###
1805 
1806 By default, a Google Test program runs all tests the user has defined.
1807 Sometimes, you want to run only a subset of the tests (e.g. for debugging or
1808 quickly verifying a change). If you set the `GTEST_FILTER` environment variable
1809 or the `--gtest_filter` flag to a filter string, Google Test will only run the
1810 tests whose full names (in the form of `TestCaseName.TestName`) match the
1811 filter.
1812 
1813 The format of a filter is a '`:`'-separated list of wildcard patterns (called
1814 the positive patterns) optionally followed by a '`-`' and another
1815 '`:`'-separated pattern list (called the negative patterns). A test matches the
1816 filter if and only if it matches any of the positive patterns but does not
1817 match any of the negative patterns.
1818 
1819 A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
1821 written as `'-NegativePatterns'`.
1822 
1823 For example:
1824 
1825  * `./foo_test` Has no flag, and thus runs all its tests.
1826  * `./foo_test --gtest_filter=*` Also runs everything, due to the single match-everything `*` value.
1827  * `./foo_test --gtest_filter=FooTest.*` Runs everything in test case `FooTest`.
1828  * `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full name contains either `"Null"` or `"Constructor"`.
1829  * `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
1830  * `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test case `FooTest` except `FooTest.Bar`.
1831 
1832 _Availability:_ Linux, Windows, Mac.
1833 
1834 ### Temporarily Disabling Tests ###
1835 
1836 If you have a broken test that you cannot fix right away, you can add the
1837 `DISABLED_` prefix to its name. This will exclude it from execution. This is
1838 better than commenting out the code or using `#if 0`, as disabled tests are
1839 still compiled (and thus won't rot).
1840 
1841 If you need to disable all tests in a test case, you can either add `DISABLED_`
1842 to the front of the name of each test, or alternatively add it to the front of
1843 the test case name.
1844 
1845 For example, the following tests won't be run by Google Test, even though they
1846 will still be compiled:
1847 
1848 ```
1849 // Tests that Foo does Abc.
1850 TEST(FooTest, DISABLED_DoesAbc) { ... }
1851 
1852 class DISABLED_BarTest : public ::testing::Test { ... };
1853 
1854 // Tests that Bar does Xyz.
1855 TEST_F(DISABLED_BarTest, DoesXyz) { ... }
1856 ```
1857 
1858 _Note:_ This feature should only be used for temporary pain-relief. You still
1859 have to fix the disabled tests at a later date. As a reminder, Google Test will
1860 print a banner warning you if a test program contains any disabled tests.
1861 
1862 _Tip:_ You can easily count the number of disabled tests you have
1863 using `grep`. This number can be used as a metric for improving your
1864 test quality.
1865 
1866 _Availability:_ Linux, Windows, Mac.
1867 
1868 ### Temporarily Enabling Disabled Tests ###
1869 
1870 To include [disabled tests](#temporarily-disabling-tests) in test
1871 execution, just invoke the test program with the
1872 `--gtest_also_run_disabled_tests` flag or set the
1873 `GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other
1874 than `0`. You can combine this with the
1875 [--gtest\_filter](#running-a-subset-of-the-tests) flag to further select
1876 which disabled tests to run.
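
For example, `./foo_test --gtest_also_run_disabled_tests --gtest_filter=*DISABLED_DoesAbc*` would run only the disabled `DoesAbc` test from the example above (the `foo_test` binary name is hypothetical).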
1877 
1878 _Availability:_ Linux, Windows, Mac; since version 1.3.0.
1879 
1880 ## Repeating the Tests ##
1881 
1882 Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
1883 will fail only 1% of the time, making it rather hard to reproduce the bug under
1884 a debugger. This can be a major source of frustration.
1885 
1886 The `--gtest_repeat` flag allows you to repeat all (or selected) test methods
1887 in a program many times. Hopefully, a flaky test will eventually fail and give
1888 you a chance to debug. Here's how to use it:
1889 
1890 | `$ foo_test --gtest_repeat=1000` | Repeat foo\_test 1000 times and don't stop at failures. |
1891 |:---------------------------------|:--------------------------------------------------------|
1892 | `$ foo_test --gtest_repeat=-1` | A negative count means repeating forever. |
| `$ foo_test --gtest_repeat=1000 --gtest_break_on_failure` | Repeat foo\_test 1000 times, stopping at the first failure. This is especially useful when running under a debugger: when the test fails, it will drop you into the debugger and you can then inspect variables and stacks. |
1894 | `$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar` | Repeat the tests whose name matches the filter 1000 times. |
1895 
1896 If your test program contains global set-up/tear-down code registered
1897 using `AddGlobalTestEnvironment()`, it will be repeated in each
1898 iteration as well, as the flakiness may be in it. You can also specify
1899 the repeat count by setting the `GTEST_REPEAT` environment variable.
1900 
1901 _Availability:_ Linux, Windows, Mac.
1902 
1903 ## Shuffling the Tests ##
1904 
1905 You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
1906 environment variable to `1`) to run the tests in a program in a random
1907 order. This helps to reveal bad dependencies between tests.
1908 
1909 By default, Google Test uses a random seed calculated from the current
1910 time. Therefore you'll get a different order every time. The console
1911 output includes the random seed value, such that you can reproduce an
1912 order-related test failure later. To specify the random seed
1913 explicitly, use the `--gtest_random_seed=SEED` flag (or set the
1914 `GTEST_RANDOM_SEED` environment variable), where `SEED` is an integer
1915 between 0 and 99999. The seed value 0 is special: it tells Google Test
1916 to do the default behavior of calculating the seed from the current
1917 time.
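
For example, rerunning `./foo_test --gtest_shuffle --gtest_random_seed=12345` (with a hypothetical `foo_test` binary) reproduces the exact order that seed `12345` produced earlier, which is handy when chasing an order-dependent failure reported with that seed.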
1918 
1919 If you combine this with `--gtest_repeat=N`, Google Test will pick a
1920 different random seed and re-shuffle the tests in each iteration.
1921 
1922 _Availability:_ Linux, Windows, Mac; since v1.4.0.
1923 
1924 ## Controlling Test Output ##
1925 
1926 This section teaches how to tweak the way test results are reported.
1927 
1928 ### Colored Terminal Output ###
1929 
1930 Google Test can use colors in its terminal output to make it easier to spot
1931 the separation between tests, and whether tests passed.
1932 
You can set the `GTEST_COLOR` environment variable or set the `--gtest_color`
1934 command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
1935 disable colors, or let Google Test decide. When the value is `auto`, Google
1936 Test will use colors if and only if the output goes to a terminal and (on
1937 non-Windows platforms) the `TERM` environment variable is set to `xterm` or
1938 `xterm-color`.
1939 
1940 _Availability:_ Linux, Windows, Mac.
1941 
1942 ### Suppressing the Elapsed Time ###
1943 
1944 By default, Google Test prints the time it takes to run each test. To
1945 suppress that, run the test program with the `--gtest_print_time=0`
1946 command line flag. Setting the `GTEST_PRINT_TIME` environment
1947 variable to `0` has the same effect.
1948 
1949 _Availability:_ Linux, Windows, Mac. (In Google Test 1.3.0 and lower,
1950 the default behavior is that the elapsed time is **not** printed.)
1951 
1952 ### Generating an XML Report ###
1953 
1954 Google Test can emit a detailed XML report to a file in addition to its normal
1955 textual output. The report contains the duration of each test, and thus can
1956 help you identify slow tests.
1957 
1958 To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
1959 `--gtest_output` flag to the string `"xml:_path_to_output_file_"`, which will
1960 create the file at the given location. You can also just use the string
1961 `"xml"`, in which case the output can be found in the `test_detail.xml` file in
1962 the current directory.
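
For example, `./foo_test --gtest_output=xml:/tmp/foo_report.xml` (a hypothetical path) writes the report to `/tmp/foo_report.xml`, while plain `./foo_test --gtest_output=xml` writes `test_detail.xml` in the current directory.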
1963 
1964 If you specify a directory (for example, `"xml:output/directory/"` on Linux or
1965 `"xml:output\directory\"` on Windows), Google Test will create the XML file in
1966 that directory, named after the test executable (e.g. `foo_test.xml` for test
1967 program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
1968 over from a previous run), Google Test will pick a different name (e.g.
1969 `foo_test_1.xml`) to avoid overwriting it.
1970 
The report is based on the format of the `junitreport` Ant task and can
be parsed by popular continuous build systems like
[Hudson](https://hudson.dev.java.net/). Since that format
1974 was originally intended for Java, a little interpretation is required
1975 to make it apply to Google Test tests, as shown here:
1976 
1977 ```
1978 <testsuites name="AllTests" ...>
1979  <testsuite name="test_case_name" ...>
1980  <testcase name="test_name" ...>
1981  <failure message="..."/>
1982  <failure message="..."/>
1983  <failure message="..."/>
1984  </testcase>
1985  </testsuite>
1986 </testsuites>
1987 ```
1988 
1989  * The root `<testsuites>` element corresponds to the entire test program.
1990  * `<testsuite>` elements correspond to Google Test test cases.
1991  * `<testcase>` elements correspond to Google Test test functions.
1992 
1993 For instance, the following program
1994 
1995 ```
1996 TEST(MathTest, Addition) { ... }
1997 TEST(MathTest, Subtraction) { ... }
1998 TEST(LogicTest, NonContradiction) { ... }
1999 ```
2000 
2001 could generate this report:
2002 
2003 ```
2004 <?xml version="1.0" encoding="UTF-8"?>
2005 <testsuites tests="3" failures="1" errors="0" time="35" name="AllTests">
2006  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="15">
2007  <testcase name="Addition" status="run" time="7" classname="">
2008  <failure message="Value of: add(1, 1)&#x0A; Actual: 3&#x0A;Expected: 2" type=""/>
2009  <failure message="Value of: add(1, -1)&#x0A; Actual: 1&#x0A;Expected: 0" type=""/>
2010  </testcase>
2011  <testcase name="Subtraction" status="run" time="5" classname="">
2012  </testcase>
2013  </testsuite>
2014  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="5">
2015  <testcase name="NonContradiction" status="run" time="5" classname="">
2016  </testcase>
2017  </testsuite>
2018 </testsuites>
2019 ```
2020 
2021 Things to note:
2022 
2023  * The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how many test functions the Google Test program or test case contains, while the `failures` attribute tells how many of them failed.
2024  * The `time` attribute expresses the duration of the test, test case, or entire test program in milliseconds.
2025  * Each `<failure>` element corresponds to a single failed Google Test assertion.
2026  * Some JUnit concepts don't apply to Google Test, yet we have to conform to the DTD. Therefore you'll see some dummy elements and attributes in the report. You can safely ignore these parts.
2027 
2028 _Availability:_ Linux, Windows, Mac.
2029 
2030 ## Controlling How Failures Are Reported ##
2031 
2032 ### Turning Assertion Failures into Break-Points ###
2033 
2034 When running test programs under a debugger, it's very convenient if the
2035 debugger can catch an assertion failure and automatically drop into interactive
2036 mode. Google Test's _break-on-failure_ mode supports this behavior.
2037 
2038 To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
2040 command line flag.
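
For example, you might run `gdb --args ./foo_test --gtest_break_on_failure` and type `run`; when an assertion fails, gdb stops at that point so you can inspect the program state (the `foo_test` binary name is hypothetical).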
2041 
2042 _Availability:_ Linux, Windows, Mac.
2043 
2044 ### Disabling Catching Test-Thrown Exceptions ###
2045 
2046 Google Test can be used either with or without exceptions enabled. If
2047 a test throws a C++ exception or (on Windows) a structured exception
2048 (SEH), by default Google Test catches it, reports it as a test
2049 failure, and continues with the next test method. This maximizes the
2050 coverage of a test run. Also, on Windows an uncaught exception will
2051 cause a pop-up window, so catching the exceptions allows you to run
2052 the tests automatically.
2053 
2054 When debugging the test failures, however, you may instead want the
2055 exceptions to be handled by the debugger, such that you can examine
2056 the call stack when an exception is thrown. To achieve that, set the
2057 `GTEST_CATCH_EXCEPTIONS` environment variable to `0`, or use the
2058 `--gtest_catch_exceptions=0` flag when running the tests.
2059 
2060 **Availability**: Linux, Windows, Mac.
2061 
2062 ### Letting Another Testing Framework Drive ###
2063 
2064 If you work on a project that has already been using another testing
2065 framework and is not ready to completely switch to Google Test yet,
2066 you can get much of Google Test's benefit by using its assertions in
2067 your existing tests. Just change your `main()` function to look
2068 like:
2069 
2070 ```
2071 #include "gtest/gtest.h"
2072 
2073 int main(int argc, char** argv) {
2074  ::testing::GTEST_FLAG(throw_on_failure) = true;
2075  // Important: Google Test must be initialized.
2076  ::testing::InitGoogleTest(&argc, argv);
2077 
2078  ... whatever your existing testing framework requires ...
2079 }
2080 ```
2081 
2082 With that, you can use Google Test assertions in addition to the
2083 native assertions your testing framework provides, for example:
2084 
2085 ```
2086 void TestFooDoesBar() {
2087  Foo foo;
2088  EXPECT_LE(foo.Bar(1), 100); // A Google Test assertion.
2089  CPPUNIT_ASSERT(foo.IsEmpty()); // A native assertion.
2090 }
2091 ```
2092 
2093 If a Google Test assertion fails, it will print an error message and
2094 throw an exception, which will be treated as a failure by your host
2095 testing framework. If you compile your code with exceptions disabled,
2096 a failed Google Test assertion will instead exit your program with a
2097 non-zero code, which will also signal a test failure to your test
2098 runner.
2099 
2100 If you don't write `::testing::GTEST_FLAG(throw_on_failure) = true;` in
2101 your `main()`, you can alternatively enable this feature by specifying
2102 the `--gtest_throw_on_failure` flag on the command-line or setting the
2103 `GTEST_THROW_ON_FAILURE` environment variable to a non-zero value.
2104 
Death tests are _not_ supported when another test framework is used to organize tests.
2106 
2107 _Availability:_ Linux, Windows, Mac; since v1.3.0.
2108 
2109 ## Distributing Test Functions to Multiple Machines ##
2110 
2111 If you have more than one machine you can use to run a test program,
2112 you might want to run the test functions in parallel and get the
2113 result faster. We call this technique _sharding_, where each machine
2114 is called a _shard_.
2115 
2116 Google Test is compatible with test sharding. To take advantage of
2117 this feature, your test runner (not part of Google Test) needs to do
2118 the following:
2119 
2120  1. Allocate a number of machines (shards) to run the tests.
2121  1. On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the total number of shards. It must be the same for all shards.
2122  1. On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index of the shard. Different shards must be assigned different indices, which must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
2123  1. Run the same test program on all shards. When Google Test sees the above two environment variables, it will select a subset of the test functions to run. Across all shards, each test function in the program will be run exactly once.
2124  1. Wait for all shards to finish, then collect and report the results.
2125 
2126 Your project may have tests that were written without Google Test and
thus don't understand this protocol. In order for your test runner to
figure out which tests support sharding, it can set the environment
variable `GTEST_SHARD_STATUS_FILE` to a non-existent file path. If a
test program supports sharding, it will create this file to
acknowledge the fact (the actual contents of the file are not
important at this time, although we may put some useful information
in it in the future); otherwise it will not create the file.
2134 
2135 Here's an example to make it clear. Suppose you have a test program
2136 `foo_test` that contains the following 5 test functions:
2137 ```
2138 TEST(A, V)
2139 TEST(A, W)
2140 TEST(B, X)
2141 TEST(B, Y)
2142 TEST(B, Z)
2143 ```
2144 and you have 3 machines at your disposal. To run the test functions in
2145 parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and
2146 set `GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively.
2147 Then you would run the same `foo_test` on each machine.
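
Concretely, the first machine might run `GTEST_TOTAL_SHARDS=3 GTEST_SHARD_INDEX=0 ./foo_test`, the second the same command with `GTEST_SHARD_INDEX=1`, and the third with `GTEST_SHARD_INDEX=2`.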
2148 
2149 Google Test reserves the right to change how the work is distributed
2150 across the shards, but here's one possible scenario:
2151 
2152  * Machine #0 runs `A.V` and `B.X`.
2153  * Machine #1 runs `A.W` and `B.Y`.
2154  * Machine #2 runs `B.Z`.
2155 
2156 _Availability:_ Linux, Windows, Mac; since version 1.3.0.
2157 
2158 # Fusing Google Test Source Files #
2159 
2160 Google Test's implementation consists of ~30 files (excluding its own
2161 tests). Sometimes you may want them to be packaged up in two files (a
2162 `.h` and a `.cc`) instead, such that you can easily copy them to a new
2163 machine and start hacking there. For this we provide an experimental
2164 Python script `fuse_gtest_files.py` in the `scripts/` directory (since release 1.3.0).
2165 Assuming you have Python 2.4 or above installed on your machine, just
2166 go to that directory and run
2167 ```
2168 python fuse_gtest_files.py OUTPUT_DIR
2169 ```
2170 
2171 and you should see an `OUTPUT_DIR` directory being created with files
2172 `gtest/gtest.h` and `gtest/gtest-all.cc` in it. These files contain
2173 everything you need to use Google Test. Just copy them to anywhere
2174 you want and you are ready to write tests. You can use the
2175 [scripts/test/Makefile](../scripts/test/Makefile)
2176 file as an example on how to compile your tests against them.
2177 
2178 # Where to Go from Here #
2179 
2180 Congratulations! You've now learned more advanced Google Test tools and are
2181 ready to tackle more complex testing tasks. If you want to dive even deeper, you
2182 can read the [Frequently-Asked Questions](FAQ.md).