Rules for Writing Automated Tests

Rule 1: Prioritize: Most apps include thousands of scenarios. Start by listing
the most important user flows: login, add to cart, checkout, and so on.
  
Rule 2: Reduce, Recycle, Reuse: Break the user’s scenarios down into simple,
single-purpose, almost atomic flows, e.g. “Create user,” “Login,” “Send email,” etc.
Create a test for each of these flows. When completed, compare this list to the user stories.
Naming convention – Defining a good naming convention is an important part of keeping
flows organized. For example, including the component name in the flow name results in a
clear flow structure (e.g. “Account.login“ and “Account.logout”).
Reuse components. Don’t copy/paste – to inexperienced developers and testers, copy/paste
can look like reuse. The problem is updating those steps when the flow changes:
even a single step that repeats in hundreds of tests is a huge maintenance hassle, as the
sketch below shows.
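
As a minimal sketch (assuming pytest and Selenium; the element IDs and URL are hypothetical), flows can be grouped per component so that every test reuses the same implementation:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By


class Account:
    """Account-related flows, named Account.login / Account.logout."""

    @staticmethod
    def login(driver, username, password):
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

    @staticmethod
    def logout(driver):
        driver.find_element(By.ID, "logout-button").click()


def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")  # hypothetical URL
        # Reused flow: when the login page changes, only Account.login
        # needs updating, not every test that logs in.
        Account.login(driver, "demo-user", "demo-pass")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```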

  
Rule 3: Create Structured, Single-Purpose Tests: Single-purpose tests verify one
thing only. When such a test fails, you know immediately which behavior broke.
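
For instance (a sketch with a hypothetical in-memory cart), each test checks exactly one behavior, so a failure points directly at what broke:

```python
BOOK_PRICE = 10


class Cart:
    """Hypothetical cart, used only to illustrate single-purpose tests."""

    def __init__(self):
        self.items, self.total = [], 0

    def add(self, item, price=BOOK_PRICE):
        self.items.append(item)
        self.total += price


def test_add_item_increases_count():
    cart = Cart()
    cart.add("book")
    assert len(cart.items) == 1  # one test, one verified behavior


def test_add_item_updates_total():
    cart = Cart()
    cart.add("book")
    assert cart.total == BOOK_PRICE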
  
Rule 4: Tests’ Initial State Should Always Be Consistent: Since automated tests always
repeat the same set of steps, they should always start from the same initial state. One of
the most common maintenance issues with automated tests is ensuring the integrity of that
initial state: since the tests depend on it, if it is not consistent, the results won’t be,
either. The initial state is usually derived from the user’s previous actions.
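
One way to enforce this (a sketch using a pytest fixture; the in-memory dict stands in for a real backend) is to rebuild the initial state before every test:

```python
import pytest


@pytest.fixture
def db():
    # Seed the same initial state for every test, regardless of what
    # previous runs or other tests did.
    state = {"users": [{"name": "demo-user"}], "orders": []}
    yield state
    state.clear()  # teardown: nothing leaks into the next test


def test_new_session_has_no_orders(db):
    assert db["orders"] == []
```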
  
Rule 5: Compose Complex Tests from Simple Steps: Complex tests should emulate real user
scenarios. Prefer composing them from simple, already-tested parts (shared steps)
instead of recording the entire scenario directly. Since the simple tests already cover
all the individual actions, the complex tests should only fail on integration issues.
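
A sketch of such composition (the step functions are hypothetical stand-ins for your atomic, already-tested flows):

```python
def login(session):
    session["user"] = "demo-user"


def add_to_cart(session, item):
    session.setdefault("cart", []).append(item)


def checkout(session):
    session["order"] = list(session["cart"])


def test_purchase_scenario():
    session = {}
    login(session)                # each step is covered by its own simple test
    add_to_cart(session, "book")
    checkout(session)
    # The composed test should only fail on integration issues.
    assert session["order"] == ["book"]
```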
  
Rule 6: Add Validations at Turnover Points: Validations are usually placed at the end
of a test to signal whether it passed or failed. It is also best to add them at points
of major change, as checkpoints that stop the test early if an action failed.
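
Building on the previous sketch, intermediate assertions act as checkpoints at each turnover point, stopping the test as soon as an action fails:

```python
def test_purchase_scenario_with_checkpoints():
    session = {}
    login(session)
    assert session.get("user")           # checkpoint: login took effect
    add_to_cart(session, "book")
    assert session["cart"] == ["book"]   # checkpoint: cart was updated
    checkout(session)
    assert session["order"] == ["book"]  # final validation
```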
  
Rule 7: No Sleep to Improve Stability: Sleeps with arbitrary durations (what I call magic
numbers) are one of the main sources of flaky tests. The reason to avoid static sleeps is
that you rarely know the load on the machine you run the test on. Only in performance
testing do you have the machine to yourself (knowing for sure all CPU and memory are yours).
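
With Selenium, for example, the fix is an explicit wait that polls for a condition instead of a hard-coded sleep (the URL and element ID are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.test/login")

# Flaky: time.sleep(5) guesses how long the machine needs under load.
# Stable: poll until the element is ready, up to a 10-second timeout.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "login-button"))
)
button.click()
driver.quit()
```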
  
Rule 8: Use a Minimum of Two Levels of Abstraction: If your test is composed mostly of
user interactions, such as clicks and set-text actions, then you’re probably doing something
wrong. If you have used low-level frameworks (e.g. Selenium), you might have heard of the
PageObject design pattern, which recommends separating the business logic (e.g. login) from
the low-level implementation (e.g. set username, set password, and click the login button).
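
A minimal PageObject sketch (assuming a pytest fixture named driver that yields a Selenium WebDriver; selectors and URL are hypothetical):

```python
from selenium.webdriver.common.by import By


class LoginPage:
    """Low level: knows the selectors and mechanics of the login page."""

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.test/login")
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()
        return self


def test_login(driver):
    # High level: the test expresses business intent only.
    LoginPage(driver).open().login("demo-user", "demo-pass")
    assert "Dashboard" in driver.title
```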
  
  
Rule 9: Reduce the Occurrence of Conditions: A test should have as few conditions
(if-statements) as possible. Tests with many conditions are usually unpredictable (you don’t
know the exact state you’re in) or complex (you might even see loops, too). Try to simplify
your tests, as sketched below, by:

Starting your test at a predefined state;
Disabling random popups; and
Disabling random A/B testing, choosing one specific flow.
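
For example (a sketch reusing the driver fixture assumed above; the cookie names and URLs are hypothetical), you can pin the environment to one deterministic variant up front rather than branching inside the test:

```python
def test_checkout_without_branches(driver):
    driver.get("https://example.test")
    # Pin the flow before the test starts instead of handling variants.
    driver.add_cookie({"name": "ab_variant", "value": "control"})
    driver.add_cookie({"name": "hide_promos", "value": "true"})
    driver.get("https://example.test/checkout")
    # No if-statements needed: the app is always in the same state.
    assert "Checkout" in driver.title
```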
  
Rule 10: Write Independent and Isolated Tests: An important methodology of test
authoring is creating self-contained, independent flows. This allows tests to run with
high parallelism, which is crucial for scaling the test suites.
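
A sketch of such isolation (FakeApi is a hypothetical stand-in for a real backend client): each test creates its own uniquely named data, so tests never collide when run in parallel (e.g. with pytest-xdist):

```python
import uuid


class FakeApi:
    """Hypothetical backend client; a real one would call your API."""

    def __init__(self):
        self.users = {}

    def create_user(self, name):
        self.users[name] = {"name": name}
        return self.users[name]


def test_rename_user():
    api = FakeApi()  # the test builds its own world; nothing is shared
    name = f"user-{uuid.uuid4().hex[:8]}"  # unique data avoids collisions
    user = api.create_user(name)
    user["name"] = "renamed"
    assert api.users[name]["name"] == "renamed"
```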