What is test-driven development?
Simply put, TDD is a development methodology whereby your unit tests drive your implementation code. In other words, where you might typically write your code, integrate it into your application and then write unit tests for what you've written, the process is flipped: first you write your test, then you write the code that passes that test.
The Fail - Implement - Pass cycle
Write a failing test
After you figure out what your function needs to do, you'll write a test to call that function with some input, and assert that you receive the output you need. Obviously, at this point your function hasn't been implemented and the test will fail.
Next, you write the function implementation so that it produces the expected output, thereby passing the test. You should then run all your tests to ensure your new implementation hasn't had side effects that cause other code to fail.
Update test so it fails
Now that you've got one aspect of the function doing what it needs to, edit the test to assert some other aspect you require. This means the test will fail again.
Now you continue writing the implementation until the test passes once more.
You continue this cycle until the function you're writing satisfies all requirements and all your tests are passing.
How do I test what doesn’t exist yet?!
Critics of TDD might say that you can't accurately anticipate what you need to write, and all its edge cases, before you actually try to build it; or that the code you write will end up behaving differently from what you anticipated, so the test you wrote in advance would no longer apply.
That's a flawed perspective on TDD.
You shouldn't focus on what the test needs to catch or on edge-case scenarios, but rather on what needs to happen for the requirements to be met.
Whether our initial expectations turn out to be correct is irrelevant. When we write implementation code, we discover things we hadn't initially thought of that our code needs, and we adjust accordingly; that process happens in our heads anyway. TDD simply has us write that thinking down, as we think it, in a way that can be programmatically confirmed. The tests should evolve as your implementation evolves.
A general rule of unit testing is not to test implementation details, and this idea is arguably even more important in TDD.
When formulating what to test with TDD in the first place, you must focus on what the function does, not how it will do it, so you assert the outcomes only. This means you will write tests that assert the code has the intended outcomes and/or effects; how it achieves them is irrelevant to the test.
The goal is to be able to completely refactor/reimplement the code in the future without changing your test at all. As long as the same tests pass, you're confident your new implementation delivers the same functionality. This allows you to catch regression issues before they ever make it to production.
However, in some rare cases, you might find testing implementation details is actually the better way to go. For example, when dealing with DOM APIs: if your function needs to update a DOM node, it would be inefficient to set up and manipulate actual DOM nodes or fragments when you could simply assert that the DOM API method you need has been called with the expected arguments. It's also not necessary to assert that the change has actually taken place, because then you are just testing the DOM API itself, and that's not something you maintain. The same applies to third-party code; don't waste your time testing it, focus on the parts of the code you are maintaining.
Focus less on edge cases
Less does not mean never. As you practice the mindset of testing what needs to happen, you might find that the edge cases start to become apparent to you as you continue in the refactoring cycle. If this happens, that's your cue to write a test for it.
Uncaught and unanticipated edge cases will always occur, but this should not discourage you from applying TDD. They're called edge cases for a reason: they seldom occur, which means even if they reach production, they will most likely occur rarely and affect a small volume of users. This is where you add a test for the edge case, apply the TDD cycle and deploy the fix. If the edge case does happen to occur in high volumes, that's okay too: because you're using TDD, you'll be able to focus on fixing that specific issue, confident that your existing tests will prevent you from implementing something that breaks any other part of your code.
We want to fail fast and fix forward, and TDD allows you to do that efficiently.
Test positive assertions
Your tests can assert either that something happens or that something doesn't happen: that a function returns a desired value, or that it specifically does not return an undesired one. The former approach means taking a step-by-step approach to ensure the code you write fulfils the requirements you've been given; this is always a known, finite set. The latter approach, however, is an unknown, potentially infinite (not true infinity 😅) set. This is where the edge cases fall, and this approach might even be the source of the idea that you can never truly know what the code needs to do in advance.
TDD helps improve your development process by helping you to:
- Write lean code: focusing on one requirement at a time reduces the addition of superfluous code (muri) and the implementation of unnecessary features (muda)
- Minimise unknown/unanticipated behaviour or side effects
- End up with code that is documented via the tests
- Catch regression issues before deployment
- Ensure test coverage grows in parallel with the code: with a test-after approach, in practice, tests are often sacrificed in favour of deploying features fast, leaving much of the codebase without the confidence of unit tests for long periods, or worse, for its lifetime
- Gain a lot more confidence in a continuous delivery pipeline
Some drawbacks of TDD include:
- It will slow down your process… initially: your speed of development will dip a little as you acclimate, but you may find an overall increase in your speed to production, as you'll be much more confident in what you're shipping and manual QA becomes less involved
- Mocking is a lengthy and painful, but often necessary, process
- You'll miss some edge cases
It's just a tool
TDD is just a tool. There's no need to love or hate it. Some of the things we want from the software we write are reliability, predictability and efficiency. TDD is one approach that provides exactly that when applied with the right mindset.
The approach does not mean you won't introduce any bugs at all. Rather, it helps reduce their number, and ensures that once they're discovered, they're easier to handle, and that implementation changes will be much less likely to introduce new bugs or regressions.