Why create automated tests
Automated tests work much like manual tests (e.g., a quick "dev-test"). But because they are automated, they become a powerful tool that lets us develop faster, enjoy a more effortless development flow, and refactor existing code without fear. They also help us produce more readable, maintainable artifacts.
Faster development
At first glance, it may seem like automated tests would increase the time it takes to implement features. After all, you end up writing more code, since automated tests are code too. We save time, though, because our write-test feedback loop becomes much shorter. We no longer need to get our apps into a usable state before we can test functionality.
For example, let's say we write a service that performs some computation on user input and returns the result. To test this manually, we would need to write out the entire service, then create a controller with endpoints so we can send input to and receive output from the service, and finally write out the HTTP requests to interact with the controller.
We will likely run into issues with controller concerns like validation or marshalling/unmarshalling, which are distracting to correct when all we care about at the moment is the service. We also have to run the entire application, which can be time-consuming depending on the tech stack.
With automated tests, we get to focus on one piece of the application at a time; in our example, that is just the service. We write a few automated tests covering each happy and unhappy path we can think of, then run them all. If a test fails, we know the cause is not the controller or the HTTP request syntax. With this smaller scope to worry about, we can find and correct the issue faster. Less scope also means less code to write before testing, so we find out whether our functionality works as soon as we have written it.
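A minimal sketch of the idea: a small service whose computation we test directly, with no controller, no HTTP request, and no running application. The class and method names here are illustrative, not from a real codebase.

```java
// Hypothetical service: all logic lives in a plain method we can call directly.
public class TipCalculator {

    // Computes a tip for a bill; the unhappy path rejects invalid input.
    public static double tip(double bill, double ratePercent) {
        if (bill < 0 || ratePercent < 0) {
            throw new IllegalArgumentException("bill and rate must be non-negative");
        }
        return bill * ratePercent / 100.0;
    }

    public static void main(String[] args) {
        // Happy path: an ordinary bill.
        System.out.println(tip(100.0, 15.0)); // prints 15.0
        // Edge case: a zero bill yields a zero tip.
        System.out.println(tip(0.0, 15.0));  // prints 0.0
    }
}
```

Each happy or unhappy path becomes one small test against `tip` itself; nothing else has to be built or booted before we learn whether the computation is correct.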
We also gain speed because we don't have to set everything up again each time we retest. We don't need to boot up the application or switch back and forth to a client to send requests. Furthermore, manual tests often require getting the application into a particular state before testing; with automated tests, the developer only needs to be concerned with the input and output of a specific method.
Effortless development flow
Manual testing typically involves setting up many things before we can reach the part we want to test: booting the application, switching to a client to send requests, and getting the application into a particular state.
Automated tests deal with a smaller scope, so the number of unique paths through a single method is far smaller than the number through an overall workflow. For example, suppose method A has three possible flows and method B has four. If workflow F uses both methods, it inherits 12 (A * B) unique flows from A and B. If workflow F also depends on a third method, C, with three possible flows, the inherited flows become A * B * C = 36. The developer must perform thirty-six manual tests to ensure all possible flows through workflow F are covered. And if an issue is found and corrected, the developer would be prudent to repeat all 36 tests to ensure the correction didn't break anything else.
That is a lot of manual testing. If we instead had automated tests covering the flows of each method, we would convert our flow-total equation from (A * B * C) to (A + B + C). The tests would target the individual methods instead of the whole workflow, amounting to just 10 automated tests. And because they are automated, rerunning them is as simple as a hotkey press or a mouse click.
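The arithmetic above can be sketched directly; the counts (3, 4, 3) are the ones from the example, and the helper names are illustrative.

```java
// Flow counting: testing through a workflow multiplies per-method flow counts,
// while testing each method directly only adds them.
public class FlowCounts {

    // Every combination of per-method flows must be exercised through the workflow.
    public static int workflowFlows(int... methodFlows) {
        int product = 1;
        for (int f : methodFlows) product *= f;
        return product;
    }

    // One direct test per flow, per method.
    public static int directTests(int... methodFlows) {
        int sum = 0;
        for (int f : methodFlows) sum += f;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(workflowFlows(3, 4, 3)); // prints 36 (manual tests through workflow F)
        System.out.println(directTests(3, 4, 3));   // prints 10 (automated tests on A, B, C)
    }
}
```

The gap widens quickly: add a fourth method with three flows and the workflow total jumps to 108 while the direct-test total only rises to 13.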
Better code quality
When writing automated tests, we may sometimes find ourselves writing 7+ tests for a single method. Too many tests for a single method is usually an indicator that the method is doing too much and should be broken up into smaller methods. If we refactor the code so that each method needs only two or three automated tests, we may discover that our code naturally shapes itself into units of clear responsibility. Services now solve one problem instead of ten, reusability becomes more apparent, and refactoring objectives become clear-cut.
The number of tests may not be the only indicator we need to refactor to make our code more testable. It may just be that the code is tricky to test or even not testable, period. Regardless of the reason, thinking from the mindset of trying to make things testable improves the quality of our code.
Fearless refactoring
Automated tests don't disappear after we run them. We can rerun them at any time, which makes them perfect candidates for regression testing, where we need to know whether a change causes something else to break. When a test covers each traversable path of some functionality, you can take comfort in knowing the functionality still works as intended after a change by merely rerunning your test suite.
Integration Tests vs. Unit Tests
The terms integration test and unit test are commonly used interchangeably, but they do not mean the same thing. The distinction matters when considering the portability of a test to different environments, and it also has performance implications for your development speed.
Unit tests are tiny tests. Typically, a single method has several unit tests, each covering a possible use-case of that method. Unit tests do not work if the tested method depends on external functionality not present in the project. For example, repository methods in Spring are not unit-testable because a database, and a connection to it, are required to execute the queries associated with the repository method.
Integration tests can support all testing use-cases, unlike unit tests. But there is a cost of performance and complexity associated with integration tests. With an integration test, you spin up the framework context, which requires you to define a run configuration. In addition to having a run configuration to maintain, we now can't port our tests to a different environment without changing the existing run configuration or maintaining multiple run configurations.
Another porting concern is whether your tests depend on pre-existing data in the target environment. Tests that rely on pre-existing data are chained to the environment where that data lives. We could extract the data into properties to assist in porting, but this is still tedious. A better way may be to create helper functions that generate data on demand. Finally, there is the option to use the integration test to develop the initial functionality but exclude it from regression testing after a code handover to a different environment. Exclusion from the regression suite is an attractive option when data structures are too large or complex to make on-demand data generation feasible. We lose the advantage of long-term regression testing, but we keep the rest of the benefits of automated testing.
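A minimal sketch of such a data-generating helper, assuming a hypothetical `Customer` entity: instead of relying on rows that already exist in some environment, each test asks the helper for fresh, unique data.

```java
import java.util.concurrent.atomic.AtomicLong;

// On-demand test data: each call yields a new, unique entity, so tests
// never collide with each other or depend on a particular environment.
public class TestData {
    private static final AtomicLong SEQ = new AtomicLong();

    // Minimal stand-in for a domain entity; shape and fields are hypothetical.
    public static class Customer {
        public final long id;
        public final String email;
        Customer(long id, String email) { this.id = id; this.email = email; }
    }

    // Generates a customer whose id and email are unique per call.
    public static Customer newCustomer() {
        long id = SEQ.incrementAndGet();
        return new Customer(id, "customer-" + id + "@example.test");
    }

    public static void main(String[] args) {
        System.out.println(newCustomer().email);
    }
}
```

A test that needs a customer calls `TestData.newCustomer()` at the top and inserts it itself, so the same test runs unchanged in any environment with an empty database.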
Integration tests also run orders of magnitude slower than unit tests. Unit tests typically involve compiling only a few project files (the ones containing the tested functionality), whereas integration tests, at a minimum, require us to compile the entire project and spin up the framework.
Disclaimer
Automated testing does not replace all manual testing. It's up to the developer's discretion to decide what might still need a manual test post-implementation. For example, with automated testing, we commonly mock the data used in the test. Since we are using test data, we are making assumptions about how the data is supposed to be structured. Because those assumptions may be wrong, it makes sense to follow up the automated test with a one-time manual test against actual data to ensure our automated test is grounded in reality. This scenario is common when your application interacts with another service to which you may not have optimal access (e.g., a third-party API).
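A sketch of that situation, under assumed names: the real client would call a third-party rate API we may not be able to reach from a test, so we substitute a stub returning the structure and values we *assume* the API uses. Those assumptions are exactly why a one-time manual test against real data is worth running afterwards.

```java
// Mocking an external dependency with a hand-rolled stub.
public class ExchangeExample {

    // Hypothetical third-party dependency.
    public interface RateClient {
        double usdRate(String currency);
    }

    // Production logic under test: convert an amount to USD.
    public static double toUsd(RateClient client, String currency, double amount) {
        return amount * client.usdRate(currency);
    }

    public static void main(String[] args) {
        // Stub with assumed rates; the real API's data may differ in shape and value.
        RateClient stub = currency -> "EUR".equals(currency) ? 1.25 : 1.0;
        System.out.println(toUsd(stub, "EUR", 200.0)); // prints 250.0
    }
}
```

The automated test proves `toUsd` multiplies correctly; only the follow-up manual call against the live API can confirm the stubbed structure matches reality.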