Test automation has become a near-standard in today’s IT projects. Yet despite its many benefits, there are plenty of examples of bold automation initiatives ending in failure, and it happens more often than you might think.
Why does this happen? Below I’ve collected three of the most common problems I’ve seen in various automation projects.
Lack of a Clear Automation Goal
In many test automation projects, you’ll often come across the “let’s automate everything” mindset — where the goal is to cover as much of the application as possible with tests.
In my opinion, that’s a mistake. Personally, I prefer an approach where UI automation is focused only on critical paths and the highest-priority scenarios. The rest can be handled by other means — like API tests, for example.
Why this approach? Mainly because UI tests are usually the most fragile and the most expensive to maintain. Even minor interface changes, including purely cosmetic ones, can require updating multiple tests. When UI coverage is too broad, you often end up spending more time fixing broken tests than actually running them.
The critical paths of the application should be monitored through UI automation, as their failure poses the highest risk. Everything else is better tested where it’s cheaper, faster, and more stable — at the API level.
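To make the contrast concrete, here is a minimal sketch of what an API-level test can look like. Everything in it is an assumption for illustration: the `/api/orders` endpoint, the `StubAPIHandler` stand-in backend, and the expected response shape are all hypothetical. The point is that such a test exercises business logic directly over HTTP, with no browser, no rendering, and no selectors to break.

```python
import json
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class StubAPIHandler(BaseHTTPRequestHandler):
    """Stand-in for a real backend; serves a hypothetical /api/orders endpoint."""

    def do_GET(self):
        if self.path == "/api/orders":
            body = json.dumps([{"id": 1, "status": "paid"}]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging so test output stays readable.
        pass


class OrdersApiTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Port 0 asks the OS for a free ephemeral port.
        cls.server = HTTPServer(("127.0.0.1", 0), StubAPIHandler)
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()
        cls.base_url = f"http://127.0.0.1:{cls.server.server_port}"

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_orders_endpoint_returns_paid_order(self):
        with urlopen(f"{self.base_url}/api/orders") as resp:
            self.assertEqual(resp.status, 200)
            orders = json.loads(resp.read())
        self.assertEqual(orders[0]["status"], "paid")
```

In a real project the test would hit a deployed service rather than an in-process stub, but the structure, a plain HTTP call plus assertions on the response, stays this simple, which is exactly why API tests are cheaper to keep green than their UI counterparts.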
Unstable Tests
This is a big one. It’s surprisingly common to see automated tests written in a way that makes their outcome unpredictable. Sometimes they pass with no issues — and other times, without any code changes, they fail. Or worse: everything runs fine locally, but the test randomly fails on CI for no clear reason. These are known as flaky tests — and they’re one of the biggest threats to successful automation.
Flakiness is usually caused by things like missing waits for elements to load, tests depending on each other, or an unstable environment. Another common culprit: weak selectors that break after even the smallest UI changes.
The biggest problem with flaky tests is that the team stops trusting them. If a test passes one day and fails the next, its results quickly get ignored — and eventually, the same happens to the entire automation suite. Instead of supporting quality, it starts doing real damage.
So how do you write stable tests? We’ve covered this in a separate article, but in short, you should:
- ensure your tests are independent and properly isolated
- use dynamic waits for elements (explicit waits in Selenium, auto-waits in Playwright)
- use selectors that are as resilient to UI changes as possible
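The second point is worth unpacking, since fixed `time.sleep()` calls are the single most common source of flakiness I see. Below is a minimal sketch of the polling idea behind Selenium's explicit waits and Playwright's auto-waiting, written framework-free so it runs anywhere; the `wait_until` and `WaitTimeout` names are my own, not part of either library.

```python
import time


class WaitTimeout(Exception):
    """Raised when the condition never became truthy within the timeout."""


def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Instead of a fixed sleep, we re-check application state at short
    intervals and proceed the moment it is ready: the test never waits
    longer than necessary, and never proceeds too early.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise WaitTimeout(f"condition not met within {timeout}s")
        time.sleep(interval)
```

With Selenium you would pass something like `lambda: driver.find_elements(By.CSS_SELECTOR, "[data-testid='cart']")` as the condition (the `data-testid` attribute here is a hypothetical example of a change-resilient selector); in practice, prefer the framework's built-in `WebDriverWait` or Playwright's auto-waiting, which implement the same loop with richer error reporting.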
Lack of Meaningful Reporting
Based on my experience, if a “test report” is just an unreadable, deeply nested JSON or a dump of raw logs, most of the team won’t even look at it — and the test failures will simply be ignored.
Good test automation isn’t just about the test code — it’s also about having an effective reporting mechanism. A clear, readable test report (for example, one generated with the Allure framework), including logs, screenshots, and a step-by-step execution trace, helps quickly identify the root cause of failures and significantly reduces the team’s response time.
Integrating test results with Slack also worked well in our projects: abbreviated results were automatically posted to a dedicated channel, letting the team react quickly to the failures found.