Top Automation Anti-Patterns to Avoid (With Real Examples)

Automation frameworks are becoming more powerful, scalable, and AI-assisted. Yet even today, the same types of failures keep appearing across teams: flaky tests, slow pipelines, unstable locators, and brittle architecture.

These issues rarely come from the tools themselves. Usually, they come from specific anti-patterns that repeat from project to project.

Automation is not just about writing tests; it's about writing good tests, designing test architecture, thinking about maintainability, and making long-term decisions. Unfortunately, some QAs still fall into dangerous habits that slowly destroy the reliability of their framework.

Let’s go through the most damaging automation anti-patterns, understand why they happen, what they cause, and how to fix them.


Creating Tests That Try to Cover Everything at Once

We have all seen (or written) this at least once in our careers: one massive test script that tries to simulate the entire user journey in one go.

It looks impressive at first glance, and it feels like we are covering a lot. But in reality, it’s one of the biggest automation traps.

This kind of test usually includes:

  • logging in
  • navigating through multiple pages
  • creating data
  • editing that data
  • deleting the data
  • validating multiple UI sections
  • logging out

It becomes a miniature movie with 50 steps — and if step number 6 fails, the entire scenario falls apart.

These tests are:

  • Extremely brittle — one changed button breaks the whole test suite.
  • Hard to debug — you spend more time investigating than fixing.
  • Slow — they force our CI pipeline to crawl.
  • Misleading — they give a false sense of coverage but miss details.

A real example

it("should test the full user journey", async () => {
  await loginPage.login();
  await users.createUser();
  await dashboard.checkWidgets();
  await profile.updateInfo();
  await users.deleteUser();
});

This test does everything, and that’s the problem.

To fix this problem, we need to do a few things:

  • Each test should focus on one specific behavior.
  • The setup/cleanup should happen via API or DB, not UI.
  • Don’t test everything in one scenario—test each part independently.

This will reduce complexity, speed up execution, and make the automation more stable.
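
As a contrast, here is a minimal sketch of a focused test, assuming a hypothetical api helper for data setup and a profilePage page object (neither exists in the snippet above):

// One behavior per test, data prepared via a hypothetical API helper
describe("profile", () => {
  let user;

  before(async () => {
    // Arrange through the API, not the UI
    user = await api.createUser({ role: "member" });
  });

  it("should update the display name", async () => {
    await profilePage.open(user.id);
    await profilePage.updateDisplayName("New Name");
    await expect(profilePage.displayName).toHaveText("New Name");
  });

  after(async () => {
    // Clean up through the API as well
    await api.deleteUser(user.id);
  });
});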

Picking the First Locator That Works

One of the biggest sources of UI flakiness is choosing the wrong locators.

Some QAs inspect the web page, grab the first XPath or CSS selector that appears, paste it into the code, and move on. This might initially work, but the problem is that the first locator that “works” is often the most fragile one.

Typical bad practices look like this:

  • Using full XPaths with 15 levels
  • Using dynamically generated IDs
  • Using giant CSS selectors tied to styling
  • Using text-based locators that change every sprint
  • Using index-based elements like (//button)[3]

Example of a terrible locator

await $('//*[@id="app"]/div/div[2]/div[1]/div[3]/span[2]')

If a designer moves one div, the entire regression suite collapses.

Choosing the first locator that works is a problem because:

  • Tests become flaky.
  • Every small UI change triggers a cascade of failures.
  • Our automation requires constant maintenance.
  • New team members struggle to understand what’s being tested.

To fix this problem, we need to use stable, readable, long-term locators:

  • data-testid
  • accessibility IDs
  • short, meaningful IDs
  • predictable CSS classes

And always collaborate with developers on improving testability, because good automation starts with a testable product.
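
For example, a short sketch of what stable locators can look like (the data-testid values are hypothetical and have to exist in the application under test):

// Stable, intention-revealing locators instead of long XPaths
await $('[data-testid="login-submit"]').click();
await $('[data-testid="profile-name"]').setValue("Jane Doe");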

The “Copy-Paste Framework” — When The Framework Has No Architecture

This happens more often than people admit. A QA is in a rush to deliver automation, so they create a framework in 1–2 days, push it to GitHub, and start writing tests immediately.

At first, this seems “fast and efficient,” but after a few months, it becomes a nightmare.

The result of this approach is usually:

  • duplicated code across tests
  • no config file
  • environment URLs hardcoded in scripts
  • no folder structure
  • same selectors and functions repeated 20 times
  • no reusable components
  • no abstraction layer

Real-life example:

await browser.url("https://staging.myapp.com");
await $('#email').setValue("user");
await $('#pass').setValue("123");
await $('#submit').click();
// (this is copy-pasted into 40 different tests)

One day, the login button changes… and suddenly 40 tests fail. Now you need to make the same fix in 40 places. What a nightmare to maintain.

This approach is dangerous because:

  • Maintenance slows to a crawl.
  • Small UI changes require huge refactoring.
  • New QAs can’t onboard quickly.
  • Tests become inconsistent in style, logic, and stability.

To avoid this nightmare, we need to lay a strong foundation that will save us months of work later. That means investing in a proper automation architecture:

  • A clear folder structure
  • Page Object Model or Screenplay
  • Centralized waits
  • Global config and environment variables
  • Reusable helpers
  • Logging, reporting, and error handling
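
As a starting point, here is a minimal Page Object sketch for the login flow above (the class, file name, and selectors are illustrative):

// login.page.js: one place that knows how to log in
class LoginPage {
  get emailInput() { return $('[data-testid="email"]'); }
  get passwordInput() { return $('[data-testid="password"]'); }
  get submitButton() { return $('[data-testid="submit"]'); }

  async open() {
    // The base URL comes from the config, not from a hardcoded string
    await browser.url('/login');
  }

  async login(email, password) {
    await this.emailInput.setValue(email);
    await this.passwordInput.setValue(password);
    await this.submitButton.click();
  }
}

module.exports = new LoginPage();

With this in place, the 40 tests simply call loginPage.login(), and a changed selector is fixed in exactly one place.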

The “Wait Until It Works” — Overusing Sleeps and Pauses

Believe it or not, this is one of the oldest anti-patterns that refuses to die.

Is a test failing?
Let’s add 5 seconds.
Still failing?
Let’s add 10 seconds.
Still failing?
Let’s add 20 seconds for safety.

Classic example

await browser.pause(15000);

Suddenly, the entire test suite takes 30 minutes longer every day.

This is a very bad practice because it:

  • Drastically increases execution time
  • Makes tests pass by luck rather than by design
  • Hides real performance issues in the application
  • Makes debugging much harder
  • Creates false confidence in test stability

We should always wait for a specific condition, not for an arbitrary amount of time. To fix this anti-pattern, we need to use:

  • explicit waits
  • condition-based waits
  • element state checks
  • API polling instead of UI waits
  • visual waits from modern AI frameworks
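
In WebdriverIO, for example, the fixed pause above can usually be replaced with condition-based waits like these (the selectors and expected status text are hypothetical):

// Wait for the element to be ready, not for time to pass
const saveButton = await $('[data-testid="save"]');
await saveButton.waitForClickable({ timeout: 5000 });
await saveButton.click();

// Wait for an application state
await browser.waitUntil(
  async () => (await $('[data-testid="status"]').getText()) === 'Saved',
  { timeout: 10000, timeoutMsg: 'Expected status to become "Saved"' }
);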

The “Automate Everything” Mentality — Not Every Test Should Be Automated

100% automation coverage should definitely not be our goal. Some scenarios change too often, are not stable enough, are too complex to automate, don't bring real business value, or require human judgment. These scenarios should not be part of our automation suite.

Here are some more examples of tests you should not automate:

  • UI tests for animations
  • Tests for rapidly changing UI screens
  • Very deep exploratory scenarios
  • One-time flows for rare features
  • Tests with inconsistent external dependencies

If you automate these kinds of scenarios, you will:

  • Waste time automating tests that constantly break
  • End up maintaining tests instead of improving quality
  • Get false positives and false negatives
  • Spend days fixing tests that don’t matter

So before automating a scenario, we should ask ourselves four simple questions:

  • Is the test stable?
  • Will this test still be relevant in 3 months?
  • Does it save manual effort?
  • Is it critical for the business?

If the answer is no → keep it manual.

Preparing Data Through UI Instead of API

Some QA teams prepare all test data using the UI. This includes creating users, uploading files, updating settings, setting up environment state, etc. Every one of those steps makes our UI tests slower and more fragile.

This is not a good practice because the UI is the slowest and most fragile layer. By preparing data through the UI, we introduce unnecessary steps, and each one multiplies the flakiness of our tests. The API or database is almost always easier and faster.

To avoid this issue, it is a good practice to:

  • Create test data via API
  • Clean up via API or database
  • Use UI only for the actual behavior validation
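
A rough sketch of what that can look like, assuming Node 18+ with built-in fetch and a hypothetical REST endpoint for users:

let testUser;

before(async () => {
  // Create the data directly via the API, skipping the UI entirely
  const response = await fetch(`${process.env.API_BASE_URL}/users`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Automation User', role: 'member' }),
  });
  testUser = await response.json();
});

after(async () => {
  // Clean up the same way the data was created
  await fetch(`${process.env.API_BASE_URL}/users/${testUser.id}`, { method: 'DELETE' });
});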

Ignoring Flaky Tests

We have already discussed flaky tests. They usually exist because of unstable locators, bad waits, inconsistent environments, race conditions in UI, broken test data setups, or third-party dependencies.

Whenever we notice that we have flaky tests, we should not ignore them. We should treat flakiness like a P1 bug.

Once we have noticed a flaky test, we should:

  • Track flakiness per test
  • Use dashboards or tags to monitor unstable tests
  • Work on root causes
  • Avoid relying on retries to “hide” the issue
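
One lightweight way to start tracking flakiness per test is a small hook in the runner configuration, for example in WebdriverIO (the log file name is an assumption):

// wdio.conf.js: append every failure to a log that a dashboard or script can aggregate
const fs = require('fs');

exports.config = {
  // ...the rest of the configuration...
  afterTest: async function (test, context, { passed, duration }) {
    if (!passed) {
      fs.appendFileSync(
        'flaky-report.log',
        `${new Date().toISOString()} FAIL ${test.title} (${duration}ms)\n`
      );
    }
  },
};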

Automation without stability is not automation — it’s chaos.

Testing on Only One Browser or Device

Running all our tests only on Chrome, a single Android device, or a single iOS simulator is not enough, because users are everywhere: on Safari, on different iPhones, on foldable Android devices, on high-resolution tablets, in dark mode, and so on.

If we only cover one browser and one device, bugs can appear on devices that automation never touched, browser-specific bugs can slip into production, layout issues can show up on real hardware, and our automation gives us a false sense of safety.

To avoid this issue, we should always create a test matrix based on the requirements of the product that we are testing, for example:

  • Chrome + Safari (minimum)
  • One real Android device
  • One real iPhone
  • Light mode + dark mode
  • At least 2–3 screen sizes

This catches UI issues early and increases confidence.
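
In WebdriverIO, the browser part of such a matrix can be declared directly in the capabilities section of the config; real phones and tablets are usually added on top through Appium or a device cloud:

// wdio.conf.js: run the same suite against several browsers
exports.config = {
  // ...the rest of the configuration...
  capabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
    { browserName: 'safari' },
  ],
};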

Relying on CI Without Running Anything Locally

We should not use the CI pipeline as our personal debugging environment, and we should not push code directly to the pipeline and hope for the best.

This is not a good practice because it can block the entire pipeline, it can break the main branch, it can slow down the whole team, and it will definitely make debugging harder because CI takes longer.

To avoid these issues, we should always:

  • Verify that new or changed tests pass locally before pushing them to the pipeline
  • Run a small subset of tests locally
  • Leave the full regression for CI, but test the basics yourself
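
One simple way to make this a habit is a small smoke suite that anyone can run locally before pushing, for example in WebdriverIO (the spec paths are assumptions):

// wdio.conf.js: a quick subset for local runs
exports.config = {
  // ...the rest of the configuration...
  suites: {
    smoke: ['./test/specs/login.spec.js', './test/specs/smoke/*.spec.js'],
  },
};

// Locally, before pushing: npx wdio run wdio.conf.js --suite smoke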

No Logging, No Screenshots, No Traces

Our tests should provide rich information about the result, not just a pass/fail status. Some frameworks don't save screenshots, capture console logs, record videos, collect network traces, or generate clear reports by default, which makes debugging harder and more complicated.

We should always explore what our framework can capture out of the box, and add plugins, HTML reporters, or third-party tools where needed to generate this information for our test results. It is valuable not only for us as QAs, but also for the whole team and other stakeholders.
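
As a starting point, many runners let us capture a screenshot on every failure with a small hook, for example in WebdriverIO (the output folder is an assumption):

// wdio.conf.js: save a screenshot whenever a test fails
const fs = require('fs');

exports.config = {
  // ...the rest of the configuration...
  afterTest: async function (test, context, { passed }) {
    if (!passed) {
      fs.mkdirSync('./screenshots', { recursive: true });
      const name = test.title.replace(/\s+/g, '_');
      await browser.saveScreenshot(`./screenshots/${name}-${Date.now()}.png`);
    }
  },
};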


As you can see from these examples, the biggest automation failures are most often caused by rushed decisions, poor habits, and a lack of structure.

By avoiding these anti-patterns, our automation suite becomes more stable, more maintainable, faster, more reliable, and more trusted by our team.

And remember, automation requires planning and discipline, not just tools.
