Automated Anarchy: The Rising Tide of Bad Bots on the Internet

Humans will soon be a minority on the internet. What does it mean for our democracy?

By Chris Kremidas-Courtney, Senior Advisor, Defend Democracy

According to the just-released 2024 Imperva Bad Bot Report, automated bots were responsible for 49.6% of all internet traffic in 2023, and over half of that bot traffic was classified as “bad bots.” Humans accounted for the remaining 50.4% of internet traffic, down almost two percentage points from 2022.

What are Bots? A bot is a software application that runs repetitive automated tasks. These tasks range from indexing websites for search engines and monitoring website performance to interacting with customers via chatbots. Another example is a social bot that interacts with people on social media.
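To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of repetitive task a benign bot performs: extracting hyperlinks from a page, as a search-engine indexer would. The class name and the sample page are invented for illustration, and a real crawler would fetch pages over the network rather than parse a hard-coded string.

```python
from html.parser import HTMLParser

class LinkBot(HTMLParser):
    """A minimal 'good bot' step: collect the hyperlinks on a page,
    the way a search-engine indexer or web crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Record the href of every anchor tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A real crawler would fetch this HTML over the network;
# here we parse a hard-coded sample page instead.
sample_page = '<a href="/about">About</a> <a href="/news">News</a>'
bot = LinkBot()
bot.feed(sample_page)
print(bot.links)  # ['/about', '/news']
```

The same loop, pointed at login forms or price listings instead of links, is what turns an automated task from indexing into scraping or credential stuffing.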

Imperva defines bad bots as software applications that perform automated tasks with malicious intent. This includes actions such as denial of service attacks, content scraping, account takeovers, credit card fraud, fake account creation, spreading disinformation, and comment spam.

Internet traffic from bad bots grew in 2023, rising to 32% of all internet traffic, up two percentage points from 2022. The report indicates bad bots are costing industries billions of euros through attacks on websites, application programming interfaces (APIs) and applications.

Mixed findings in Europe. Looking closer at individual countries, some EU members have even more bot traffic. Ireland posted the most concerning numbers, with bad bots generating 71.4% of all its internet traffic, against only 26.6% human traffic and 2% good bots. Not far behind was Germany, where bad bots accounted for 67.5% of internet traffic, humans only 25.5%, and good bots 7%.

In other words, bad bots outnumber humans by more than 2:1 in both Ireland’s and Germany’s internet traffic. The situation in France is much better, with 75% of internet traffic coming from humans and only 18% from bad bots. The UK was not far behind, with 69% of traffic from humans and 24.8% from bad bots.

In the United States, humans only accounted for 38.4% of internet traffic while bad bots took up 35% of internet traffic and good bots accounted for 26.2%. Australia fared much better with rates of internet traffic measured at 63.7% human, 30.2% bad bots, and 6.1% good bots.

China was also above the global average for human internet traffic at 56.2% but worse than the global average for bad bots at 39.7% with 4.1% good bots.

Artificial intelligence. The report attributes the rise in bot traffic to artificial intelligence: the recent adoption of generative AI and large language models pushed the share of simple bots to 39.6% in 2023, up five percentage points from 2022. These simple bots range from ones designed to scrape data from websites to automated web crawlers.

Other findings include that 11% of all login attempts on the internet were attributed to account takeover attacks, with the largest number of attacks targeting the banking industry.
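Account takeover attacks typically involve bots replaying stolen or guessed credentials at scale, which is why bursts of failed logins from one source are a classic warning sign. The sketch below is a crude, illustrative heuristic only; the function name, threshold, and log format are all hypothetical, and real defenses combine many signals (device fingerprints, behavioral analysis, CAPTCHAs).

```python
from collections import Counter

def flag_suspicious_ips(login_attempts, threshold=5):
    """Flag source IPs whose failed-login count reaches the threshold --
    a crude signal of credential-stuffing / account-takeover bots.
    Illustrative heuristic only, not a production defense."""
    failures = Counter(ip for ip, succeeded in login_attempts if not succeeded)
    return {ip for ip, n in failures.items() if n >= threshold}

# Hypothetical login log: (source IP, did the login succeed?)
log = [("10.0.0.9", False)] * 6 + [("192.0.2.1", True), ("192.0.2.1", False)]
print(flag_suspicious_ips(log))  # {'10.0.0.9'}
```

A human who mistypes a password once or twice stays below the threshold, while a bot cycling through a stolen credential list quickly exceeds it.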

Disinformation and elections. Bots have also featured prominently in the distribution and amplification of disinformation, and we’ve seen their impact in recent election campaigns throughout democratic countries. Considering the global trend-line in which humans will soon be outnumbered on the internet by AI-powered bots, protecting our infosphere is becoming more challenging than ever.

Just last week, a new report from Democracy Reporting International presented evidence of how AI-driven bots are spreading disinformation about the upcoming EU elections. In recent years, the European Digital Media Observatory (EDMO) has uncovered pro-Russian bot networks spreading disinformation about the war in Ukraine.

Information pollution. While these dynamics are not new, the impact of AI is increasing their reach and intensity. In fact, a recent Europol report indicated that “90% of internet content may be synthetically generated by 2026.” It’s difficult to imagine how citizens can be expected to discern truth from fiction in such a scenario, and the potential impact on societies and how they are governed could be profound.

Defend Democracy. To protect and build our societies’ cognitive resilience, we need better monitoring and regulation of the use of automated bots, and stiffer penalties for the companies that fail to adhere to those regulations. This should be seen as no different from the penalties for putting an unsafe truck on the road or an unsafe train on the rails. If we fail to protect cognitive safety the way we protect physical safety, the potential damage to our societies could take decades to recover from.

Since the EU’s new AI Act is over 20 months from being fully in force, member states must redouble their efforts now to ensure safeguards against malicious bots can protect citizens’ bank accounts, data, and privacy. More importantly, they must seek to better regulate and strengthen safeguards against these risks to human cognition, which endanger our democratic institutions.

Defend Democracy, 19 April 2024