![](https://sites.psu.edu/rnaircivicissuesblog/files/2022/04/hard-captcha-test-200x300.jpeg)
At some point in our lives, we've all had to prove to a website that we aren't robots. Usually this involves pressing the "I'm not a robot" button and then occasionally proving it by choosing the squares with fire hydrants, traffic lights, or crosswalks. I can remember a time when these tests were very straightforward, but recently they have felt more and more complicated: the objects you're asked to find are subtly covered by bushes, hidden behind weird camera angles, or crossed just far enough into another square to make you wonder whether it counts. While reading articles about CAPTCHA software, I was happy to learn that I was not going crazy, and that the gradual increase in difficulty is something programmers have intentionally implemented in recent years.
![](https://sites.psu.edu/rnaircivicissuesblog/files/2022/04/warped-text-captcha-300x150.png)
In the early 2000s, CAPTCHAs (an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") consisted of simple warped-text images, which were enough to stump robots at the time. However, with AI rapidly becoming more intelligent, the tests have had to keep pace. The switch from text to images happened around 2014, when Google pitted one of its own machine learning algorithms against humans to see which group had the higher success rate at solving warped-text CAPTCHAs. Embarrassingly, the robots got the test right 99.8% of the time, while humans only got it right 33% of the time. Learning from these results, Google moved on to a new method called "No CAPTCHA reCAPTCHA," which observes the user's data and behavior, letting some users pass after simply clicking the "I'm not a robot" button while making others pick objects out of pictures (a rough sketch of the server-side half of this check appears below). However, as expected, machines have once again caught up to the level of image recognition humans use to distinguish objects in pictures. One computer science professor from the University of Illinois was even able to come up with an algorithm that solves image and audio CAPTCHAs using Google's own reverse image search and audio recognition programs.
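To give a sense of what that behavior-based check involves on the website's side, here is a minimal sketch in Python of how a site might verify a reCAPTCHA token after a user clicks the checkbox. The token field name and the siteverify endpoint come from Google's public documentation; the secret key value, the 0.5 score cutoff, and the helper name `is_human` are my own placeholders for illustration.

```python
# Minimal sketch: server-side verification of a reCAPTCHA token.
# The browser widget produces a "g-recaptcha-response" token, which
# the site's server forwards to Google's siteverify endpoint.
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder; issued by Google per site

def is_human(token, user_ip=None):
    """Ask Google whether the submitted CAPTCHA token checks out."""
    payload = {"secret": SECRET_KEY, "response": token}
    if user_ip:
        payload["remoteip"] = user_ip  # optional extra signal
    result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
    # "success" says whether the token was valid; score-based versions
    # also return a 0.0-1.0 "score" estimating how human the
    # interaction looked. The 0.5 cutoff here is an arbitrary choice.
    return result.get("success", False) and result.get("score", 1.0) >= 0.5
```

The interesting part is that the hard work happens invisibly: by the time the server makes this call, Google has already judged the user's behavior, which is exactly why some people sail through while others get the picture grid.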
With recent tests showing that machines are essentially as good as humans at basic text, image, and voice recognition tasks, the need for alternative tests has become clear. However, the possibilities are limited by human capabilities: a CAPTCHA needs to be reasonably easy for humans, hard for computers, and solvable by people anywhere in the world, which does not leave developers with many options. Many CAPTCHA development teams have started to move away from single-test authentication in favor of continuous authentication, where the software constantly monitors a user's behavior, including browsing history and mouse movements; the average human cannot reproduce the same mouse movements twice, whereas a bot definitely can (a toy sketch of this idea appears below). Although this form of testing makes the most sense for authentication, it opens a whole new set of problems around security and privacy, something often debated as technology develops. Another interesting form of authentication, which Amazon received a patent for in 2017, is called the "Turing Test via failure." These tests are deliberately made impossibly hard, including logic puzzles and optical illusions, and the only way to pass is to give the "correct" wrong answer, a wrong answer that only a human brain would arrive at. I find this such an interesting concept, since it goes against the structure of every CAPTCHA test in existence today.
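To make the mouse-movement idea concrete, here is a toy heuristic in Python. This is entirely my own illustration, not any vendor's actual detection logic; the function names and the jitter threshold are made up. The premise is the one from the paragraph above: a human retracing a cursor path introduces a few pixels of jitter, while a replaying script lands on the exact same coordinates.

```python
# Toy illustration of one continuous-authentication signal: humans
# can't replay the exact same cursor path twice, so two near-identical
# recorded paths are a strong hint that a script is driving the mouse.
import math

def path_distance(path_a, path_b):
    """Mean point-to-point distance between two equal-length
    (x, y) cursor traces."""
    assert len(path_a) == len(path_b)
    total = sum(math.dist(p, q) for p, q in zip(path_a, path_b))
    return total / len(path_a)

def looks_scripted(path_a, path_b, jitter_threshold=2.0):
    # Real hands introduce at least a few pixels of jitter between
    # repetitions; a replayed path lands almost exactly on itself.
    return path_distance(path_a, path_b) < jitter_threshold

# Example: a bot replaying a recorded path vs. a human retracing it.
recorded = [(0, 0), (10, 5), (20, 11), (30, 18)]
replayed = [(0, 0), (10, 5), (20, 11), (30, 18)]   # pixel-perfect copy
human    = [(2, 1), (13, 3), (18, 14), (33, 15)]   # noisy retrace

print(looks_scripted(recorded, replayed))  # True  -> suspicious
print(looks_scripted(recorded, human))     # False -> plausibly human
```

A real system would of course combine many such signals over an entire session rather than comparing two paths, which is precisely where the privacy concerns come from: the monitoring never stops.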
This is a very interesting topic for a civic issues blog, and this post was particularly fascinating because it offers insight into an aspect of browsing the internet that not many people think much about (myself included). I have definitely noticed the slight increase in the difficulty of CAPTCHA tests, but I never really thought about whether there was a specific reason, since the increase was so gradual over many years. Obviously, technology is going to improve over time, and machines will keep getting better at their job, which in this case is to mimic human interaction with technology. I think this raises a question about the future: how far can CAPTCHA tests be updated before no bot can possibly pass? I think Amazon's use of logic puzzles and optical illusions is really cool because it exploits the intricacy of the human brain and our perception of reality to get the "correct" wrong answer, although I'm not sure how accessible this would be to people with certain disabilities.
This is a very interesting post, to say the least. I had not thought about how much harder these tests have gotten over the years, but now that you've pointed it out, it feels like it used to be as simple as typing in a wonky word, and now it takes me forever to get through one (definitely not helped by forgetting my password and having to redo the CAPTCHA every time I try a different one). As much as I like the idea of continuous authentication, I don't think I could ever support it, simply because the damage a bot can do by getting onto a website is much smaller than what a company can do with all of my web information. It will be interesting to see how, and if, humans can outsmart the ever-evolving robots of the future. Great post!
I thought I had noticed the "I am not a robot" tests increasing in difficulty over the last few years, so I am glad it is not just me who was having slight difficulties! Now that you've brought it up, it makes sense when compared with the increasing intelligence of AI. As this technology advances at an ever-increasing pace, we too must advance our thinking to consistently outmatch the artificial minds of our creations. Computational intelligence is already used widely in surveillance, art, call routing, and more. This has always reminded me of Victor Frankenstein and his monster: just as Frankenstein ran from the life he created with his bare hands, humanity runs from computers and AI. With innovation come fear and a bleak reality: the lack of regulations to keep AI fair and to let humans set a consistent pace against the innovation of our creations. If robots can really beat us at "I am not a robot" tests, what does that say about us? Really interesting blog post!