AI Can Code Faster Than Humans, but Speed Comes With Far-Reaching Risks

Artificial intelligence-generated code has become a daily fixture for developers across the technology industry. These tools have made writing lengthy code much easier, but experts say the speed gains come with new security risks and a continued need for human oversight.

Developers say artificial intelligence (AI) cuts out much of the grunt work of writing code, but seasoned engineers are spotting flaws in its output at an alarming rate.

The security testing company Veracode published research in July, drawn from more than 100 large language model (LLM) AI tools, showing that while AI generates working code at astonishing speed, that code frequently leaves applications open to cyberattack.

The report noted that 45 percent of the code samples failed security tests and introduced vulnerabilities outlined by the cybersecurity nonprofit the Open Worldwide Application Security Project (OWASP).
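
The report does not reproduce the failing samples, but injection flaws from OWASP's Top 10 are representative of what such scans flag. The hypothetical Python sketch below (the table and function names are illustrative and not taken from the study) shows the unsafe pattern alongside its common remediation:

```python
import sqlite3

# Illustrative example of an OWASP-style injection flaw, the kind of
# vulnerability automated security scans flag in generated code.
# Not taken from the Veracode report.

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated directly into the SQL string,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Remediation: a parameterized query keeps data separate from SQL code.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    # The injected input dumps every row from the unsafe query
    # but matches nothing in the parameterized one.
    print(find_user_unsafe(conn, "x' OR '1'='1"))
    print(find_user_safe(conn, "x' OR '1'='1"))
```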

Veracode researchers called the study’s findings a “wake-up call for developers, security leaders, and anyone relying on AI to move faster.”

Some experts say the high number of security flaws isn’t shocking given AI’s current limitations with coding.


