
A new study by a team of Stanford-affiliated researchers has found that code-generating AI tools such as GitHub Copilot may pose more security risks than many users realize.
The research focused specifically on Codex, a product of OpenAI, the company Elon Musk co-founded.
Codex powers the Microsoft-owned GitHub Copilot platform, which aims to make coding easier and more accessible by converting natural language into code and suggesting changes based on contextual evidence.
AI coding problems
“Code generation systems are not currently a substitute for human developers,” explains Neil Perry, lead co-author of the study.
The study asked 47 developers of varying abilities to use Codex to solve security-related problems, using the Python, JavaScript, and C programming languages. It concluded that participants who relied on Codex were more likely to write unsafe code than a control group.
Perry explained: “Developers using [coding tools] to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks they are already proficient in should carefully scrutinize the output and the context in which it is used throughout the project.”
This isn’t the first time AI-powered coding tools have come under scrutiny. In fact, one of GitHub’s fixes to improve code suggestions in Copilot landed the Microsoft-owned company in a legal battle for failing to attribute the work of other developers. The result was a $9 billion lawsuit comprising 3.6 million individual Section 1202 violations.
For now, AI-powered code generation tools are best thought of as assistants that can speed up programming rather than full-scale replacements for human developers, but if developments over the past few years are any indication, they may soon supplant traditional coding.