
Today, a significant proportion of Python applications are partially — and sometimes entirely — generated using AI coding assistants.

For the security tester, this should immediately trigger heightened scrutiny. AI-generated code often omits input validation, silently swallows errors, and leans on outdated idioms.

AI output is not inherently insecure — but it is frequently under-hardened.

Understanding the common fingerprints of AI-generated Python code can help you identify areas that deserve deeper manual review.


Why AI-Generated Code Deserves Extra Attention

Large language models generate code by predicting statistically likely patterns. They optimise for plausibility rather than correctness, reproduce the most common examples from their training data, and have no awareness of your application's threat model.

This results in code that “works” — but may not be secure, robust, or production-ready.

Common AI Coding Patterns That Signal Risk

Below are recurring patterns frequently observed in AI-generated Python code that may introduce direct weaknesses.

Use of assert for Security Checks

Example:

assert user.is_admin

Security-relevant checks must use explicit condition handling and raise proper exceptions. assert statements are stripped entirely when Python runs with the -O flag, silently disabling the check.
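A minimal sketch of the fix, assuming a hypothetical User model (the point is the control flow, not the data model):

```python
class User:
    """Hypothetical stand-in for an application's user model."""
    def __init__(self, is_admin: bool):
        self.is_admin = is_admin

def delete_account(user: User) -> str:
    # Fragile: `assert user.is_admin` is removed entirely under `python -O`.
    # Robust: an explicit check survives optimisation and fails loudly.
    if not user.is_admin:
        raise PermissionError("admin privileges required")
    return "account deleted"
```

The explicit `if`/`raise` also lets callers catch a specific, meaningful exception type instead of a generic AssertionError.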

Silent Failure via except Exception: pass

Example:

try:
    process_payment(data)
except Exception:
    pass

Why This Is Dangerous:

Every failure, from a declined card to a programming bug to an active attack, is silently discarded: nothing is logged, nothing is reported, and execution continues as though the payment succeeded.

This pattern is extremely common in AI-generated code because it “prevents crashes”.
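A minimal sketch of the safer pattern, assuming a hypothetical process_payment function and a custom PaymentError (both names are illustrative, not from a real library):

```python
import logging

logger = logging.getLogger(__name__)

class PaymentError(Exception):
    """Raised when a payment cannot be completed."""

def process_payment(data: dict) -> None:
    # Hypothetical stand-in for a real payment call.
    if "amount" not in data:
        raise PaymentError("missing amount")

def handle_payment(data: dict) -> bool:
    try:
        process_payment(data)
    except PaymentError:
        # Catch only the failure we expect, record the traceback,
        # and report failure instead of silently continuing.
        logger.exception("payment failed")
        return False
    return True
```

The key differences from the AI-generated version: the except clause names a specific exception type, the traceback is logged, and the caller is told the operation failed.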

Overuse of the os Module Instead of pathlib

Example:

import os
file_path = os.path.join("uploads", filename)

While not inherently insecure, this pattern may indicate code copied from older tutorials and applied without thought to how the resulting path is actually used.

Modern Python encourages the use of pathlib.Path, which provides object-oriented path handling, clearer joining semantics, and more readable code.

AI tools often default to older os-based patterns.

As a tester, you should carefully audit any path built from user-controlled input, such as the filename above, for directory traversal.
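A sketch of a traversal-safe version using pathlib, assuming an "uploads" directory; Path.is_relative_to requires Python 3.9+:

```python
from pathlib import Path

UPLOAD_DIR = Path("uploads").resolve()

def safe_upload_path(filename: str) -> Path:
    # Resolve the candidate path and confirm it stays inside UPLOAD_DIR;
    # otherwise a filename like "../../etc/passwd" escapes the directory.
    candidate = (UPLOAD_DIR / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_DIR):  # Python 3.9+
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

The os.path.join version in the example performs no such containment check, which is exactly why it deserves a closer look during review.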

Absence of Modern Python (3.10+) Features

AI-generated code frequently avoids newer language features such as structural pattern matching (match/case), the union type syntax int | None, and dataclasses with slots.

While this is not a vulnerability in itself, it may indicate that the code mirrors older training data rather than current best practice.

Outdated style can correlate with deprecated APIs, missing type annotations, and unmaintained dependency choices, all of which increase the likelihood of hidden weaknesses.

Additional AI Red Flags

Experienced testers also report recurring tells in AI-produced Python code, such as hard-coded placeholder values, boilerplate comments that restate the code, and naming conventions that drift between functions.

These patterns are not proof of AI usage — but they often correlate with it.

The Security Tester’s Approach

When you suspect AI-generated code:

  1. Increase scrutiny on input handling.

  2. Audit exception handling rigorously.

  3. Search for unsafe defaults.

  4. Review authentication and authorisation boundaries manually.

  5. Verify that security controls are explicit — not implied.
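Parts of step 2 can be automated before the manual pass. A sketch using only the standard library's ast module to flag except clauses whose body is nothing but pass:

```python
import ast

def find_silent_handlers(source: str) -> list[int]:
    """Return line numbers of `except` clauses whose body is only `pass`."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                findings.append(node.lineno)
    return findings
```

This only surfaces candidates; each hit still needs the manual judgement the steps above describe.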

Remember:

A skilled Python security tester recognises the patterns, then performs deeper manual analysis where automated reasoning has likely fallen short.