The OWASP Top 10
1. Broken Access Control
This is the “users touching things they shouldn’t” category.
Example: A query that omits WHERE user_id = ? lets User A view or modify User B’s data.
Access control is not UI logic—it is backend logic.
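Here is a minimal sketch of what that backend check looks like, assuming a DB-API style connection named db and a current_user_id taken from the verified session, never from the request:

def get_invoice(db, current_user_id: int, invoice_id: int):
    # The AND user_id = ? clause is the access control itself: User A can
    # never read User B's invoice, even with a guessed or enumerated ID.
    row = db.execute(
        "SELECT * FROM invoices WHERE id = ? AND user_id = ?",
        (invoice_id, current_user_id),
    ).fetchone()
    if row is None:
        # Same response for "missing" and "not yours" avoids leaking existence.
        raise LookupError("Invoice not found")
    return row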
2. Cryptographic Failures
Encryption done wrong is effectively no encryption.
Example: Sending passwords over HTTP or storing tokens without encryption.
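A hedged sketch of the storage side using only the standard library (a dedicated library such as bcrypt or argon2-cffi is usually the better choice in production): hash with a random salt and a slow KDF, and compare in constant time.

import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

And all of this travels over HTTPS, never plain HTTP.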
3. Injection (SQL, NoSQL, OS, etc.)
If your code directly mixes “data” with “commands,” you’re done.
Parameterized queries everywhere. No exceptions.
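The hands-on example later in this post covers the SQL case; the same rule applies to OS commands. A minimal sketch, assuming the file name arrives from a request:

import subprocess

def count_lines(filename: str) -> None:
    # Vulnerable: with shell=True, "report.txt; rm -rf /" runs a second command.
    # subprocess.run(f"wc -l {filename}", shell=True)

    # Safer: arguments go in a list and no shell is ever invoked.
    subprocess.run(["wc", "-l", filename], check=True)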
4. Insecure Design
This is not a bug—it’s an architecture flaw.
Example: A payment workflow that trusts client-side prices.
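A sketch of the fix (the PRICES table and plan names are illustrative): the client says what it wants, the server decides what it costs.

PRICES = {"basic": 900, "pro": 2900}  # prices in cents, owned by the server

def checkout(plan: str, client_reported_price: int | None = None) -> dict:
    # Whatever price the client sent is ignored; the server looks it up.
    if plan not in PRICES:
        raise ValueError(f"unknown plan: {plan}")
    return {"plan": plan, "amount_cents": PRICES[plan]}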
5. Security Misconfiguration
Default credentials. Open ports. Debug mode in production.
This is the most boring category—and the most common root cause.
6. Vulnerable and Outdated Components
You inherit the security posture of the slowest-patched library in your stack.
If your dependencies age, your attack surface grows.
7. Identification & Authentication Failures
Weak login flows, session mismanagement, insecure cookies.
Account takeover happens not because attackers are smart, but because sessions are sloppy.
8. Software & Data Integrity Failures
This is the supply-chain attack category.
Example: A rogue dependency update compromises your CI/CD pipeline.
9. Security Logging & Monitoring Failures
If you don’t log it, you can’t investigate it.
If you don’t monitor it, you won’t know it happened.
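A minimal sketch with the standard logging module (field names are illustrative): record the security-relevant facts, never the secrets, and alert on the failures.

import logging

logger = logging.getLogger("auth")

def record_login_attempt(username: str, source_ip: str, success: bool) -> None:
    # Log who, from where, and the outcome; never log the password itself.
    if success:
        logger.info("login succeeded user=%s ip=%s", username, source_ip)
    else:
        # Repeated WARNING-level failures are what monitoring should alert on.
        logger.warning("login failed user=%s ip=%s", username, source_ip)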
10. Server-Side Request Forgery (SSRF)
This one is subtle.
It happens when your server fetches URLs on behalf of a user—and gets tricked into calling internal services.
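A hedged sketch of one mitigation, assuming an explicit allowlist of outbound hosts: check the scheme and the host, then refuse anything that resolves to an internal address. (Because of DNS rebinding, the real fetch should also reuse the resolved IP.)

import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.partner.example"}  # illustrative allowlist

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    # Reject anything resolving to loopback, private, or link-local space
    # (e.g. 127.0.0.1 admin panels, 169.254.169.254 cloud metadata).
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            return False
    return True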
Secure Coding in Python — The Practical Foundations
Python is powerful, dynamic, expressive—and easy to misuse. Below are the fundamentals that every Python engineer should anchor themselves to.
1. Setting Up a Clean Environment
Your environment is part of your security boundary.
- Use venv or conda to isolate dependencies
- Pin versions with pip-tools or poetry
- Never install system-wide packages
- Automate vulnerability scans (pip-audit or safety)
2. Watching Out for Python-Specific Pitfalls
Dynamic typing
A blessing for velocity, a curse for security.
Type hints + mypy + strict linting reduce entire classes of runtime errors.
Unsafe use of assert
Never use assert for security checks—it disappears in -O optimized mode.
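A short illustration of why:

def delete_account(current_user, account_id):
    # Wrong: python -O strips asserts, and this check silently disappears.
    # assert current_user.is_admin

    # Right: an explicit check that survives every runtime mode.
    if not current_user.is_admin:
        raise PermissionError("admin privileges required")
    ...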
Serialization dangers
Avoid pickle entirely for untrusted data.
Prefer JSON or Pydantic models.
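A minimal sketch of the safer path, assuming Pydantic v2 (model_validate_json):

from pydantic import BaseModel

class UserPayload(BaseModel):
    id: int
    email: str

def load_user(raw: bytes) -> UserPayload:
    # Never pickle.loads(raw): unpickling untrusted bytes can execute arbitrary code.
    # Parse as JSON instead and validate the shape.
    return UserPayload.model_validate_json(raw)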
Hands-On Challenge: Securing a Simple Endpoint
The insecure version:
@app.get("/user")
def get_user(id):
    return db.query(f"SELECT * FROM users WHERE id={id}")
The secure version:
@app.get("/user")
def get_user(id: int):
    return db.execute("SELECT * FROM users WHERE id = %s", [id])
- Parameterized query
- Type-validated input
- No string concatenation
- Framework handles encoding safely
Small change, massive impact.
Securing Python Frameworks
Django
- Never hardcode secrets — use environment variables or secret managers
- Rotate keys regularly
- Enforce HTTPS and secure cookies
- Use Django’s built-in auth and permission system
- Disable DEBUG=True in production (a minimal settings sketch follows this list)
- Keep the admin interface off the public internet
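A minimal settings.py sketch covering several of these points, assuming the secret lives in an environment variable and the host name is illustrative:

import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # fail fast if it is missing
DEBUG = False

ALLOWED_HOSTS = ["example.com"]

SECURE_SSL_REDIRECT = True        # force HTTPS
SESSION_COOKIE_SECURE = True      # cookies only over HTTPS
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True    # not readable from JavaScript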
Flask
Flask is powerful but minimalist—meaning you must secure what Django secures for you.
- Manually handle sessions (signed cookies or server-side store)
- Use flask-talisman to enforce HTTPS and security headers (see the sketch after this list)
- Validate input with Marshmallow or Pydantic
- Avoid arbitrary eval() or dynamic routing tricks
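A minimal hardening sketch, assuming the flask-talisman package and a secret provided via an environment variable:

import os

from flask import Flask
from flask_talisman import Talisman

app = Flask(__name__)
app.config.update(
    SECRET_KEY=os.environ["FLASK_SECRET_KEY"],  # signs the session cookie
    SESSION_COOKIE_SECURE=True,
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SAMESITE="Lax",
)

# Talisman redirects HTTP to HTTPS and sets security headers (HSTS, CSP, etc.).
Talisman(app)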
Securing RESTful APIs
- JWTs with proper expiration (see the token sketch after this list)
- Rate limiting
- CORS restrictions
- Input validation (never trust request payloads)
- Zero trust between internal and external services
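On the first point, a hedged sketch of short-lived tokens, assuming the PyJWT library (the secret would come from a secret manager in practice):

import datetime

import jwt  # PyJWT

SECRET = "load-from-a-secret-manager"  # illustrative placeholder

def issue_token(user_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {"sub": user_id, "iat": now, "exp": now + datetime.timedelta(minutes=15)}
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens;
    # pinning algorithms=["HS256"] blocks algorithm-confusion tricks.
    return jwt.decode(token, SECRET, algorithms=["HS256"])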
AI + Security: A New Surface of Vulnerabilities
We’re entering an era where software is increasingly generated, not written.
This introduces two new challenges:
1. Prompt Injection
Prompt injection happens when user input manipulates the AI system into revealing, modifying, or ignoring its instructions.
This is the “SQL injection of LLMs.”
2. Misuse of Auto-Generated Code
AI can write insecure code confidently.
Developers must:
- Treat AI output as untrusted
- Run static analysis on AI-generated code
- Add type systems + linters to catch mistakes
- Avoid letting agents write directly into production systems