Backend Developer • Python, Docker, React • Codex CLI
In about half your sessions, the AI didn't have enough context to make progress — it couldn't find the right files or didn't understand what you wanted changed. In another ~1 in 5 sessions, Codex CLI's sandbox couldn't read your project files at all, so the AI was working blind.
Short, open-ended prompts like "fix this" or "do it" led to twice as many problems as your more detailed prompts. Interestingly, even your longer spec prompts sometimes struggled, usually because they packed multiple features into one session, which is hard for the AI to handle in one go.
About 1 in 5 of your sessions went nowhere because Codex couldn't actually see your files. Quick fix — start by asking it to confirm it can read what you need:
List the files in backend/src/search/ and show me the first 50 lines of main.py
When you asked for multiple things at once (e.g. "add CSV export AND cosine similarity search"), those sessions almost always stalled. One feature per session works much better:
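One illustrative way to split that request (adjust the wording and routes to your actual codebase):
Session 1: Add the CSV export endpoint in backend/src/search/routes.py.
Session 2: Add cosine similarity search to the existing search endpoint.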
Your prompts often specify the right files but leave the implementation vague. Add concrete acceptance criteria:
In backend/src/search/routes.py, add a POST /export endpoint that takes a CSV with a "slug" column, queries the DB for each slug's metadata (exclude vector field), and returns a CSV response.
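To see why those criteria help, here is a minimal sketch of the endpoint that prompt is asking for. It assumes FastAPI and a hypothetical fetch_metadata() lookup; your actual framework, models, and helpers may differ.

```python
# Sketch only: FastAPI and fetch_metadata() are assumptions, not your real stack.
import csv
import io

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import StreamingResponse

app = FastAPI()


def fetch_metadata(slug: str) -> dict:
    # Hypothetical stub: replace with the real DB query that returns the
    # slug's metadata with the vector field excluded.
    return {"slug": slug}


@app.post("/export")
async def export(file: UploadFile = File(...)):
    # Read the uploaded CSV and pull out the "slug" column.
    text = (await file.read()).decode("utf-8")
    slugs = [row["slug"] for row in csv.DictReader(io.StringIO(text))]

    # Look up metadata for each slug (vector field excluded in the lookup).
    rows = [fetch_metadata(slug) for slug in slugs]

    # Write the results back out as a CSV response.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys() if rows else ["slug"])
    writer.writeheader()
    writer.writerows(rows)
    out.seek(0)
    return StreamingResponse(
        out,
        media_type="text/csv",
        headers={"Content-Disposition": "attachment; filename=export.csv"},
    )
```

Spelling out the input column, the excluded field, and the response format in the prompt is what lets the AI converge on something this specific in a single pass.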
You consistently reference existing patterns ("use the same LLM client as search", "modularized like everything else"). This is excellent practice for AI-assisted development.
From Docker configs to React frontends to Python pipelines, you use AI across the full stack, showing comfort with it as a general-purpose tool.