Thoughts on Vibe coding and LLM-assisted code generation.
Let's start with the core of LLM technology: the Transformer architecture (https://shorturl.at/I3w7B). "In machine learning, attention is a method that determines the importance of each component in a sequence relative to the other components in that sequence."
The transformer's attention mechanism is genuinely groundbreaking, yet it faces subtle challenges when generating code. LLMs excel at reproducing patterns, but they must balance surface syntax against deeper logical meaning, and a model can end up prioritizing superficial similarity over actual correctness, especially in complex logic.
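To make that concrete, here is a minimal numpy sketch of the scaled dot-product self-attention at the heart of the Transformer. The toy data and dimensions are illustrative, not taken from the paper:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: each output is a mix of all values,
    weighted by how similar the query is to each key."""
    d_k = Q.shape[-1]
    # Similarity scores, scaled to keep the softmax numerically stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into a probability distribution over positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three token embeddings of dimension 4 (random toy data).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # Self-attention: Q = K = V
print(out.shape)  # (3, 4)
```

Note that the weights encode similarity, not correctness, which is one way "looks right" can win over "is right" in generated code.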
The hallucination challenge is addressed by techniques like RAG (Retrieval-Augmented Generation), a process in which LLMs are supplied with vectorized reference data to retrieve from, combined with prompt optimization techniques (prompt engineering).
RAG helps ground responses in reliable sources, which is a big step forward. Still, it introduces some gentle complexities we shouldn't overlook. When retrieving code snippets, the system might surface outdated or insecure examples that “look” right.
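Here is a minimal sketch of the retrieval half of RAG; `embed` is a hypothetical stand-in for a real embedding model, and the document set is invented. Notice how a stale document ranks in the same pool as everything else, which is exactly the outdated-snippet risk above:

```python
import numpy as np

def embed(text):
    """Hypothetical stand-in embedder (deterministic random vectors).
    A real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, docs, k=2):
    """Rank docs by cosine similarity to the query; return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Current REST pagination guide (v3)",
    "Deprecated auth flow (v1)",  # Stale content competes like anything else.
    "Rate limiting basics",
]
context = retrieve("how do I paginate results?", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Similarity search retrieves what is closest, not what is current or secure, so curating and dating the indexed corpus matters as much as the retrieval itself.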
The evolution of LLMs gave us reasoning models. Now we are in the MCP phase: the Model Context Protocol, a standardized way to expose tools and data to LLMs, much like an API. This enabled the emergence of context engineering for agentic workflows (agents.md) and specific tools for chaining different LLMs across a set of different tasks.
Agentic workflows represent an exciting leap in capability - they let LLMs handle multi-step tasks with memory and context. But with that sophistication comes a quiet expansion of risk. These systems create larger attack surfaces where issues like memory poisoning or tool misuse can emerge. One compromised agent might unintentionally spread problems across systems or create ripple effects. It's not that we shouldn't explore this frontier, but perhaps we should move forward with a bit more care and awareness.
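To illustrate one mitigation, here is a minimal sketch of an agent loop with a tool allow-list and a hard step budget; `call_llm` is a hypothetical stand-in for whatever model API you use:

```python
import os

# Read-only tools only; deliberately no shell or network access.
ALLOWED_TOOLS = {
    "list_dir": lambda path=".": "\n".join(sorted(os.listdir(path))),
}

def call_llm(messages):
    """Hypothetical model call; a real agent would query an LLM here."""
    return {"tool": "list_dir", "args": {"path": "."}, "done": True}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # Hard step budget: no unbounded loops.
        action = call_llm(messages)
        tool = ALLOWED_TOOLS.get(action["tool"])
        if tool is None:
            raise PermissionError(f"tool {action['tool']!r} is not allow-listed")
        result = tool(**action["args"])
        # Cap what flows back into context to limit memory-poisoning blast radius.
        messages.append({"role": "tool", "content": str(result)[:1000]})
        if action.get("done"):
            break
    return messages

print(run_agent("summarize this project")[-1]["content"][:80])
```

The allow-list, step budget, and context cap don't make an agent safe by themselves, but they shrink the attack surface the paragraph above describes.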
Like any new technology, LLMs (AI) move in cycles (https://shorturl.at/x5swE). On the Gartner hype cycle (https://shorturl.at/bRVFD), we are now approaching the Peak of Inflated Expectations.
We're currently riding that wave of enthusiasm where expectations run high. Many teams are understandably eager to adopt AI coding tools, but it's easy to overlook how scalability challenges might surface later. This reminds us that allowing time to learn and adapt is critical.
LLMs can amplify experts and serve as useful assistants for first drafts, prototyping, and coding. Agentic workflows for production code generation are risky: they generate not only technical debt but unexpected costs.
There's genuine value in using LLMs as collaborative partners, but the hidden costs can catch teams off guard. While setup seems straightforward, the reality often includes extra work for security fixes, performance tuning, and technical debt cleanup.
So we have to create best practices and realistic expectations for LLM usage in design and software development. The thesis is not against the use of AI, but for adopting a critical, proven mindset with a focus on scalability, security, and ethics.
Rather than rejecting AI tools, we might gently cultivate habits that keep us grounded. The goal isn't to slow innovation, but to weave these considerations naturally into our workflow.
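One such habit is a "trust but verify" gate: AI-generated changes must pass the existing tests and a security linter before they are accepted. This is a sketch only; the tool choices (pytest, bandit) are assumptions to swap for whatever your stack already uses:

```python
import subprocess

def verify_generated_code(path):
    """Reject AI-generated changes that fail the project's own checks."""
    checks = [
        ["pytest", path],        # Behavior: the tests still pass.
        ["bandit", "-r", path],  # Security: flag known insecure patterns.
    ]
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"rejected: {' '.join(cmd)} failed")
            return False
    return True
```

The point is less about these specific tools and more about the posture: generated code enters the codebase through the same gates as human code, never around them.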
Some articles on this topic:
The Hidden Costs of Coding With Generative AI (https://sloanreview.mit.edu/article/the-hidden-costs-of-coding-with-generative-ai/).
Tokens are getting more expensive
The Top Agentic AI Security Threats You Need to Know in 2025 (https://www.lasso.security/blog/agentic-ai-security-threats-2025).
Developing Agentic AI Workflows with Safety and Accuracy (https://www.fiddler.ai/blog/developing-agentic-ai-workflows-with-safety-and-accuracy).
3 best practices for building software in the era of LLMs (https://about.gitlab.com/blog/3-best-practices-for-building-software-in-the-era-of-llms/).
Security risks of AI-generated code and how to manage them (https://www.techtarget.com/searchsecurity/tip/Security-risks-of-AI-generated-code-and-how-to-manage-them).




