CS146S
I recently came across CS146S (aka The Modern Software Developer).
I’ve been exploring CS146S recently and eventually realized that the full course isn’t quite right for me at this stage (though it might be a great fit for someone aiming to become a full-stack developer via “vibe coding”). Even so, the parts I did complete gave me fresh insights into prompting and MCP.
Things about CS146S #
CS146S, aka The Modern Software Developer, is a Stanford course about how LLMs are reshaping software engineering — from coding agents, to AI IDEs, to modern tooling around debugging, testing, and deployment.
Sources:
Website: https://themodernsoftware.dev/
Website archive: https://web.archive.org/web/20251208073247/https://themodernsoftware.dev/
Assignments GitHub repo: https://github.com/mihail911/modern-software-dev-assignments
The syllabus covers:
- Prompting and what an LLM actually is
- Coding agents and MCP (Model Context Protocol)
- AI IDEs and context management
- Agent patterns and collaboration
- AI-augmented terminals
- AI testing & security
- AI-assisted debugging and support
- Automated UI & app building
- Post-deployment agents: monitoring & incidents
- Where software development is heading
The slides for this course offer little information on their own; the main content lives in the reading materials and assignments.
Rethinking prompt formats: Markdown vs XML tags #
While going through the prompting lecture, I noticed that most of the prompt examples were annotated in the <tag></tag> style, whereas I had always written my prompts in Markdown.
The reason people often use Markdown or XML-tag formats for prompts is that LLMs are pretrained on a large amount of structured text in exactly these formats.
My previous justification for always using Markdown as a prompt format was that it makes the prompt more structured and helps distinguish the different logical layers within it, which I assumed would improve performance. On top of that, most papers use Markdown for their prompts, and even OpenAI’s own system prompts are written in Markdown. So I was really curious about which works better in practice — Markdown or XML tags — and ended up finding this discussion on the official OpenAI community forum: XML vs Markdown for high performance tasks.
But in the thread’s comparison of plain text / JSON / Markdown / XML, one reply made this point:
The effectiveness of the format seems to change depending on the AI model. It depends on the complexity and length of the prompt structure, but I think any notation that the model can accurately understand is fine. Maintainability on the human side is also important.
So I get the feeling that in some cases, a mix of Markdown and XML might also work quite well.
# Task
<task>
Refactor the following function to improve readability.
</task>
# Constraints
- Keep the public API compatible
- Add type annotations
- Explain major changes in comments
# Code
<code>
# ...
</code>
MCP: from “tools” to “conversations with systems” #
MCP stands for Model Context Protocol.
MCP Website: https://modelcontextprotocol.io/
Readings #
MCP Intro #
MCP is essentially a universal adapter between AI applications and external tools or data sources. It defines a common protocol (built on JSON-RPC 2.0) that lets an AI assistant invoke functions, fetch data, or use predefined prompts from external services in a structured manner. Instead of every LLM app needing custom code for each API or database, MCP provides one standardized “language” for all interactions.
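To make the shared protocol concrete, here is a rough sketch of what a single tool invocation looks like at the JSON-RPC level. The tools/call method name comes from the MCP spec, but the tool name and arguments below are hypothetical, just for illustration:

```python
# A minimal sketch of an MCP tool call as JSON-RPC 2.0 messages.
# The method ("tools/call") is from the MCP spec; the tool name and
# arguments are invented for this example.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                      # hypothetical tool exposed by a server
        "arguments": {"query": "rate limits"},
    },
}

# The server replies with structured content the model can read directly.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "Found 3 documents about rate limits..."}
        ]
    },
}

print(json.dumps(request, indent=2))
```

Every server speaks this same envelope, which is what lets one client talk to many different tools without bespoke glue code.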
In the “Why MCP is Valuable” section, the author highlights:
- Two-Way Context: Unlike simple API calls, MCP supports maintaining context and ongoing dialogue between the model and the tool. An MCP server can provide Prompts (predefined prompt templates for certain tasks) and Resources (data context like documents) in addition to tools. This means the AI can not only “call an API” but also ingest reference data or follow complex workflows guided by the server. The protocol was designed to support rich interactions, not just one-off queries. This is especially useful in applications like coding assistants (where an AI might iterate with a development environment via MCP) or complex decision-making tasks that require back-and-forth with various data sources.
How exactly MCP supports stateful, two-way context between the model and tools is something I find very intriguing.
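To get a feel for those three primitives, I sketched a toy server with the official MCP Python SDK’s FastMCP helper. The decorator-based API matches the SDK as I understand it, but the server name, resource URI, and function bodies are all made up:

```python
# A minimal MCP server sketch using the Python SDK's FastMCP helper.
# Tool/resource/prompt contents here are hypothetical placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-demo")

# Tools: functions the model can invoke.
@mcp.tool()
def search_notes(query: str) -> str:
    """Search my notes and return matching snippets."""
    return f"(pretend search results for {query!r})"

# Resources: read-only context the client can pull into the conversation.
@mcp.resource("notes://recent")
def recent_notes() -> str:
    """The most recently edited notes, as plain text."""
    return "2024-06-01: draft on prompt formats..."

# Prompts: reusable prompt templates the server offers to the client.
@mcp.prompt()
def summarize_note(note: str) -> str:
    return f"Summarize the following note in three bullet points:\n\n{note}"

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio by default
```

Because the server advertises prompts and resources alongside tools, the client can pull reference data into the conversation and keep calling tools across turns, which is the “ongoing dialogue” the reading is describing.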
Comparing MCP to other approaches #
In the comparison section, the article contrasts MCP with:
- Custom integrations & API key management
- ChatGPT Plugins (OpenAI Plugins)
- LLM tool frameworks (LangChain, other agentic libraries)
(And nowadays, frameworks like LangChain also support MCP.)
APIs don’t make good MCP tools #
The Model Context Protocol (MCP) is a pretty big deal these days. It’s become the de facto standard for giving LLMs access to tools that someone else wrote, which, of course, turns them into agents. But writing tools for a new MCP server is hard, and so people often propose auto-converting existing APIs into MCP tools; typically using OpenAPI metadata (1, 2).
The article points out that people are already trying to automatically turn existing APIs into MCP tools, and uses this to introduce the author’s argument that directly using APIs as MCP tools doesn’t really work:
- Agents ≠ traditional API clients.
- Even though you can generate MCP tools automatically from OpenAPI or other metadata, this kind of “one-click conversion” often performs poorly:
- You end up with too many tools and bloated descriptions that waste context.
- The returned data isn’t shaped in a way that fits the model’s context or reasoning.
- It fails to leverage what agents are actually good at: natural language, suggestions, chaining calls, and higher-level reasoning.
The best practice, according to the author, is to design MCP tools specifically for an agent’s strengths and limitations, instead of naively wrapping an existing web API and calling it done.
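Here is a hypothetical sketch of the difference (again with FastMCP; every name and field below is invented for illustration): the first tool is a thin wrapper around a single endpoint, the second is designed around the task an agent is actually trying to accomplish.

```python
# Hypothetical contrast between an auto-converted endpoint wrapper and a
# tool designed for an agent. Endpoint names and fields are made up.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issues-demo")

# Auto-converted style: one thin tool per API endpoint, raw JSON back.
# The agent has to know IDs, page through results, and join data itself.
@mcp.tool()
def list_issues(project_id: str, page: int = 1, per_page: int = 50) -> dict:
    """GET /projects/{project_id}/issues — returns the raw API payload."""
    return {"items": [], "page": page, "total": 0}  # placeholder payload

# Agent-oriented style: one task-level tool that accepts natural-language
# intent, chains the underlying API calls server-side, and returns a
# compact, readable summary plus suggested next steps.
@mcp.tool()
def triage_issues(request: str) -> str:
    """Summarize open issues relevant to the request and suggest actions."""
    return (
        "3 open issues mention 'login timeout'. The most recent (#482) has a "
        "reproduction attached. Suggested next step: assign #482 and close "
        "#301 as a duplicate."
    )
```

The task-level tool keeps the context window small, returns prose the model can reason over, and hides the ID bookkeeping and pagination that agents tend to fumble.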