Getting Started with Claude Code: Installation to First Real Output

When I first installed Claude Code, I treated it like a fancy chat window. I'd open my terminal, type a question, get an answer, and move on. For about a week, I was essentially using a $20/month command-line ChatGPT. It worked fine, but something felt off. I wasn't getting the productivity gains people were raving about. I was getting slightly better Stack Overflow.
Then it clicked. Claude Code isn't a chat assistant. It's an agent. You don't ask it questions. You give it tasks and an environment, and it works. That single mental shift changed everything about how I use it.
This is the getting started guide I wish someone had handed me on day one. Not a copy of the docs. The practical version, with the mental model that makes the rest of it make sense.
Installation
This part is mercifully simple. You need Node.js 18 or newer. If you don't have it, grab it from nodejs.org or use nvm. Then:
npm install -g @anthropic-ai/claude-code

That's it. Run claude in your terminal to start a session. You'll need either an Anthropic API key or an active Claude Pro/Max subscription. The CLI will walk you through authentication the first time.
I recommend running claude from the root of a project directory. Claude Code is context-aware. It reads your codebase, understands your project structure, and uses that understanding to make better decisions. Starting it from your home directory is like hiring a developer and not telling them which project they're working on.
One thing the docs don't emphasize enough: Claude Code runs in your terminal, in your project directory, with access to your tools. It's not a web app with a sandbox. It's operating in your real environment. That's what makes it powerful, and it's why the permission and security model matters, which I'll get to.
The Mental Model That Makes Everything Click
This is the section I'd want someone to drill into my head before I wasted a week treating Claude Code like a chatbot.
Claude Code is an autonomous agent with tools. When you give it a prompt, it doesn't just generate text. It operates. Here's what it actually does under the hood:
It reads your codebase. Claude Code has access to file search tools. It can glob for file patterns, grep through your code, and read specific files. When you ask it to modify a React component, it first goes and reads that component, understands how it's structured, checks what styling approach you're using, and looks at related files for context.
It plans an approach. Before writing a single line of code, it figures out what needs to change. Which files need editing? Are there tests to update? Does the change affect other parts of the system? You can watch this happen in real time. The agent's reasoning is visible.
It edits files. Not by dumping code into the chat for you to copy-paste. It makes direct edits to your actual files, using precise string replacements or writing new files when needed. Your editor will show the changes live if you have auto-reload.
It runs commands. Need to install a dependency? Run a build? Execute tests? Claude Code can run bash commands in your project. It uses the same terminal environment you do.
It manages git. It can check status, create branches, stage changes, and commit. It won't force-push to main, but it can handle the standard git workflow.
It asks permission. Before doing anything potentially dangerous, it pauses and asks. More on this in the next section.
The right mental model is this: you are the architect; Claude Code is the senior developer you're delegating to. You set direction. You define what "done" looks like. It executes. You review its work. It iterates based on your feedback.
This is fundamentally different from asking ChatGPT to write you a function. You're not requesting code snippets. You're delegating engineering tasks to an agent that can see your entire project and operate within it.
Once this clicks, you stop writing prompts like "How do I add dark mode to a React app?" and start writing prompts like "Add a dark mode toggle to the Navbar component. Use the existing Tailwind config. Store the preference in localStorage. Make sure it respects the user's system preference on first load."
The first is a question. The second is a task. Claude Code is built for the second.
The Permission Model
Claude Code operates on a principle of graduated trust, and honestly, this is one of the best design decisions in the tool.
By default, it's read-only. Claude Code can look at your files and explore your codebase without asking. But the moment it wants to edit a file, run a shell command, or make a network request, it asks for your permission.
You get three choices when it asks:
Allow once. The tool performs this specific action and asks again next time. Good for when you want to review every step.
Allow for session. The tool can perform this type of action for the rest of your current session without asking. I use this frequently for file edits once I trust the direction it's heading.
Add to allowlist. Permanently allow this type of action. I use this for safe, common operations like running npm test or git status.
Here's why this matters more than it seems. When I started, the permission prompts felt like friction. I wanted to just let it run. But those prompts forced me to actually look at what it was doing. I caught mistakes early. I noticed when it was heading in the wrong direction. The permission model isn't a limitation. It's a review mechanism that prevents you from rubber-stamping changes you don't understand.
The first few sessions, keep it tight. Approve one at a time. Watch what it does. As you build trust and understanding, open things up gradually. You'll find your own comfort level. Mine is session-level approval for edits and bash commands, with permanent allowlists for read-only operations and safe tools.
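Allowlist rules don't have to live in your head. Recent versions of Claude Code can read permission rules from a settings file such as .claude/settings.json in your project; the exact matcher syntax below follows how the docs describe it, but treat this as a sketch and check /help or the settings documentation for your version:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test)",
      "Bash(git status)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(git push:*)"
    ]
  }
}
```

Checking a file like this into the repo also means teammates inherit the same guardrails, which keeps the "graduated trust" consistent across a team.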
Your First Real Task
Let me walk through a realistic first use case. Not "hello world." Something you'd actually need to do.
Say you have an existing React app with a Navbar component, and you want to add a dark mode toggle. You've been meaning to do it for weeks. Here's how this goes with Claude Code.
You open your terminal in the project root and run claude. Then you type something like:
"Add a dark mode toggle button to the Navbar component. The app uses Tailwind CSS. Store the user's preference in localStorage and respect their OS-level color scheme preference on first visit. The toggle should be a simple sun/moon icon swap."
Now watch what happens. Claude Code doesn't immediately start writing code. First, it reads. It'll look for your Navbar component. It'll check your Tailwind config. It'll look at your global CSS to understand how your styles are structured. It'll check if you have any existing theme utilities.
Then it plans. It might say something like: "I'll create a useTheme hook for the dark mode logic, add a toggle button to Navbar, update the Tailwind config to use the class strategy for dark mode, and add the dark class to the document root."
Then it starts executing. You'll see it edit your tailwind.config.js to add darkMode: 'class'. It'll create the hook with localStorage persistence and system preference detection. It'll modify the Navbar to include the toggle. It asks permission for each edit, and you can see exactly what it's changing.
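The heart of what gets generated here is small. Here's a sketch of the decision logic a useTheme hook like that might wrap, written as plain functions so the behavior is easy to see. The names (resolveInitialTheme, toggleTheme) are mine for illustration, not anything Claude Code is guaranteed to produce:

```typescript
type Theme = "dark" | "light";

// Decide the initial theme: a stored preference wins;
// otherwise fall back to the OS-level color scheme.
function resolveInitialTheme(
  stored: string | null,
  systemPrefersDark: boolean
): Theme {
  if (stored === "dark" || stored === "light") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// Flip the theme. In the real hook, the new value would be
// persisted to localStorage and the `dark` class toggled on
// document.documentElement so Tailwind's class strategy applies.
function toggleTheme(current: Theme): Theme {
  return current === "dark" ? "light" : "dark";
}

// In the browser, the hook would gather its inputs like this:
//   const stored = localStorage.getItem("theme");
//   const systemPrefersDark =
//     window.matchMedia("(prefers-color-scheme: dark)").matches;
```

With darkMode: 'class' in the Tailwind config, adding or removing the dark class on the root element is all it takes for every dark: variant in your components to switch over.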
If something doesn't look right, you just say so. "Use a button element instead of a div for the toggle. Accessibility matters." It adjusts. You're reviewing and directing, not writing the code yourself.
The whole thing takes maybe five minutes. Not because the code is trivial, but because the agent understands the context. It read your existing code and made decisions that fit your project. That's the difference.
Compare this to the alternative. You open ChatGPT, ask for a dark mode implementation, get a generic code block that assumes a project structure you don't have, spend twenty minutes adapting it to your codebase, realize the localStorage logic conflicts with your existing state management, and start over. Claude Code skips all of that because it reads your code first. It knows what you have. It builds on it rather than guessing.
One more thing about this workflow. After the changes are made, you can ask Claude Code to run your dev server and verify the toggle works. You can ask it to check for accessibility issues. You can ask it to write a test. The task doesn't have to end at "code written." It ends when you're satisfied with the result.
Essential Commands
Claude Code has a handful of slash commands that you'll use constantly. Here are the ones that matter.
/help does what you'd expect. Lists available commands and options. Check it once and move on.
/clear wipes the conversation context and starts fresh. Use this when you're switching to a completely different task. Claude Code carries context from earlier in the conversation, and stale context from a previous task can confuse a new one.
/compact compresses the current conversation to save context window space. This is critical. Claude Code has a finite context window, and long sessions will fill it up. When I notice the conversation getting long, or when Claude Code starts forgetting things I mentioned earlier, I run /compact. It summarizes the conversation history and frees up space for new context. I use this multiple times per session on any non-trivial task.
/cost shows how much the current session has cost so far. Helpful for keeping tabs on API spend if you're on usage-based billing.
/init generates a CLAUDE.md file for your project by analyzing your codebase. It's a good starting point, though you'll want to customize it. I cover this in depth in the CLAUDE.md post.
There are other commands, but these are the ones I reach for daily. The full list is in /help, and you'll naturally discover the ones relevant to your workflow.
One habit I built early: after every major subtask, I either /compact or /clear depending on whether the next task relates to the current one. It keeps the agent sharp and prevents the "context window death spiral" where the model starts producing worse output because it's juggling too much history.
Another habit: when starting a complex task, I'll sometimes say "Before you start, explain your plan." This forces Claude Code to lay out its approach before touching any files. If the plan has a flaw, I can redirect before any code is written. It's like a design review in thirty seconds. I formalized this into a proper workflow with plan mode, which I consider mandatory for anything non-trivial.
What Clicks After a Week
The first few days with Claude Code, you're learning the commands and getting used to the workflow. Around day four or five, something shifts. You start realizing that Claude Code gets dramatically better the more context you give it.
I don't just mean longer prompts, though those help. I mean structural context. Claude Code reads a file called CLAUDE.md from your project root at the start of every session. Whatever you put in there (architectural decisions, coding conventions, project-specific rules) becomes part of every interaction.
This is a game-changer. My CLAUDE.md for this portfolio site includes rules like "all design attributes must be defined in the global CSS / Tailwind config, no hardcoded values in components" and "always run npx next build after writing new MDX files." Claude Code follows these rules automatically. I don't have to repeat them.
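To make that concrete, a minimal CLAUDE.md might look like this. The rules are illustrative, adapted from the ones above; there's no canonical schema, it's just markdown the agent reads:

```markdown
# Project Instructions

## Stack
- Next.js + Tailwind CSS, content written in MDX.

## Rules
- All design attributes live in the global CSS / Tailwind config.
  No hardcoded colors or spacing in components.
- Always run `npx next build` after writing new MDX files.
- Prefer editing existing components over creating new ones.
```

Short and specific beats long and aspirational here. Every line is something you'd otherwise have to repeat in a prompt.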
The other thing that clicks is the importance of letting it read before it writes. When I first started, I'd give terse prompts and hope for the best. Now I'll often start with something like "Read through the components in src/components and understand the project structure before making any changes." That extra step costs a few seconds and saves minutes of wrong-direction work.
The pattern I settled into: start with context, give clear tasks, review the output, iterate. It sounds obvious written down, but it took me a week to internalize. You're not chatting. You're collaborating with an agent that rewards specificity and context.
There's also a subtler shift. You start thinking about your own code differently. When you know an agent is going to read your codebase, you write cleaner code. You name things more clearly. You keep your project structure logical. The agent becomes a forcing function for good engineering hygiene, because messy codebases produce messy agent output.
I wrote a deeper post about CLAUDE.md and how to structure project instructions for maximum impact. It's the single highest-leverage thing you can do to improve your Claude Code experience.
Common Mistakes
I made all of these. You probably will too, but maybe this saves you a few days.
Treating it like ChatGPT. "How do I implement authentication in Next.js?" is a question for a chatbot. "Implement email/password authentication using NextAuth.js with a PostgreSQL adapter. Add sign-in and sign-up pages that match the existing design system." is a task for an agent. The second version gives Claude Code everything it needs to execute. The first gives it nothing to work with except your project files and a vague direction.
Not reading what it produces. Claude Code is good. It's not infallible. I've caught subtle bugs, missed edge cases, and occasionally questionable architectural choices. The permission model forces you to see each change, but seeing and reading are different things. Actually review the diffs. Especially early on, when you're still calibrating your trust.
Approving changes without understanding them. Related to the above, but distinct. Sometimes Claude Code will propose a refactor that technically works but doesn't match how you want your codebase to evolve. If you approve everything reflexively, you end up with a codebase that works but feels foreign. You should understand every change that goes into your project, even if you didn't write it.
Burning through context without compacting. This one bit me hard. I'd have a long session, notice the output quality degrading, and not understand why. The context window was full. The model was dropping important earlier context to make room for new messages. Now I compact religiously. If a session has gone on for more than fifteen or twenty exchanges, it's time. I go deep on this in my post about plan mode and context management.
Writing the code yourself and then asking Claude Code to fix it. This sounds counterintuitive, but I've caught myself doing it. I'll start manually editing a file, make a mess, and then ask Claude Code to clean it up. The better workflow is to let Claude Code write it from scratch. It'll make fewer mistakes starting fresh than trying to salvage your half-finished edits, because it can reason about the whole problem at once.
Giving up too early. The first few sessions will feel awkward. You won't know the right way to phrase things. You'll over-specify or under-specify. You'll approve something wrong and have to undo it. This is normal. The tool has a learning curve, and the payoff is on the other side of that curve. Give it a week of real daily use before you form an opinion.
Where to Go From Here
If you've read this far, you have everything you need to start using Claude Code productively. Install it, open a real project, and give it a real task. Not a toy example. Something you actually need done.
The gap between "installed Claude Code" and "productive with Claude Code" is smaller than you think. It's mostly the mental model. Once you stop chatting and start delegating, the tool meets you where you are.
Next up is the post about CLAUDE.md, the project instruction file that turns Claude Code from a good tool into a great one. That's where the real leverage is. If you want to understand the broader shift from prompt engineering to this kind of structural context, I also wrote about why context engineering matters more than prompt engineering.
Related Posts
Custom Commands and Slash Commands: Building Your Own Claude Code CLI
Slash commands turn Claude Code into a personalized CLI. A markdown file becomes a reusable workflow you invoke with a single slash. Here's how to build them.
Subagents and Parallel Execution: Making Claude Code 5x Faster
Claude Code can spawn autonomous worker agents that run in parallel. Here's how subagents work, when to use them, and why they make complex tasks dramatically faster.
Hooks, Statuslines, and the Automation Layer Nobody Talks About
Hooks let you run shell commands when Claude Code starts, stops, or uses tools. Combined with a custom statusline, they turn Claude Code into a self-monitoring, self-correcting system.