In November 2025, Google released Antigravity — an AI-powered IDE built on a VS Code fork, running on Gemini 3, and designed from the ground up for what Google calls "agent-first" software development. We've been using it daily since launch. Here's our honest, unfiltered assessment after three months in the trenches.
What Antigravity Actually Is
Antigravity isn't a VS Code extension. It isn't a chatbot bolted onto an editor. It's a fundamentally different way of writing software.
The core idea: instead of you writing code while an AI suggests completions, the AI writes code while you provide direction. You describe what you want — a feature, a fix, a refactor — and an autonomous agent plans the approach, writes the code, runs terminal commands, tests in a browser, and delivers the result. You review, approve, and iterate.
This isn't theoretical. This is how we build production software every day.
The Dual Interface: Editor vs. Manager
Editor View
The Editor view looks familiar — it's VS Code with an agent sidebar. You can write code normally, use traditional autocomplete, or invoke the agent for specific tasks. This is the synchronous workflow: you and the agent working together on the same file, the same problem, the same screen.
This is where most developers start, and where many stay. It feels like pair programming with an exceptionally knowledgeable colleague who never gets tired, never forgets the API documentation, and can refactor an entire module in seconds.
Manager View
The Manager view is where Antigravity becomes something genuinely new. It's a control centre for orchestrating multiple agents simultaneously across different workspaces, different projects, and different tasks.
Imagine this: you kick off three agents in the morning. Agent 1 is building a new API endpoint. Agent 2 is writing tests for yesterday's feature. Agent 3 is refactoring a legacy module. You go to your standup meeting. When you come back, all three have completed their work, generated implementation plans, recorded browser sessions showing their testing, and are waiting for your review.
This is asynchronous development at a scale that was impossible six months ago. One developer, managing a team of AI agents, shipping far more than a solo developer traditionally could.
Powered by Gemini 3: What It Means in Practice
Antigravity runs on Google's Gemini 3 family — primarily Gemini 3 Pro for complex reasoning, Gemini 3 Flash for speed-critical operations, and Gemini 3 Deep Think for architectural decisions that require deeper analysis.
The practical implications:
- 2M token context window: The agent can read your entire codebase — not snippets, not summaries, the actual code. For our largest project (~180 files), Gemini 3 Pro holds the complete repository in context. It understands the relationships between components, the naming conventions, the architectural patterns. It writes code that fits.
- Native multimodal: Show the agent a screenshot of a UI bug and say "fix this." It sees the image, identifies the visual issue, traces it to the CSS, and patches it. No description needed.
- Tool use: The agent doesn't just write code. It runs terminal commands, installs dependencies, starts dev servers, opens browsers, takes screenshots, and verifies its own work. The full development loop, autonomous.
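To make that loop concrete, here is a conceptual sketch of the plan → act → verify cycle described above. This is our illustration of the workflow, not Antigravity's actual internals — every name in it (`runAgentTask`, `plan`, the step labels) is hypothetical.

```typescript
// Conceptual sketch of an agent's plan -> act -> verify loop.
// Illustrative only; all names here are hypothetical, not Antigravity's API.

type Step = { action: string; done: boolean };

interface AgentResult {
  steps: Step[];
  verified: boolean;
}

// Hypothetical planner: break the task into the phases listed above.
function plan(task: string): Step[] {
  return ["read codebase", "write code", "run dev server", "screenshot & verify"]
    .map((action) => ({ action: `${action}: ${task}`, done: false }));
}

// Hypothetical driver: plan the task, execute each step, then check its own work.
function runAgentTask(task: string, verify: (steps: Step[]) => boolean): AgentResult {
  const steps = plan(task);
  for (const step of steps) {
    step.done = true; // stand-in for editing files / running commands
  }
  return { steps, verified: verify(steps) };
}

const result = runAgentTask("add newsletter signup", (s) => s.every((x) => x.done));
console.log(result.verified); // true
```

The point of the sketch is the shape of the loop: verification is a first-class step, not an afterthought, which is why the agent can hand you a finished, tested result rather than a diff to babysit.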
The "Vibe Coding" Paradigm
Google calls it "vibe coding" — you express the intent, the vibe, of what you want, and the agent handles the implementation. This sounds like marketing fluff until you experience it.
Here's a real example from last week:
The prompt: "Add a newsletter signup section to the blog page. Make it premium — glassmorphism effect, gradient background, animated input field. Should integrate with our existing form handling."
What the agent did:
- Read the existing blog page component to understand the structure
- Created an implementation plan with file changes
- Created a new BlogNewsletter component with glassmorphism CSS
- Added framer-motion animations for the input field
- Integrated with the existing form submission handler
- Updated the blog page to include the new component
- Ran the dev server, opened the browser, took a screenshot
- Presented the result for review
Elapsed time: 4 minutes. Quality: production-ready on the first pass. Would this have taken a developer an hour? Two hours? It doesn't matter. The agent did it in four minutes, and the result was excellent.
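For flavour, here is roughly the kind of styling and form-handling glue involved in a task like this — a hand-written sketch under our own assumptions, not the agent's actual output. The style values are illustrative, and `prepareSignup` is a hypothetical stand-in for the hand-off to an existing form handler.

```typescript
// Sketch of a glassmorphism style block of the kind the prompt asked for.
// Values are illustrative, not the agent's actual generated CSS.
const glassCard: Record<string, string> = {
  background: "rgba(255, 255, 255, 0.08)",       // translucent layer
  backdropFilter: "blur(12px)",                  // the frosted-glass effect
  border: "1px solid rgba(255, 255, 255, 0.2)",
  borderRadius: "16px",
  backgroundImage:
    "linear-gradient(135deg, rgba(99,102,241,0.25), rgba(236,72,153,0.25))",
};

// Minimal client-side check before handing off to an existing form
// submission handler (the handler itself is out of scope here).
function prepareSignup(email: string): { ok: boolean; payload?: { email: string } } {
  const trimmed = email.trim();
  const valid = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(trimmed);
  return valid ? { ok: true, payload: { email: trimmed } } : { ok: false };
}
```

None of this is hard to write by hand; the win is that the agent produced the component, the animation wiring, and the integration in one pass, consistent with the surrounding codebase.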
Multi-Model Support: The Safety Net
While Gemini 3 is the primary engine, Antigravity also supports models from Anthropic and OpenAI. This isn't just about choice — it's about resilience.
We've found that different models excel at different tasks:
- Gemini 3 Pro: Best all-rounder. Excellent at full-stack development, large codebase understanding, and complex refactoring.
- Claude (Anthropic): Superior for nuanced writing, documentation, and code review where subtlety matters.
- Gemini 3 Deep Think: Best for architectural decisions, debugging complex race conditions, and problems requiring multi-step reasoning.
- Gemini 3 Flash: Fastest for quick completions, simple edits, and high-volume repetitive tasks.
The ability to switch models mid-task means you're never stuck. If one model struggles with a particular problem, try another. The IDE makes this effortless.
The Honest Downsides
No tool is perfect. Here's where Antigravity frustrates us:
Rate Limits and Throttling
During peak hours, Gemini 3 Pro can be noticeably slower to respond. Google's "generous rate limits" are generous for a free tool, but if you're running three agents simultaneously on complex tasks, you'll hit them. For our workload, we've needed to stagger heavy tasks and use Flash for lighter work to stay within limits.
The Planning Trap
If you give the agent a vague prompt, it will produce a vague result. Antigravity is most powerful when you use its planning mode — let it create an implementation plan first, review and refine the plan, then execute. Skip the planning step and you get code that technically works but doesn't fit your architecture.
Context Window Isn't Magic
Yes, the 2M token context window is enormous. But context window size and context utilisation are different things. The agent sometimes misses a relevant file in a large codebase, or applies a pattern from one part of the code that's inconsistent with another part. Regular checkpoints and explicit references ("look at how we handle this in src/lib/auth.ts") dramatically improve results.
How We Use It: The QFA Workflow
Our daily workflow with Antigravity:
- Morning planning: Define the day's tasks as specific, well-scoped prompts. "Add pagination to the blog listing page" not "improve the blog."
- Agent kickoff: Start agents in Manager view for independent tasks. Use Editor view for tasks requiring real-time collaboration.
- Review cycles: Review implementation plans before execution. Review code after execution. Never auto-merge without review.
- Verification: The agent runs builds, tests, and browser verification. We review the recordings and screenshots.
- Commit and push: Approved changes get committed with descriptive messages. The agent writes the commit messages too.
Output increase since adopting Antigravity: substantial. Not because of faster typing. Because the agent eliminates the gap between intention and implementation. You think it, the agent builds it, you review it. The bottleneck shifts from "how do I implement this?" to "what should I build next?"
The Bottom Line
Google Antigravity isn't perfect. But it's the most significant change to how we write software since the invention of the IDE itself. It doesn't replace developers — it transforms what a single developer can accomplish. And for a small, ambitious team like ours, that transformation is everything.
If you're still writing every line by hand, you're not being careful. You're being slow. The developers who thrive in 2026 aren't the ones who type fastest. They're the ones who direct AI agents most effectively. Antigravity is the best tool we've found for making that transition.