Industry News | 8/20/2025

JetBrains AI Assistant Expands to Proactive File-Wide Edits

JetBrains’ AI Assistant now quietly takes a broader view of your code. In beta, it suggests file-wide edits based on your recent changes, such as renaming variables, adding helpers, and adjusting logic across a file. The move signals deeper IDE integration and an intensifying race with other AI coding tools.

JetBrains AI Assistant Evolves to Proactively Suggest File-Wide Edits

"Next Edit Suggestions" is JetBrains’ latest beta feature inside the AI Assistant, designed to move beyond single-line completions into intelligent, file-spanning edits. The idea is simple on the surface: after you make a change, the assistant looks at the surrounding code, the structure of the file, and your recent edits to propose related updates across the same file. It’s a bit like having a thoughtful teammate who can scan the whole chapter you’re writing and suggest edits that keep your story consistent from top to bottom.

What this feature does

  • Proactive edits across a file: Instead of waiting for you to finish a thought, the assistant suggests complementary changes such as renaming a variable, inserting a helper method, or reworking a block of logic to keep things cohesive.
  • Context-aware suggestions: The model uses your immediate edits as the prompt, then returns a set of changes for other parts of the file. You review and apply them, or you skip them if they don’t fit.
  • Deep IDE integration: This isn’t a separate tool you open in a browser. It’s woven into JetBrains’ IDEs, leveraging the code structure and project context to surface relevant recommendations.
  • Beyond code edits: The AI Assistant can also propose refactorings, generate documentation, craft commit messages, and explain code when you need a quick gut check.

Think of it as moving from a flashlight that lights the next line to a lantern that casts light across the entire file, helping you see inconsistencies you might miss in a single glance.
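
To make that concrete, here is a hypothetical Kotlin example (illustrative only, not output from the feature) of the kind of file-wide suggestion Next Edit Suggestions is aimed at: you rename one constructor parameter, and the assistant proposes updating the other usages in the file and extracting a shared helper so the related calculations stay consistent.

  // You rename the constructor parameter `tax` to `taxRate`...
  class Invoice(val subtotal: Double, val taxRate: Double) {
      // ...and the assistant might suggest the rest of the file follow suit:
      // usages that still referred to `tax` get updated, and the repeated
      // rounding logic is pulled into one helper.
      fun total(): Double = roundToCents(subtotal * (1 + taxRate))

      fun taxAmount(): Double = roundToCents(subtotal * taxRate)

      // Suggested helper so both calculations round the same way.
      private fun roundToCents(value: Double): Double =
          kotlin.math.round(value * 100) / 100
  }

The specific refactoring matters less than the scope: the suggestions span the whole file, so call sites and helpers stay in sync with the edit you just made.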

How it works under the hood

  • Multi-LLM strategy: JetBrains uses a mix of large language models, including OpenAI and Google models, alongside Mellum, its in-house model tailored for coding tasks.
  • Vendor-neutral approach: They blend different models to fit the task, from generic code generation to deeper debugging questions.
  • Privacy-forward options: Users can opt for cloud-based models for many features, or run local/offline modes that don’t send code over the internet. Enterprise customers can also deploy on-premises in air-gapped environments, with full control over data and models.

This flexibility is a core part of JetBrains’ AI philosophy: make AI feel like a native, seamless part of the developer workflow rather than an add-on dialog box. By running inside the IDE, the assistant can better understand project structure, dependencies, and naming conventions, which helps in generating more relevant file-wide edits.
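
JetBrains hasn’t published the internals of this multi-model setup, but as a minimal sketch (with hypothetical names throughout: AiTask, ModelBackend, ModelRouter), a vendor-neutral, task-based routing layer might look something like this in Kotlin:

  // Hypothetical sketch only: task-based routing across a mix of model backends,
  // not JetBrains' actual architecture or APIs.
  enum class AiTask { INLINE_COMPLETION, FILE_WIDE_EDIT, EXPLAIN_CODE, COMMIT_MESSAGE }

  interface ModelBackend {
      val name: String
      fun generate(prompt: String): String
  }

  class ModelRouter(
      private val defaultBackend: ModelBackend,        // e.g. a coding-tuned in-house model
      private val perTask: Map<AiTask, ModelBackend>,  // heavier general models for some tasks
  ) {
      // Pick the backend best suited to the task; fall back to the default otherwise.
      fun backendFor(task: AiTask): ModelBackend = perTask[task] ?: defaultBackend
  }

The design choice this illustrates is that the model mix stays an implementation detail behind one routing decision: fast completion-style requests and heavier explanation or commit-message requests can land on different models without the rest of the IDE caring which vendor served them.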

The competitive landscape and reception

JetBrains isn’t operating in a vacuum. Microsoft’s GitHub Copilot remains a major player with strong autocompletion and broad editor support. JetBrains differentiates itself with deep IDE integration and a focus on project-wide context rather than line-by-line prompts. Proponents say this yields more cohesive changes and less friction when you’re juggling several files at once.

But not everyone is sold yet. Early user feedback has been mixed: some praise the productivity boost, while others report latency and performance concerns. A notable controversy around marketplace reviews surfaced recently, with JetBrains defending its moderation decisions and acknowledging that transparency could be improved. These teething issues aren’t exclusive to JetBrains; many AI tools wrestle with how to balance speed, accuracy, and user trust as features become more ambitious.

Deployment choices that matter

  • Cloud vs. local: Many features rely on cloud-based models, but full-line code completion can also run locally. This matters for developers who want to keep sensitive code away from the cloud.
  • On-premises options: For enterprises with strict security requirements, JetBrains provides air-gapped deployments that let organizations control data and model usage.

In a world where sensitive code and proprietary logic sit at the heart of many teams’ work, those options aren’t a luxury; they’re a baseline expectation for anyone evaluating AI coding assistants.
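
To illustrate why that boundary matters, here is a hypothetical Kotlin sketch (the names and the airGapped flag are assumptions, not real JetBrains settings) of the decision an assistant has to make before any code is sent anywhere:

  // Hypothetical sketch: enforcing "code never leaves the machine" when an
  // air-gapped or local-only mode is selected. Illustrative names only.
  interface CompletionBackend {
      fun fullLineCompletion(fileContext: String): String
  }

  class LocalBackend(private val modelPath: String) : CompletionBackend {
      override fun fullLineCompletion(fileContext: String): String =
          TODO("run the on-device model loaded from $modelPath")
  }

  class CloudBackend(private val endpoint: String) : CompletionBackend {
      override fun fullLineCompletion(fileContext: String): String =
          TODO("send the request to $endpoint")
  }

  // The policy check happens before anything is serialized for a network call.
  fun chooseBackend(airGapped: Boolean, local: LocalBackend, cloud: CloudBackend): CompletionBackend =
      if (airGapped) local else cloud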

What this means for daily coding

  • You’re not just asking for a line; you’re inviting a partner to help you rethink an entire file. If you start refactoring or renaming, Next Edit Suggestions can surface related edits that keep the whole file harmonious.
  • The feature is designed to be review-first: you see highlighted suggestions and decide what to apply. It’s iterative, not prescriptive.
  • One practical payoff is smoother onboarding for new developers on a project. If a file follows a naming convention or a shared helper pattern, the assistant can nudge edits that reinforce those patterns automatically.

Looking ahead

As JetBrains rolls out this beta, the emphasis is on improving model accuracy, reducing latency, and deepening contextual understanding. The company’s strategy of supporting multiple models, offering privacy-conscious deployment, and maintaining tight IDE integration suggests it wants to be the “native” AI experience inside a developer’s day-to-day tools. If the beta proves durable and the edits stay relevant across different languages and project styles, it could push the industry toward a more proactive, context-aware approach to AI-assisted coding.

The longer arc

With more sophisticated, context-aware AI edits, we’re watching the line between "coding tool" and "coding partner" blur. JetBrains is betting that developers don’t just want code that’s faster to write—they want code that’s more coherent, easier to maintain, and connected to the rest of the project in ways that only intelligent tooling can enable.