How to handle multiple Repositories and a large Codebase in Cursor
By Kevin Kern

Workspace Setup for Multiple Repositories
Cursor doesn’t yet fully support multi-root workspaces out of the box, but there are workarounds. If you simply add separate folders for each repo in one workspace, the AI’s codebase context might only index the first folder due to a known bug. The recommended solution is to combine the repos under a single parent directory and open that parent as your Cursor project. This effectively treats your multi-repo setup like a monorepo so the AI can index all sub-projects together.
Keep in mind that codebase indexing across many files still has limits – Cursor will retrieve only the most relevant snippets from your code when answering a query. For very large combined projects, the context is smartly narrowed to what’s needed. In practice, this means your backend, frontend, and agent code can live side by side in one workspace, enabling the AI to search and reference any part of any repo when you prompt it.
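Concretely, the parent-folder setup can look like this (repo names are placeholders for your own projects):

```shell
# Create one parent folder and put every repo inside it.
mkdir -p my-workspace/backend my-workspace/frontend my-workspace/agent

# In a real setup you would clone each repository into the parent instead:
#   cd my-workspace
#   git clone git@github.com:me/backend.git
#   git clone git@github.com:me/frontend.git
#   git clone git@github.com:me/agent.git

# Then open the parent directory as a single Cursor project,
# e.g. via "File > Open Folder" or the `cursor` CLI:
#   cursor my-workspace
ls my-workspace
```

Because Cursor indexes whatever folder you open, opening `my-workspace` (rather than each repo individually) is what gives the AI one index covering all three codebases.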
Project-Specific Context and Cursor Rules
Cursor’s Project Rules (the .cursor/rules files) let you provide guidance or context to the AI for your codebase. In a multi-repo workspace, you likely want different rules for each project (for example, coding style for the Next.js frontend vs. the Flask backend). Originally, Cursor would only load the rules from the first folder, ignoring rules files in other sub-projects. This was a limitation, but a recent update introduced support for nested rules directories, meaning Cursor can now detect and apply rules in subfolders of your workspace. In other words, you can have separate .cursor/rules files in each repo folder and Cursor will consider them when you’re working in that context (ensure you’re on a Cursor version that supports this).
When multiple rule sets are present, the AI agent decides which rules to apply based on context. Each rule file can specify a glob/path pattern and a description that hint at when it’s relevant, and it’s wise to make those descriptions explicit: the more specific the trigger condition, the better Cursor can pick the right rules for a given file. In practice, you might maintain a set of global rules (in the root .cursor folder) that apply to all repos, along with project-specific ones in each sub-repo. As of the latest update, Cursor scans nested rule folders, and the UI now indicates which rules are active, making it clearer which guidelines apply in each context.
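For illustration, a project-specific rule for the frontend might live at `frontend/.cursor/rules/frontend.mdc` (the file name and bullet points are hypothetical; the `description`/`globs`/`alwaysApply` frontmatter fields follow Cursor’s rule format):

```
---
description: Conventions for the Next.js frontend; apply when editing frontend code
globs: frontend/**
alwaysApply: false
---

- Use the App Router and TypeScript strict mode.
- Call the Flask backend only through the shared API client.
```

The explicit description and narrow glob give the agent a clear trigger for when this rule applies.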
Using Git Worktrees and External Tools
Git worktrees are a native Git feature that let you have multiple working directories from the same repository. In Cursor’s context, worktrees can be handy if you want to run multiple Cursor instances in parallel – for example, checking out different branches or variants of your code and having an AI agent work in each one simultaneously. However, Git worktrees don’t inherently merge contexts from different repositories – they’re more about multitasking within a single repo. If your goal is sharing AI context across distinct repos, worktrees alone won’t solve that, but they can help manage separate instances if you decide to keep repos separate and coordinate changes manually.
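A minimal worktree sketch (branch names are placeholders; in practice you’d run `git worktree add` inside your existing repo rather than a throwaway one):

```shell
# Throwaway repo just to demonstrate the commands.
git init -q demo-repo
git -C demo-repo config user.email "dev@example.com"
git -C demo-repo config user.name "Dev"
git -C demo-repo commit -q --allow-empty -m "init"
git -C demo-repo branch feature-a
git -C demo-repo branch feature-b

# One extra working directory per branch, side by side with the main checkout:
git -C demo-repo worktree add ../demo-featureA feature-a
git -C demo-repo worktree add ../demo-featureB feature-b

git -C demo-repo worktree list   # main tree plus both linked trees
```

You can then open `demo-featureA` and `demo-featureB` in separate Cursor windows and let an agent work in each one in parallel.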
The community has also created external tools for feeding code into AI models. One example is Repo Prompt, made by Eric Provencher, a tool that lets you select one or more directories and inject those files into a prompt for an LLM. Similarly, Repomix is an open-source tool that packs a whole repository’s content into a single prompt. Even now, advanced users sometimes prefer these tools to bypass Cursor’s context size limits and get a “whole codebase dump” into the AI.
Cursor Tools (by Andrew Jefferson) is a community-driven CLI plugin that automates a lot of this, integrating Repomix under the hood. For example, you can ask the Cursor agent to “use cursor-tools to analyze the repo,” and it will fetch the entire repository and query a powerful model with it. This approach essentially gives the AI a comprehensive view of your code outside the normal context window. These tools aren’t officially part of Cursor, but they are community-endorsed workarounds for multi-repo or large-codebase scenarios. If you’re comfortable experimenting, they can greatly enhance cross-repo context sharing – just be mindful of API token costs when dumping large codebases into an LLM.
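For example, Repomix can be run directly with npx (requires Node.js; the flags below follow the Repomix README, but verify them against the current docs):

```shell
# Pack the current repository into one prompt-ready Markdown file:
npx --yes repomix --style markdown -o codebase.md

# To pack only one sub-repo of a combined workspace, pass its path:
#   npx --yes repomix backend --style markdown -o backend.md
```

The resulting file can be pasted into (or referenced from) a prompt, giving the model a full-codebase view in one shot.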
Community Workflows and Best Practices
A common workflow is to open all related repos in one Cursor workspace and then ensure the AI understands the relationships between them. Including an architecture overview in an always-applied rule (a .cursor/rules file with alwaysApply: true) is highly beneficial.
If your repos have docs outlining the system design, the AI can use them to orient itself. You might have the backend README describe its Supabase schema and endpoints, the frontend README describe how it calls the backend, and the agent README explain its role – each available for the AI to draw context from.
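Putting that overview into an always-applied rule might look like this (paths and one-line role summaries are illustrative, mirroring the setup described above):

```
---
description: Cross-repo architecture overview
alwaysApply: true
---

This workspace combines three repositories:
- backend/  – Flask API with a Supabase database; exposes the REST endpoints
- frontend/ – Next.js app that calls the backend API
- agent/    – background agent that works against the backend
```

Because it always applies, the agent carries this map of the system into every conversation, regardless of which sub-repo you are editing.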
When actually coding with this setup, be explicit with the AI about which project you’re working in. For example, if you ask the AI to implement a feature that touches multiple repos, you might need to guide it step by step. Cursor’s @ syntax can reference files across subdirectories, but the AI may not automatically hop between projects unless prompted. In practice, this means you as the developer should double-check that changes in one repo don’t require complementary changes in another that the AI might have overlooked. It helps to ask the AI for a plan or overview of changes across repos before implementing, so you can verify its understanding of the full scope.
Another challenge is managing different environments and dependencies in one workspace. Multiple repos likely have distinct stack setups (Node versions, Python virtualenvs, and so on). In a combined Cursor workspace, the editor’s language server might not resolve all imports or types correctly because it isn’t aware of the separate package environments. A workaround some use is the direnv tool, which auto-loads environment variables or virtual environments when you cd into a directory. By setting up direnv for each subfolder, whenever Cursor’s agent enters that project it can activate the right Node or Python environment. This is especially useful if you have the AI running tests or executing code in each repo – it ensures the right dependencies are in place.
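A sketch of per-repo `.envrc` files (directory names and variables are placeholders; `layout python3` and `PATH_add` are standard direnv stdlib functions):

```shell
mkdir -p backend frontend

# backend/.envrc – create/activate a Python venv when entering backend/:
cat > backend/.envrc <<'EOF'
layout python3
export FLASK_DEBUG=1
EOF

# frontend/.envrc – put local Node binaries on PATH when entering frontend/:
cat > frontend/.envrc <<'EOF'
export NODE_ENV=development
PATH_add node_modules/.bin
EOF

# Approve each file once; direnv then loads/unloads it automatically on cd:
#   direnv allow backend && direnv allow frontend
```

After `direnv allow`, any shell the agent opens inside a subfolder picks up that project’s environment without manual activation.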
Iterative development is key: Many devs build multi-repo features in pieces. You might let the AI work on the backend first (with the other repos open for reference), then the frontend, then the agent. This compartmentalization can make it easier to validate each part. However, because you have all code loaded, the AI can cross-reference as needed. If you find the AI is confused by having too much at once, you can temporarily limit the scope by closing one folder or using the multiple workspace approach where you open each repo in separate Cursor windows when focusing on it. It’s a trade-off between focus and breadth of context.
Finally, consider consistency across your repos. If you maintain similar coding styles or shared utilities, encode those in your Cursor rules or documentation. By telling the AI about that, it will try to produce more unified code across your backend, frontend, and agent. Also, keep an eye on Cursor’s updates and the community forum. Multi-repo support is a popular request in the forums. In the meantime, the strategies above – single parent workspace, clear project-specific rules, possibly leveraging tools like Repoprompt or cursor-tools for extra context, and guiding the AI with careful prompts – are the best practices the community has honed for multi-repo development in Cursor.