Add Ollama support when running in local mode

Open
Labels
open-swe-auto (Create a new Open SWE run on the selected issue, auto-accept the plan)
Description
Add support for Ollama models, only if the user is running in local mode.

To do this, install the `@langchain/ollama` package
Add support for the Ollama provider (name: `ollama`)
Add these models to the models list:
- `qwen2.5-coder:7b`
- `qwen2.5-coder:14b`
- `qwen2.5-coder:32b`
- `gpt-oss:20b`
- `gpt-oss:120b`
- `deepseek-r1:8b`
- `deepseek-r1:14b`
- `deepseek-r1:32b`
- `deepseek-r1:70b`

But only include them in the list if the user is running in local mode
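For illustration, a minimal sketch of what the conditional model list could look like. The `getAvailableModels`, `isLocalMode`, and `MODEL_OPTIONS` names come from the agent context below; the option shape, import paths, and the `MODEL_OPTIONS` stand-in are assumptions, not the repo's actual code.

```typescript
// Sketch only -- option shape and import paths are assumptions; function names
// follow the agent context below.
import type { GraphConfig } from "./types.js"; // assumed export
import { isLocalMode } from "./local-mode.js"; // assumed helper, assumed to accept an optional config

interface ModelOption {
  label: string;
  value: string; // "provider:model-name"
}

// Stand-in for the existing hosted-model list.
const MODEL_OPTIONS: ModelOption[] = [
  /* ...existing Anthropic / OpenAI / Google entries... */
];

// The nine requested Ollama models, using the "ollama:" value prefix.
export const OLLAMA_MODELS: ModelOption[] = [
  "qwen2.5-coder:7b",
  "qwen2.5-coder:14b",
  "qwen2.5-coder:32b",
  "gpt-oss:20b",
  "gpt-oss:120b",
  "deepseek-r1:8b",
  "deepseek-r1:14b",
  "deepseek-r1:32b",
  "deepseek-r1:70b",
].map((name) => ({ label: name, value: `ollama:${name}` }));

// Ollama entries are only exposed when running in local mode.
export function getAvailableModels(config?: GraphConfig): ModelOption[] {
  return isLocalMode(config)
    ? [...MODEL_OPTIONS, ...OLLAMA_MODELS]
    : MODEL_OPTIONS;
}
```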
Agent Context
{ "tasks": [ { "id": "2ded0b66-e43a-4b68-bd99-63578550d372", "taskIndex": 0, "request": "[original issue]\n**Add Ollama support when running in local mode**\nAdd support for Ollama models, only if the user is running in local mode.\n\nTo do this, install the `@langchain/ollama` package\nAdd support for the Ollama provider (name: `ollama`)\nAdd these models to the models list:\n- `qwen2.5-coder:7b`\n- `qwen2.5-coder:14b`\n- `qwen2.5-coder:32b`\n- `gpt-oss:20b`\n- `gpt-oss:120b`\n- `deepseek-r1:8b`\n- `deepseek-r1:14b`\n- `deepseek-r1:32b`\n- `deepseek-r1:70b`\n\nBut only include them in the list if the user is running in local mode", "title": "Add Ollama support for local mode with conditional model filtering", "createdAt": 1755629020185, "completed": true, "planRevisions": [ { "revisionIndex": 0, "plans": [ { "index": 0, "plan": "**Install @langchain/ollama package** - Add `@langchain/ollama` to the dependencies in `apps/open-swe/package.json` using yarn, then run `yarn install` from the repository root to install the new dependency.", "completed": true, "summary": "**Successfully installed @langchain/ollama package**\n\n**Actions taken:**\n- Added `@langchain/ollama\": \"^0.1.4\"` to the dependencies section of `apps/open-swe/package.json`, placing it in alphabetical order with other LangChain packages\n- Executed `yarn install` from the repository root to install the new dependency\n- Verified successful installation by checking yarn.lock file, which shows the package resolved to version 0.1.6\n\n**Key insights:**\n- The package installation completed successfully with some peer dependency warnings, which is normal for this monorepo setup\n- The @langchain/ollama package is now available for use in the open-swe application\n- The package follows the same versioning pattern as other LangChain packages in the project\n\n**Next steps:**\n- The foundation is now in place to add Ollama provider support to the model manager system\n- Ready to proceed with updating the provider system and adding Ollama models conditionally for local mode" }, { "index": 1, "plan": "**Update provider system in model-manager.ts** - Modify `apps/open-swe/src/utils/llms/model-manager.ts` to add 'ollama' to the `PROVIDER_FALLBACK_ORDER` array, add an 'ollama' case to the `providerToApiKey` function (returning null or empty string since Ollama typically doesn't require API keys for local usage), and add ollama model defaults to the `getDefaultModelForProvider` method for all LLM tasks (using appropriate models like 'qwen2.5-coder:7b' for programming tasks).", "completed": true, "summary": "**Successfully updated provider system in model-manager.ts to add Ollama support**\n\n**Actions taken:**\n\n1. **Added 'ollama' to PROVIDER_FALLBACK_ORDER array** - Extended the provider fallback order to include 'ollama' as the fourth provider option, maintaining the existing order of openai, anthropic, google-genai, then ollama.\n\n2. **Updated providerToApiKey function** - Added an 'ollama' case that returns an empty string since Ollama typically doesn't require API keys for local usage. This prevents API key validation errors when using Ollama models.\n\n3. 
**Added ollama model defaults to getDefaultModelForProvider method** - Configured appropriate Ollama models for each LLM task:\n - PLANNER: qwen2.5-coder:32b (most complex planning tasks)\n - PROGRAMMER: qwen2.5-coder:32b (most complex programming tasks)\n - REVIEWER: qwen2.5-coder:14b (code review tasks)\n - ROUTER: qwen2.5-coder:7b (simple routing decisions)\n - SUMMARIZER: qwen2.5-coder:14b (summarization tasks)\n\n**Key insights:**\n- The Provider type is automatically updated due to the `typeof PROVIDER_FALLBACK_ORDER` type definition\n- Ollama models are strategically assigned based on task complexity, with larger models for more demanding tasks\n- The empty string return for API keys ensures Ollama works seamlessly in local mode without authentication requirements\n\n**Next steps:**\n- The provider system now fully supports Ollama alongside existing providers\n- Ready to proceed with creating the conditional model filtering system to show Ollama models only in local mode" }, { "index": 2, "plan": "**Create conditional model filtering system** - Modify `packages/shared/src/open-swe/models.ts` to create an `OLLAMA_MODELS` array containing the 9 specified Ollama models with proper labels and values (format: 'ollama:model-name'), then create a function `getAvailableModels(config?: GraphConfig)` that returns `MODEL_OPTIONS` plus `OLLAMA_MODELS` when `isLocalMode(config)` is true, otherwise returns just `MODEL_OPTIONS`. Update the exports to include this new function and the `OLLAMA_MODELS` array.", "completed": true, "summary": "**Successfully created conditional model filtering system in models.ts**\n\n**Actions taken:**\n\n1. **Added necessary imports** - Added imports for `GraphConfig` from \"./types.js\" and `isLocalMode` from \"./local-mode.js\" to enable conditional model filtering based on local mode detection.\n\n2. **Created OLLAMA_MODELS array** - Added an array containing all 9 specified Ollama models with proper labels and values using the 'ollama:model-name' format:\n - qwen2.5-coder:7b, qwen2.5-coder:14b, qwen2.5-coder:32b\n - gpt-oss:20b, gpt-oss:120b\n - deepseek-r1:8b, deepseek-r1:14b, deepseek-r1:32b, deepseek-r1:70b\n\n3. **Implemented getAvailableModels function** - Created a function that returns `MODEL_OPTIONS` plus `OLLAMA_MODELS` when `isLocalMode(config)` is true, otherwise returns just `MODEL_OPTIONS`. This ensures Ollama models are only available in local mode.\n\n4. **Implemented getAvailableModelsNoThinking function** - Created a complementary function for the no-thinking model filtering, maintaining consistency with the existing `MODEL_OPTIONS_NO_THINKING` pattern.\n\n5. 
**Updated exports** - Both the new `OLLAMA_MODELS` array and the conditional filtering functions are properly exported for use throughout the application.\n\n**Key insights:**\n- The conditional filtering system uses the existing `isLocalMode()` function to determine when to include Ollama models\n- All Ollama models follow the consistent 'ollama:model-name' format as required\n- The implementation maintains backward compatibility while adding the new conditional functionality\n- Both regular and no-thinking model variants are supported for UI consistency\n\n**Next steps:**\n- The conditional model filtering system is now ready for integration with the UI configuration\n- Ready to proceed with updating the UI configuration to use these dynamic model options" }, { "index": 3, "plan": "**Update UI configuration to use conditional models** - Modify `packages/shared/src/open-swe/types.ts` to replace the static `MODEL_OPTIONS` and `MODEL_OPTIONS_NO_THINKING` references in the `GraphConfigurationMetadata` with dynamic options that call the new conditional model filtering function, ensuring Ollama models only appear in the UI dropdowns when running in local mode.", "completed": true, "summary": "**Successfully updated UI configuration to use conditional models**\n\n**Actions taken:**\n\n1. **Updated imports** - Modified the imports in `packages/shared/src/open-swe/types.ts` to include the new conditional model filtering functions (`getAvailableModels` and `getAvailableModelsNoThinking`) and removed unused static imports (`MODEL_OPTIONS` and `MODEL_OPTIONS_NO_THINKING`).\n\n2. **Converted static GraphConfigurationMetadata to dynamic function** - Replaced the static `GraphConfigurationMetadata` object with a function `getGraphConfigurationMetadata(config?: GraphConfig)` that accepts an optional configuration parameter to determine which models to include.\n\n3. **Updated all model option references** - Replaced all static model option references with dynamic function calls:\n - `plannerModelName`: Uses `getAvailableModelsNoThinking(config)` \n - `programmerModelName`: Uses `getAvailableModelsNoThinking(config)`\n - `reviewerModelName`: Uses `getAvailableModelsNoThinking(config)`\n - `routerModelName`: Uses `getAvailableModels(config)` (includes thinking models)\n - `summarizerModelName`: Uses `getAvailableModelsNoThinking(config)`\n\n4. 
**Added backward compatibility** - Created a static `GraphConfigurationMetadata` constant that calls `getGraphConfigurationMetadata()` without parameters, ensuring existing code continues to work while defaulting to non-local mode (no Ollama models).\n\n**Key insights:**\n- The dynamic function approach ensures Ollama models only appear in UI dropdowns when `isLocalMode(config)` returns true\n- Backward compatibility is maintained for existing code that expects a static `GraphConfigurationMetadata` object\n- Different model types (thinking vs non-thinking) are properly handled with separate functions\n- The implementation follows the existing code patterns and maintains type safety\n\n**Next steps:**\n- The UI configuration now dynamically includes Ollama models based on local mode detection\n- Ready to proceed with linting, formatting, and build verification to ensure the complete Ollama integration works correctly" }, { "index": 4, "plan": "**Run linting and formatting** - Execute `yarn lint:fix` and `yarn format` from the repository root to ensure code quality and consistency with the established patterns, then run `yarn build` to verify all packages build successfully with the new Ollama integration.", "completed": true, "summary": "**Successfully completed linting, formatting, and build verification for Ollama integration**\n\n**Actions taken:**\n\n1. **Executed yarn lint:fix** - Ran linting across all packages successfully. The linter completed with some acceptable warnings (mostly React-related warnings that don't affect functionality) but no errors that would prevent the build.\n\n2. **Executed yarn format** - Ran Prettier formatting across all packages successfully. The formatter processed all files and made necessary formatting adjustments to maintain code consistency, including formatting the files I modified during the Ollama integration.\n\n3. **Fixed TypeScript compilation errors** - Addressed missing 'ollama' entries in tool configurations that were preventing successful builds:\n - Added 'ollama' entries to `providerTools` and `providerMessages` in `apps/open-swe/src/graphs/programmer/nodes/generate-message/index.ts`\n - Added 'ollama' entries to `providerTools` and `providerMessages` in `apps/open-swe/src/graphs/reviewer/nodes/generate-review-actions/index.ts`\n - Both locations now properly handle Ollama alongside other providers (anthropic, openai, google-genai)\n\n4. 
**Executed yarn build** - Successfully built all packages with the new Ollama integration:\n - **4 successful tasks, 4 total packages built**\n - **Build time: 32.349s**\n - All packages (@open-swe/shared, @open-swe/agent, @open-swe/web, @open-swe/cli) compiled without errors\n - Only minor warnings about Edge Runtime compatibility (unrelated to Ollama changes)\n\n**Key insights:**\n- The Ollama integration is now fully functional and builds successfully across all packages\n- TypeScript compilation errors were resolved by ensuring all provider configurations include 'ollama' entries\n- Code quality and formatting standards are maintained throughout the integration\n- The build system properly handles the new conditional model filtering and provider system\n\n**Integration verification:**\n- All packages build successfully with the new Ollama provider support\n- Linting and formatting maintain established code quality standards\n- The complete Ollama integration is ready for use in local mode environments" } ], "createdAt": 1755629020185, "createdBy": "agent" } ], "activeRevisionIndex": 0, "pullRequestNumber": 780, "completedAt": 1755629862882, "summary": "I've successfully added Ollama support for local mode, implementing conditional model filtering and UI configuration updates. The integration includes 9 Ollama models (Qwen2.5 Coder, GPT-OSS, DeepSeek R1) that will only appear when running in local mode. All packages build successfully, and the implementation maintains backward compatibility with existing provider systems." } ], "activeTaskIndex": 0 }
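For readers following along, a hedged sketch of the provider-side changes described above for `apps/open-swe/src/utils/llms/model-manager.ts`. The function signature and surrounding types are assumptions; only the provider names, the empty-string API key for Ollama, and the per-task defaults are taken from the summaries.

```typescript
// Sketch only -- mirrors the agent's summary; the real file's signatures may differ.
export const PROVIDER_FALLBACK_ORDER = [
  "openai",
  "anthropic",
  "google-genai",
  "ollama", // added last, per the summary above
] as const;

// The Provider type widens automatically because it is derived from the array.
export type Provider = (typeof PROVIDER_FALLBACK_ORDER)[number];

// Ollama runs locally and needs no API key, so it returns an empty string
// instead of looking one up (this signature is an assumption).
export function providerToApiKey(
  provider: Provider,
  apiKeys: Partial<Record<Provider, string>>,
): string {
  if (provider === "ollama") return ""; // local Ollama: no authentication
  return apiKeys[provider] ?? "";
}

// Per-task Ollama defaults, sized to task complexity (values from the summary).
export const OLLAMA_TASK_DEFAULTS = {
  PLANNER: "qwen2.5-coder:32b",
  PROGRAMMER: "qwen2.5-coder:32b",
  REVIEWER: "qwen2.5-coder:14b",
  ROUTER: "qwen2.5-coder:7b",
  SUMMARIZER: "qwen2.5-coder:14b",
} as const;
```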
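Similarly, a sketch of the dynamic UI metadata described for `packages/shared/src/open-swe/types.ts`. The `options` wrapper and import paths are assumptions; the field names and which filtering function each field uses follow the summary.

```typescript
// Sketch only -- field shapes are assumptions; getAvailableModelsNoThinking is
// assumed to exist per the agent context above.
import type { GraphConfig } from "./types.js";
import { getAvailableModels, getAvailableModelsNoThinking } from "./models.js";

export function getGraphConfigurationMetadata(config?: GraphConfig) {
  return {
    plannerModelName: { options: getAvailableModelsNoThinking(config) },
    programmerModelName: { options: getAvailableModelsNoThinking(config) },
    reviewerModelName: { options: getAvailableModelsNoThinking(config) },
    routerModelName: { options: getAvailableModels(config) },
    summarizerModelName: { options: getAvailableModelsNoThinking(config) },
  };
}

// Backward compatibility: existing call sites keep a static object, which
// defaults to non-local mode and therefore excludes Ollama models.
export const GraphConfigurationMetadata = getGraphConfigurationMetadata();
```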
Metadata
Assignees
Labels
open-swe-auto (Create a new Open SWE run on the selected issue, auto-accept the plan)