Tanzu Platform Chat (cf-mcp-client) is a Spring chatbot application that can be deployed to Cloud Foundry to consume platform AI services. It is built with Spring AI and works with LLMs, vector databases, and Model Context Protocol (MCP) agents.
- Java 21 or higher
  - e.g. using sdkman:

    ```shell
    sdk install java 21.0.7-oracle
    ```
- Maven 3.8+
  - e.g. using sdkman:

    ```shell
    sdk install maven
    ```
- Access to a Cloud Foundry Foundation with the GenAI tile or other LLM services
- Developer access to your Cloud Foundry environment
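Before starting, it can help to confirm the required CLIs are on your `PATH`. A minimal sketch, assuming the tools are invoked as `java`, `mvn`, and `cf`:

```shell
# Check that each prerequisite CLI is available on the PATH.
for tool in java mvn cf; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

Any tool reported as `NOT FOUND` should be installed before continuing.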
- Create a directory for the application and navigate to it:

  ```shell
  mkdir tanzu-platform-chat
  cd tanzu-platform-chat
  ```

- Download the latest JAR file and `manifest.yml` from the Releases page into this directory.
- Push the application to Cloud Foundry from the directory containing the downloaded files:

  ```shell
  cf push
  ```
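For reference, a hypothetical `manifest.yml` consistent with the app name used in the binding steps below might look like the following. This is a sketch only; the file name of the released JAR, the memory setting, and the buildpack are assumptions, so prefer the manifest shipped on the Releases page:

```shell
# Write a hypothetical manifest.yml; all attribute values below are
# assumptions -- replace them with the values from the released manifest.
cat > manifest.yml <<'EOF'
applications:
- name: ai-tool-chat              # app name referenced by the cf bind-service steps
  path: ./cf-mcp-client.jar       # placeholder for the downloaded release JAR
  memory: 1G                      # assumed sizing
  buildpacks:
  - java_buildpack_offline        # assumed buildpack
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 21.+ } }'  # request a Java 21 JRE
EOF
```

With the manifest and JAR in the current directory, `cf push` picks both up automatically.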
- Create a service instance that provides chat LLM capabilities:

  ```shell
  cf create-service genai [plan-name] chat-llm
  ```

- Bind the service to your application:

  ```shell
  cf bind-service ai-tool-chat chat-llm
  ```

- Restart your application to apply the binding:

  ```shell
  cf restart ai-tool-chat
  ```
Now your chatbot will use the LLM to respond to chat requests.
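The steps above can be run as one sequence. `[plan-name]` is a placeholder for a plan from your marketplace, and `ai-tool-chat` is the app name from the downloaded `manifest.yml`:

```shell
# List the plans available for the genai offering (cf CLI v8 syntax;
# older CLI versions use `cf marketplace -s genai`).
cf marketplace -e genai

# Create the chat LLM service instance, bind it, and restart the app.
cf create-service genai [plan-name] chat-llm
cf bind-service ai-tool-chat chat-llm
cf restart ai-tool-chat

# Confirm the instance exists and shows the binding.
cf services
```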
- Create a service instance that provides embedding LLM capabilities:

  ```shell
  cf create-service genai [plan-name] embeddings-llm
  ```

- Create a Postgres service instance to use as a vector database:

  ```shell
  cf create-service postgres on-demand-postgres-db vector-db
  ```

- Bind the services to your application:

  ```shell
  cf bind-service ai-tool-chat embeddings-llm
  cf bind-service ai-tool-chat vector-db
  ```

- Restart your application to apply the bindings:

  ```shell
  cf restart ai-tool-chat
  ```

- Click the document tool on the right side of the screen and upload a PDF file.

Now your chatbot will respond to queries about the uploaded document.
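On-demand service instances such as the Postgres database may provision asynchronously, so it is worth checking that provisioning has finished before binding. A sketch of the full sequence, with `[plan-name]` again a placeholder:

```shell
# Provision the embedding model and the Postgres vector store.
cf create-service genai [plan-name] embeddings-llm
cf create-service postgres on-demand-postgres-db vector-db

# On-demand instances can take a while; check status until it reads
# "create succeeded" before binding.
cf service vector-db

# Bind both services and restart so the bindings take effect.
cf bind-service ai-tool-chat embeddings-llm
cf bind-service ai-tool-chat vector-db
cf restart ai-tool-chat
```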
Model Context Protocol (MCP) servers are lightweight programs that expose specific capabilities to AI models through a standardized interface. These servers act as bridges between LLMs and external tools, data sources, or services, allowing your AI application to perform actions like searching databases, accessing files, or calling external APIs without complex custom integrations.
- Create a user-provided service for an SSE-based MCP server:

  ```shell
  cf cups mcp-server-sse -p '{"mcpSseURL":"https://your-sse-mcp-server.example.com"}'
  ```

- Bind the MCP service to your application:

  ```shell
  cf bind-service ai-tool-chat mcp-server-sse
  ```

- Create a user-provided service for a Streamable HTTP-based MCP server:

  ```shell
  cf cups mcp-server-streamable -p '{"mcpStreamableURL":"https://your-streamable-mcp-server.example.com"}'
  ```

- Bind the MCP service to your application:

  ```shell
  cf bind-service ai-tool-chat mcp-server-streamable
  ```

- Restart your application to apply the bindings:

  ```shell
  cf restart ai-tool-chat
  ```
Your chatbot will now register with the MCP agents, and the LLM will be able to invoke the agents' capabilities when responding to chat requests. The application supports both SSE and Streamable HTTP protocols simultaneously.
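Because `cf cups -p` takes the credentials as an inline JSON string, shell quoting mistakes are easy to make. One way to catch them is to validate the payload locally before creating the service, here using Python's standard-library JSON tool:

```shell
# Validate the MCP service payload before handing it to `cf cups`.
PAYLOAD='{"mcpSseURL":"https://your-sse-mcp-server.example.com"}'
if echo "$PAYLOAD" | python3 -m json.tool >/dev/null; then
  echo "payload OK"
else
  echo "payload INVALID"
fi
```

If the payload reports invalid, fix the quoting before running `cf cups`; a malformed `-p` argument would otherwise produce a broken user-provided service.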
If your application is bound to a vector database and an embedding model, chat memory will persist across application restarts and scaling.
- Follow the instructions above in Binding to Vector Databases