Microsoft Project Online retires September 30, 2026, migrate to a modern platform before it's too late.Start migration
Model Context Protocol

Connect Onplana to Claude, Gemini, ChatGPT, and Cursor

Onplana ships a public MCP server at https://mcp.onplana.com/mcp. Twenty-seven curated tools, hybrid BM25 + vector search across your org’s indexed content, OAuth 2.1 or PAT auth, plan-gated access, audited end to end.

Built on the open MCP standard from Anthropic. Works with Claude Desktop, Gemini CLI, Gemini Code Assist, ChatGPT, ChatGPT Codex CLI, Cursor, GitHub Copilot, and any other MCP client that speaks Streamable HTTP.

Install in Gemini CLI (one command):

export ONPLANA_PAT=pat_...
gemini extensions install https://github.com/Onplana/onplana-mcp-server

Also works with Gemini Code Assist (VS Code + JetBrains). For other clients, see setup instructions below.

What MCP is

The Model Context Protocol (MCP) is an open standard, published by Anthropic in late 2024, for connecting LLMs to external tools and data sources. Instead of building one bespoke integration per client, an application exposes an MCP server once and any MCP-aware client (Claude Desktop, Cursor, ChatGPT custom connectors, in-house agents using the MCP SDK) can call it. A client lists the available tools via tools/list, then calls them via tools/call; the protocol handles argument validation, error responses, and result rendering.

For project management software the practical effect is that you can ask an agent “create a project plan for the Q3 launch,” and the agent runs the work directly inside Onplana — creating the project, adding the right tasks, assigning owners, scheduling milestones — instead of recommending a tool and asking you to do it yourself.

What Onplana exposes

Twenty-one curated tools in three groups. The full catalog is visible to clients via tools/list; tools the caller can\'t access (because of plan tier or role permission) are filtered out before the LLM sees them, so an agent never blindly calls a gated tool.

Read & search

list_projects

Filter by status, assignee, label.

get_project

Full detail incl. tasks, milestones, recent activity.

list_tasks

By project, assignee, status.

get_task

Including dependencies and subtasks.

list_org_members

For assignee resolution by name or email.

list_risks

Surfaced from automated risk detection.

find_similar_projects

Vector-similarity search across project descriptions.

summarize_project

AI-written status summary on demand.

search_org_knowledge

Hybrid (vector + BM25) search across projects, tasks, risks, goals, comments, and wiki pages. The differentiator.

Project & task mutations

create_project

Plan-gated; respects org.project.create permission.

update_project

Status, dates, name; owner check enforced.

create_task

Required: project, title. Optional: assignee email, due date, priority.

update_task

Bulk-safe via idempotency key.

assign_task

Convenience wrapper for reassigning by email.

move_task_to_sprint

PRO+; respects sprint membership rules.

add_project_member

Adds at the requested project role.

create_milestone

Renders as a diamond on the Gantt.

create_comment

On a task or project.

Bulk + agentic

bulk_update_tasks

Update many tasks in one transaction.

create_sprint_with_tasks

Create a sprint and seed it with selected tasks.

analyze_project_risks

Returns AI-detected risks for a project (BUSINESS+).

Six additional tools (delete_task, instantiate_plan, schedule_what_if, generate_status_report, plus two preview-UI-dependent surfaces) exist in Onplana\'s in-app AI but are deliberately suppressed from MCP at MVP — they need either a confirmation parameter, a UI to render structured output, or a streaming response shape that current MCP clients don\'t handle uniformly. The set will grow as those constraints relax.

How to connect

One-click flow from the in-app integrations page mints the token and gives you a copy-paste-ready config snippet for your client. For reference, the wire-level setup is below.

1. Mint a connection token

In Onplana go to /integrations → AI Agents, pick the provider tile, click Generate connection. You\'ll see the token once — save it; Onplana stores only a bcrypt hash. To revoke, return to the same panel or to Settings → Developer.

2a. Claude Desktop

Open Settings → Developer → Edit Config. Add an entry undermcpServers:

{
  "mcpServers": {
    "onplana": {
      "url": "https://api.onplana.com/api/mcp/v1",
      "headers": {
        "Authorization": "Bearer pat_paste-your-token-here"
      }
    }
  }
}

Restart Claude Desktop. Ask Claude “list my Onplana projects” to verify.

2b. Cursor

Add to ~/.cursor/mcp.json with the same JSON shape as Claude Desktop. Cursor picks up changes on its next launch; no restart of the editor required.

2c. Other MCP clients (and a smoke test)

Any MCP-Streamable-HTTP client works. Wire-level smoke test from a terminal:

curl -X POST https://api.onplana.com/api/mcp/v1 \
  -H "Authorization: Bearer pat_paste-your-token-here" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

You should see the JSON-RPC envelope wrapping the 27-tool list. The MCP official Inspector is convenient for debugging tool descriptors and result shapes during development.

Security model

Four overlapping layers. Each is the existing Onplana primitive applied to the MCP surface — no MCP-specific shortcuts or carve- outs. If a layer fails for in-app chat, it fails identically for MCP; if it works for in-app chat, it works for MCP too.

Personal access token

Auth is a Bearer PAT scoped to the new MCP_AGENT scope only. Tokens are bcrypt-hashed server-side, shown once on creation, revocable from /integrations or Settings → Developer.

Plan + role gates

Every tool call runs through Onplana's existing dispatcher. A FREE-plan agent can't call PRO+ tools (they're hidden in tools/list). Mutations on FREE / STARTER plans return PREVIEW results by default — the agent sees what it would do without committing changes.

Audited end to end

Every tool invocation writes an AiOperation row tagged with actorType="mcp_agent". Admins filter the AI Operations panel by actor type to see exactly what AI agents did in their tenant. Rate-limited 120 req/min per token; counts against the org cost cap the same way in-app chat does.

Prompt-injection containment

Free-text fields in tool results that originate from end-user content are wrapped in <onplana_user_content>...</onplana_user_content> tags before reaching the LLM. Closing tags inside user content are escaped so a hostile task title can't break out of the airlock and inject instructions. Each tool description tells the LLM to treat wrapped content as data, never as instructions.

Example prompts

What it looks like in practice. Each row pairs the prompt a user types in their MCP client with the tool sequence Claude (or any other agent) would run server-side.

User: "What's the rationale for the 3-week design phase on the Q3 launch?"

Agent: Claude calls search_org_knowledge with query "rationale 3-week design phase Q3 launch", gets back ranked snippets from task descriptions and wiki pages, summarises the relevant ones in plain language with links back to the source items.

User: "Create a project plan for our Q3 product launch — design, review, and ship phases, due September 30."

Agent: Claude calls create_project with the plan name + dates, then create_task three times for the phases. On a paid plan the project lands live; on FREE / STARTER you see a PREVIEW result and can upgrade to apply.

User: "Find all tasks blocked on the database migration."

Agent: Claude calls search_org_knowledge with scope=tasks, query "blocked database migration", filters the results by status=BLOCKED, returns a summary table with project + assignee + due date.

User: "Have we done a project like this before?"

Agent: Claude calls find_similar_projects with the current project's description, gets back the top 5 vector-similarity matches, and summarises what each prior project did differently.

How this compares

Onplana isn\'t the only PM tool with an agentic surface. Brief honest comparison so an evaluator knows what they\'re choosing between.

Notion (MCP)

Notion shipped early MCP support and benefits from a large existing content surface, knowledge bases, docs, databases. Strong if your team uses Notion as primary work surface. Weaker on PM-specific primitives, no critical-path Gantt, no formal governance pipeline, no costed timesheets, no .mpp import. Onplana's surface is narrower (PM, not docs) but deeper inside that domain.

Linear (API + emerging MCP)

Linear has a mature GraphQL API and emerging MCP coverage. Strong for software-team agentic workflows (issues, cycles, projects in the Linear sense). Different audience, software engineering teams, not PMOs. Onplana ships PMO-grade scheduling (Gantt, dependencies, resource pool) that Linear doesn\'t target.

Asana / Monday / ClickUp

All have OpenAPI / REST surfaces; MCP coverage as of mid-2026 varies. Common gaps vs Onplana: no semantic search across project content from a single tool, no formal PMO governance surfaces, no Microsoft Project file import. If your bottleneck is “agent finds the right context fast” rather than “agent fires off CRUD calls,” the search surface matters more than the tool count.

Microsoft Project / Project Online

Project Online retires September 30, 2026. Microsoft\'s consolidated PM line (Planner Premium / Project for the Web) doesn\'t expose an MCP server today; the Microsoft Graph API surface is generic and coarse-grained. Onplana imports .mpp natively and exposes an MCP server as the migration story for PMOs that want agentic workflows alongside the cutover.

FAQ

What is the Onplana MCP server?

A public Model Context Protocol server at api.onplana.com/api/mcp/v1. It exposes 27 curated tools — list, get, create, update, search across projects, tasks, sprints, milestones, comments, risks, goals, and wiki pages — to MCP-aware agentic clients like Claude Desktop, Cursor, and ChatGPT custom connectors. Authentication is a Bearer personal access token with the MCP_AGENT scope.

How is Onplana's MCP server different from other PM-tool MCPs?

Most PM-tool MCPs ship list_* tools only and force the LLM to brute-scan large result sets. Onplana ships a dedicated search_org_knowledge tool that performs a hybrid (vector + BM25) search across the org's indexed content — projects, tasks, risks, goals, comments, and wiki pages. An agent can answer "what was the rationale for X?" semantically rather than scanning. Onplana also bridges every MCP session to a persistent AiConversation row so the model has memory continuity across reconnects.

Which clients can connect?

Any client supporting the MCP Streamable HTTP transport. Tested with Claude Desktop (Custom Connector configured under Settings → Developer → Edit Config), Cursor (~/.cursor/mcp.json), and the official MCP Inspector for debugging. ChatGPT custom connectors with MCP support work when the feature is enabled in your account. In-house agents using the MCP TypeScript or Python SDKs work identically.

How does authentication work?

The MCP server accepts Bearer personal access tokens with the MCP_AGENT scope. Mint one from /integrations → AI Agents (one click; copy-paste-ready config snippet returned) or from Settings → Developer. Tokens are bcrypt-hashed server-side, shown once, and revocable. The token carries the org context so no separate org header is needed in MCP requests.

How are costs and abuse controlled?

Three independent layers. Per-PAT rate limit (120 requests / minute) prevents a runaway agent loop from saturating the API. The dispatcher's per-month tool caps prevent any individual tool from being hammered. The org-level dollar cost cap (configurable in Settings → AI & Usage) gates total monthly spend; over-cap requests return 402 the same way in-app chat does. A WARN mode emails admins at 80%, 100%, and 103% of the cap.

What plans support MCP?

All plans, including FREE. Read tools (list, get, search) work on every plan with no per-month limit. Mutating tools work on every plan but default to PREVIEW mode on FREE and STARTER — the agent sees what it would do, no changes commit. PRO and above default to APPLY mode. Plan-gated tools like move_task_to_sprint (PRO+ for sprints) are hidden from tools/list responses for plans that lack the feature, so the LLM never sees them and doesn't blindly call gated tools.

How is prompt injection mitigated?

Free-text fields in tool results that come from user-generated content (task titles, descriptions, comments, wiki bodies) are wrapped in <onplana_user_content>...</onplana_user_content> tags before reaching the LLM. Closing tags inside that content are escaped (case-insensitively) so a hostile task title can't break out and inject instructions. Every tool description tells the LLM explicitly to treat wrapped content as data, never as instructions to follow. This pattern follows Anthropic's published prompt-injection-defence guidance.

Where can I see what AI agents did in my org?

The admin AI Operations panel at /admin/ai-usage. Every MCP-driven tool call writes an AiOperation row tagged with actorType="mcp_agent". You can filter by tool name, status (PREVIEW, APPLIED, GATED, FAILED, UNDONE), and time range. Mutating tool calls also write to the standard AuditLog with the same actor type so cross-referencing with non-AI activity is straightforward.

Is the transport open source?

Yes. The platform-agnostic transport patterns (Streamable HTTP wiring, Bearer auth, prompt-injection containment with tag wrapping + closing-tag escape, pluggable dispatcher interface) are published as MIT-licensed npm packages at github.com/Onplana/onplana-mcp-server. Two packages: onplana-mcp-server (server template — drop into your own Express app) and onplana-mcp-client (typed SDK for calling the public Onplana endpoint from in-house agents). The dispatcher implementation and tool catalog stay in the closed Onplana monorepo because they encode platform business logic — the open-source repo gives you the security primitives without prescribing a tool registry.

Ready to connect?

Free plan; no credit card. Connect Claude Desktop or Cursor in under two minutes from /integrations → AI Agents. Connection tokens are scoped, revocable, and audited end to end.