I skimmed over this, but did I see a reference sandbox implementation? So the chat UI interacts with that via postMessage (sending and receiving) and forwards tool calls to the MCP server. Does it also forward tool calls the MCP server doesn't handle to the host backend?
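Something like this is what I'm picturing for the routing question — a host-side function that decides where a tool call arriving from the sandboxed iframe goes. All the names here are made up for illustration; this isn't from any real MCP SDK:

```typescript
// Hypothetical tool-call router on the host side. A message arrives via
// postMessage from the sandboxed UI iframe; tools the MCP server declared
// are forwarded to it, and anything else falls through to the host backend.
type ToolCall = { tool: string; args: Record<string, unknown> };

function routeToolCall(
  call: ToolCall,
  serverTools: Set<string>,
): "mcp-server" | "host-backend" {
  return serverTools.has(call.tool) ? "mcp-server" : "host-backend";
}
```

That fall-through case is really what I'm asking about — whether the host actually does anything with calls the server doesn't claim.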
What I am imagining is something like a meta UI tool call that just creates a menu. The whole MCP server's purpose might be to add this menu creation capability to the chat user interface. But what you are selecting from isn't known ahead of time, it's the input to the UI.
When the user selects something, I assume it would output a tool call like menuItemSelected('option B'). I suppose if you want your server to do anything specific with this, you'd have to handle that in the particular server. But I guess you could also just have a tool call that sends the inputs straight back to the agent. That could make for a slow-to-respond but extremely flexible overall UX.
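As a sketch of what I mean — the message shape, tool name, and fields are all hypothetical — the menu UI might post something like this back when an item is clicked:

```typescript
// Hypothetical: build the message the generic menu UI posts back to the
// host when the user picks an item. The items aren't known ahead of time;
// they were the input to the UI in the first place.
type MenuInput = { title: string; items: string[] };

function menuSelectionMessage(input: MenuInput, index: number) {
  return {
    tool: "menuItemSelected",
    args: { item: input.items[index] },
  };
}
```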
I guess this is not the intended use, but suppose you give your agent generic MCP UI tools for showing any menu, showing any data table, showing a form, etc. The inputSchemas would then have to be (if this is possible) quite loosely defined.
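For instance, a deliberately loose schema for a generic showMenu tool might look something like this — plain JSON Schema, with the tool name and fields invented for the sake of the example:

```typescript
// Sketch of a loosely defined inputSchema for a hypothetical generic
// showMenu tool: the menu items are just an array of strings supplied at
// call time, and extra per-call properties are allowed through.
const showMenuSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    items: {
      type: "array",
      items: { type: "string" },
      description: "Menu options, not known until the call is made",
    },
  },
  required: ["items"],
  additionalProperties: true, // leave room for per-call extras
};
```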
I guess the purpose is probably more about not having to go through the LLM, rather than giving it the ability to dynamically put up UI elements and then react to each individual interaction with them.
But maybe one of the inputs to the dataTable is the set of query parameters for its data, and the table has a refresh button. Maybe another input is the URI for the details-form MCP UI that slides over when you click a row.
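Sketching that out (every name and the `ui://` URI here are hypothetical), the input to such a table and the refresh behavior could look like:

```typescript
// Hypothetical input for a generic dataTable UI: the query parameters it
// fetches with, plus the URI of another MCP UI to open when a row is
// clicked. Refresh just re-runs the same query, with no round trip
// through the LLM.
type DataTableInput = {
  query: Record<string, string>; // e.g. { status: "open" }
  detailUri: string;             // MCP UI shown on row click (made-up URI)
};

function refreshParams(input: DataTableInput): URLSearchParams {
  // Rebuild the fetch parameters from the original query on refresh.
  return new URLSearchParams(input.query);
}
```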
Maybe there is an MCP UI for layout that allows you to embed other MCP UIs in a specific structure.
This might not make sense, but I am wondering if I can use MCP Apps as an alternative to always building custom MindRoot plugins (my Python/web components agentic app framework) to provide unique web pages and UI for each client's agentic application.
I think I may have gotten the MCP Apps and MCP UI a bit conflated here so I probably need to read it again.