I think having the LLM output a rich visual interface is the future of AI interaction. The app, as I've tested it, is great and produces some interesting visuals. In its current form it reminds me a bit of how this feature is already implemented in Claude, for instance.
During generation the UX was generally great and the webapp felt responsive. However, a lot of "Error executing code: Failed to access notebook: calc_notebook" errors were thrown. The final result looked great, so I'm not sure whether these errors had any impact.
Have you thought about shipping this feature through an API with some React components, so it could be used to build websites that handle AI interactions?
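To make that concrete, here's a rough sketch of what I'd imagine the consumer side looking like. Every name here (`useGenerativeUI`, `GenerativeCanvas`, the `/api/generate-ui` endpoint) is hypothetical, just to illustrate the shape such an SDK could take:

```tsx
import React from "react";

// Hypothetical hook: fetches generated UI from the provider's API.
// A real SDK would presumably stream tokens/patches instead of one response.
function useGenerativeUI(prompt: string) {
  const [html, setHtml] = React.useState<string>("");
  const [loading, setLoading] = React.useState(true);

  React.useEffect(() => {
    let cancelled = false;
    (async () => {
      // Assumed endpoint and payload shape, purely for illustration.
      const res = await fetch("/api/generate-ui", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      const data = await res.json();
      if (!cancelled) {
        setHtml(data.html);
        setLoading(false);
      }
    })();
    return () => {
      cancelled = true;
    };
  }, [prompt]);

  return { html, loading };
}

// Hypothetical drop-in component wrapping the hook.
export function GenerativeCanvas({ prompt }: { prompt: string }) {
  const { html, loading } = useGenerativeUI(prompt);
  if (loading) return <p>Generating…</p>;
  // A production SDK would want sandboxing (e.g. an iframe)
  // rather than raw HTML injection.
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
```

Something along those lines would let any React site embed the generated interfaces without reimplementing the rendering layer.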