
Is there really a difference between a multimodal model and a text LLM + Stable Diffusion?


Multimodal can refer to a lot of different types of models, but feeding an LLM's text output into Stable Diffusion definitely doesn't count.

LLaVA is the first one that comes to mind; it takes images and text as input and outputs text.

There’s an unreleased version of GPT-4 that can do the same thing.
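
For a concrete picture, here's roughly what calling LLaVA looks like through the Hugging Face transformers port. This is just a sketch: the checkpoint id, image URL, and prompt are illustrative placeholders.

  import requests
  from PIL import Image
  from transformers import AutoProcessor, LlavaForConditionalGeneration

  # Illustrative checkpoint; swap in whichever LLaVA build you actually use.
  model_id = "llava-hf/llava-1.5-7b-hf"
  processor = AutoProcessor.from_pretrained(model_id)
  model = LlavaForConditionalGeneration.from_pretrained(model_id)

  # Placeholder image URL.
  image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)
  prompt = "USER: <image>\nWhat is in this picture? ASSISTANT:"

  # Image and text go through one processor and one forward pass; text comes out.
  inputs = processor(text=prompt, images=image, return_tensors="pt")
  output_ids = model.generate(**inputs, max_new_tokens=64)
  print(processor.decode(output_ids[0], skip_special_tokens=True))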


Sure, technically not the same, but won't it have the same effect?

How do our brains work? Isn't there a separation between image and text processing?


Surely there needs to be some amount of training with both models in the loop before it can be considered a multimodal system.
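
By "in the loop" I mean something like the LLaVA recipe: a vision encoder and an LLM joined by a small projection layer, trained jointly on the next-token loss. A rough sketch, where the dimensions, names, and frozen/unfrozen split are all illustrative, not any particular system:

  import torch
  import torch.nn as nn

  class VisionToLLMProjector(nn.Module):
      # Maps vision-encoder patch features into the LLM's embedding space.
      def __init__(self, vision_dim=1024, llm_dim=4096):
          super().__init__()
          self.proj = nn.Linear(vision_dim, llm_dim)

      def forward(self, image_features):       # (batch, patches, vision_dim)
          return self.proj(image_features)     # (batch, patches, llm_dim)

  projector = VisionToLLMProjector()
  optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)

  def training_step(llm, image_features, text_embeds, labels):
      # Prepend projected image tokens to the text embeddings.
      visual_tokens = projector(image_features)
      inputs_embeds = torch.cat([visual_tokens, text_embeds], dim=1)
      # Visual positions get no loss; the model learns to predict the
      # text conditioned on the image tokens.
      ignore = torch.full(visual_tokens.shape[:2], -100,
                          dtype=labels.dtype, device=labels.device)
      out = llm(inputs_embeds=inputs_embeds,
                labels=torch.cat([ignore, labels], dim=1))
      out.loss.backward()
      optimizer.step()
      optimizer.zero_grad()
      return out.loss.item()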


What do you mean?


That if you have 3 separate models (text -> text, image -> text, and text -> image), you can just glue them together and make the combination behave like a multimodal model.

(Just like GPT-4 is rumored to be a few different sub-models rather than one giant model.)
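
Something like this, very roughly. The hub model ids and the "draw" routing rule are only placeholders to show the shape of the glue, not a real system:

  from transformers import pipeline
  from diffusers import StableDiffusionPipeline

  # Three off-the-shelf models, one per direction.
  chat = pipeline("text-generation", model="gpt2")                                   # text  -> text
  caption = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")  # image -> text
  draw = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")   # text  -> image

  def respond(prompt, image=None):
      # Image in: caption it, then let the text model reason over the caption.
      if image is not None:
          prompt = caption(image)[0]["generated_text"] + "\n" + prompt
      # Image out: hand the prompt to the diffusion model.
      if prompt.lower().startswith("draw"):
          return draw(prompt).images[0]
      # Otherwise plain text -> text.
      return chat(prompt, max_new_tokens=64)[0]["generated_text"]

The keyword check is just to keep the sketch short; in practice you'd let the text model itself decide when to hand off to the image models.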



