I’ve started taking pictures of nearly every meal I eat to feed into a multimodal AI doctor that has context on my wearable data, how I’m feeling, blood work, meds/supplements, genome variants interpreted with a reasoning model, etc. It’s a very hacky prototype right now, but I’m surprised no one has built the definitive version of this yet. There are clear network effects to be unlocked at scale, e.g., RCTs.
Ping me if you’re building this
Basically, we need to lower the activation barrier for capturing multiple sensory data streams in order to get more personalized AI. What are you seeing, thinking, doing, eating, or feeling across space and time?
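For the curious, here’s roughly what the hacky version of this looks like: a minimal Python sketch that bundles a meal photo with the other context streams into one prompt for a multimodal model. Every name, field, and file path here is an illustrative assumption, not a real schema, and the actual model call is left as a stub.

```python
# Minimal sketch: bundle meal photos with other personal-health context
# streams into a single prompt for a multimodal model. All names and
# fields are illustrative assumptions.
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class HealthContext:
    meal_photos: list[Path] = field(default_factory=list)        # pictures of what I ate
    wearable_summary: str = ""                                    # sleep, HRV, steps, etc.
    subjective_notes: str = ""                                    # how I'm feeling, free text
    blood_work: dict[str, float] = field(default_factory=dict)   # latest lab values
    meds_supplements: list[str] = field(default_factory=list)
    genome_notes: str = ""                                        # variant interpretations


def build_prompt(ctx: HealthContext, question: str) -> str:
    """Flatten the context streams into one text prompt; in a real multimodal
    call the photos would be attached separately as images."""
    labs = "\n".join(f"  {k}: {v}" for k, v in ctx.blood_work.items()) or "  (none)"
    return (
        "You are a cautious personal-health assistant. Context:\n"
        f"Wearable summary: {ctx.wearable_summary}\n"
        f"Subjective notes: {ctx.subjective_notes}\n"
        f"Recent labs:\n{labs}\n"
        f"Meds/supplements: {', '.join(ctx.meds_supplements) or '(none)'}\n"
        f"Genome notes: {ctx.genome_notes}\n"
        f"Attached meal photos: {len(ctx.meal_photos)}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    ctx = HealthContext(
        meal_photos=[Path("photos/2024-05-01_lunch.jpg")],
        wearable_summary="6.5h sleep, resting HR 58, 9k steps",
        subjective_notes="low energy mid-afternoon",
        blood_work={"ferritin_ng_ml": 35.0, "ldl_mg_dl": 110.0},
        meds_supplements=["vitamin D 2000 IU"],
        genome_notes="likely slow caffeine metabolizer",
    )
    # In the real prototype, this prompt plus the photos would go to a
    # multimodal model API; here we just print the assembled context.
    print(build_prompt(ctx, "Anything in today's meals I should adjust?"))
```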