
Bilawal Sidhu
omg, my brain hurts imagining doing this the old way:
track the balls --> physics sim + fire ball shader --> compute coarse scene depth + character normal map to relight, render passes, then composite together.
OR... just take daytime video, and ask AI to turn the lights off and set the juggling balls on fire, and voilà:

Cristóbal Valenzuela · Jul 27 at 16:50
Aleph can handle complex motion and moving objects. The input video was in daylight, so I asked it to turn the lights off and set the juggling balls on fire.
Google just discovered a powerful emergent capability in Veo 3 - visually annotate your instructions on the start frame, and Veo just does it for you!
Instead of iterating endlessly on the perfect prompt, defining complex spatial relationships in words, you can just draw it out like you would for a human artist.
This capability is begging for a proper UX, but for now just doodle away in your app of choice, and use "frames to video" in Google Flow.
Pretty soon your iPhone will be using FaceID to make sure it’s actually you using your device while scrolling, engaging and posting.
Apple is uniquely positioned to do this all on-device in a privacy preserving manner.
"Attention aware" features are already a step in this direction: your iPhone takes a low-res infrared photo every few seconds to check your eye gaze and see whether you're paying attention to the screen.
The next step would be letting an app know it’s actually you, or simply that it’s a real human using the device, and not a bot.

Balaji · Jul 23 at 03:37
An important kind of social network will be one where no bots whatsoever are allowed.
Combining the explicit control of 3D software with the creativity of generative AI models is a promising yet underrated workflow.
Build your 3D scenes procedurally by describing them in natural language, then take them all the way with your image & video models of choice.
Tools like intangible are built around such a workflow so you don't need to duct-tape apps together. Pretty cool!
SceneScript treats 3D reconstruction as a language problem rather than a geometry one.
The model watches a video of a room and just learns to write a script for it. It autoregressively spits out text commands like make_wall(...) or make_bbox(...) that define the scene.
Stanford's new "Scene Language" paper goes a step further, adding CLIP embeddings to capture visual appearance too.
The fact that language models already understand spatial relationships well enough to write out scene graphs is pretty wild.
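Part of the appeal of this command-based representation is that it's trivially machine-readable. Here's a minimal sketch of parsing such a script back into structured scene data; the command names make_wall and make_bbox come from the post above, but every parameter name below is an illustrative assumption, not the real SceneScript schema:

```python
import re

# Matches commands of the form name(key=value, key=value, ...)
COMMAND = re.compile(r"(\w+)\((.*)\)")

def parse_value(raw: str):
    """Numeric arguments become floats; anything else stays a string token."""
    try:
        return float(raw)
    except ValueError:
        return raw

def parse_line(line: str):
    """Turn 'make_wall(a_x=0.0, height=2.5)' into ('make_wall', {params})."""
    m = COMMAND.match(line.strip())
    if not m:
        raise ValueError(f"not a scene command: {line!r}")
    name, arg_str = m.groups()
    params = {}
    for part in filter(None, (p.strip() for p in arg_str.split(","))):
        key, value = part.split("=", 1)
        params[key.strip()] = parse_value(value.strip())
    return name, params

# A model would emit a script like this autoregressively, one token at a time:
script = """\
make_wall(a_x=0.0, a_y=0.0, b_x=4.0, b_y=0.0, height=2.5)
make_bbox(class=chair, x=1.2, y=0.8, z=0.0)
"""

scene = [parse_line(line) for line in script.splitlines()]
```

Because the output is just text, the same scene can be diffed, edited by hand, or fed back to the model, which is exactly what makes the language framing attractive over raw geometry.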