
Louround 🥂
Co-founder of @a1research__ 🀄️ & @steak_studio 🥩
I spent the past few weeks going through @codecopenflow and its big-brain documentation, so let me explain it in plain English.
The next breakthrough for AI isn’t another powerful LLM; it’s giving agents eyes, hands, and their own desktop (VLA).
That’s exactly what Codec is building, and even if the market is catching on, it's not too late👇
Despite all the hype around Gen-AI, most real-world workflows are still tied together with rigid scripts and hardcoded tasks.
The moment a UI pixel shifts, the automation breaks, and anything without an API (legacy desktop apps, factory robots, gaming clients...) becomes off-limits. Codec attacks that blind spot by giving agents their own secure desktops, camera feeds, and input drivers so they can see, decide, and act exactly like a human but tireless and programmable.
This is the real game changer to understand.
What is it going to unlock?
Every repetitive software task, such as those found in office work, SAP, and ERPs, where people manually copy and paste numbers, can be automated. A bot can observe the screen like a human, click the right boxes, and complete the task while you enjoy a coffee.
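The idea of a bot that "observes the screen like a human" comes down to locating targets in raw pixels instead of calling an API. Here's a deliberately tiny, self-contained sketch of that pattern; the grid "screen" and "button" are made up for illustration, and this is not Codec's code:

```python
# Toy pixel-based UI automation: locate a visual target in a "screenshot"
# (here a grid of ints) and compute where to click, the way a human would.
# A real system would run computer vision on live frames.

def find_template(screen, template):
    """Return (row, col) of the template's top-left corner, or None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return r, c
    return None

def click_center(screen, template):
    """Compute the click point at the target's center."""
    pos = find_template(screen, template)
    if pos is None:
        return None  # UI changed: a hardcoded script dies here; a VLA agent re-perceives
    r, c = pos
    return (r + len(template) // 2, c + len(template[0]) // 2)

screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
button = [[1, 1], [1, 1]]
print(click_center(screen, button))  # → (2, 2)
```

The point of the `None` branch is exactly the blind spot described above: pixel-shift breaks scripts, while a perception-driven agent can just look again.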
Hardware and Robotics
Vision Language Action (VLA) technology will enable the control of multiple robots simultaneously and refine their interactions using only a few minutes of data. Robots can become aware of their surroundings and take appropriate actions.
How is Codec tackling this sector?
Run Tasks on Sensitive Data
X-rays, police files, or tax records can’t be stored online for privacy reasons. The agent lives inside a locked virtual machine, reads the images, types the results back in, and never ships data to the cloud.
Operators: the “app” primitive
Every autonomous workflow you build, whether it's reconciling SAP invoices, gaming, robotics, or something else, is packaged as an Operator. Operators carry their own VLA model, metadata, and permissions, can be installed with one click, and (soon) monetized in a public Codec marketplace where usage fees flow back to the publisher.
Training is flexible: record yourself completing the task and let Codec fine-tune from demonstrations, or drop to the SDK for full programmatic control.
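"Fine-tune from demonstrations" generally means turning recorded (observation, action) pairs into training data. Here's a hypothetical toy illustration of the idea; it memorizes a lookup policy instead of actually fine-tuning a model, and none of these names come from Codec's SDK:

```python
# Learning from demonstrations, reduced to its simplest form: a recorded
# session becomes (observation, action) pairs, and the "trained" policy
# replays the demonstrated action for each observed state.
# A real pipeline would fine-tune a VLA model on pixels instead.

recorded_demo = [
    ({"screen": "invoice_open"},    "copy_total"),
    ({"screen": "erp_form"},        "paste_total"),
    ({"screen": "erp_form_filled"}, "click_submit"),
]

# "Training": memorize the demonstrated action per observed screen state.
policy = {obs["screen"]: action for obs, action in recorded_demo}

def act(observation):
    """Pick the demonstrated action, or escalate when the state is unknown."""
    return policy.get(observation["screen"], "ask_human")

print(act({"screen": "erp_form"}))  # → paste_total
```

The SDK path mentioned above would replace the lookup with explicit programmatic control over each step.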
Fabric: The GPU marketplace aggregator
At the core is Fabric, Codec's open-source scheduler. It efficiently dispatches workloads across AWS, GCP, on-prem machines, or any decentralized GPU grid it can access, while enforcing zero-trust networking and cost-aware placement.
This will provide:
- Cheaper compute through real-time arbitrage across clouds
- No single point of failure; if AWS goes down, jobs move to another region or vendor in minutes.
Fabric basically turns “my AI operator needs a box” into “my operator will always find the right box, at the right price, under the right security rules, automatically.”
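Cost-aware placement with failover can be pictured as filtering GPU offers by constraints and health, then taking the cheapest survivor. A minimal sketch, with made-up offer fields that are not Fabric's real schema:

```python
# Toy cost-aware scheduler: filter offers by health and the job's rules,
# then pick the cheapest. Because unhealthy providers are filtered out,
# an AWS outage automatically routes the job elsewhere.

def place(job, offers):
    """Pick the cheapest healthy GPU offer that satisfies the job's constraints."""
    candidates = [
        o for o in offers
        if o["healthy"]
        and o["gpu_mem_gb"] >= job["min_gpu_mem_gb"]
        and o["region"] not in job["blocked_regions"]
    ]
    return min(candidates, key=lambda o: o["usd_per_hour"], default=None)

offers = [
    {"provider": "aws",   "region": "us-east-1", "gpu_mem_gb": 80, "usd_per_hour": 4.10, "healthy": False},
    {"provider": "gcp",   "region": "eu-west4",  "gpu_mem_gb": 80, "usd_per_hour": 3.80, "healthy": True},
    {"provider": "depin", "region": "global",    "gpu_mem_gb": 48, "usd_per_hour": 1.20, "healthy": True},
]
job = {"min_gpu_mem_gb": 64, "blocked_regions": []}

best = place(job, offers)
print(best["provider"])  # aws is down and depin is too small, so the job lands on gcp
```

Real-time price arbitrage is just this selection re-run continuously as offers and health states change.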
How big can this get?
AI agents: projected to grow from $5.4B in 2024 to $50.3B by 2030 (45.8% CAGR).
Robotic Process Automation (RPA): $3.8B → $30.9B over the same window (43.9% CAGR).
Codec sits at the intersection, with agents that look at pixels instead of waiting for tidy APIs. For reference, legacy-only RPA giant UiPath is worth ≈$7B today, and @Figure_robot is rumored at a $40B post-money valuation.
Meanwhile, $CODEC’s FDV is at ~$13 M. I'll let you do the maths on the potential (and sry, I couldn't wait to post this to load bags).
I had the opportunity to hop on a call with the team and ask a few questions. They're trusted builders with experience at Hugging Face + Elixir Games, and they're using their own capital as runway (more than a year in treasury).
There is so much more that I did not cover, like gaming collaborations, MCP & TEEs, the team adding liquidity from their own funds, the roadmap, etc., as it would make this post way too long, but I will definitely share the progress made by the team.
Coded 🥂



Louround 🥂 reposted
If you're in AI, pivot to Robotics
VLA models will change everything about how AI interacts with the real world in real-time
LLMs are great for historical data (and limited live data)
VLA changes everything by using vision (video streams, cameras, sensor data) to make real-time decisions and send action commands to a desktop, gaming NPC or robot.
One of the next big unlocks for AI and we're still extremely early
@codecopenflow handles the infrastructure of AI x Robotics by letting users spin up virtual desktops or robot training environments quickly to train their AI Operators before releasing them into production
AI x Robotics will be a multi-trillion dollar industry within a few years.
I'm buying the picks and shovels

Louround 🥂 reposted
OpenAI just announced that its Operator Agent can now control an entire computer to perform a complex set of tasks using VLA models paired with LLM models
Vision
Language
Action
If only there was a crypto project that could already do this...
Study @codecopenflow
In addition to controlling desktops, Codec can control robotics and gaming Operators
Spin up a virtual sandbox environment to train the Operator before releasing it into production in the real world
AI x Robotics and the automation of games/desktops/robots will be the next big step for AI development, and it's my goal to be positioned early
Coded coded

Louround 🥂 reposted
OpenAI just confirmed my north star thesis for AI today by releasing their operator agent.
Not only was this my guiding thesis for $CODEC, but for every other AI investment I made, including those from earlier in the year during the AI mania.
There's been a lot of discussion around Codec in regard to robotics; while that vertical will have its own narrative very soon, the underlying reason I was so bullish on Codec from day 1 is how its architecture powers operator agents.
People still underestimate how much market share is at stake by building software that runs autonomously, outperforming human workers without the need for constant prompts or oversight.
I've seen a lot of comparisons to $NUIT. First, I want to say I'm a big fan of what Nuit is building and wish them nothing but success. If you type "nuit" into my Telegram, you'll see that back in April I said that if I had to hold one coin for multiple months, it would have been Nuit, due to my operator thesis.
Nuit was the most promising operator project on paper, but after extensive research, I found their architecture lacked the depth needed to justify a major investment or putting my reputation behind it.
With this in mind, I was already aware of the architectural gaps in existing operator agent teams and actively searching for a project that addressed them. Shortly after, Codec appeared (thanks to @0xdetweiler insisting I look deeper into them), and this is the difference between the two:
$CODEC vs $NUIT
Codec's architecture is built across three layers (Machine, System, and Intelligence) that separate infrastructure, environment interface, and AI logic. Each Operator agent in Codec runs in its own isolated VM or container, allowing near-native performance and fault isolation. This layered design means components can scale or evolve independently without breaking the system.
Nuit's architecture takes a different path by being more monolithic. Their stack revolves around a specialized web-browser agent that combines parsing, AI reasoning, and action: they deeply parse web pages into structured data for the AI to consume and rely on cloud processing for heavy AI tasks.
Codec's approach of embedding a lightweight Vision-Language-Action (VLA) model within each agent means it can run fully locally, without constant pinging back to the cloud for instructions, cutting out latency and avoiding dependency on uptime and bandwidth.
Nuit's agent processes tasks by first converting web pages into a semantic format and then using an LLM brain to figure out what to do, which improves over time with reinforcement learning. While effective for web automation, this flow depends on heavy cloud-side AI processing and predefined page structures. Codec's on-device intelligence means decisions happen closer to the data, reducing overhead and making the system more robust to unexpected changes (no fragile scripts or DOM assumptions).
Codec's operators follow a continuous perceive–think–act loop. The Machine layer streams the environment (e.g. a live app or robot feed) to the Intelligence layer via the System layer's optimized channels, giving the AI "eyes" on the current state. The agent's VLA model then interprets the visuals and instructions together to decide on an action, which the System layer executes through keyboard/mouse events or robot control. This integrated loop means the agent adapts to live events; even if the UI shifts around, the flow won't break.
To put all of this in a simpler analogy, think of Codec's operators as a self-sufficient employee who adapts to surprises on the job. Nuit's agent is like an employee who needs to pause, describe the situation to a supervisor over the phone, and wait for instructions.
Without going down too much of a technical rabbit hole, this should give you a high-level idea of why I chose Codec as my primary bet on Operators.
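The perceive–think–act loop can be sketched in a few lines. The frame source, VLA decision function, and actuator below are stand-ins for illustration, not Codec's actual interfaces:

```python
# Sketch of a perceive–think–act loop: stream a frame, let the model pick
# an action from pixels + instruction, execute it, and repeat until done.

def run_operator(get_frame, vla_decide, execute, instruction, max_steps=10):
    """Run the loop and return the list of actions taken."""
    actions_taken = []
    for _ in range(max_steps):
        frame = get_frame()                      # Machine layer: live pixels
        action = vla_decide(frame, instruction)  # Intelligence layer: VLA model
        if action == "done":
            break
        execute(action)                          # System layer: keyboard/mouse/robot
        actions_taken.append(action)
    return actions_taken

# Toy stand-ins: the "model" clicks OK while a dialog is open, then stops.
frames = iter(["dialog_open", "dialog_closed", "dialog_closed"])

def vla_decide(frame, instruction):
    return "click_ok" if frame == "dialog_open" else "done"

actions = run_operator(lambda: next(frames), vla_decide, lambda a: None,
                       "close the dialog")
print(actions)  # → ['click_ok']
```

Because every step re-perceives the current frame, a UI change between steps just produces a different next action instead of a crashed script — which is the contrast with the monolithic, cloud-round-trip flow described above.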
Yes, Nuit has YC backing, a stacked team, and an S-tier GitHub. But Codec's architecture has been built with horizontal scaling in mind, meaning you can deploy thousands of agents in parallel with zero shared memory or execution context between them. And Codec's team isn't your average devs either.
Their VLA architecture opens a multitude of use cases that weren't possible with previous agent models, because it sees through pixels, not screenshots.
I could go on but I’ll save that for future posts.
Hats off to @anoma's testnet!
It has a super smooth and fun experience with side quests and daily tasks.
A new UI and UX world is emerging, and it's intent-based ⏳

Anoma · 15 Jul, 22:08
A world of pure intent awaits…
The Anoma testnet is live.
It's unbelievable that in 2025, we still see such fragmentation and projects bouncing between chains and layers just to chase hype.
🫳 Arbitrum to Berachain to Base to HyperEVM to [INSERT_NEXT_HYPED_CHAIN]
Just build on the intent-centric world
Build on @anoma

Anoma · 10 Jul, 23:57
oh no you built your app on the 23rd Ethereum Layer 2 and all the users have already moved onto the 24th???
