Here's why we decided to (1) make Cline open source and (2) not make inference reselling part of our business model: when you control the inference (the AI model calls) and we build the harness (the system directing those calls), neither party can obscure what's happening. You see exactly which models are called, how much context is used, and what decisions are made. We can't quietly degrade performance to improve margins, because you're paying the inference provider directly.

This separation means we succeed only when Cline becomes more capable. Not when we find clever ways to reduce your token usage. Not when we route you to cheaper models without telling you. Not when we artificially limit context windows.

The result: Cline uses the right model for each task (as defined by you), integrates any tool you need via MCP, and operates without arbitrary constraints. You get pure, unfiltered access to AI capability.

We built Cline this way because when incentives are aligned correctly, you don't need to trust us. The architecture itself guarantees we're working toward the same goal: the most powerful AI coding experience possible.

The bottom line: Cline gives you the best possible performance out of the best models, full stop.