Confluent (Pt.2) - Assessing Open-Core Business Model & Prospects
Summary
- Confluent (CFLT) excels in executing the open-core business model, leveraging proprietary components, an enterprise go-to-market strategy, managed services, and continually evolving features.
- The build vs. buy dilemma remains a significant challenge for open-core companies.
- However, future trends are likely to favor open-core companies due to increasing technical complexity, a stringent compliance and regulatory landscape, and rising labor costs that GenAI cannot fully offset.
- Despite some drawbacks of the open-core model, CFLT stands to gain substantially from open-sourcing Kafka and maintaining it as the leading event streaming standard, along with Flink as the standard for stream processing.
In Part 1, we analyzed the technical architecture of Kafka and its competitors. In Parts 2 and 3, we conduct a higher-level business overview tailored for financial analysts. To start, we will recap why we believe open-core, i.e., sharing all of the source code for your core product, is a great business if executed correctly. Then, in Part 3, we will discuss CFLT's execution so far, its expanded TAM now that it has moved into stream processing, and finally, why we believe it has a strong tailwind ahead.
Understanding Open-Core
In our previous open-core series, we explained why open-core is a viable business. As a quick recap: many financial analysts are skeptical about the open-core business model, but we think this skepticism, if priced in accordingly, can deliver a good risk-reward for investors who understand the value and peculiarities of open-core.
In a nutshell, open-core companies are typically founded by the original creators of a highly popular open-source project, who subsequently build a dedicated commercial entity on top of that project.
Generally, the first generation of open-source projects, started before 2007, was created by developers frustrated with legacy tech giants like IBM, MSFT, ORCL, EMC (now DELL), and others, who kept tight control of their software, often coupling it to dedicated hardware with expensive deployment, maintenance, and upgrade costs. These solutions were costly and not especially flexible, agile, or scalable, albeit fairly stable. As tech and the consumer Internet saw a parabolic rise, digital-native companies were compelled to avoid the drag of these legacy vendors and needed more effective solutions to deploy on commodity off-the-shelf (COTS) x86 servers powered by INTC and AMD.
The second generation of open-source projects blossomed roughly from 2008 to 2015. These solutions were not attempting to replace legacy solutions, or to build new solutions for old problems per se. They were built to solve the new engineering problems that arose as the Internet scaled. Kafka, for instance, was created to handle billions of messages at LinkedIn in a timely manner. These problems were non-existent in the old world of commercial banking, for example, where banks only needed servers to keep records of account balances. Therefore, these new open-source projects often created new concepts, paradigms, and ways of handling problems that further unlocked new use cases for other industries and companies to build new applications. For instance, Kafka allows banks to adopt real-time event-driven architectures, so that KYC and customer onboarding can happen within seconds instead of days. As a result, the second generation of open-source projects saw even more popularity across all users and broadened the reach of open-source.
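To make that last point concrete, here is a minimal sketch of such an event-driven KYC flow using the confluent-kafka Python client. The broker address, the "kyc.checks" topic, and the onboarding payload are hypothetical illustrations we made up for this example, not Confluent's actual setup; the point is simply that an onboarding event is published and consumed within moments rather than waiting on a nightly batch job.

```python
# Minimal sketch of an event-driven KYC flow on Kafka (confluent-kafka client).
# Topic name "kyc.checks", broker address, and payload are hypothetical.
import json
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"  # assumes a locally reachable Kafka broker

# Onboarding service: publish a KYC request the instant a customer signs up.
producer = Producer({"bootstrap.servers": BROKER})
event = {"customer_id": "c-123", "document": "passport", "country": "US"}
producer.produce("kyc.checks", key=event["customer_id"], value=json.dumps(event))
producer.flush()  # block until the event is durably handed to the broker

# KYC service: consume requests as they arrive and react within seconds.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "kyc-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["kyc.checks"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    request = json.loads(msg.value())
    print(f"Running KYC checks for customer {request['customer_id']}")
consumer.close()
```

The same pattern generalizes: any downstream system (fraud scoring, notifications, analytics) can subscribe to the same topic and react to the event independently, which is what makes the architecture attractive to enterprises.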
The third generation of open-source projects is still gradually unfolding. These projects often build on new paradigms and infrastructure concepts like WireGuard (VPN), eBPF (Linux kernel visibility & control), and now GenAI.