Over the last decade, we’ve all agreed that “Software is eating the world”. No matter what industry you work in, software plays a huge role in it. This has allowed product teams to gain more and more relevance within organizations, leading to higher budget allocations. At the same time, consumers and enterprises alike expect more from software in terms of speed, UX, availability, and frequency of updates.
We use the term “engineering value chain” for the group of services and processes that make product development possible, from the code editor all the way to application performance monitoring (APM) platforms. We believe that over the next decade, the tooling and infrastructure that supports these teams will continue to evolve, and we’ll see the rise of more open core product offerings that allow each organization to customize its stack to fit its specific needs. We also believe that as the complexity of the engineering value chain increases, so does the need for inter-tool communication, which will lead to more API-driven design in software products. This applies to design, engineering, and data teams alike.
We also believe that the software engineering value chain is much more advanced and refined than its data science and engineering counterpart. Source code version control, package managers, APM platforms, and infrastructure orchestration tools are all table stakes for software; the same can’t be said for data version control, dataset consumption, and data pipeline management. We believe there’s a large opportunity for new $1B+ companies to emerge here to address these gaps.
The responsibilities of developers keep growing, creating new markets and expanding existing ones. The DevOps space was almost non-existent in the early 2000s; by the early 2010s, Jenkins and CircleCI were starting to gain popularity. The shift from monolithic codebases to microservices put containers in the spotlight, and a new wave of tools like Kubernetes and HashiCorp’s Terraform emerged. As infrastructure got more complex, new tools for reliability and performance monitoring like Grafana, PagerDuty, and Datadog were born. We have also seen more tools helping developers be more productive with code reviews (Codacy, DeepSource*) and automated security audits (Snyk, ShiftLeft).
On the data science side, we are not even in the early innings; we are still announcing the lineups. We have seen the creation of tools for data ingestion, warehousing, and analysis, including Snowflake, Hadoop, Apache Spark, Databricks, Pachyderm, and the like. The next wave of companies will improve on these tools and bring the data engineering ecosystem to a more mature state with features like version control and reproducibility.
It’s very hard to build a one-size-fits-all product, especially when it comes to developer tools and infrastructure. Open core, API-driven products have the chance to displace incumbents by letting developers solve for edge cases that closed-source, web-driven products can’t adapt to, and by letting them extend functionality to fit within existing stacks.
Enterprise software is moving from sales-y, Gartner-quadrant-filled websites to landing pages with easy signups and one-line install instructions through brew, pip, npm, and other package managers. Self-serve products with a bottom-up, business-to-developer (B2D) sales model should allow teams within large organizations to find value in the tool right away.
For some products in this category, enterprises have typically addressed the problem by either keeping a dedicated employee on staff (e.g., database administrators) or by throwing money and hardware at it. New companies will offer products that take over these problems and allow developers to focus on building products and solving problems that are core to the business.