Generative AI companies continue to raise substantial funding for their commercial efforts and, in some cases, for open-source initiatives as well.
For instance, Together, a startup building open-source generative AI and infrastructure for AI model development, recently announced that it closed a $102.5 million Series A round led by Kleiner Perkins, with participation from Nvidia and Emergence Capital. The round, more than five times the size of the company's previous one, will fund the expansion of Together's cloud platform, which lets developers build open and custom AI models, according to co-founder and CEO Vipul Ved Prakash.
In a blog post on Together's website, Prakash wrote that startups and enterprises are increasingly interested in generative AI strategies free of vendor lock-in, and that open-source AI, with more capable generative models arriving almost weekly, offers a strong foundation for such applications. He argued that generative AI is a platform technology, akin to a new operating system for applications, with far-reaching implications for society, and stressed the importance of an AI ecosystem that supports both proprietary and open models, preserving choice for users.
Together, founded in June 2022 by Vipul Ved Prakash, Ce Zhang, Chris Re, and Percy Liang, aims to create open-source models and services that help organizations integrate AI into their applications. To that end, it has built a cloud platform for running, training, and fine-tuning models, which the founders say offers scalable compute at lower prices than dominant vendors like Google Cloud, AWS, and Azure.
By Together's own account, it has significantly cut the cost of interactive inference with large models. Its optimization work spans thousands of GPUs deployed in secure facilities, virtualization software, scheduling, and model-level optimizations, which together substantially reduce operating costs.
Today, Together operates cloud infrastructure across data centers in the U.S. and EU in collaboration with partners such as Crusoe Cloud and Vultr. The infrastructure delivers around 20 exaflops of total compute and can scale in clusters ranging from 16 to 2,048 GPUs. Its customers include NexusFlow, Voyage AI, and Cartesia, some of whom also use Together's APIs for model serving.
Pika Labs, which coincidentally raised $55 million the same week, has also used Together's GPU clusters to build a text-to-video model, training new iterations from scratch on the clusters and generating millions of videos each month for its early access users.
Prakash also pointed to the economics of Together's custom infrastructure. The platform lets customers quickly integrate leading open-source models or create custom ones through pre-training or fine-tuning; he said customers are drawn to Together's performance and reliability, while retaining ownership of their AI investments and the flexibility to run their models on any platform they choose.
In addition to its cloud service, Together offers "Custom Models," a consulting service in which customers bring their own data to the Together cloud platform and Together's team helps design, build, and test custom models.
Together also invests in open-source AI research. One of its first projects, RedPajama, aimed to foster a collection of open-source generative AI models, including "chat" models akin to OpenAI's ChatGPT. Its releases include a fine-tuned version of Meta's Llama 2 text-generating model; GPT-JT, a derivative of GPT-J-6B, the open-source text-analyzing model from the EleutherAI research group; and OpenChatKit, an attempt to create a ChatGPT equivalent.
Demand for generative AI remains intense among both companies and investors. IDC projects that investment in generative AI will grow from $16 billion this year to $143 billion by 2027, and in the first half of 2023 generative AI firms attracted roughly 40% of all venture capital raised by startups, about $11.9 billion.
Still, success in generative AI is far from guaranteed, as Stability AI illustrates. Once a favorite of VC firms like Lightspeed Venture Partners, O'Shaughnessy Ventures, and Coatue, Stability AI is reportedly exploring a sale amid a lack of profitability and deteriorating finances.