Developers, rejoice: You’ll soon be able to use several new large language models (LLMs) in GitHub Copilot, the company’s coding assistant.
On Tuesday at its annual GitHub Universe conference, the Microsoft subsidiary announced support for four new LLMs in Copilot: Anthropic's Claude 3.5 Sonnet, Google's Gemini 1.5 Pro, and OpenAI's o1-preview and o1-mini. The OpenAI models are available now in Copilot Chat, with Claude 3.5 Sonnet up next, followed by Gemini 1.5 Pro "in the coming weeks," according to the announcement.
Also: You could win $25,000 for pushing Google’s Gemini 1.5 to its limit
“From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon,” the company noted.
GitHub first launched Copilot with Codex, an OpenAI model descended from GPT-3 and tuned for code. Last year, GitHub released Copilot Chat, first with GPT-3.5 and later GPT-4. The company says it has continually updated its base models to balance quality and latency, using everything from GPT-3.5 Turbo to GPT-4o mini.
GitHub says it has seen a "boom" in the ability of both small and large language models to serve different programming needs.
“The next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice,” GitHub said in the announcement. “GitHub is committed to its ethos as an open developer platform, and ensuring every developer has the agency to build with the models that work best for them.”
Also: OpenAI plans to offer its 250 million ChatGPT users even more services
The company also released GitHub Spark, an AI tool for building apps entirely in natural language. With it, users can create Sparks, or "micro apps," that leverage AI and external data without eating up cloud space. Developers can sign up for an early preview at GitHub Spark.