Dylan Patel, chief analyst at the semiconductor research firm SemiAnalysis, told The Information that using ChatGPT to write cover letters, generate lesson plans, and revamp dating profiles could cost OpenAI up to $700,000 a day because of the expensive technological infrastructure the AI runs on. Microsoft has been working to help bring that cost down.
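To put Patel's $700,000-a-day figure in perspective, here is a rough per-query cost calculation. The daily query volume is purely a hypothetical assumption for illustration; neither the article nor OpenAI has published usage numbers.

```python
# Rough per-query cost implied by Patel's $700,000/day estimate.
DAILY_COST_USD = 700_000            # Patel's estimate, from the article
ASSUMED_DAILY_QUERIES = 10_000_000  # hypothetical volume, NOT a reported figure

cost_per_query = DAILY_COST_USD / ASSUMED_DAILY_QUERIES
print(f"${cost_per_query:.3f} per query")  # prints "$0.070 per query"
```

At an assumed ten million queries a day, each response would cost about seven cents in infrastructure, which is why heavy free usage adds up so quickly.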
Generating responses to user prompts requires an enormous amount of computing power, Patel said.
Most of that expense comes from the costly servers involved, Patel told the tech publication. In a conversation with Insider, he added that operating costs are probably even higher now, since his initial estimate was based on OpenAI's GPT-3 model. GPT-4, OpenAI's latest model, would likely be even more expensive to run, he said.
OpenAI did not immediately respond to a request for comment before publication.
Although training ChatGPT's large language models may run into the tens of millions of dollars, the ongoing cost of operating the service, often referred to as inference cost, far exceeds the training cost once the model is deployed at any meaningful scale.
Patel and Afzal Ahmad, another analyst at SemiAnalysis, made this point to Forbes, saying that ChatGPT's inference costs exceed its training costs on a weekly basis.
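The analysts' claim that inference spend quickly dwarfs the one-time training bill can be sketched with simple arithmetic. The training cost below is an illustrative assumption ("tens of millions of dollars" per the article, not a confirmed figure); the daily cost is Patel's estimate.

```python
# How quickly does cumulative inference spend overtake a one-time
# training bill? Figures are illustrative, not confirmed by OpenAI.

TRAINING_COST_USD = 50_000_000   # assumed one-time training cost
DAILY_INFERENCE_USD = 700_000    # Patel's daily-cost estimate

def days_until_inference_exceeds_training(training_cost, daily_cost):
    """Days of operation until cumulative inference spend reaches
    the one-time training cost."""
    days = 0
    spent = 0
    while spent < training_cost:
        spent += daily_cost
        days += 1
    return days

days = days_until_inference_exceeds_training(TRAINING_COST_USD, DAILY_INFERENCE_USD)
print(days)  # prints 72
```

Under these assumptions, running the service for a little over two months costs as much as training the model did, which is consistent with the analysts' point that inference dominates at scale.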
Businesses that use OpenAI's language models have been grappling with steep bills for years. Nick Walton, CEO of Latitude, the startup behind AI Dungeon, a game that generates storylines from user prompts, said that running the model, together with the associated Amazon Web Services server costs, came to $200,000 a month to handle millions of user queries in 2021, according to CNBC.
The steep expenses prompted Walton to switch to a language software provider backed by AI21 Labs, a move that reportedly halved his company's AI costs to $100,000 a month. "We joked that we had human employees and we had AI employees, and we spent about as much on each of them," Walton said. "We spent hundreds of thousands of dollars a month on AI, and we are not a big startup, so it was a very massive cost."
Microsoft's secret chip aims to cut ChatGPT costs
In a bid to rein in the expense of running generative AI models, Microsoft has been developing an AI chip, code-named Athena, under a secret project first reported by The Information.
The initiative began in 2019, years after Microsoft struck a $1 billion deal with OpenAI that requires OpenAI's models to run exclusively on Microsoft's Azure cloud servers.
The motivation behind the chip was twofold, people familiar with the matter told The Information. Microsoft executives realized the company was lagging behind Google and Amazon in developing its own chips. At the same time, with its AI models running on Nvidia's graphics processing units (GPUs), Microsoft was looking for cheaper alternatives, and it decided that building its own chip was the way to get them.
Nearly four years on, the report says more than 300 Microsoft employees are working on the chip. It could be ready for internal use by Microsoft and OpenAI as early as next year, people with knowledge of the matter said.