xAI Grok Model Open-Source Access

Tinker Assist


Update: March 28, 2024

Today, xAI announced Grok-1.5 via their blog.

Update: March 18, 2024

On March 11, 2024, Elon Musk announced via X that xAI would be open-sourcing the Grok model.

On March 17, 2024, xAI announced the open release of Grok-1 via a blog post on their website. The post links to the Grok-1 repository on GitHub, which contains JAX code for running inference on the raw model weights. The weights themselves are hosted on HuggingFace and can also be downloaded directly via a magnet link found in the repository README.

November 2023


On Friday, November 3, 2023, Elon Musk announced via X (formerly Twitter) that over the upcoming weekend "x.AI will release its first AI to a select group." The post came less than four months after Musk first announced the creation of xAI. Later that evening, he announced that the model is named Grok and will be available to X Premium+ subscribers once it is out of "early beta." He also teased a couple of screenshots of an online chat interface in which users were asking Grok questions, notably ones that ChatGPT would likely refuse or be unable to answer.

Zero to Grok in Just Four Months

Four months is an awfully short time between the conception of an AI company and a model release. By comparison, OpenAI was founded in December 2015 and did not release a notable model until over three years later, with GPT-2 in February 2019. That is an order of magnitude slower than xAI's pace to an inaugural release. Training alone can take weeks or even months: notably, training LLaMA 65B occupied 2048 A100 GPUs for 21 days.
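To put that training run in concrete terms, here is a back-of-the-envelope calculation of the compute involved, using only the figures cited above (2048 GPUs, 21 days):

```python
# Rough compute estimate for the cited LLaMA 65B training run.
gpus = 2048          # A100 GPUs
days = 21            # wall-clock training time
hours_per_day = 24

gpu_hours = gpus * days * hours_per_day
print(f"{gpu_hours:,} A100 GPU-hours")  # 1,032,192 A100 GPU-hours
```

Roughly a million A100 GPU-hours for a single 65B-parameter model helps explain why a zero-to-release timeline of four months is so striking.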

In the formal Grok release announcement the following day, the xAI team released many more details regarding Grok, including the progression from the prototype LLM Grok-0 to Grok-1, as well as benchmark results.

Grok Benchmark Results

The xAI team released Grok-0 and Grok-1 results for several industry-standard math and reasoning benchmarks. Grok outperformed GPT-3.5 and Llama 2 70B on every listed benchmark but trailed GPT-4 on all of them.

Limited Benchmark Data

It is curious that the xAI team chose to release results from only four industry benchmarks, with inconsistent shot counts across them: GSM8k was reported 8-shot, for instance, while HumanEval was reported 0-shot.

By contrast, the Llama 2 release paper, "Llama 2: Open Foundation and Fine-Tuned Chat Models," provided 24 different benchmark results, sometimes reporting both zero-shot and few-shot numbers for the same benchmark.

This suggests that xAI may have cherry-picked favorable benchmark results for publication and withheld others that were less flattering.

Emphasis on Efficiency?

After discussing benchmark results in the formal release, the team commented on the "exceptional efficiency" of the model compared to larger models like GPT-4. While this may be true, the number of parameters in Grok-1 (the LLM powering the Grok chatbot) is never explicitly stated. They do mention that Grok-0, the prototype LLM, has 33B parameters. However, they never confirm that Grok-1 has the same parameter count, and given the disparity in benchmark results between the two, it is quite unlikely.

Grok Public Access

Musk stated that the model would initially be available only to a limited group and would soon roll out to X Premium+ subscribers. Based on Musk's typical timelines, it is safe to assume we are only weeks or even days away from a broader public release.

Will Grok Be Open Source?

Recall that Musk's founding of xAI was largely a reaction to OpenAI's shift in culture between its inception and the release of ChatGPT in 2022. OpenAI was founded as a non-profit on the premise of democratizing AI, which implied that discoveries and foundation models would be made open source. However, OpenAI's top-performing models can only be accessed via a web interface or paid API, and to date OpenAI still will not provide basic information about its top models, such as the sources of training data or the number of parameters.

The implication is that Musk set out to fulfill OpenAI's flunked mission of democratizing AI, yet he is off to a poor start with xAI, which has not been very forthcoming in its announcement of Grok. As with OpenAI's GPT-3.5 and GPT-4, we do not know even basic metadata about the model, such as its parameter count or specifics of its training data.

Another Closed-Source Chatbot?

We were hopeful that the inaugural model released by xAI would feel similar to the Llama 2 release: extensive benchmark data, a breakdown of training data sources, details about the model architecture, and even the model weights. However, without the release of even simple metadata, it is unlikely we will see a release of the model weights.

Has Musk pivoted on his plan to democratize AI? What we hear regarding Grok in the coming weeks will be telling.