BUD Jr Model
2.8B, Version 1
Scores 7.89 on MT-Bench, just behind Claude-2

Chat Model
2.8B, Version 2
Outperforms LLaMa-70B on 29/32 evals

Our LLMs, SLMs, and diffusion models are used by over 60,000 developers, startups, and enterprises.

github
huggingFace
Models Built
From Scratch

Our SLMs and LLMs are engineered and trained from the ground up to run efficiently on client devices and in edge environments, delivering state-of-the-art performance.

20+
Models

We’ve open-sourced over 20 models, making them available for the community of AI enthusiasts to try.

Used By
60K+ Users

Our open-source models are used by over 60,000 developers, startups, SMEs, and enterprises.

model
GenZ 70B

An advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 70B parameter model.  

github
huggingFace
model
GenZ Vision Assistant 7B

An advanced multimodal AI model fine-tuned to understand text and visual inputs to provide contextually relevant responses. 

github
huggingFace
model
GenZ 13B V2 (ggml)

An instruction-finetuned model with a 4K input length, built on top of the pretrained Llama 2.

github
huggingFace
model
GenZ 13B V2 (4 bit)

An instruction-finetuned model with a 4K input length, built on top of the pretrained Llama 2.

github
huggingFace
model
GenZ 13B V2

An advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 13B parameter model. 

github
huggingFace
model
GenZ 13B

An instruction-finetuned model with a 4K input length, built on top of the pretrained Llama 2.

github
huggingFace
model
GenZ 13B Infinite

A finetuned version of GenZ-13B-v2 with a 16K context size; the architecture is updated to use lambda attention.

github
huggingFace
model
Code Millennials 8B

Bud Millennial Code Gen open-source models are currently state of the art (SOTA) for code generation, outperforming existing models across all sizes.

github
huggingFace
model
Code Millennials 34B

Bud Millennial Code Gen open-source models are currently state of the art (SOTA) for code generation, outperforming existing models across all sizes.

github
huggingFace
model
Code Millennials 3B

Bud Millennial Code Gen open-source models are currently state of the art (SOTA) for code generation, outperforming existing models across all sizes.

github
huggingFace
model
Code Millennials 13B

Bud Millennial Code Gen open-source models are currently state of the art (SOTA) for code generation, outperforming existing models across all sizes.

github
huggingFace
model
Code Millennials 1B

Bud Millennial Code Gen open-source models are currently state of the art (SOTA) for code generation, outperforming existing models across all sizes.

github
huggingFace
model
Boomer 634M

This model, with 634 million parameters, was meticulously pre-trained from scratch on a custom synthetic dataset comprising 12 billion tokens. 

github
huggingFace
model
Boomer 4B

Our 3.51 billion parameter model, pretrained on custom synthetic data generated in a textbook style.

github
huggingFace
model
Boomer Bitnet 634M

This 634M parameter model is pre-trained from scratch on a custom synthetic dataset of 5B tokens.

github
huggingFace
model
Boomer 1B

This 1.1B parameter model is pre-trained from scratch using a custom-curated dataset of 41B tokens. 

github
huggingFace
model
Tansen

Tansen is a text-to-speech program designed for strong multi-voice capabilities, highly realistic prosody and intonation, and precise speaking rate control.  

github
huggingFace
model
Chhavi

A Latent Diffusion Model (LDM) fine-tuned on the foundation of Stability AI's open-source SDXL model.

github
huggingFace
model
SQL Millennials 13B

Built on CodeLLaMa 13B, our model has been meticulously fine-tuned with a curated dataset comprising 100k SQL query generation instructions, ensuring quality and precision. 

github
huggingFace
model
SQL Millennials 7B

Built on Mistral 7B, our model has been meticulously fine-tuned with a curated dataset comprising 100k SQL query generation instructions, ensuring quality and precision. 

github
huggingFace
Built for
Speed
High performance and accuracy are achieved using Bud's proprietary training methodology, with custom optimizers, learning functions, and batching techniques.
Models for every device

Bud models are optimized for all devices: 2.8B for client devices, 600M for edge, and 10M, 40M, or 60M for super edge applications.

Highly Accurate SLMs

Bud’s small language models maintain cloud-LLM-like accuracy using hybrid technology.

Adaptable Models

With an expanding architecture, Bud models deliver high accuracy, maintain statefulness, and are easily personalized.

Real time Inferencing

Real-time time to first token (TTFT) and inferencing optimize performance for minimal delay in AI responses.

Image Generation
<3 sec

Data Requirement
6x

Speech-to-text conversion of a 60-minute clip
<30 sec

GenAI Made Practical, Profitable and Scalable!

Company
Blog
Contact
Products

Runtime Inference Engine

Models

Resources

Case studies

Research & Thoughts

Blogs

News and Updates

© 2024, Bud Ecosystem Inc. All rights reserved.

Privacy Policy
Company

Thesis