
Iris launches a new phase: generating content after multiple versions in an evolutionary journey

Published by
SectorHQ Editorial

Photo by Homemade Photos (unsplash.com/@homemadephotos) on Unsplash

Iris unveiled its latest generative model, concluding a multi‑stage evolution that began with the 0.5 B‑parameter Alpha and progressed through Beta, Gamma, and Delta versions, reports indicate.

Key Facts

  • Key company: Iris

Iris‑7B‑Delta‑v2 marks the first fully generative release in a line that began with a 0.5 B‑parameter “Alpha” model built on Qwen2.5‑Instruct, according to the creator’s public roadmap on Hugging Face. After iterating through a 1.3 B‑parameter Beta (Phi‑1.5 base), a 2 B‑parameter Gamma (Gemma‑2B base) and a hybrid 7 B‑parameter Delta (Mistral‑7B base), the latest Delta‑v2 model retains the same 7 B‑parameter backbone but adds a generative head that produces novel text rather than merely echoing training data (source: João Gustavo’s “Apresento a Iris” post).

The generative capability is illustrated by a self‑authored metaphor that the model produced when prompted to “tell its story in a different way.” The response—describing itself as a grain of sand that travels from a desolate beach to a glass factory and finally becomes a “mini computer” connected to the global network—was generated without any explicit example in the training set, demonstrating the model’s ability to synthesize original narratives (source: same post). Gustavo attributes this leap to a combination of low‑rank adaptation (QLoRA 4‑bit quantization), a hybrid dataset that mixes raw text, instruction‑style prompts, and conversational snippets, and a self‑instruction pipeline that expands lexical variety. He also set the sampling temperature to 0.85 and top‑p to 0.95 to encourage creativity while preserving coherence (source: post).
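The sampling settings Gustavo cites (temperature 0.85, top‑p 0.95) can be illustrated with a minimal, self‑contained sketch of temperature scaling followed by nucleus (top‑p) filtering. This is a generic illustration of the technique, not code from the Iris repository; the function name and example logits are hypothetical:

```python
import math
import random

def sample_next_token(logits, temperature=0.85, top_p=0.95, rng=None):
    """Pick a token index from raw logits via temperature + nucleus sampling."""
    rng = rng or random.Random(0)
    # Temperature scaling: values below 1 sharpen the distribution,
    # values above 1 flatten it toward uniform.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: keep the smallest set of most-probable tokens
    # whose cumulative mass reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Renormalize within the nucleus and draw one token.
    norm = sum(probs[i] for i in kept)
    r, acc = rng.random() * norm, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]
```

With one logit far above the rest, the nucleus collapses to a single token and sampling becomes deterministic; with flat logits, any token can be drawn. This is the trade-off the post describes between creativity (top‑p admits more candidates) and coherence (temperature below 1 favors the model's top choices).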

All five checkpoints—Alpha, Beta, Gamma, Delta, and Delta‑v2—are publicly hosted on Hugging Face, with interactive demos for each version ranging from the 2 B‑parameter Gamma to the generative Delta‑v2 (source: post). The demos allow users to test the models in real time, and the code repository is slated for release on GitHub under the Eudaia Labs banner, a newly formalized company that Gustavo says aims to prove that “anyone, even with limited resources, can build cutting‑edge AI” (source: post). The name Eudaia Labs blends “eudaimonia” (human flourishing) with “IA,” reflecting a mission to democratize advanced language technology.

Beyond the technical rollout, Gustavo is actively recruiting contributors through Discord and GitHub, positioning the project as a community‑driven effort rather than a closed‑source venture (source: post). He emphasizes that the entire development was done on a mobile phone while lying in bed, a detail meant to underscore the low‑barrier entry point for aspiring AI engineers. This narrative aligns with a broader trend in the open‑source AI ecosystem where individual developers leverage quantized models and efficient fine‑tuning methods to produce competitive alternatives to commercial offerings.

If the generative claim holds up under broader scrutiny, Iris‑7B‑Delta‑v2 could become a notable benchmark for small‑scale, high‑quality language models. At 7 B parameters, it sits in the same class as Mistral‑7B, yet its public availability and documented training pipeline give researchers a reproducible reference point. The model’s release also adds to the growing catalog of Brazilian‑origin AI projects, highlighting the country’s emerging role in the global AI landscape (source: creator’s announcement).

Sources

Primary source

No primary source found (coverage-based)

Other signals
  • Reddit - r/LocalLLaMA

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.

