Funding Announcement

$1.7M Pre-Seed Round to Enable Community-Owned AI

Assisterr has successfully closed a $1.7M pre-seed funding round — advancing a network of Small Language Models and SLM-powered agents built on community ownership.

Assisterr Team·Nov 6, 2024·6 min read

We are maturing and testing our infrastructure for launching and maintaining Small Language Models (SLMs) and SLM-powered AI agents. Before scaling to new verticals, we are onboarding the first 100 Web3 projects to Assisterr — free of charge.

Assisterr, a Cambridge-based AI infrastructure startup dedicated to revolutionizing artificial intelligence through community ownership and a network of Small Language Models, has successfully closed a $1.7 million pre-seed funding round.

The investment round saw participation from prominent Web3 venture funds, including Web3.com Ventures, Moonhill Capital, Contango, Outlier Ventures, Decasonic, Zephyrus Capital, Wise3 Ventures, Saxon, GFI Ventures, X Ventures, Koyamaki, and Lucid Drakes Ventures. Notable angels also joined, among them Michael Heinrich (co-founder & CEO, 0g.ai), Mark Rydon (co-founder & CEO, Aethir), Nader Dabit (Director of Developer Relations, Eigen Labs), Anthony Lesoismier-Geniaux (co-founder, SwissBorg), and Ethan Francis (Head of Developer Relations, Particle Network), all committed to advancing decentralized AI solutions.


The Assisterr Approach

Assisterr is reshaping the AI landscape with its approach to Small Language Models. Leveraging the Solana blockchain, Assisterr enables communities to collaborate to aggregate and monetize their data and expertise in specialized domains.

Small Language Models (SLMs) are AI models tailored to specific domain tasks and optimized to run efficiently on edge devices. Typically containing up to around 8 billion parameters, they are far smaller than traditional Large Language Models and can be deployed on laptops and smartphones, where data privacy and efficiency matter.
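A rough back-of-the-envelope calculation shows why models in this size class fit on consumer hardware. The byte-per-parameter figures below are standard assumptions for fp16 and 4-bit quantized weights, not Assisterr-specific numbers, and real deployments also need memory for activations and the KV cache:

```python
# Approximate memory footprint of model weights alone.
# Assumes 2 bytes/parameter (fp16) and 0.5 bytes/parameter (4-bit quantized).

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate memory needed to hold the weights, in gigabytes."""
    return params * bytes_per_param / 1e9

slm_fp16 = weight_memory_gb(8e9, 2.0)    # 8B-parameter SLM in fp16
slm_4bit = weight_memory_gb(8e9, 0.5)    # same model, 4-bit quantized
llm_fp16 = weight_memory_gb(175e9, 2.0)  # a 175B-parameter LLM, for contrast

print(f"8B SLM, fp16:   {slm_fp16:.0f} GB")  # 16 GB — high-end laptop territory
print(f"8B SLM, 4-bit:  {slm_4bit:.0f} GB")  # 4 GB — fits on a phone
print(f"175B LLM, fp16: {llm_fp16:.0f} GB")  # 350 GB — needs a multi-GPU server
```

The gap between 4 GB and 350 GB is the practical argument for SLMs at the edge: the whole model can live in local memory, so no data ever has to leave the device.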


Milestones to Date

  • 150,000 registered users on the Assisterr platform.
  • 60+ Small Language Models launched for leading Web3 protocols — Solana, Optimism, 0g.ai, NEAR, and more.
  • Multiple global hackathon wins, including the recent AI × Crypto event hosted by BeWater, OKX, and Binance Labs.
  • Selected for Google's AI Startups program with $350,000 in funding covering GPU, CPU, and cloud infrastructure.

“At Assisterr, we are building an AI tokenization stack to ensure fair compensation for data owners and contributors. Our platform manages the entire lifecycle of SLM training, enabling features like data provenance tracking, fine-tuning, and the launch of SLM-powered agents. The Web3 component allows data owners to contribute their data and expertise to domain-specific SLMs and capture the value from such contributions.”

— Nick Havryliak, CEO & Co-founder, Assisterr


Assisterr's Core Technology Stack

The Assisterr stack is built around several core components, each addressing a specific layer of the AI tokenization challenge:

Data Provenance Protocol

Enables decentralized coordination of data contributions. Every dataset used to train an SLM can be traced back to its contributors, and value flows back to them automatically.
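The announcement does not specify the attribution rule, but the simplest mechanism consistent with "value flows back to contributors" is pro-rata splitting by contribution share. The sketch below is a hypothetical illustration of that idea — the names and the proportional rule are illustrative assumptions, not Assisterr's actual protocol:

```python
# Hypothetical sketch: reward each data contributor in proportion to
# their share of the training corpus. Contribution units and the
# proportional rule are illustrative assumptions.

def attribute_value(contributions: dict[str, int], revenue: float) -> dict[str, float]:
    """Split `revenue` across contributors pro rata by units contributed."""
    total = sum(contributions.values())
    return {who: revenue * units / total for who, units in contributions.items()}

# A contributor who supplied 60% of the data receives 60% of the value.
payouts = attribute_value({"alice": 600, "bob": 300, "carol": 100}, revenue=1000.0)
print(payouts)  # {'alice': 600.0, 'bob': 300.0, 'carol': 100.0}
```

On-chain, the `contributions` ledger would be the provenance record itself, which is what makes the payout auditable end to end.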

SLM Training & Fine-Tuning Layer

Orchestrates the training lifecycle — from curating domain-specific corpora to continuous fine-tuning as new contributions arrive. Optimized for small models with fast iteration cycles.

Community Ownership Layer

Each SLM and each agent built on top of an SLM is community-owned. Contributors, curators, and end-users all participate in the value captured by the model.

SLM-Powered Agents

Autonomous agents that run on top of an SLM — specialized, efficient, and deployable at the edge. They execute domain-specific tasks with orders-of-magnitude lower inference cost than general-purpose LLM agents.


What's Next

With the fresh funding, Assisterr will focus on several initiatives: protocol development, strategic partnerships with leading Web3 ecosystems, and scaling the community of SLM contributors.

We are also introducing a no-code AI Lab module, designed to attract a broader range of builders — from technical founders to domain experts without ML backgrounds — and let them launch specialized SLM-powered agents without writing code.

The first 100 Web3 projects to onboard will get access free of charge. If you are building in Web3 and want a specialized AI model trained on your ecosystem's data, this is the moment to join.

Build Your Own SLM-Powered Agent

Join the first 100 Web3 projects onboarding to Assisterr.