Build with
open-source AI.

A better way to run & fine-tune open-source models on your data.
Your data, your models, your AI.

Forefront enables developers to build on open-source AI with the familiar experience of leading closed-source platforms.

Forget deprecated models, inconsistent performance, arbitrary usage policies, and lack of control & transparency.

Don’t settle for AI you don’t own. The future is open.

Models designed to be
your own.

Start fine-tuning models on your data in minutes.
Fine-tune models for any use case.

No data? No problem. Start with the best model for your use case. Use our API to store the responses. Then seamlessly fine-tune a model when you’re ready.

from openai import OpenAI
from forefront import ff

openai = OpenAI(api_key="OPENAI_API_KEY")
pipe = ff.pipelines.get_by_id("PIPELINE_ID")

messages = [{
    "role": "user",
    "content": "What is the meaning of 42?"
}]

# Call OpenAI's chat completions endpoint
completion = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

# Append the assistant reply, then store the full exchange
messages.append({
    "role": "assistant",
    "content": completion.choices[0].message.content
})

pipe.add(messages)

Validate model performance. Assess how your fine-tuned model performs on a validation set.
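Validation starts with holding data out of training. A minimal sketch of a random train/validation split, assuming samples are independent (not Forefront's internal logic, just the general idea):

```python
import random

def train_val_split(samples, val_fraction=0.1, seed=42):
    """Shuffle and split samples into training and validation sets."""
    rng = random.Random(seed)
    shuffled = samples[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

train, val = train_val_split(list(range(100)))
```

A fixed seed keeps the split reproducible across runs, so validation scores stay comparable between fine-tunes.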

Validation results (sample viewer, 10 samples)

Watch your model learn. Analyze built-in loss charts as your model trains.

Training loss: 0.132

Evaluations made easy. Choose from a variety of evals to automatically run your model on.

Evals

MMLU: 58.0%
TruthfulQA: 56.2%
MT-Bench: 62.3%
ARC: 75.6%
HumanEval: 75.6%
AGIEval: 75.6%
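Under the hood, most of these benchmarks reduce to scoring model answers against references. A toy sketch of multiple-choice accuracy scoring (the data is hypothetical; the actual eval harness is not shown here):

```python
def accuracy(predictions, answers):
    """Fraction of predictions that exactly match the reference answers."""
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must align")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy MMLU-style multiple-choice results (hypothetical data)
preds = ["B", "C", "A", "D", "B"]
gold  = ["B", "C", "D", "D", "A"]
score = accuracy(preds, gold)  # 3 of 5 correct
```

Generative benchmarks like HumanEval or MT-Bench need richer scoring (test execution, judge models), but the reported number is still an aggregate over per-sample scores like this.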

Run AI with an API.

Inference with serverless endpoints for every model.
Run models in a few lines of code or experiment in the Playground.

Chat or completion endpoints. Choose the prompt syntax best for your task.

import Forefront from "forefront";

const ff = new Forefront(process.env.FOREFRONT_API_KEY);

try {
  const response = await ff.chat.completions.create({
    model: "team-name/fine-tuned-llm",
    messages: [
      {
        role: "system",
        content: "You are Deep Thought."
      },
      {
        role: "user",
        content: "What is the meaning of life?"
      }
    ],
    max_tokens: 64,
    temperature: 0.5,
    stop: ["\n"],
    stream: false
  });
  const completion = response.choices[0].message.content;
} catch (e) {
  console.log(e);
}

Integration made simple. Three lines of code and you’re good to go.

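The snippets above follow the OpenAI chat-completions request shape, which is why switching an existing integration is mostly a matter of a new key and endpoint. A sketch of the JSON payload such an endpoint expects (field names per the OpenAI chat API; the model string is illustrative):

```python
import json

# Request body for an OpenAI-compatible chat completions endpoint
payload = {
    "model": "team-name/fine-tuned-llm",  # illustrative model string
    "messages": [
        {"role": "system", "content": "You are Deep Thought."},
        {"role": "user", "content": "What is the meaning of life?"},
    ],
    "max_tokens": 64,
    "temperature": 0.5,
}
body = json.dumps(payload)
```

Because the shape is unchanged, client code that already builds this payload for another provider keeps working as-is.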

Take your model and run. Prefer self-hosting or hosting with another provider? Export your models and host them where you want.

Import from HuggingFace. Forget loading models into Colab. Just copy and paste the model string into Forefront and run inference in minutes.

Your AI data warehouse.

Bring your training, validation, and evaluation data.
Start storing your production data in ready-to-fine-tune datasets in a few lines of code.

All your data in a single place. Forefront gives you a single source of truth for all your AI data.

File name | Purpose
email_summaries.jsonl | Training
validate_email_summaries.jsonl | Validation
enrich_company.jsonl | Training
validate_enrich_company.jsonl | Validation
enrich_contact.jsonl | Training
validate_enrich_contact.jsonl | Validation
email_hooks.jsonl | Training
validate_email_hooks.jsonl | Validation
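Files like these hold one JSON object per line. A minimal sketch of writing and reading a chat-formatted JSONL dataset (the field layout is assumed to mirror the message format used in the API examples):

```python
import json
import os
import tempfile

samples = [
    {"messages": [
        {"role": "user", "content": "Summarize: meeting moved to 3pm."},
        {"role": "assistant", "content": "The meeting now starts at 3pm."},
    ]},
]

path = os.path.join(tempfile.gettempdir(), "email_summaries.jsonl")

# Write: one JSON object per line
with open(path, "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# Read it back line by line
with open(path) as f:
    loaded = [json.loads(line) for line in f]
```

The line-per-sample layout is what makes JSONL convenient for training data: files can be streamed, split, and appended to without parsing the whole document.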

Build your data moat. Pipe your production data to Forefront in a few lines of code to store it in ready-to-fine-tune datasets.

from openai import OpenAI
from forefront import ff

openai = OpenAI(api_key="OPENAI_API_KEY")
pipe = ff.pipelines.get_by_id("PIPELINE_ID")

messages = [{
    "role": "user",
    "content": "What is the meaning of 42?"
}]

# Call OpenAI's chat completions endpoint
completion = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

# Append the assistant reply, then store the full exchange
messages.append({
    "role": "assistant",
    "content": completion.choices[0].message.content
})

pipe.add(messages)

Become one with your data. Navigate your data in the Inspector—built to help you thoroughly and quickly inspect your samples.

Sample viewer (12 samples)

Instant insights. Get a sense of your data’s distribution and patterns. Discover imbalances and biases without painstaking effort.

Tokens per sample

Tokens by label per sample
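One way such insights are computed: count tokens per sample, then aggregate by label. A sketch using whitespace splitting as a stand-in tokenizer (real tokenizers are model-specific; the data here is hypothetical):

```python
from collections import defaultdict

def token_stats(samples):
    """Approximate token counts per sample and totals per label.

    Whitespace splitting stands in for a real tokenizer here.
    """
    per_sample = []
    per_label = defaultdict(int)
    for text, label in samples:
        n = len(text.split())
        per_sample.append(n)
        per_label[label] += n
    return per_sample, dict(per_label)

data = [
    ("the quick brown fox", "animals"),
    ("hello world", "greetings"),
    ("jumps over the lazy dog", "animals"),
]
counts, by_label = token_stats(data)
```

Plotting `counts` as a histogram surfaces outlier samples; comparing `by_label` totals surfaces class imbalance before it skews a fine-tune.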

From zero to IPO.

Designed for every stage of your journey.
From research to startups to enterprises.

Forget about infrastructure. API servers, GPUs, out-of-memory errors, dependency hell, CUDA, batching? Don’t bother.

Don't sweat scaling. Lots of traffic? Forefront scales automatically to meet demand. No traffic? You don’t pay a thing.

Only pay for what you use. Don’t pay for expensive GPUs when you’re not using them.

Phi-2: $0.0006 / 1k tokens
Mistral-7B: $0.001 / 1k tokens
Mixtral-7Bx8: $0.004 / 1k tokens
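Per-token pricing makes cost easy to estimate: tokens divided by 1,000, times the rate. A quick sketch using the listed rates, assuming a single blended rate per model:

```python
# Per-1k-token rates from the pricing list above
RATES = {
    "Phi-2": 0.0006,
    "Mistral-7B": 0.001,
    "Mixtral-7Bx8": 0.004,
}

def estimate_cost(model, tokens):
    """Dollar cost for a token count at the model's per-1k-token rate."""
    return tokens / 1000 * RATES[model]

# e.g. one million tokens on Mistral-7B costs $1.00
cost = estimate_cost("Mistral-7B", 1_000_000)
```

At these rates, even sustained production traffic stays in predictable dollar territory, and zero traffic costs exactly zero.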

Seriously secure. Private by design.

We don’t log any requests and never use your data to train models.
For enterprise customers, Forefront offers flexibility to deploy in a variety of secure clouds.

Your questions, answered.

Have more questions?

Forefront is constantly evolving and we’re here to help along the way. If you have additional questions, feel free to reach out.

Your path to open AI is ready. Are you?

© Forefront 2024

All rights reserved