You can use the LangChain Node.js SDK to interact with FriendliAI, which makes migrating existing applications that already use LangChain particularly easy.

How to use

Before you start, make sure you have obtained your FRIENDLI_TOKEN from the Friendli Suite. Friendli endpoints are fully compatible with the OpenAI API, so we use the @langchain/openai package and simply point it at the FriendliAI baseURL.

Instantiation

Now we can instantiate our model object and generate chat completions. Configuration differs slightly by endpoint type, so choose the setup that best suits your needs:
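For example, here is a minimal sketch of connecting to a Friendli serverless endpoint after installing @langchain/openai and @langchain/core. The baseURL and model ID below are assumptions for illustration; substitute the values shown for your endpoint in the Friendli Suite.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  // Assumed model ID for illustration; use the model your endpoint serves.
  model: "meta-llama-3.1-8b-instruct",
  // The token obtained from the Friendli Suite.
  apiKey: process.env.FRIENDLI_TOKEN,
  configuration: {
    // Assumed serverless base URL; check the Friendli docs for your endpoint type.
    baseURL: "https://api.friendli.ai/serverless/v1",
  },
});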

Runnable interface

The model object implements LangChain's standard runnable interface, so you can use methods such as invoke, stream, and batch to generate a response.

import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const messages = [
  new SystemMessage("Translate the following from English into Italian"),
  new HumanMessage("hi!"),
];

const result = await model.invoke(messages);
console.log(result);
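Beyond invoke, the same runnable interface supports streaming; here is a short sketch using the standard stream method:

// Stream the response chunk by chunk instead of waiting for the full message.
const stream = await model.stream(messages);

for await (const chunk of stream) {
  console.log(chunk.content);
}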

Chaining

We can chain our model with a prompt template. Prompt templates convert raw user input into a well-structured prompt for the LLM.

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a world class technical documentation writer."],
  ["user", "{input}"],
]);

const chain = prompt.pipe(model);

console.log(
  await chain.invoke({ input: "how can langsmith help with testing?" })
);

To get the string value instead of the message object, we can add an output parser to the chain.

import { StringOutputParser } from "@langchain/core/output_parsers";

const outputParser = new StringOutputParser();

const stringChain = prompt.pipe(model).pipe(outputParser);

console.log(
  await stringChain.invoke({ input: "how can langsmith help with testing?" })
);

Tool calling

Describe your tools and their parameters, and the model will return which tool to invoke along with the input arguments. Tool calling greatly extends the model's ability to provide comprehensive, actionable responses.

Define tools to use

We can define tools with Zod schemas and use them to generate tool calls.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

/**
 * Note that the descriptions here are crucial, as they are passed to
 * the model along with the tool name.
 */
const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute."),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const calculatorTool = tool(
  async ({ operation, number1, number2 }) => {
    // Functions must return strings
    if (operation === "add") {
      return `${number1 + number2}`;
    } else if (operation === "subtract") {
      return `${number1 - number2}`;
    } else if (operation === "multiply") {
      return `${number1 * number2}`;
    } else if (operation === "divide") {
      return `${number1 / number2}`;
    } else {
      throw new Error("Invalid operation.");
    }
  },
  {
    name: "calculator",
    description: "Can perform mathematical operations.",
    schema: calculatorSchema,
  }
);

console.log(
  await calculatorTool.invoke({ operation: "add", number1: 3, number2: 4 })
);
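Invoking the tool directly like this prints 7, confirming the tool works before we hand it to the model.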

Bind tools to the model

Now the model can generate tool call responses.

const modelWithTools = model.bindTools([calculatorTool]);

const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];

const aiMessage = await modelWithTools.invoke(messages);

console.log(aiMessage);
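The exact result varies by model, but aiMessage.tool_calls should contain entries shaped roughly like this (ids are illustrative):

// [
//   { name: "calculator", args: { operation: "multiply", number1: 3, number2: 12 }, id: "..." },
//   { name: "calculator", args: { operation: "add", number1: 11, number2: 49 }, id: "..." },
// ]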

Generate a tool-assisted message

Feed the tool call results back to the model so it can generate a final answer.

messages.push(aiMessage);

const toolsByName = {
  calculator: calculatorTool,
};

for (const toolCall of aiMessage.tool_calls) {
  const selectedTool = toolsByName[toolCall.name];
  // Invoking a tool with a full ToolCall object (rather than plain arguments)
  // returns a ToolMessage ready to append to the conversation.
  const toolMessage = await selectedTool.invoke(toolCall);
  messages.push(toolMessage);
}

console.log(await modelWithTools.invoke(messages));
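The final response should now incorporate the tool results, answering something like "3 * 12 is 36, and 11 + 49 is 60."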

For more information on how to use tools, check out the LangChain documentation.