Starting With NPM

Overview

Welcome to the quick start guide for using the SVECTOR-SDK with npm!


Step 1: Install SVECTOR-SDK

To get started, install the SVECTOR-SDK from the npm registry by running the following command:

npm install svector-sdk

Step 2: Get Your API Key

To use the SVECTOR-SDK, you need an API key. You can find your API key in the Dashboard in Spec Chat. Copy it, as you will need it to authenticate your requests.

Key location: Dashboard > Spec Chat > API Key.
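
Once you have the key, store it in an environment variable rather than hard-coding it in your source. For example, in a Unix-like shell (the variable name SVECTOR_API_KEY is the one the client reads in Step 3; the key value here is a placeholder):

export SVECTOR_API_KEY="your-api-key-here"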


Step 3: Configure Your Connection

To configure your connection, set the API key when constructing the SVECTOR-SDK client. Here's how:

import { SVECTOR } from "svector-sdk";

// Read the API key from the environment rather than hard-coding it.
const client = new SVECTOR({
  apiKey: process.env.SVECTOR_API_KEY,
});

// Request a streaming chat completion so tokens arrive as they are generated.
const stream = await client.chat.completions.create({
  model: "spec-3-turbo",
  messages: [
    {
      role: "user",
      content: "Tell me about quantum computing.",
    },
  ],
  stream: true,
});

// Print each chunk of the response as it streams in.
for await (const chunk of stream) {
  console.log(chunk.choices[0]?.delta?.content || "");
}

This code initializes the SVECTOR client with your API key and streams a simple chat completion. Set the SVECTOR_API_KEY environment variable before running it (see Step 2), or pass the key to the constructor directly.
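
If you don't need token-by-token output, you can also make a standard (non-streaming) request. The following is a minimal sketch, reusing the client from above and assuming the SDK mirrors the response shape implied by the streaming example (choices[0].message.content); check the SDK reference for the exact return type:

// Non-streaming request: create() resolves once the full response is ready.
// Assumes the same response shape implied by the streaming example above.
const response = await client.chat.completions.create({
  model: "spec-3-turbo",
  messages: [
    {
      role: "user",
      content: "Summarize quantum computing in one paragraph.",
    },
  ],
});

console.log(response.choices[0]?.message?.content);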


Step 4: Models

SVECTOR has developed a comprehensive suite of state-of-the-art artificial intelligence models, each optimized for specific use cases and computational requirements. Our model family combines cutting-edge research in natural language processing, machine learning, and computational intelligence to deliver enterprise-grade AI solutions.

Model Specifications and Capabilities

spec-3-turbo - High-Performance General Purpose Model

  • Primary Use Case: Production applications requiring fast response times
  • Optimizations: Streamlined architecture for reduced latency while maintaining quality
  • Best For: Real-time chat applications, API integrations, customer service automation
  • Performance: Sub-second response times with excellent accuracy
  • Context Window: Up to 1M tokens for extensive document processing

spec-3 - Balanced Performance and Quality Model

  • Primary Use Case: Applications requiring optimal balance between speed and sophistication
  • Optimizations: Enhanced reasoning capabilities with moderate computational overhead
  • Best For: Content creation, analysis tasks, educational applications
  • Performance: Superior quality outputs with reasonable processing times
  • Context Window: Up to 1M tokens with advanced context retention

theta-35 - Advanced Reasoning and Analysis Model

  • Primary Use Case: Complex problem-solving and deep analytical tasks
  • Optimizations: Maximum reasoning capabilities and nuanced understanding
  • Best For: Research analysis, strategic planning, complex document interpretation
  • Performance: Highest quality outputs for demanding intellectual tasks
  • Context Window: Up to 40k tokens of context, suited to extended reasoning tasks

theta-35-mini - Efficient Lightweight Model

  • Primary Use Case: Lightweight applications with reasoning capabilities
  • Optimizations: Minimal computational requirements while maintaining core functionality
  • Best For: Basic Q&A, simple content generation, embedded applications
  • Performance: Fast execution with lower resource consumption
  • Context Window: Up to 40k tokens, optimized for faster reasoning tasks

spec-2-mini - Super Fast Responses Model

  • Primary Use Case: Applications requiring extremely fast responses with basic quality
  • Optimizations: High-speed processing with minimal computational overhead
  • Best For: Simple chatbots, quick information retrieval, low-latency applications
  • Performance: Sub-second response times with basic quality outputs
  • Context Window: Up to 32k tokens for rapid context handling

Model Selection Guidelines

When choosing a model, match it to your application's needs. The snippets below show the model value you would pass in the request options:

// Performance-critical applications
model: "spec-3-turbo" // Optimized for speed

// Balanced applications requiring quality and performance
model: "spec-3" // Best overall choice for most use cases

// Complex analytical tasks requiring deep reasoning
model: "theta-35" // Advanced reasoning capabilities

// Resource-constrained or simple applications with reasoning
model: "theta-35-mini" // Efficient processing for faster reasoning tasks

// Extremely low-latency applications with basic quality needs
model: "spec-2-mini" // Super fast responses

Model Availability

SVECTOR provides a wide range of models, each tailored for specific tasks and performance requirements. You can view the complete list of available models in the Models section of the SVECTOR documentation.


All Set!

Step 5: Explore More

To learn more about the SVECTOR-SDK, its capabilities, and code examples, check out the following resources: