
Starting with JSR

Overview

The SVECTOR SDK is officially available on the JavaScript Registry (JSR), providing seamless integration for TypeScript and JavaScript applications across all modern JavaScript runtimes. This guide covers installation, configuration, and advanced usage patterns for production applications.

What you'll learn:

  • Installing SVECTOR SDK across different package managers and runtimes
  • Quick start examples for Deno, Node.js, Bun, and browser environments
  • Advanced features including streaming, document processing, and error handling
  • Production-ready implementation patterns and best practices
  • Complete API reference and troubleshooting guidance

Installation Guide

Deno

Deno provides native JSR support with zero configuration:

// Direct import - no installation needed
import { SVECTOR } from "jsr:@svector/svector";

Or use the deno add command for better dependency management:

deno add jsr:@svector/svector

Node.js

npx jsr add @svector/svector

Alternative: npm with official package

npm install svector-sdk

pnpm

pnpm add jsr:@svector/svector

Yarn

Yarn Modern (v2+)

yarn add jsr:@svector/svector

Yarn Classic (v1.x)

Yarn Classic has no dlx command; run the JSR installer with npx instead:

npx jsr add @svector/svector

vlt

vlt install jsr:@svector/svector

Bun

Using JSR integration

bunx jsr add @svector/svector

Alternative: Bun native

bun add jsr:@svector/svector

Quick Start Examples

Deno Implementation

#!/usr/bin/env -S deno run --allow-env --allow-net

import { SVECTOR } from "jsr:@svector/svector";

const client = new SVECTOR({
  apiKey: Deno.env.get("SVECTOR_API_KEY"),
});

// Simple conversation
const response = await client.conversations.create({
  model: "spec-3-turbo",
  instructions: "You are a helpful AI assistant specializing in software development.",
  input: "Explain the benefits of using TypeScript in modern web development.",
  temperature: 0.7,
});

console.log("AI Response:", response.output);

Node.js Implementation

import { SVECTOR } from "@svector/svector";
import dotenv from 'dotenv';

dotenv.config();

const client = new SVECTOR({
  apiKey: process.env.SVECTOR_API_KEY,
  timeout: 30000,
  maxRetries: 3,
});

async function main() {
  try {
    const response = await client.conversations.create({
      model: "spec-3-turbo",
      instructions: "You are an expert software architect.",
      input: "Design a scalable microservices architecture for an e-commerce platform.",
      temperature: 0.8,
    });

    console.log("Architecture Recommendation:", response.output);
  } catch (error) {
    console.error("Error:", error.message);
  }
}

main();

Browser Implementation (Available Soon)

<!DOCTYPE html>
<html>
<head>
  <title>SVECTOR Browser Integration</title>
</head>
<body>
  <script type="module">
    import { SVECTOR } from "https://esm.sh/@svector/svector";

    const client = new SVECTOR({
      apiKey: "your-api-key-here",
      dangerouslyAllowBrowser: true, // Enable browser usage
    });

    async function askAI() {
      const response = await client.conversations.create({
        model: "spec-3-turbo",
        instructions: "You are a helpful web development assistant.",
        input: "What are the latest trends in frontend development?",
      });

      document.getElementById('response').textContent = response.output;
    }

    // Expose function globally
    window.askAI = askAI;
  </script>

  <button onclick="askAI()">Ask AI</button>
  <div id="response"></div>
</body>
</html>

Core Features & Advanced Usage

1. Conversations API

The Conversations API provides a user-friendly interface with instructions and input:

// Basic conversation
const response = await client.conversations.create({
  model: "spec-3-turbo",
  instructions: "You are a senior software engineer with expertise in distributed systems.",
  input: "How would you design a real-time messaging system that can handle millions of users?",
  temperature: 0.7,
});

console.log(response.output);

With Context History

const response = await client.conversations.create({
  model: "spec-3-turbo",
  instructions: "You are a technical mentor providing step-by-step guidance.",
  input: "Can you show me a practical implementation example?",
  context: [
    "How do I implement JWT authentication in a Node.js application?",
    "JWT authentication involves creating signed tokens that contain user information..."
  ],
  temperature: 0.6,
});

2. Real-time Streaming

Streaming Conversations

async function streamingConversation(query: string) {
  const stream = await client.conversations.createStream({
    model: "spec-3-turbo",
    instructions: "You are an expert code reviewer providing detailed feedback.",
    input: query,
    stream: true,
    temperature: 0.8,
  });

  console.log("Streaming response:");
  for await (const event of stream) {
    if (!event.done) {
      process.stdout.write(event.content);
    }
  }
  console.log("\nStream completed");
}

await streamingConversation("Review this React component for performance optimizations");

3. Advanced Chat Completions

For applications requiring full control over conversation flow:

const response = await client.chat.create({
  model: "spec-3-turbo",
  messages: [
    {
      role: "system",
      content: "You are an expert DevOps engineer specializing in Kubernetes and cloud infrastructure."
    },
    {
      role: "user",
      content: "Design a CI/CD pipeline for a microservices application using GitOps principles."
    }
  ],
  temperature: 0.7,
  max_tokens: 2000,
});

console.log(response.choices[0].message.content);

Streaming Chat Completions

async function streamingChat() {
  const stream = await client.chat.createStream({
    model: "spec-3-turbo",
    messages: [
      { role: "system", content: "You are a helpful programming assistant." },
      { role: "user", content: "Explain the differences between SQL and NoSQL databases with practical examples." }
    ],
    stream: true,
    temperature: 0.6,
  });

  for await (const event of stream) {
    if (event.choices?.[0]?.delta?.content) {
      process.stdout.write(event.choices[0].delta.content);
    }
  }
}

4. Document Processing & Analysis

Basic Document Analysis

async function analyzeDocument(filePath: string, analysisQuery = "Analyze this document and provide key insights.") {
  try {
    // Read file content (Deno example)
    const fileContent = await Deno.readFile(filePath);

    // Upload to SVECTOR
    const fileResponse = await client.files.create(
      fileContent,
      'default',
      filePath.split('/').pop()
    );

    // Analyze with AI
    const analysis = await client.conversations.create({
      model: 'spec-3-turbo',
      instructions: 'You are a professional document analyst. Provide comprehensive, structured analysis with key findings, recommendations, and actionable insights.',
      input: `${analysisQuery}\n\nDocument content:\n${fileResponse.data.content}`,
      temperature: 0.3,
    });

    return {
      filename: filePath.split('/').pop(),
      analysis: analysis.output,
      fileId: fileResponse.data.id
    };
  } catch (error) {
    throw new Error(`Document analysis failed: ${error.message}`);
  }
}

// Usage
const result = await analyzeDocument('./reports/quarterly-report.pdf', 'Summarize the financial performance and identify key risks.');
console.log(result);

Multi-Document Comparative Analysis

async function compareDocuments(filePaths: string[], comparisonQuery: string) {
  const documentContents = [];

  console.log(`📄 Processing ${filePaths.length} documents...`);

  for (const filePath of filePaths) {
    try {
      const fileContent = await Deno.readFile(filePath);
      const fileResponse = await client.files.create(
        fileContent,
        'default',
        filePath.split('/').pop()
      );

      documentContents.push({
        filename: filePath.split('/').pop(),
        content: fileResponse.data.content,
        fileId: fileResponse.data.id
      });

      console.log(`Processed: ${filePath.split('/').pop()}`);
    } catch (error) {
      console.error(`❌ Failed to process ${filePath}: ${error.message}`);
    }
  }

  // Prepare comparative analysis prompt
  const documentsText = documentContents.map(doc =>
    `Document: ${doc.filename}\n${doc.content}`
  ).join('\n\n---\n\n');

  const analysis = await client.conversations.create({
    model: 'theta-35', // Use advanced model for complex analysis
    instructions: 'You are an expert document analyst specializing in comparative analysis. Provide detailed comparisons, identify patterns, discrepancies, and provide actionable recommendations.',
    input: `${comparisonQuery}\n\nDocuments to analyze:\n${documentsText}`,
    temperature: 0.2,
  });

  return {
    documentCount: documentContents.length,
    processedFiles: documentContents.map(doc => doc.filename),
    comparativeAnalysis: analysis.output
  };
}

// Usage
const comparison = await compareDocuments(
  ['./contracts/contract-2023.pdf', './contracts/contract-2024.pdf'],
  'Compare these contracts and highlight key differences in terms, pricing, and obligations.'
);

Available Models & Selection Guide

Model Specifications

| Model         | Use Case                               | Performance                   | Context Length |
|---------------|----------------------------------------|-------------------------------|----------------|
| spec-3-turbo  | General purpose, fast responses        | High speed, good quality      | 1M tokens      |
| spec-3        | Best performance and quality           | Standard speed, high quality  | 1M tokens      |
| theta-35      | Complex reasoning, analysis            | Slower, highest quality       | 40K tokens     |
| theta-35-mini | Cost-effective, reasoning capabilities | Fast, reasoning model         | 40K tokens     |
| spec-2-mini   | Super fast responses                   | High speed, basic quality     | 32K tokens     |

// List all available models
async function listModels() {
  const models = await client.models.list();
  console.log("Available models:", models.models);

  models.models.forEach(model => {
    console.log(`- ${model.id}: ${model.description || 'No description available'}`);
  });
}

await listModels();

Model Selection Strategy

function selectOptimalModel(taskType: string, complexity: 'low' | 'medium' | 'high') {
  const modelMap = {
    'conversation': {
      'low': 'theta-35-mini',
      'medium': 'spec-3-turbo',
      'high': 'spec-3'
    },
    'analysis': {
      'low': 'spec-3-turbo',
      'medium': 'spec-3',
      'high': 'theta-35'
    },
    'coding': {
      'low': 'spec-3-turbo',
      'medium': 'spec-3',
      'high': 'theta-35'
    }
  };

  return modelMap[taskType]?.[complexity] || 'spec-3-turbo';
}

// Usage
const model = selectOptimalModel('analysis', 'high');
console.log(`Selected model: ${model}`);

Error Handling & Resilience

Comprehensive Error Handling

import {
  SVECTOR,
  AuthenticationError,
  RateLimitError,
  APIConnectionError,
  UnprocessableEntityError,
  NotFoundError
} from "jsr:@svector/svector";

async function robustAPICall(input: string, maxRetries = 3) {
  let attempt = 0;

  while (attempt < maxRetries) {
    try {
      const response = await client.conversations.create({
        model: "spec-3-turbo",
        instructions: "You are a helpful assistant.",
        input: input,
      });

      return response;

    } catch (error) {
      attempt++;

      if (error instanceof AuthenticationError) {
        throw new Error("Authentication failed: Please check your API key");
      }

      if (error instanceof RateLimitError) {
        const waitTime = Math.pow(2, attempt) * 1000; // Exponential backoff
        console.log(`Rate limit hit. Waiting ${waitTime}ms before retry ${attempt}/${maxRetries}`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      if (error instanceof APIConnectionError) {
        console.log(`Connection error on attempt ${attempt}/${maxRetries}. Retrying...`);
        if (attempt < maxRetries) {
          await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
          continue;
        }
      }

      if (error instanceof UnprocessableEntityError) {
        throw new Error(`Invalid request: ${error.message}`);
      }

      if (error instanceof NotFoundError) {
        throw new Error(`Resource not found: ${error.message}`);
      }

      // For other errors, retry with exponential backoff
      if (attempt < maxRetries) {
        const waitTime = Math.pow(2, attempt) * 1000;
        console.log(`Error on attempt ${attempt}/${maxRetries}. Retrying in ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      throw error;
    }
  }
}

// Usage with error handling
try {
  const result = await robustAPICall("Explain quantum computing in simple terms.");
  console.log("Success:", result.output);
} catch (error) {
  console.error("Final error after retries:", error.message);
}

Configuration & Environment Setup

Production Configuration

# .env file
SVECTOR_API_KEY=your_api_key_here
SVECTOR_BASE_URL=https://spec-chat.tech
SVECTOR_TIMEOUT=30000
SVECTOR_MAX_RETRIES=3

// Configuration setup
const client = new SVECTOR({
  apiKey: process.env.SVECTOR_API_KEY,
  baseURL: process.env.SVECTOR_BASE_URL || "https://spec-chat.tech",
  timeout: parseInt(process.env.SVECTOR_TIMEOUT || "30000"),
  maxRetries: parseInt(process.env.SVECTOR_MAX_RETRIES || "3"),
  dangerouslyAllowBrowser: false, // Never enable in production for security
});

Development vs Production Setup

// config/svector.ts
export function createSVECTORClient(environment: 'development' | 'production') {
  const config = {
    development: {
      apiKey: process.env.SVECTOR_DEV_API_KEY,
      baseURL: "https://dev-api.spec-chat.tech",
      timeout: 10000,
      maxRetries: 1,
      dangerouslyAllowBrowser: true,
    },
    production: {
      apiKey: process.env.SVECTOR_API_KEY,
      baseURL: "https://spec-chat.tech",
      timeout: 30000,
      maxRetries: 3,
      dangerouslyAllowBrowser: false,
    }
  };

  return new SVECTOR(config[environment]);
}

Complete Production Example

Enterprise-Grade Implementation

#!/usr/bin/env -S deno run --allow-env --allow-net --allow-read --allow-write

import { SVECTOR, AuthenticationError, RateLimitError } from "jsr:@svector/svector";

class SVECTORService {
  private client: SVECTOR;
  private rateLimitDelay = 1000;

  constructor(apiKey: string) {
    this.client = new SVECTOR({
      apiKey,
      timeout: 30000,
      maxRetries: 3,
    });
  }

  async processQuery(query: string, options: {
    model?: string;
    temperature?: number;
    stream?: boolean;
    instructions?: string;
  } = {}) {
    const {
      model = "spec-3-turbo",
      temperature = 0.7,
      stream = false,
      instructions = "You are a helpful AI assistant."
    } = options;

    try {
      if (stream) {
        return await this.handleStreamingResponse(query, model, instructions, temperature);
      } else {
        return await this.handleStandardResponse(query, model, instructions, temperature);
      }
    } catch (error) {
      return this.handleError(error);
    }
  }

  private async handleStreamingResponse(query: string, model: string, instructions: string, temperature: number) {
    const stream = await this.client.conversations.createStream({
      model,
      instructions,
      input: query,
      temperature,
      stream: true,
    });

    let fullResponse = "";
    const chunks: string[] = [];

    for await (const event of stream) {
      if (!event.done && event.content) {
        chunks.push(event.content);
        fullResponse += event.content;
      }
    }

    return {
      type: 'streaming',
      response: fullResponse,
      chunks,
      model,
      timestamp: new Date().toISOString()
    };
  }

  private async handleStandardResponse(query: string, model: string, instructions: string, temperature: number) {
    const response = await this.client.conversations.create({
      model,
      instructions,
      input: query,
      temperature,
    });

    return {
      type: 'standard',
      response: response.output,
      model,
      timestamp: new Date().toISOString()
    };
  }

  private handleError(error: any) {
    if (error instanceof AuthenticationError) {
      return {
        error: 'Authentication failed',
        message: 'Please verify your API key',
        type: 'auth_error'
      };
    }

    if (error instanceof RateLimitError) {
      return {
        error: 'Rate limit exceeded',
        message: 'Please wait before making another request',
        type: 'rate_limit_error',
        retryAfter: this.rateLimitDelay
      };
    }

    return {
      error: 'API Error',
      message: error.message || 'Unknown error occurred',
      type: 'general_error'
    };
  }

  async analyzeDocument(filePath: string, analysisType = 'general') {
    try {
      const fileContent = await Deno.readFile(filePath);
      const fileName = filePath.split('/').pop();

      const fileResponse = await this.client.files.create(
        fileContent,
        'default',
        fileName
      );

      const analysisPrompts = {
        'general': 'Provide a comprehensive analysis of this document.',
        'summary': 'Summarize the key points and main findings.',
        'technical': 'Perform a technical analysis focusing on implementation details.',
        'business': 'Analyze from a business perspective including opportunities and risks.'
      };

      const result = await this.client.conversations.create({
        model: 'theta-35',
        instructions: 'You are an expert document analyst.',
        input: `${analysisPrompts[analysisType] || analysisPrompts.general}\n\nDocument: ${fileResponse.data.content}`,
        temperature: 0.3,
      });

      return {
        filename: fileName,
        analysisType,
        analysis: result.output,
        fileId: fileResponse.data.id,
        timestamp: new Date().toISOString()
      };

    } catch (error) {
      throw new Error(`Document analysis failed: ${error.message}`);
    }
  }
}

// Usage example
async function main() {
  const apiKey = Deno.env.get("SVECTOR_API_KEY");
  if (!apiKey) {
    console.error("❌ SVECTOR_API_KEY environment variable is required");
    Deno.exit(1);
  }

  const svectorService = new SVECTORService(apiKey);

  // Standard conversation
  console.log("🤖 Processing standard query...");
  const standardResult = await svectorService.processQuery(
    "Explain the benefits of microservices architecture and potential challenges.",
    {
      model: "spec-3-turbo",
      temperature: 0.8,
      instructions: "You are a senior software architect."
    }
  );
  console.log("Response:", standardResult);

  // Streaming conversation
  console.log("\n🌊 Processing streaming query...");
  const streamResult = await svectorService.processQuery(
    "Design a scalable real-time notification system.",
    {
      stream: true,
      model: "theta-35",
      instructions: "You are a system design expert."
    }
  );
  console.log("Streaming result:", streamResult);

  // Document analysis (if file exists)
  try {
    console.log("\n📄 Analyzing document...");
    const docResult = await svectorService.analyzeDocument("./sample-document.pdf", "technical");
    console.log("Document analysis:", docResult);
  } catch (error) {
    console.log("No document to analyze, skipping...");
  }
}

if (import.meta.main) {
  main().catch(console.error);
}

API Reference

SVECTOR Client

interface SVECTOROptions {
  apiKey: string;
  baseURL?: string;
  timeout?: number;
  maxRetries?: number;
  dangerouslyAllowBrowser?: boolean;
}

class SVECTOR {
  constructor(options: SVECTOROptions);

  // API endpoints
  conversations: ConversationsAPI;
  chat: ChatAPI;
  models: ModelsAPI;
  files: FilesAPI;
  knowledge: KnowledgeAPI;
}

Core APIs

Conversations API

interface ConversationsAPI {
  create(params: ConversationCreateParams): Promise<ConversationResponse>;
  createStream(params: ConversationStreamParams): Promise<AsyncIterable<StreamEvent>>;
}

Chat Completions API

interface ChatAPI {
  create(params: ChatCompletionCreateParams): Promise<ChatCompletionResponse>;
  createStream(params: ChatCompletionStreamParams): Promise<AsyncIterable<ChatCompletionStreamEvent>>;
}

Error Types

// Base error classes
class SVECTORError extends Error {}
class APIError extends SVECTORError {}
class AuthenticationError extends APIError {}
class RateLimitError extends APIError {}
class NotFoundError extends APIError {}
class UnprocessableEntityError extends APIError {}
class APIConnectionError extends SVECTORError {}
class APIConnectionTimeoutError extends APIConnectionError {}

Troubleshooting

Common Issues & Solutions

1. Authentication Errors

// Problem: AuthenticationError
// Solution: Verify API key
const isValidKey = /^sv-[a-zA-Z0-9]{48}$/.test(apiKey);
if (!isValidKey) {
  throw new Error("Invalid API key format");
}

2. Rate Limiting

// Problem: RateLimitError
// Solution: Implement exponential backoff
async function withBackoff<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let attempt = 0;
  while (attempt < maxRetries) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof RateLimitError && attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000;
        await new Promise(resolve => setTimeout(resolve, delay));
        attempt++;
      } else {
        throw error;
      }
    }
  }
  throw new Error("Max retries exceeded");
}

3. Large Document Processing

// Problem: Document too large
// Solution: Chunk processing
async function processLargeDocument(content: string, chunkSize = 4000) {
  const chunks = [];
  for (let i = 0; i < content.length; i += chunkSize) {
    chunks.push(content.slice(i, i + chunkSize));
  }

  const results = [];
  for (const chunk of chunks) {
    const result = await client.conversations.create({
      model: "spec-3-turbo",
      instructions: "Analyze this document chunk.",
      input: chunk,
    });
    results.push(result.output);
  }

  return results;
}

Best Practices

1. Security

  • Never expose API keys in client-side code
  • Use environment variables for sensitive configuration
  • Implement proper error handling to avoid information leakage
  • Set dangerouslyAllowBrowser: false in production
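The first two points are usually combined by putting a thin proxy between the browser and SVECTOR, so the key only ever lives server-side. The sketch below is illustrative, not part of the SDK: the payload shape and the helper name are assumptions, and the server route that uses it is up to you.

```typescript
// Hypothetical proxy helper: the browser posts { input } to your own server
// route, and the server builds the authenticated outbound request. The key
// never appears in client-side code.

interface AskPayload {
  input: string;
  model?: string;
}

// Build fetch options for the upstream call; the API key stays on the server.
export function buildProxyRequest(payload: AskPayload, apiKey: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: payload.model ?? "spec-3-turbo",
      input: payload.input,
    }),
  };
}
```

A server route would read `SVECTOR_API_KEY` from the environment, call `buildProxyRequest`, and forward the response body back to the browser.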

2. Performance

  • Use appropriate models for your use case
  • Implement caching for repeated queries
  • Use streaming for long responses
  • Implement request queuing for high-volume applications

3. Reliability

  • Implement exponential backoff for retries
  • Handle rate limiting gracefully
  • Monitor API usage and costs
  • Set appropriate timeouts
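The client-level `timeout` option shown earlier is the first line of defense; when a single call needs a tighter budget than the client default, a small generic wrapper (a sketch, not an SDK utility) can enforce it per call:

```typescript
// Reject if the wrapped promise does not settle within `ms` milliseconds.
// Works around any async call, including client.conversations.create(...).

export function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

Usage: `await withTimeout(client.conversations.create({ ... }), 5000)` fails fast after 5 seconds, letting the retry logic above take over.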