API Summary — v0.9.1
One-page reference of all public functions and objects exported by @qvac/sdk
Auto-generated from .d.ts declarations and TSDoc comments. For per-parameter and per-field details, hover symbols in your IDE or open node_modules/@qvac/sdk/dist. This page is intentionally a high-level index.
Fields shown: description, signature, throws, examples, deprecation. Fields intentionally omitted: parameter descriptions, return field descriptions (covered by IDE hover and .d.ts declarations).
Scope: 33 functions in packages/sdk/dist/client/api/ plus the profiler object.
Functions
cancel
Cancels an ongoing operation.
Signature:
declare function cancel(params: CancelParams): Promise<void>;
Throws:
QvacErrorBase - When the response type is invalid or when the cancellation fails
Examples:
// Cancel inference
await cancel({ operation: "inference", modelId: "model-123" });
// Pause download (preserves partial file for automatic resume)
await cancel({ operation: "downloadAsset", downloadKey: "download-key" });
// Cancel download completely (deletes partial file)
await cancel({
operation: "downloadAsset",
downloadKey: "download-key",
clearCache: true,
});
// Cancel delegated remote download
await cancel({
operation: "downloadAsset",
downloadKey: "download-key",
delegate: { topic: "topicHex", providerPublicKey: "peerHex" },
});
// Cancel RAG operation on default workspace
await cancel({ operation: "rag" });
// Cancel RAG operation on specific workspace
await cancel({ operation: "rag", workspace: "my-workspace" });
completion
Generates a completion from a language model based on conversation history.
Returns a CompletionRun whose canonical surfaces are:
events — AsyncIterable<CompletionEvent> of ordered, typed events.
final — Promise<CompletionFinal> with aggregated results once the stream ends (content, thinking, tool calls, stats, raw text).
Legacy convenience fields (tokenStream, text, toolCallStream, toolCalls, stats) are still available but deprecated — they derive from events / final internally.
Signature:
declare function completion(params: CompletionParams): CompletionRun;
Example:
import { z } from "zod";
const run = completion({
modelId: "llama-2",
history: [
{ role: "user", content: "What's the weather in Tokyo?" }
],
stream: true,
captureThinking: true,
tools: [{
name: "get_weather",
description: "Get current weather",
parameters: z.object({
city: z.string().describe("City name"),
}),
handler: async (args) => {
return { temperature: 22, condition: "sunny" };
}
}]
});
for await (const event of run.events) {
if (event.type === "contentDelta") process.stdout.write(event.text);
if (event.type === "toolCall") console.log(event.call.name, event.call.arguments);
}
const result = await run.final;
for (const toolCall of await result.toolCalls) {
if (toolCall.invoke) {
const toolResult = await toolCall.invoke();
console.log(toolResult);
}
}
deleteCache
Deletes KV cache files.
Signature:
declare function deleteCache(
params: { all: true } | { kvCacheKey: string; modelId?: string },
): Promise<{ success: boolean }>;
Throws:
QvacErrorBase - When the cache parameters are invalid (InvalidDeleteCacheParamsError) or the server reports a delete failure (DeleteCacheFailedError)
Example:
// Delete all caches
await deleteCache({ all: true });
// Delete entire cache key (all models)
await deleteCache({ kvCacheKey: "my-session" });
// Delete only specific model within cache key
await deleteCache({ kvCacheKey: "my-session", modelId: "model-abc123" });
diffusion
Generates images using a loaded diffusion model.
Supports both txt2img (no init_image) and img2img (with init_image).
Signature:
declare function diffusion(params: DiffusionClientParams): {
progressStream: AsyncGenerator<DiffusionProgressTick>;
outputs: Promise<Uint8Array[]>;
stats: Promise<DiffusionStats | undefined>;
};
Example:
// txt2img
const { outputs, stats } = diffusion({ modelId, prompt: "a cat" });
const buffers = await outputs;
fs.writeFileSync("output.png", buffers[0]);
// img2img (SD/SDXL — SDEdit)
const initImage = fs.readFileSync("input.png");
const { outputs } = diffusion({
modelId,
prompt: "oil painting style",
init_image: initImage,
strength: 0.7,
});
// img2img (FLUX.2 — in-context conditioning)
// IMPORTANT: FLUX img2img requires `prediction: "flux2_flow"` to be set on the
// model config at loadModel time (e.g. `loadModel(src, { modelType: "diffusion",
// modelConfig: { prediction: "flux2_flow" } })`).
const { outputs } = diffusion({
modelId,
prompt: "turn into watercolor",
init_image: initImage,
});
// With progress tracking
const { progressStream, outputs } = diffusion({ modelId, prompt: "a cat" });
for await (const { step, totalSteps } of progressStream) {
console.log(`${step}/${totalSteps}`);
}
const buffers = await outputs;
downloadAsset
Downloads an asset (model file) without loading it into memory.
This function is designed specifically for downloads and doesn't accept runtime configuration options like modelConfig or delegate. Prefer it over loadModel when you only need the asset on disk, for clearer intent.
Signature:
declare function downloadAsset(
options: DownloadAssetOptions,
rpcOptions?: RPCOptions,
): Promise<string>;
Throws:
QvacErrorBase - When asset download fails, with details in the error message
QvacErrorBase - When streaming ends unexpectedly (only when using onProgress)
QvacErrorBase - When receiving an invalid response type from the server
Example:
// Download model without loading
const assetId = await downloadAsset({
assetSrc: "/path/to/model.gguf",
seed: true,
});
// Download with progress tracking
const assetId = await downloadAsset({
assetSrc: "pear://key123/model.gguf",
onProgress: (progress) => {
console.log(`Downloaded: ${progress.percentage}%`);
},
});
embed
Has 2 overloads.
Overload 1 — Single text
Generates embeddings for a single text using a specified model.
Signature:
declare function embed(
params: { modelId: string; text: string },
options?: RPCOptions,
): Promise<{ embedding: number[]; stats?: EmbedStats }>;
Throws:
QvacErrorBase - When the response type is invalid or when the embedding fails
Overload 2 — Multiple texts
Generates embeddings for multiple texts using a specified model.
Signature:
declare function embed(
params: { modelId: string; text: string[] },
options?: RPCOptions,
): Promise<{ embedding: number[][]; stats?: EmbedStats }>;
Throws:
QvacErrorBase - When the response type is invalid or when the embedding fails
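Neither overload has an example above; a minimal sketch based on the signatures, where "embed-model" is a placeholder for a real loaded embeddings model id:

```typescript
import { embed } from "@qvac/sdk";

// Single text - one vector ("embed-model" is a placeholder model id)
const { embedding } = await embed({ modelId: "embed-model", text: "hello world" });
console.log(embedding.length); // vector dimensionality

// Multiple texts - one vector per input, in input order
const { embedding: vectors } = await embed({
  modelId: "embed-model",
  text: ["first document", "second document"],
});
console.log(vectors.length);
```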
finetune
Has 2 overloads. Starts, resumes, inspects, pauses, or cancels a finetuning job for a loaded model.
Overload 1 — Run / start / resume
Returns a handle with a progressStream generator and a terminal result promise.
Signature:
declare function finetune(
params: FinetuneRunParams,
rpcOptions?: RPCOptions,
): FinetuneHandle;
Example:
const handle = finetune({
modelId,
options: {
trainDatasetDir: "./dataset/train",
validation: { type: "split", fraction: 0.05 },
outputParametersDir: "./artifacts/lora",
numberOfEpochs: 2,
},
});
for await (const progress of handle.progressStream) {
console.log(progress.global_steps, progress.loss);
}
console.log(await handle.result);
Overload 2 — Stop / getState / pause / cancel
Returns a promise that resolves to the current finetune state/result.
Signature:
declare function finetune(
params: FinetuneStopParams | FinetuneGetStateParams,
rpcOptions?: RPCOptions,
): Promise<FinetuneResult>;
Example:
const pauseResult = await finetune({ modelId, operation: "pause" });
console.log(pauseResult.status);
getModelInfo
Returns status information for a catalog model, including cache state and loaded instances.
Signature:
declare function getModelInfo(params: GetModelInfoParams): Promise<{
name: string;
modelId: string;
expectedSize: number;
sha256Checksum: string;
addon:
| "embeddings"
| "llm"
| "whisper"
| "nmt"
| "parakeet"
| "tts"
| "ocr"
| "diffusion"
| "vad"
| "other";
isCached: boolean;
isLoaded: boolean;
cacheFiles: {
filename: string;
path: string;
expectedSize: number;
sha256Checksum: string;
isCached: boolean;
actualSize?: number;
cachedAt?: Date;
}[];
registryPath?: string;
registrySource?: string;
blobCoreKey?: string;
blobBlockOffset?: number;
blobBlockLength?: number;
blobByteOffset?: number;
engine?: string;
quantization?: string;
params?: string;
actualSize?: number;
cachedAt?: Date;
loadedInstances?: {
registryId: string;
loadedAt: Date;
config?: unknown;
}[];
}>;
Throws:
QvacErrorBase - When the response type is invalid (InvalidResponseError) or the RPC layer fails
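No example is shown above, so here is a hedged sketch against the return shape; the { name } parameter shape is an assumption, since GetModelInfoParams isn't expanded on this page:

```typescript
import { getModelInfo } from "@qvac/sdk";

// NOTE: the `{ name: ... }` parameter shape is an assumption - check GetModelInfoParams in your IDE.
const info = await getModelInfo({ name: "whisper-base" });
if (!info.isCached) {
  console.log(`Download needed: ${info.expectedSize} bytes`);
}
// Per-file cache state
for (const file of info.cacheFiles) {
  console.log(file.filename, file.isCached ? "cached" : "missing");
}
console.log(`Loaded instances: ${info.loadedInstances?.length ?? 0}`);
```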
heartbeat
Checks if a delegated provider is online by sending a heartbeat round-trip. Can also be used to check if the local SDK worker is responsive.
Signature:
declare function heartbeat(
params?: { delegate?: DelegateBase },
): Promise<HeartbeatResponse>;
Throws:
QvacErrorBase - When the provider is unreachable or the response is invalid
Examples:
// Check if a delegated provider is online
try {
await heartbeat({
delegate: { topic: "topicHex", providerPublicKey: "peerHex", timeout: 3000 },
});
console.log("Provider is online");
} catch {
console.log("Provider is offline");
}
// Check if the local SDK worker is responsive
await heartbeat();
invokePlugin
Invoke a non-streaming plugin handler.
Signature:
declare function invokePlugin<TResponse = unknown, TParams = unknown>(
options: InvokePluginOptions<TParams>,
rpcOptions?: RPCOptions,
): Promise<TResponse>;
Throws:
QvacErrorBase - When the response type is invalid (InvalidResponseError) or the RPC layer fails
invokePluginStream
Invoke a streaming plugin handler.
Signature:
declare function invokePluginStream<TResponse = unknown, TParams = unknown>(
options: InvokePluginOptions<TParams>,
rpcOptions?: RPCOptions,
): AsyncGenerator<TResponse>;
Throws:
QvacErrorBase - When an intermediate response has the wrong type (InvalidResponseError) or the RPC layer fails
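Neither plugin entry point has an example. A sketch under stated assumptions: the plugin ("my-plugin"), its handlers, and the option field names are all hypothetical, since InvokePluginOptions isn't expanded on this page:

```typescript
import { invokePlugin, invokePluginStream } from "@qvac/sdk";

// Hypothetical plugin and option names - see InvokePluginOptions for the real shape.
const reply = await invokePlugin<{ ok: boolean }>({
  pluginName: "my-plugin",
  method: "ping",
  params: { payload: "hello" },
});
console.log(reply.ok);

// Streaming handler: consume intermediate responses as they arrive
for await (const chunk of invokePluginStream<string>({
  pluginName: "my-plugin",
  method: "tailLogs",
  params: { lines: 10 },
})) {
  console.log(chunk);
}
```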
loadModel
Has 2 overloads.
Overload 1 — Load new model
Loads a machine learning model from a local path, remote URL, or Hyperdrive key.
This function supports multiple model types: LLM (Large Language Model), Whisper (speech recognition), embeddings, NMT (translation), and TTS. It can handle both local file paths and Hyperdrive URLs (pear://).
When onProgress is provided, the function uses streaming to provide real-time download progress. Otherwise, it uses a simple request-response pattern for faster execution.
Signature:
declare function loadModel(
options: LoadModelOptions,
rpcOptions?: RPCOptions,
): Promise<string>;
Throws:
QvacErrorBase - When model loading fails, with details in the error message
QvacErrorBase - When streaming ends unexpectedly (only when using onProgress)
QvacErrorBase - When receiving an invalid response type from the server
Example:
// Local file path - absolute path
const localModelId = await loadModel({
modelSrc: "/home/user/models/llama-7b.gguf",
modelType: "llm",
modelConfig: { contextSize: 2048 },
});
// Local file path - relative path
const relativeModelId = await loadModel({
modelSrc: "./models/whisper-base.gguf",
modelType: "whisper",
});
// Hyperdrive URL with key and path
const hyperdriveId = await loadModel({
modelSrc: "pear://<hyperdrive-key>/llama-7b.gguf",
modelType: "llm",
modelConfig: { contextSize: 2048 },
});
// Remote HTTP/HTTPS URL with progress tracking
const remoteId = await loadModel({
modelSrc: "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf",
modelType: "llm",
onProgress: (progress) => {
console.log(`Downloaded: ${progress.percentage}%`);
},
});
// Multimodal model with projection
const multimodalId = await loadModel({
modelSrc: "https://huggingface.co/.../main-model.gguf",
modelType: "llm",
modelConfig: {
ctx_size: 512,
projectionModelSrc: "https://huggingface.co/.../projection-model.gguf",
},
onProgress: (progress) => {
console.log(`Loading: ${progress.percentage}%`);
},
});
// Whisper with VAD model
const whisperId = await loadModel({
modelSrc: "https://huggingface.co/.../whisper-model.gguf",
modelType: "whisper",
modelConfig: {
mode: "caption",
output_format: "plaintext",
min_seconds: 2,
max_seconds: 6,
vadModelSrc: "https://huggingface.co/.../vad-model.bin",
},
});
// Load with automatic logging - logs from the model will be forwarded to your logger
import { getLogger } from "@qvac/sdk";
const logger = getLogger("my-app");
const modelId = await loadModel({
modelSrc: "/path/to/model.gguf",
modelType: "llm",
logger,
});
Overload 2 — Hot-reload config
Hot-reloads configuration on an already loaded model.
Signature:
declare function loadModel(
options: ReloadConfigOptions,
rpcOptions?: RPCOptions,
): Promise<string>;
Throws:
QvacErrorBase - When model reload fails, with details in the error message
QvacErrorBase - When receiving an invalid response type from the server
Example:
// Load new model
const modelId = await loadModel({
modelSrc: "pear://<hyperdrive-key>/whisper-tiny.gguf",
modelType: "whisper",
modelConfig: { language: "en" },
});
// Later, update the config without reloading the model
await loadModel({
modelId,
modelType: "whisper",
modelConfig: { language: "es" },
});
loggingStream
Opens a logging stream to receive real-time logs.
Signature:
declare function loggingStream(
params: LoggingParams,
): AsyncGenerator<LoggingStreamResponse>;
Throws:
QvacErrorBase - When the response type is invalid or when the stream fails
Example:
// Open a logging stream for a model
const logStream = loggingStream({ id: "my-model-id" });
// Or stream SDK server logs
const sdkLogs = loggingStream({ id: SDK_LOG_ID });
for await (const logMessage of logStream) {
console.log(`[${logMessage.level}] ${logMessage.namespace}: ${logMessage.message}`);
}
modelRegistryGetModel
Fetches a single model entry from the registry by its path and source.
Signature:
declare function modelRegistryGetModel(
registryPath: string,
registrySource: string,
): Promise<ModelRegistryEntry>;
Throws:
ModelRegistryQueryFailedError - When the model cannot be located or the registry query fails
modelRegistryList
Returns all available models from the QVAC distributed model registry.
Signature:
declare function modelRegistryList(): Promise<ModelRegistryEntry[]>;
Throws:
ModelRegistryQueryFailedError - When the registry query fails
modelRegistrySearch
Searches the model registry with optional filters for model type, engine, and quantization.
Signature:
declare function modelRegistrySearch(
params?: ModelRegistrySearchParams,
): Promise<ModelRegistryEntry[]>;
Throws:
ModelRegistryQueryFailedError - When the registry query fails
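The three registry functions have no examples above. A sketch chaining search into a single-entry fetch; the search filter names and the entry fields used here mirror the getModelInfo return shape but should be treated as assumptions:

```typescript
import { modelRegistrySearch, modelRegistryGetModel } from "@qvac/sdk";

// Filter field names are assumptions - see ModelRegistrySearchParams.
const matches = await modelRegistrySearch({ addon: "llm", quantization: "Q4_K_M" });
if (matches.length === 0) throw new Error("no matching models");

// Fetch a single entry by its registry coordinates
const entry = await modelRegistryGetModel(
  matches[0].registryPath,
  matches[0].registrySource,
);
console.log(entry.name, entry.expectedSize);
```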
ocr
Performs Optical Character Recognition (OCR) on an image to extract text.
Signature:
declare function ocr(params: OCRClientParams): {
blockStream: AsyncGenerator<OCRTextBlock[]>;
blocks: Promise<OCRTextBlock[]>;
stats: Promise<OCRStats | undefined>;
};
Example:
// Non-streaming mode (default) - get all blocks at once
const { blocks } = ocr({ modelId, image: "/path/to/image.png" });
for (const block of await blocks) {
console.log(block.text, block.bbox, block.confidence);
}
// Streaming mode - process blocks as they arrive
const { blockStream } = ocr({ modelId, image: imageBuffer, stream: true });
for await (const blocks of blockStream) {
console.log("Detected:", blocks);
}
ragChunk
Chunks documents into smaller pieces for embedding. Part of the segregated flow: ragChunk() → embed() → ragSaveEmbeddings().
Signature:
declare function ragChunk(
params: RagChunkParams,
options?: RPCOptions,
): Promise<RagDoc[]>;
Throws:
RAGChunkFailedError - When the operation fails
Example:
const chunks = await ragChunk({
documents: ["Long document text here..."],
chunkOpts: {
chunkSize: 256,
chunkOverlap: 50,
chunkStrategy: "paragraph",
},
});
ragCloseWorkspace
Closes a RAG workspace, releasing in-memory resources (Corestore, HyperDB adapter, RAG instance).
Workspace lifecycle: Workspaces are implicitly opened. This function explicitly closes them, releasing memory and file locks. The workspace data remains on disk unless deleteOnClose is set to true.
Signature:
declare function ragCloseWorkspace(
params?: RagCloseWorkspaceParams,
options?: RPCOptions,
): Promise<void>;
Throws:
RAGCloseWorkspaceFailedError - When the operation fails
Example:
// Close a specific workspace
await ragCloseWorkspace({ workspace: "my-docs" });
// Close and delete in one call
await ragCloseWorkspace({ workspace: "my-docs", deleteOnClose: true });
ragDeleteEmbeddings
Deletes document embeddings from the RAG vector database.
Workspace lifecycle: This operation requires an existing workspace.
Signature:
declare function ragDeleteEmbeddings(
params: RagDeleteEmbeddingsParams,
options?: RPCOptions,
): Promise<void>;
Throws:
RAGDeleteFailedError - When the operation fails or the workspace doesn't exist
Example:
await ragDeleteEmbeddings({
ids: ["doc-1", "doc-2"],
workspace: "my-docs",
});
ragDeleteWorkspace
Deletes a RAG workspace and all its data. The workspace must not be currently loaded/in-use.
Signature:
declare function ragDeleteWorkspace(
params: RagDeleteWorkspaceParams,
options?: RPCOptions,
): Promise<void>;
Throws:
RAGDeleteFailedError - When the workspace doesn't exist or is currently loaded
Example:
await ragDeleteWorkspace({ workspace: "my-docs" });
ragIngest
Ingests documents into the RAG vector database. Full pipeline: chunk → embed → save.
Workspace lifecycle: This operation implicitly opens (or creates) the workspace. The workspace remains open until closed.
Signature:
declare function ragIngest(
params: RagIngestParams,
options?: RPCOptions,
): Promise<{
processed: RagSaveEmbeddingsResult[];
droppedIndices: number[];
}>;
Throws:
RAGSaveFailedError - When the operation fails
StreamEndedError - When streaming ends unexpectedly (only when using onProgress)
Example:
// Simple ingest
const result = await ragIngest({
modelId,
documents: ["Document 1", "Document 2"],
});
// With progress tracking
const result = await ragIngest({
modelId,
documents: ["Document 1", "Document 2"],
workspace: "my-docs",
onProgress: (stage, current, total) => {
console.log(`[${stage}] ${current}/${total}`);
},
});
ragListWorkspaces
Lists all RAG workspaces with their open status.
Returns all workspaces that exist on disk. The open field indicates whether the workspace is currently loaded in memory and holding active resources (Corestore, HyperDB adapter, and possibly a RAG instance).
Signature:
declare function ragListWorkspaces(
options?: RPCOptions,
): Promise<RagWorkspaceInfo[]>;
Throws:
RAGListWorkspacesFailedError - When the operation fails
Example:
const workspaces = await ragListWorkspaces();
// [{ name: "default", open: true }, { name: "my-docs", open: false }]
ragReindex
Reindexes the RAG database to optimize search performance. For HyperDB, this rebalances centroids using k-means clustering.
Workspace lifecycle: This operation requires an existing workspace.
Note: Reindex requires a minimum number of documents to perform clustering. For HyperDB, this is 16 documents by default. If there are insufficient documents, reindexed will be false with details explaining the reason.
Signature:
declare function ragReindex(
params: RagReindexParams,
options?: RPCOptions,
): Promise<RagReindexResult>;
Throws:
RAGSaveFailedError - When the operation fails or the workspace doesn't exist
StreamEndedError - When streaming ends unexpectedly (only when using onProgress)
Example:
// Simple reindex
const result = await ragReindex({
workspace: "my-docs",
});
// Check result
if (!result.reindexed) {
console.log("Reindex skipped:", result.details?.reason);
}
// With progress tracking
const result = await ragReindex({
workspace: "my-docs",
onProgress: (stage, current, total) => {
console.log(`[${stage}] ${current}/${total}`);
},
});
ragSaveEmbeddings
Saves pre-embedded documents to the RAG vector database. Part of the segregated flow: chunk() → embed() → saveEmbeddings().
Workspace lifecycle: This operation implicitly opens (or creates) the workspace. The workspace remains open until closed.
Signature:
declare function ragSaveEmbeddings(
params: RagSaveEmbeddingsParams,
options?: RPCOptions,
): Promise<RagSaveEmbeddingsResult[]>;
Throws:
RAGSaveFailedError - When the operation fails
StreamEndedError - When streaming ends unexpectedly (only when using onProgress)
Example:
// Segregated flow
const chunks = await ragChunk({ documents: ["text1", "text2"] });
const { embedding: embeddings } = await embed({ modelId, text: chunks.map(c => c.content) });
const embeddedDocs = chunks.map((chunk, i) => ({
...chunk,
embedding: embeddings[i],
embeddingModelId: modelId,
}));
const result = await ragSaveEmbeddings({
documents: embeddedDocs,
workspace: "my-workspace",
});
ragSearch
Searches for similar documents in the RAG vector database.
Workspace lifecycle: This operation requires an existing workspace. If the workspace doesn't exist, returns an empty array.
Signature:
declare function ragSearch(
params: RagSearchParams,
options?: RPCOptions,
): Promise<RagSearchResult[]>;
Throws:
RAGSearchFailedError - When the operation fails
Example:
const results = await ragSearch({
modelId,
query: "AI and machine learning",
topK: 5,
workspace: "my-docs",
});
resume
Resumes all suspended Hyperswarm and Corestore resources.
Idempotent — calling while already active is a no-op. Also serves as the recovery path after a partial suspend failure.
Signature:
declare function resume(): Promise<void>;
Throws:
RPCError - When one or more resources fail to resume
Example:
// Foreground handler
await resume();
startQVACProvider
Starts a provider service that offers QVAC capabilities to remote peers. The provider's keypair can be controlled via the seed option or QVAC_HYPERSWARM_SEED environment variable.
Signature:
declare function startQVACProvider(params: ProvideParams): Promise<{
type: "provide";
success: boolean;
error?: string;
publicKey?: string;
}>;
Throws:
QvacErrorBase - When the response type is not "provide" or the request fails
stopQVACProvider
Stops a running provider service and leaves the specified topic.
Signature:
declare function stopQVACProvider(params: StopProvideParams): Promise<{
type: "stopProvide";
success: boolean;
error?: string;
}>;
Throws:
QvacErrorBase - When the response type is not "stopProvide" or the request fails
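Neither provider function has an example. A lifecycle sketch; apart from seed (documented above), the topic field name and the parameter shapes are assumptions about ProvideParams and StopProvideParams:

```typescript
import { startQVACProvider, stopQVACProvider } from "@qvac/sdk";

// `topic` field name is an assumption - see ProvideParams / StopProvideParams.
const started = await startQVACProvider({ topic: "topicHex" });
if (!started.success) throw new Error(started.error);
console.log("Providing as", started.publicKey);

// ...serve remote peers...

const stopped = await stopQVACProvider({ topic: "topicHex" });
console.log("Stopped:", stopped.success);
```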
suspend
Suspends all active Hyperswarm and Corestore resources.
Idempotent — calling while already suspended is a no-op.
Signature:
declare function suspend(): Promise<void>;
Throws:
RPCError - When one or more resources fail to suspend (partial failure)
Example:
// Background handler
await suspend();
textToSpeech
Converts text to speech audio using a loaded TTS model.
Signature:
declare function textToSpeech(
params: TtsClientParams,
options?: RPCOptions,
): {
bufferStream: AsyncGenerator<number>;
buffer: Promise<number[]>;
done: Promise<boolean>;
};
transcribe
Transcribe audio and return the complete text. Accepts either a file path or an audio buffer.
Signature:
declare function transcribe(
params: TranscribeClientParams,
options?: RPCOptions,
): Promise<string>;
transcribeStream
Has 2 overloads.
Overload 1 — Upfront audio (deprecated)
⚠️ Deprecated: Pass audio via transcribe() instead. This overload will be removed in the next major version.
Streaming transcription with upfront audio: sends full audio, yields text chunks as they arrive.
Signature:
declare function transcribeStream(
params: TranscribeClientParams,
options?: RPCOptions,
): AsyncGenerator<string>;
Overload 2 — Bidirectional session
Opens a bidirectional streaming transcription session. Audio is streamed in via write(), and transcription text is yielded as the model's VAD detects complete speech segments.
The returned session is single-use. Attempting to iterate a second time will throw a TranscriptionFailedError.
Signature:
declare function transcribeStream(
params: TranscribeStreamClientParams,
options?: RPCOptions,
): Promise<TranscribeStreamSession>;
translate
Translates text from one language to another using a specified translation model. Supports both NMT (Neural Machine Translation) and LLM models.
Signature:
declare function translate(
params: TranslateClientParams,
options?: RPCOptions,
): {
tokenStream: AsyncGenerator<string>;
stats: Promise<TranslationStats | undefined>;
text: Promise<string>;
};
Throws:
QvacErrorBase - When translation fails with an error message or when language detection fails
Example:
// Streaming mode (default)
const result = translate({
modelId: "modelId",
text: "Hello world",
from: "en",
to: "es",
modelType: "llm",
});
for await (const token of result.tokenStream) {
console.log(token);
}
// Non-streaming mode
const response = translate({
modelId: "modelId",
text: "Hello world",
from: "en",
to: "es",
modelType: "llm",
stream: false,
});
console.log(await response.text);
unloadModel
Unloads a previously loaded model from the server.
When the last model is unloaded (no more models remain), this function automatically closes the RPC connection, allowing the process to exit naturally without requiring manual cleanup.
Signature:
declare function unloadModel(params: UnloadModelParams): Promise<void>;
Throws:
QvacErrorBase - When the response type is invalid or when the unload operation fails
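A minimal load/unload sketch; passing { modelId } is an assumption about UnloadModelParams, based on the id returned by loadModel:

```typescript
import { loadModel, unloadModel } from "@qvac/sdk";

const modelId = await loadModel({ modelSrc: "/path/to/model.gguf", modelType: "llm" });
// ...run inference...

// Assumption: UnloadModelParams takes the { modelId } returned by loadModel.
await unloadModel({ modelId });
// If this was the last loaded model, the RPC connection closes and the process can exit.
```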
Objects
profiler
QVAC SDK Profiler. Enable, export, and listen to profiling data.
Shape:
const profiler: {
enable(options?: ProfilerRuntimeOptions): void;
disable(): void;
isEnabled(): boolean;
exportJSON(options?: { includeRecentEvents?: boolean }): ProfilerExport;
exportTable(): string;
exportSummary(): string;
onRecord(callback: (event: ProfilingEvent) => void): () => void;
getConfig(): ResolvedProfilerConfig;
getAggregates(): Record<string, AggregatedStats>;
clear(): void;
};
Methods:
enable(options?) — Enables profiling and resets all previously aggregated data.
disable() — Disables profiling. New SDK operations will no longer be recorded.
isEnabled() — Returns whether profiling is currently enabled.
exportJSON(options?) — Exports profiling data as a structured JSON object suitable for machine consumption.
exportTable() — Exports aggregated stats as a formatted ASCII table suitable for terminal output.
exportSummary() — Exports a short, human-readable summary string of the aggregated stats.
onRecord(callback) — Registers a listener for profiling events; returns an unsubscribe function.
getConfig() — Returns the current effective profiler configuration.
getAggregates() — Returns all aggregated stats keyed by operation name.
clear() — Clears all aggregated data and the recent-events ring buffer.
Example:
import { profiler } from "@qvac/sdk";
profiler.enable({ mode: "summary" });
// ... run SDK operations ...
console.log(profiler.exportTable());
profiler.disable();