Clayground 2025.2

TextInference QML Type

    Client-side LLM text generation.

    Import Statement: import Clayground.Ai

    Detailed Description

    TextInference provides local LLM inference for chat and text generation. Models are downloaded automatically when modelId is set.

    import QtQuick
    import QtQuick.Controls
    import Clayground.Ai
    
    TextInference {
        id: llm
        modelId: "smollm2-1.7b"
        systemPrompt: "You are a helpful assistant."
    
        onToken: (tok) => responseText.text += tok   // streamed token by token
        onResponse: (full) => console.log("Complete:", full)
        onError: (msg) => console.error("Error:", msg)
    }
    
    TextField { id: input }
    Text { id: responseText }
    
    Button {
        text: llm.generating ? "Stop" : "Send"
        onClicked: llm.generating ? llm.stop() : llm.send(input.text)
    }

    See also AiModelManager.

    Property Documentation

    currentResponse : string [read-only]

    The response currently being generated, updated as tokens stream in.


    downloadProgress : real [read-only]

    Download progress (0.0 to 1.0).
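
    For example, this property can drive a progress indicator while a model downloads. A sketch, assuming a TextInference item with id llm elsewhere in the file:

    import QtQuick.Controls

    ProgressBar {
        from: 0.0
        to: 1.0
        value: llm.downloadProgress
        visible: llm.downloading   // only shown while a download is active
    }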


    downloadedBytes : int [read-only]

    Bytes downloaded so far.


    downloading : bool [read-only]

    Whether the model is being downloaded.


    generating : bool [read-only]

    Whether text generation is in progress.


    loadProgress : real [read-only]

    Model loading progress (0.0 to 1.0).


    maxTokens : int

    Maximum tokens to generate per response.


    modelId : string

    The model to use for inference.

    Setting this property triggers an automatic download if the model is not cached. Set it to an empty string to cancel an in-progress download and unload the model.
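
    For example (a sketch; llm is a TextInference item):

    llm.modelId = "smollm2-1.7b"   // downloads if not cached, then loads
    llm.modelId = ""               // cancels any download and unloads the model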


    modelLoading : bool [read-only]

    Whether the model is being loaded into memory.


    modelReady : bool [read-only]

    Whether the model is loaded and ready for inference.
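
    This property is useful for gating UI until inference is possible. A sketch, assuming a TextInference item with id llm and a TextField with id input:

    Button {
        text: "Send"
        enabled: llm.modelReady && !llm.generating
        onClicked: llm.send(input.text)
    }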


    noModel : string [read-only]

    Special value that can be assigned to modelId to cancel an in-progress download and unload the current model.


    systemPrompt : string

    System prompt for the conversation.


    temperature : real

    Sampling temperature (0.0 to 2.0).
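
    A sketch combining the sampling properties (the values shown are illustrative, not documented defaults):

    TextInference {
        modelId: "smollm2-1.7b"
        temperature: 0.2   // lower values give more deterministic output
        maxTokens: 256     // cap the length of each response
    }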


    totalBytes : int [read-only]

    Total bytes to download.


    Signal Documentation

    downloadCancelled()

    Emitted when a model download is cancelled.

    Note: The corresponding handler is onDownloadCancelled.


    downloadStarted(int totalBytes)

    Emitted when a model download begins; totalBytes gives the download size in bytes.

    Note: The corresponding handler is onDownloadStarted.
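
    A sketch of a handler that logs the download size (inside a TextInference item):

    onDownloadStarted: (totalBytes) =>
        console.log("Downloading", (totalBytes / 1e6).toFixed(1), "MB")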


    error(string message)

    Emitted when an error occurs; message describes the failure.

    Note: The corresponding handler is onError.


    modelDownloaded()

    Emitted when model download completes.

    Note: The corresponding handler is onModelDownloaded.


    modelReadySignal()

    Emitted when the model is loaded and ready for inference.

    Note: The corresponding handler is onModelReadySignal.


    response(string fullText)

    Emitted when generation completes; fullText contains the complete response.

    Note: The corresponding handler is onResponse.


    token(string token)

    Emitted for each generated token (streaming).

    Note: The corresponding handler is onToken.


    Method Documentation

    void clear()

    Clear the conversation history.
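
    A sketch of a "new chat" action (llm is a TextInference item):

    Button {
        text: "New Chat"
        onClicked: llm.clear()   // discard the conversation history
    }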


    void send(string message)

    Send a message and start generating a response.


    void stop()

    Stop the current generation.


    void unload()

    Unload the model from memory.