Huggingface.js documentation
Interface: TextGenerationStreamPrefillToken
Properties
id
• id: number
Token ID from the model tokenizer
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:23
logprob
• Optional logprob: number
Log probability of the token. Optional, since the logprob of the first token cannot be computed.
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:30
text
• text: string
Token text
Defined in
inference/src/tasks/nlp/textGenerationStream.ts:25
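The three properties above can be sketched as a TypeScript interface, together with a small (hypothetical, not part of `@huggingface/inference`) helper that shows how to handle the optional `logprob` field safely:

```typescript
// Shape of TextGenerationStreamPrefillToken as documented above.
interface TextGenerationStreamPrefillToken {
  /** Token ID from the model tokenizer */
  id: number;
  /** Token text */
  text: string;
  /** Optional, since the logprob of the first token cannot be computed */
  logprob?: number;
}

// Hypothetical helper: format a prefill token for logging,
// falling back to "n/a" when logprob is absent (e.g. the first token).
function describePrefillToken(token: TextGenerationStreamPrefillToken): string {
  const lp = token.logprob !== undefined ? token.logprob.toFixed(3) : "n/a";
  return `#${token.id} ${JSON.stringify(token.text)} (logprob: ${lp})`;
}

console.log(describePrefillToken({ id: 1, text: "<s>" }));
// → #1 "<s>" (logprob: n/a)
console.log(describePrefillToken({ id: 450, text: " The", logprob: -1.234 }));
// → #450 " The" (logprob: -1.234)
```

In practice, prefill tokens of this shape appear in the `details.prefill` array of a `TextGenerationStreamOutput` when streaming with details enabled.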