gen-ai-spans.md


# Semantic Conventions for GenAI spans

**Status**: Experimental

A request to a Generative AI model is modeled as a span in a trace.

Span kind: MUST always be `CLIENT`.

## Name

GenAI spans MUST follow the overall guidelines for span names. The span name SHOULD be `{gen_ai.operation.name} {gen_ai.request.model}`. Semantic conventions for individual GenAI systems and frameworks MAY specify a different span name format.

## GenAI attributes

These attributes track input data and metadata for a request to a GenAI model. Each attribute represents a concept that is common to most Generative AI clients.

| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.operation.name` | string | The name of the operation being performed. [1] | `chat`; `text_completion` | Required | Experimental |
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [2] | `openai` | Required | Experimental |
| `error.type` | string | Describes a class of error the operation ended with. [3] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required if the operation ended in an error | Stable |
| `gen_ai.request.model` | string | The name of the GenAI model a request is being made to. [4] | `gpt-4` | Conditionally Required if available. | Experimental |
| `server.port` | int | GenAI server port. [5] | `80`; `8080`; `443` | Conditionally Required if `server.address` is set. | Stable |
| `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the GenAI request. | `0.1` | Recommended | Experimental |
| `gen_ai.request.max_tokens` | int | The maximum number of tokens the model generates for a request. | `100` | Recommended | Experimental |
| `gen_ai.request.presence_penalty` | double | The presence penalty setting for the GenAI request. | `0.1` | Recommended | Experimental |
| `gen_ai.request.stop_sequences` | string[] | List of sequences that the model will use to stop generating further tokens. | `["forest", "lived"]` | Recommended | Experimental |
| `gen_ai.request.temperature` | double | The temperature setting for the GenAI request. | `0.0` | Recommended | Experimental |
| `gen_ai.request.top_k` | double | The top_k sampling setting for the GenAI request. | `1.0` | Recommended | Experimental |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the GenAI request. | `1.0` | Recommended | Experimental |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]`; `["stop", "length"]` | Recommended | Experimental |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | Recommended | Experimental |
| `gen_ai.response.model` | string | The name of the model that generated the response. [6] | `gpt-4-0613` | Recommended | Experimental |
| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input (prompt). | `100` | Recommended | Experimental |
| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | Recommended | Experimental |
| `server.address` | string | GenAI server address. [7] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Recommended | Stable |

[1]: If one of the predefined values applies, but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for the specific GenAI system and use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.

[2]: The `gen_ai.system` attribute describes a family of GenAI models, with the specific model identified by the `gen_ai.request.model` and `gen_ai.response.model` attributes.

The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, `gen_ai.system` is set to `openai` based on the instrumentation's best knowledge.

For custom models, a custom friendly name SHOULD be used. If none of these options apply, `gen_ai.system` SHOULD be set to `_OTHER`.

[3]: The `error.type` SHOULD match the error code returned by the Generative AI provider or the client library, the canonical name of the exception that occurred, or another low-cardinality error identifier. Instrumentations SHOULD document the list of errors they report.

[4]: The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value MUST be the exact name of the model requested. If the model is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned.

[5]: When observed from the client side, and when communicating through an intermediary, `server.port` SHOULD represent the server port behind any intermediaries (for example, proxies), if it's available.

[6]: If available. The name of the GenAI model that provided the response. If the model is supplied by a vendor, then the value MUST be the exact name of the model actually used. If the model is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned.

[7]: When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries (for example, proxies), if it's available.
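To make the table above concrete, here is a hypothetical attribute set for a chat request to OpenAI. The values are illustrative, drawn from the examples column; this is not a normative payload, and a real instrumentation would set the response and usage attributes only once the completion is received:

```python
# Illustrative span attributes for a "chat gpt-4" CLIENT span.
chat_span_attributes = {
    # Required attributes.
    "gen_ai.operation.name": "chat",
    "gen_ai.system": "openai",
    # Conditionally required / recommended request attributes.
    "gen_ai.request.model": "gpt-4",
    "gen_ai.request.max_tokens": 100,
    "gen_ai.request.temperature": 0.0,
    "server.address": "api.openai.com",  # assumed endpoint, for illustration
    "server.port": 443,
    # Recommended response attributes, set when the completion arrives.
    "gen_ai.response.model": "gpt-4-0613",
    "gen_ai.response.id": "chatcmpl-123",
    "gen_ai.response.finish_reasons": ["stop"],
    "gen_ai.usage.input_tokens": 100,
    "gen_ai.usage.output_tokens": 180,
}
```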

`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | Stable |

`gen_ai.operation.name` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `chat` | Chat completion operation such as the OpenAI Chat API | Experimental |
| `text_completion` | Text completions operation such as the OpenAI Completions API (Legacy) | Experimental |

`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.

| Value | Description | Stability |
|---|---|---|
| `anthropic` | Anthropic | Experimental |
| `az.ai.inference` | Azure AI Inference | Experimental |
| `cohere` | Cohere | Experimental |
| `openai` | OpenAI | Experimental |
| `vertex_ai` | Vertex AI | Experimental |

## Capturing inputs and outputs

User inputs and model responses may be recorded as events parented to the GenAI operation span. See the Semantic Conventions for GenAI events for details.