
Application Real-Time Monitoring Service: Change history for semantic specifications

Last Updated: Mar 11, 2026

Each ARMS agent release updates the LLM plug-in to align with the latest semantic specifications. Use this page to identify span attribute changes between versions and update your queries, dashboards, and alerts.

Python agent

V2.0.0

V2.0.0 restructures span attributes for the OpenAI component to adopt the gen_ai.* namespace consistently. The changes affect both LLM and Embedding call types.

Important

After you upgrade to V2.0.0, update any dashboards or alerts that reference the renamed or removed attributes listed below.

LLM call type

Renamed attributes
| Old attribute | New attribute | Format change |
| --- | --- | --- |
| gen_ai.prompts | gen_ai.input.messages | Flat string to JSON |
| gen_ai.completions | gen_ai.output.messages | Flat string to JSON |
| gen_ai.request.model_name | gen_ai.request.model | -- |
| gen_ai.response.model_name | gen_ai.response.model | -- |
| input.value | gen_ai.input.messages | Unstructured to JSON |
| output.mime_type | gen_ai.output.type | -- |
| output.value | gen_ai.output.messages | Unstructured to JSON |
| gen_ai.request.tool_calls | gen_ai.tool.definitions | Unstructured to JSON |
| gen_ai.usage.prompt_tokens | gen_ai.usage.input_tokens | -- |
| gen_ai.usage.completion_tokens | gen_ai.usage.output_tokens | -- |
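The renames above can be applied mechanically when migrating saved queries or span-processing code. The following is a minimal sketch; the mapping dict and helper name are hypothetical, not part of the ARMS agent.

```python
# Hypothetical helper: map pre-V2.0.0 LLM span attribute names to their
# V2.0.0 equivalents, per the rename table above.
RENAMED_LLM_ATTRIBUTES = {
    "gen_ai.prompts": "gen_ai.input.messages",
    "gen_ai.completions": "gen_ai.output.messages",
    "gen_ai.request.model_name": "gen_ai.request.model",
    "gen_ai.response.model_name": "gen_ai.response.model",
    "input.value": "gen_ai.input.messages",
    "output.mime_type": "gen_ai.output.type",
    "output.value": "gen_ai.output.messages",
    "gen_ai.request.tool_calls": "gen_ai.tool.definitions",
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
}

def migrate_attribute(name: str) -> str:
    """Return the V2.0.0 name for an attribute, or the name unchanged."""
    return RENAMED_LLM_ATTRIBUTES.get(name, name)
```

Note that the renames are not all one-to-one: both gen_ai.prompts and input.value map to gen_ai.input.messages, so a query that referenced both should keep only one filter after migration.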
Removed attributes
| Attribute | Replacement |
| --- | --- |
| gen_ai.model_name | Use gen_ai.request.model or gen_ai.response.model. |
| input.mime_type | None. Content type is no longer tracked as a separate attribute. |
New attributes
| Attribute | Description |
| --- | --- |
| gen_ai.request.choice.count | Number of completions to generate per prompt. |
| gen_ai.request.seed | Seed for deterministic sampling. |
| gen_ai.request.frequency_penalty | Penalizes tokens by how often they appear in the output so far. |
| gen_ai.request.presence_penalty | Penalizes tokens based on whether they already appear in the output. |
| gen_ai.request.max_tokens | Maximum number of tokens to generate. |
| gen_ai.request.top_p | Nucleus sampling threshold. |
| gen_ai.request.top_k | Limits sampling to the top-k most probable tokens. |
| gen_ai.request.stop_sequences | Sequences that signal the model to stop generating. |
| gen_ai.response.id | Unique identifier for the model response. |
| gen_ai.response.finish_reasons | Reasons the model stopped generating, such as stop or length. |
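The new response attributes enable checks that were not possible before, such as alerting on truncated completions. A minimal sketch, assuming span attributes are available as a plain dict (the function name and example values are hypothetical):

```python
# Sketch: flag spans where the model stopped because it hit the token
# limit, using the V2.0.0 gen_ai.response.finish_reasons attribute.
def truncated_by_length(span_attributes: dict) -> bool:
    """True when any choice stopped with the 'length' finish reason."""
    reasons = span_attributes.get("gen_ai.response.finish_reasons", [])
    return "length" in reasons

example_span = {
    "gen_ai.request.max_tokens": 256,
    "gen_ai.response.id": "chatcmpl-123",  # hypothetical example value
    "gen_ai.response.finish_reasons": ["length"],
}
```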

Embedding call type

Renamed attributes
| Old attribute | New attribute |
| --- | --- |
| embedding.model_name | gen_ai.request.model |
| embedding.vector_size | gen_ai.embeddings.dimension.count |
Removed attributes
| Attribute | Replacement |
| --- | --- |
| embedding.text | None. Raw embedding input is no longer captured. |
| embedding.vector | None. Raw embedding vector is no longer captured. |
New attributes
| Attribute | Description |
| --- | --- |
| gen_ai.encoding.formats | Encoding formats requested for the embedding output. |

Input and output format change

In V2.0.0, several attributes changed from a flat string or unstructured format to a structured JSON array. The following examples show the old and new formats.

gen_ai.prompts (old) vs. gen_ai.input.messages (new)

Old format (flat string):

"You are a helpful assistant.\nWhat is cloud computing?"

New format (JSON array):

[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "What is cloud computing?"}
]
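Code that previously substring-matched the flat gen_ai.prompts string must parse gen_ai.input.messages as JSON instead. A minimal sketch, using the example value above:

```python
import json

# The V2.0.0 gen_ai.input.messages value is a JSON array of role/content
# objects; parse it and pull out the user-authored text.
raw = (
    '[{"role": "system", "content": "You are a helpful assistant."},'
    ' {"role": "user", "content": "What is cloud computing?"}]'
)

messages = json.loads(raw)
user_text = " ".join(m["content"] for m in messages if m["role"] == "user")
```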

gen_ai.completions (old) vs. gen_ai.output.messages (new)

Old format (flat string):

"Cloud computing is the on-demand delivery of IT resources over the Internet."

New format (JSON array):

[
  {"role": "assistant", "content": "Cloud computing is the on-demand delivery of IT resources over the Internet."}
]
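If downstream tooling still expects the old flat-string value, it can be reconstructed from the new JSON array as a temporary shim during migration. A sketch, using the example value above:

```python
import json

# Recover the old flat gen_ai.completions string from the new
# gen_ai.output.messages JSON array by joining message contents.
raw = (
    '[{"role": "assistant", "content": "Cloud computing is the on-demand '
    'delivery of IT resources over the Internet."}]'
)

flat = "\n".join(m["content"] for m in json.loads(raw))
```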