inference instrumentor to inference library #37846

Merged
merged 9 commits into from
Oct 14, 2024
3 changes: 3 additions & 0 deletions .vscode/cspell.json
@@ -406,6 +406,9 @@
"uamqp",
"uksouth",
"ukwest",
"uninstrument",
"uninstrumented",
"uninstrumenting",
"unpad",
"unpadder",
"unpartial",
2 changes: 2 additions & 0 deletions sdk/ai/azure-ai-inference/CHANGELOG.md
@@ -4,6 +4,8 @@

### Features Added

* Support for tracing. Please find more information in the package [README.md](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-inference/README.md).

### Breaking Changes

### Bugs Fixed
89 changes: 89 additions & 0 deletions sdk/ai/azure-ai-inference/README.md
@@ -57,6 +57,14 @@ To update an existing installation of the package, use:
pip install --upgrade azure-ai-inference
```

If you want to install the Azure AI Inference package with support for OpenTelemetry-based tracing, use the following command:

```bash
pip install azure-ai-inference[trace]
```

## Key concepts

### Create and authenticate a client directly, using API key or GitHub token
@@ -530,6 +538,87 @@ For more information, see [Configure logging in the Azure libraries for Python](

To report issues with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues)

## Tracing

The Azure AI Inference tracing library provides tracing for the Azure AI Inference client library for Python. Refer to the Installation section above for installation instructions.

### Setup

The environment variable `AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED` controls whether the actual message contents are recorded in the traces. By default, message contents are not recorded as part of the trace. While message content recording is disabled, the function names, function parameter names, and function parameter values of any function call tools are also not recorded in the trace. Set the environment variable to "true" (case insensitive) to record message contents as part of the trace; any other value leaves message content recording disabled.
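The case-insensitive check described above can be sketched as follows; the helper function name is illustrative and not part of the library:

```python
import os

def content_recording_enabled() -> bool:
    # Only the exact value "true" (compared case-insensitively) enables
    # recording of message contents; any other value disables it.
    value = os.environ.get("AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED", "")
    return value.lower() == "true"

os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "True"
print(content_recording_enabled())  # True
```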

You also need to configure the tracing implementation in your code, either by setting the `AZURE_SDK_TRACING_IMPLEMENTATION` environment variable to `opentelemetry` or by configuring it in code with the following snippet:

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_setting -->

```python
from azure.core.settings import settings
settings.tracing_implementation = "opentelemetry"
```

<!-- END SNIPPET -->

Please refer to [azure-core-tracing-documentation](https://learn.microsoft.com/python/api/overview/azure/core-tracing-opentelemetry-readme) for more information.

### Exporting Traces with OpenTelemetry

Azure AI Inference is instrumented with OpenTelemetry. To enable tracing, you need to configure OpenTelemetry to export traces to your observability backend.
Refer to [Azure SDK tracing in Python](https://learn.microsoft.com/python/api/overview/azure/core-tracing-opentelemetry-readme?view=azure-python-preview) for more details.

Refer to the [Azure Monitor OpenTelemetry documentation](https://learn.microsoft.com/azure/azure-monitor/app/opentelemetry-enable?tabs=python) for details on how to send Azure AI Inference traces to Azure Monitor and create an Azure Monitor resource.

### Instrumentation

Use the `AIInferenceInstrumentor` class to instrument the Azure AI Inference API for LLM tracing. After instrumentation, LLM traces are emitted by the Azure AI Inference API.

<!-- SNIPPET:sample_chat_completions_with_tracing.instrument_inferencing -->

```python
from azure.ai.inference.tracing import AIInferenceInstrumentor
# Instrument AI Inference API
AIInferenceInstrumentor().instrument()
```

<!-- END SNIPPET -->


It is also possible to uninstrument the Azure AI Inference API by using the `uninstrument` call. After this call, traces are no longer emitted by the Azure AI Inference API until `instrument` is called again.

<!-- SNIPPET:sample_chat_completions_with_tracing.uninstrument_inferencing -->

```python
AIInferenceInstrumentor().uninstrument()
```

<!-- END SNIPPET -->
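Pairing the two calls in a `try`/`finally` block guarantees that uninstrumentation runs even if an error occurs in between. The sketch below uses a stand-in object so it runs without the library installed; in real code you would use `AIInferenceInstrumentor` from `azure.ai.inference.tracing` as shown above:

```python
class FakeInstrumentor:
    """Stand-in exposing the same instrument/uninstrument surface used above."""

    def __init__(self) -> None:
        self.calls = []

    def instrument(self) -> None:
        self.calls.append("instrument")

    def uninstrument(self) -> None:
        self.calls.append("uninstrument")


instrumentor = FakeInstrumentor()
instrumentor.instrument()
try:
    pass  # ... make chat completion calls that emit traces ...
finally:
    # Always runs, so tracing is switched off even if an error was raised.
    instrumentor.uninstrument()

print(instrumentor.calls)  # ['instrument', 'uninstrument']
```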

### Tracing Your Own Functions
The `@tracer.start_as_current_span` decorator can be used to trace your own functions. It traces the function parameters and their values. You can also add further attributes to the span in the function implementation, as demonstrated below. Note that you have to set up the tracer in your code before using the decorator. More information is available [here](https://opentelemetry.io/docs/languages/python/).

<!-- SNIPPET:sample_chat_completions_with_tracing.trace_function -->

```python
from opentelemetry import trace
from opentelemetry.trace import get_tracer

tracer = get_tracer(__name__)

# The tracer.start_as_current_span decorator will trace the function call and enable adding additional attributes
# to the span in the function implementation. Note that this will trace the function parameters and their values.
@tracer.start_as_current_span("get_temperature")  # type: ignore
def get_temperature(city: str) -> str:

    # Adding attributes to the current span
    span = trace.get_current_span()
    span.set_attribute("requested_city", city)

    if city == "Seattle":
        return "75"
    elif city == "New York City":
        return "80"
    else:
        return "Unavailable"
```

<!-- END SNIPPET -->

## Next steps

* Have a look at the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) folder, containing fully runnable Python code for doing inference using synchronous and asynchronous clients.
Expand Down
2 changes: 1 addition & 1 deletion sdk/ai/azure-ai-inference/assets.json
@@ -2,5 +2,5 @@
"AssetsRepo": "Azure/azure-sdk-assets",
"AssetsRepoPrefixPath": "python",
"TagPrefix": "python/ai/azure-ai-inference",
"Tag": "python/ai/azure-ai-inference_498e85cbfd"
"Tag": "python/ai/azure-ai-inference_19a0adafc6"
}
6 changes: 3 additions & 3 deletions sdk/ai/azure-ai-inference/azure/ai/inference/_patch.py
@@ -102,8 +102,8 @@ def load_client(
"The AI model information is missing a value for `model type`. Cannot create an appropriate client."
)

# TODO: Remove "completions" and "embedding" once Mistral Large and Cohere fixes their model type
if model_info.model_type in (_models.ModelType.CHAT, "completion"):
# TODO: Remove "completions", "chat-completions" and "embedding" once Mistral Large and Cohere fix their model type
if model_info.model_type in (_models.ModelType.CHAT, "completion", "chat-completion", "chat-completions"):
chat_completion_client = ChatCompletionsClient(endpoint, credential, **kwargs)
chat_completion_client._model_info = ( # pylint: disable=protected-access,attribute-defined-outside-init
model_info
@@ -454,7 +454,7 @@ def complete(
:raises ~azure.core.exceptions.HttpResponseError:
"""

@distributed_trace
# pylint:disable=client-method-missing-tracing-decorator
def complete(
self,
body: Union[JSON, IO[bytes]] = _Unset,
4 changes: 2 additions & 2 deletions sdk/ai/azure-ai-inference/azure/ai/inference/aio/_patch.py
@@ -87,7 +87,7 @@ async def load_client(
)

# TODO: Remove "completions" and "embedding" once Mistral Large and Cohere fixes their model type
if model_info.model_type in (_models.ModelType.CHAT, "completion"):
if model_info.model_type in (_models.ModelType.CHAT, "completion", "chat-completion", "chat-completions"):
chat_completion_client = ChatCompletionsClient(endpoint, credential, **kwargs)
chat_completion_client._model_info = ( # pylint: disable=protected-access,attribute-defined-outside-init
model_info
@@ -437,7 +437,7 @@ async def complete(
:raises ~azure.core.exceptions.HttpResponseError:
"""

@distributed_trace_async
# pylint:disable=client-method-missing-tracing-decorator-async
async def complete(
self,
body: Union[JSON, IO[bytes]] = _Unset,
10 changes: 5 additions & 5 deletions sdk/ai/azure-ai-inference/azure/ai/inference/models/_patch.py
@@ -14,7 +14,7 @@
import re
import sys

from typing import List, AsyncIterator, Iterator, Optional, Union
from typing import Any, List, AsyncIterator, Iterator, Optional, Union
from azure.core.rest import HttpResponse, AsyncHttpResponse
from ._models import ImageUrl as ImageUrlGenerated
from ._models import ChatCompletions as ChatCompletionsGenerated
@@ -200,7 +200,7 @@ def __init__(self, response: HttpResponse):
self._response = response
self._bytes_iterator: Iterator[bytes] = response.iter_bytes()

def __iter__(self):
def __iter__(self) -> Any:
return self

def __next__(self) -> "_models.StreamingChatCompletionsUpdate":
@@ -220,7 +220,7 @@ def _read_next_block(self) -> bool:
return True
return self._deserialize_and_add_to_queue(element)

def __exit__(self, exc_type, exc_val, exc_tb) -> None:
def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None: # type: ignore
self.close()

def close(self) -> None:
Expand All @@ -239,7 +239,7 @@ def __init__(self, response: AsyncHttpResponse):
self._response = response
self._bytes_iterator: AsyncIterator[bytes] = response.iter_bytes()

def __aiter__(self):
def __aiter__(self) -> Any:
return self

async def __anext__(self) -> "_models.StreamingChatCompletionsUpdate":
@@ -259,7 +259,7 @@ async def _read_next_block_async(self) -> bool:
return True
return self._deserialize_and_add_to_queue(element)

def __exit__(self, exc_type, exc_val, exc_tb) -> None:
def __exit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None: # type: ignore
asyncio.run(self.aclose())

async def aclose(self) -> None: