Diffstat (limited to 'doc/development/ai_features.md')

 doc/development/ai_features.md | 662 +--------------------------------------
 1 file changed, 7 insertions(+), 655 deletions(-)
diff --git a/doc/development/ai_features.md b/doc/development/ai_features.md
index ffe151f3876..a952d8f2804 100644
--- a/doc/development/ai_features.md
+++ b/doc/development/ai_features.md
@@ -1,659 +1,11 @@
---
-stage: AI-powered
-group: AI Framework
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: 'ai_features/index.md'
+remove_date: '2023-12-01'
---
-# AI features based on 3rd-party integrations
+This document was moved to [another location](ai_features/index.md).
-[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/117296) in GitLab 15.11.
-
-## Features
-
-- Async execution of long-running API requests
- - GraphQL Action starts the request
- - Background workers execute
- - GraphQL subscriptions deliver results back in real time
-- Abstraction for
- - OpenAI
- - Google Vertex AI
- - Anthropic
-- Rate Limiting
-- Circuit Breaker
-- Multi-Level feature flags
-- License checks on group level
-- Snowplow execution tracking
-- Tracking of tokens spent, reported to Prometheus
-- Configurable moderation checks of inputs
-- Automatic Markdown rendering of responses
-- Centralised group-level settings for experiment and third-party features
-- Experimental API endpoints for exploration of AI APIs by GitLab team members without the need for credentials
- - OpenAI
- - Google Vertex AI
- - Anthropic
-
-## Feature flags
-
-Apply the following two feature flags to any AI feature work:
-
-- A general flag that applies to all AI features.
-- A flag specific to that feature. The feature flag name [must be different](feature_flags/index.md#feature-flags-for-licensed-features) than the licensed feature name.
-
-See the [feature flag tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/405161) for the list of all feature flags and how to use them.
-
-## Implement a new AI action
-
-To implement a new AI action, connect to the preferred AI provider. You can connect using either the:
-
-- Experimental REST API.
-- Abstraction layer.
-
-All AI features are experimental.
-
-## Test AI features locally
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-1. Enable the required general feature flags:
-
- ```ruby
- Feature.enable(:ai_related_settings)
- Feature.enable(:openai_experimentation)
- Feature.enable(:tofa_experimentation_main_flag)
- Feature.enable(:anthropic_experimentation)
- ```
-
-1. Set up the GDK to [simulate SaaS](ee_features.md#simulate-a-saas-instance) and ensure the group you want to test has an Ultimate license
-1. Enable **Experiment features** and **Third-party AI services**:
- 1. Go to the group with the Ultimate license
- 1. Select **Group Settings** > **General** > **Permissions and group features**
- 1. Enable **Experiment features**
- 1. Enable **Third-party AI services**
-1. Enable the specific feature flag for the feature you want to test
-1. Set the required access token. To receive an access token:
- 1. For Vertex, follow the [instructions below](#configure-gcp-vertex-access).
- 1. For all other providers, like Anthropic or OpenAI, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
-
-### Set up the embedding database
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-For features that use the embedding database, additional setup is needed.
-
-1. Enable [pgvector](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/pgvector.md#enable-pgvector-in-the-gdk) in GDK
-1. Enable the embedding database in GDK
-
- ```shell
- gdk config set gitlab.rails.databases.embedding.enabled true
- ```
-
-1. Run `gdk reconfigure`
-1. Run database migrations to create the embedding database
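-
-   For example, from the `gitlab` directory this is typically (a sketch; adjust to your setup):
-
-   ```shell
-   # Migrates all configured databases, including the new `embedding` database
-   bundle exec rails db:migrate
-   ```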
-
-### Set up GitLab Duo Chat
-
-NOTE:
-Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
-
-1. [Enable Anthropic API features](#configure-anthropic-access).
-1. [Enable OpenAI support](#configure-openai-access).
-1. [Ensure the embedding database is configured](#set-up-the-embedding-database).
-1. Enable the feature-specific feature flags:
-
- ```ruby
- Feature.enable(:gitlab_duo)
- Feature.enable(:tanuki_bot)
- Feature.enable(:ai_redis_cache)
- ```
-
-1. Ensure that your current branch is up-to-date with `master`.
-1. To access the GitLab Duo Chat interface, in the lower-left corner of any page, select **Help** and **Ask GitLab Duo Chat**.
-
-#### Tips for local development
-
-1. When responses are taking too long to appear in the user interface, consider restarting Sidekiq by running `gdk restart rails-background-jobs`. If that doesn't work, try `gdk kill` and then `gdk start`.
-1. Alternatively, bypass Sidekiq entirely and run the chat service synchronously. This can help with debugging errors, as GraphQL errors are then available in the network inspector instead of the Sidekiq logs.
-
-```diff
-diff --git a/ee/app/services/llm/chat_service.rb b/ee/app/services/llm/chat_service.rb
-index 5fa7ae8a2bc1..5fe996ba0345 100644
---- a/ee/app/services/llm/chat_service.rb
-+++ b/ee/app/services/llm/chat_service.rb
-@@ -5,7 +5,7 @@ class ChatService < BaseService
- private
-
- def perform
-- worker_perform(user, resource, :chat, options)
-+ worker_perform(user, resource, :chat, options.merge(sync: true))
- end
-
- def valid?
-```
-
-### Working with GitLab Duo Chat
-
-Prompts are the most vital part of the GitLab Duo Chat system. Prompts are the instructions sent to the large language model to perform a certain task.
-
-The state of the prompts is the result of weeks of iteration. If you want to change any prompt in the current tool, you must put it behind a feature flag.
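-
-A minimal sketch of what putting a prompt change behind a flag can look like (the
-flag name `updated_zero_shot_prompt` and the constants here are hypothetical):
-
-```ruby
-# Hypothetical example: serve the new prompt wording only when the flag is enabled.
-def base_prompt
-  if Feature.enabled?(:updated_zero_shot_prompt)
-    NEW_PROMPT_TEMPLATE
-  else
-    CURRENT_PROMPT_TEMPLATE
-  end
-end
-```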
-
-If you have any new or updated prompts, ask members of the AI Framework team to review them; they have significant experience with prompt design.
-
-### Setup for GitLab documentation chat (legacy chat)
-
-To populate the embedding database for GitLab chat:
-
-1. Open a Rails console
-1. Run [this script](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/10588#note_1373586079) to populate the embedding database
-
-### Contributing to GitLab Duo Chat
-
-The Chat feature uses a [zero-shot agent](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb) that includes a system prompt explaining how the large language model should interpret the question and provide an
-answer. The system prompt defines available tools that can be used to gather
-information to answer the user's question.
-
-The zero-shot agent receives the user's question and decides which tools to use to gather information to answer it.
-It then makes a request to the large language model, which decides if it can answer directly or if it needs to use one
-of the defined tools.
-
-The tools each have their own prompt that provides instructions to the large language model on how to use that tool to
-gather information. The tools are designed to be self-sufficient and avoid multiple requests back and forth to
-the large language model.
-
-After the tools have gathered the required information, it is returned to the zero-shot agent, which asks the large language
-model if enough information has been gathered to provide the final answer to the user's question.
-
-#### Adding a new tool
-
-To add a new tool:
-
-1. Create files for the tool in the `ee/lib/gitlab/llm/chain/tools/` folder. Use existing tools like `issue_identifier` or
-`resource_reader` as a template.
-
-1. Write a class for the tool that includes:
-
- - Name and description of what the tool does
- - Example questions that would use this tool
- Instructions for the large language model on how to use the tool to gather information (that is, the main prompts the tool uses).
-
-1. Test and iterate on the prompt using RSpec tests that make real requests to the large language model.
- Prompts require trial and error; the non-deterministic nature of working with LLMs can be surprising.
- Anthropic provides a good [guide](https://docs.anthropic.com/claude/docs/introduction-to-prompt-design) to prompt design.
-
-1. Implement code in the tool to parse the response from the large language model and return it to the zero-shot agent.
-
-1. Add the new tool name to the `tools` array in `ee/lib/gitlab/llm/completions/chat.rb` so the zero-shot agent knows about it.
-
-1. Extend the test suite with questions that the new tool should respond to. Iterate on the prompts as needed.
-
-The key things to keep in mind are properly instructing the large language model through prompts and tool descriptions,
-keeping tools self-sufficient, and returning responses to the zero-shot agent. With some trial and error on prompts,
-adding new tools can expand the capabilities of the chat feature.
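-
-As a loose sketch of that shape (all names here are illustrative; mirror an existing
-tool such as `issue_identifier` for the real interface):
-
-```ruby
-# ee/lib/gitlab/llm/chain/tools/amazing_new_tool.rb (hypothetical)
-module Gitlab
-  module Llm
-    module Chain
-      module Tools
-        class AmazingNewTool
-          NAME = "AmazingNewTool"
-          DESCRIPTION = "Useful when you need to answer questions about X."
-          EXAMPLE_QUESTIONS = ["How do I do X?"].freeze
-
-          # Main prompt instructing the large language model how to use this
-          # tool to gather information.
-          PROMPT_TEMPLATE = <<~PROMPT
-            You are an assistant. Use the provided context to answer ...
-          PROMPT
-
-          # Parses the response from the large language model and returns it
-          # to the zero-shot agent.
-          def execute
-            # ...
-          end
-        end
-      end
-    end
-  end
-end
-```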
-
-Short [videos](https://www.youtube.com/playlist?list=PL05JrBw4t0KoOK-bm_bwfHaOv-1cveh8i) covering this topic are also available.
-
-### Debugging
-
-To gather more insights about the full request, use `Gitlab::Llm::Logger` to write debug logs.
-The default logging level in production is `INFO` and **must not** be used to log any data that could contain personally identifiable information.
-
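-A sketch of emitting a debug entry (assuming the logger follows the standard
-`Gitlab::JsonLogger` interface; check the `Gitlab::Llm::Logger` implementation for the exact API):
-
-```ruby
-# Hypothetical usage: structured debug logging for an AI request.
-logger = Gitlab::Llm::Logger.build
-logger.debug(message: "Performing AI request", ai_action: "chat")
-```
-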
-To follow the debugging messages related to the AI requests on the abstraction layer, you can use:
-
-```shell
-export LLM_DEBUG=1
-gdk start
-tail -f log/llm.log
-```
-
-### Configure GCP Vertex access
-
-To obtain a GCP service key for local development, follow the steps below:
-
-- Create a sandbox GCP environment by visiting [this page](https://about.gitlab.com/handbook/infrastructure-standards/#individual-environment) and following the instructions, or by requesting access to our existing group environment by using [this template](https://gitlab.com/gitlab-com/it/infra/issue-tracker/-/issues/new?issuable_template=gcp_group_account_iam_update_request).
-- In the GCP console, go to `IAM & Admin` > `Service Accounts` and click on the "Create new service account" button
-- Name the service account something specific to what you're using it for. Select Create and Continue. Under `Grant this service account access to project`, select the role `Vertex AI User`. Select `Continue` then `Done`
-- Select your new service account and `Manage keys` > `Add Key` > `Create new key`. This will download the **private** JSON credentials for your service account.
-- Open the Rails console. Update the settings to:
-
-```ruby
-Gitlab::CurrentSettings.update(vertex_ai_credentials: File.read('/YOUR_FILE.json'))
-
-# Note: These credential examples will not work locally for all models
-Gitlab::CurrentSettings.update(vertex_ai_host: "<root-domain>") # Example: us-central1-aiplatform.googleapis.com
-Gitlab::CurrentSettings.update(vertex_ai_project: "<project-id>") # Example: cloud-large-language-models
-```
-
-Internal team members can [use this snippet](https://gitlab.com/gitlab-com/gl-infra/production/-/snippets/2541742) for help configuring these endpoints.
-
-### Configure OpenAI access
-
-```ruby
-Gitlab::CurrentSettings.update(openai_api_key: "<open-ai-key>")
-```
-
-### Configure Anthropic access
-
-```ruby
-Feature.enable(:anthropic_experimentation)
-Gitlab::CurrentSettings.update!(anthropic_api_key: <insert API key>)
-```
-
-### Testing GitLab Duo Chat with predefined questions
-
-Because the success of answers to user questions in GitLab Duo Chat heavily depends on the toolchain and the prompts of each tool, even a minor change in a prompt or a tool can affect how some questions are processed. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following RSpec tests to validate answers to some predefined questions:
-
-```shell
-export OPENAI_API_KEY='<key>'
-export ANTHROPIC_API_KEY='<key>'
-REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
-```
-
-When you need to update the test questions that require documentation embeddings,
-make sure a new fixture is generated and committed together with the change.
-
-#### Populating embeddings and using embeddings fixture
-
-To seed your development database with the embeddings for GitLab Documentation,
-you may use the pre-generated embeddings and a Rake task.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:seed_pre_generated
-```
-
-The DBCleaner gem we use clears the database tables before each test runs.
-Instead of fully populating the `tanuki_bot_mvc` table, where we store embeddings for the documentation,
-we can add a few selected embeddings to the table from a pre-generated fixture.
-
-For instance, to test that the question "How can I reset my password" correctly
-retrieves the relevant embeddings and is answered, we can extract the top N closest embeddings
-for the question into a fixture and restore only that small number of embeddings quickly.
-To facilitate the extraction process, a Rake task has been written.
-You can add or remove the questions to be tested in the Rake task, then run the task to generate a new fixture.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:extract_embeddings
-```
-
-In the specs where you need to use the embeddings,
-use the RSpec config hook `:ai_embedding_fixtures` on a context.
-
-```ruby
-context 'when asking about how to use GitLab', :ai_embedding_fixtures do
- # ...examples
-end
-```
-
-## Experimental REST API
-
-Use the [experimental REST API endpoints](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/api/ai/experimentation) to quickly experiment and prototype AI features.
-
-The endpoints are:
-
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/embeddings`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/chat/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/anthropic/complete`
-- `https://gitlab.example.com/api/v4/ai/experimentation/tofa/chat`
-
-These endpoints are only for prototyping, not for rolling features out to customers.
-On production, the experimental endpoints are only available to GitLab team members. Use a
-[GitLab API token](../user/profile/personal_access_tokens.md) to authenticate.
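-
-For example, a request might look like this (the JSON payload is illustrative and
-mirrors the upstream provider's parameters):
-
-```shell
-curl --request POST \
-  --header "PRIVATE-TOKEN: <your_access_token>" \
-  --header "Content-Type: application/json" \
-  --data '{"prompt": "Say hello", "model": "text-davinci-003"}' \
-  "https://gitlab.example.com/api/v4/ai/experimentation/openai/completions"
-```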
-
-## Abstraction layer
-
-### GraphQL API
-
-To connect to the AI provider API using the Abstraction Layer, use an extendable GraphQL API called
-[`aiAction`](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/app/graphql/mutations/ai/action.rb).
-The `input` accepts key/value pairs, where the `key` is the action that needs to be performed.
-We only allow one AI action per mutation request.
-
-Example of a mutation:
-
-```graphql
-mutation {
- aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
- clientMutationId
- }
-}
-```
-
-As an example, assume we want to build an "explain code" action. To do this, we extend the `input` with a new key,
-`explainCode`. The mutation would look like this:
-
-```graphql
-mutation {
- aiAction(input: {explainCode: {resourceId: "gid://gitlab/MergeRequest/52", code: "foo() { console.log()" }}) {
- clientMutationId
- }
-}
-```
-
-The GraphQL API then uses the [OpenAI Client](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/open_ai/client.rb)
-to send the response.
-
-Remember that other clients are available; you do not have to use OpenAI.
-
-#### How to receive a response
-
-As the OpenAI API requests are handled in a background job, we do not keep the request alive and
-the response is sent through the `aiCompletionResponse` subscription:
-
-```graphql
-subscription aiCompletionResponse($userId: UserID, $resourceId: AiModelID!) {
- aiCompletionResponse(userId: $userId, resourceId: $resourceId) {
- responseBody
- errors
- }
-}
-```
-
-WARNING:
-You should only subscribe to the subscription after the mutation is sent. If multiple subscriptions are active on the same page, they currently all receive updates, because our identifier is the user and the resource. To mitigate this, you can use [`skip()`](https://apollo.vuejs.org/guide/apollo/subscriptions.html#skipping-the-subscription) to subscribe only once the mutation is sent. To prevent this problem in the future, we are implementing a [request identifier](https://gitlab.com/gitlab-org/gitlab/-/issues/408196).
-
-#### Current abstraction layer flow
-
-The following graph uses OpenAI as an example. You can use different providers.
-
-```mermaid
-flowchart TD
-A[GitLab frontend] -->B[AiAction GraphQL mutation]
-B --> C[Llm::ExecuteMethodService]
-C --> D[One of services, for example: Llm::GenerateSummaryService]
-D -->|scheduled| E[AI worker: Llm::CompletionWorker]
-E -->F[::Gitlab::Llm::Completions::Factory]
-F -->G[`::Gitlab::Llm::OpenAi::Completions::...` class using `::Gitlab::Llm::OpenAi::Templates::...` class]
-G -->|calling| H[Gitlab::Llm::OpenAi::Client]
-H --> |response| I[::Gitlab::Llm::OpenAi::ResponseService]
-I --> J[GraphqlTriggers.ai_completion_response]
-J --> K[::GitlabSchema.subscriptions.trigger]
-```
-
-## CircuitBreaker
-
-The CircuitBreaker concern is a reusable module that you can include in any class that needs to run code with circuit breaker protection. The concern provides a `run_with_circuit` method that wraps a code block with circuit breaker functionality, which helps prevent cascading failures and improves system resilience. For more information about the circuit breaker pattern, see:
-
-- [What is Circuit breaker](https://martinfowler.com/bliki/CircuitBreaker.html).
-- [The Hystrix documentation on CircuitBreaker](https://github.com/Netflix/Hystrix/wiki/How-it-Works#circuit-breaker).
-
-### Use CircuitBreaker
-
-To use the CircuitBreaker concern, you need to include it in a class. For example:
-
-```ruby
-class MyService
- include Gitlab::Llm::Concerns::CircuitBreaker
-
- def call_external_service
- run_with_circuit do
- # Code that interacts with external service goes here
-
- raise InternalServerError
- end
- end
-end
-```
-
-The `call_external_service` method is an example method that interacts with an external service.
-By wrapping the code that interacts with the external service with `run_with_circuit`, the method is executed within the circuit breaker.
-The circuit breaker is created and configured by the `circuit` method, which is called automatically when the `CircuitBreaker` module is included.
-The code block should raise `InternalServerError`, which is counted towards the error threshold when raised during the execution of the block.
-
-The circuit breaker tracks the number of errors and the rate of requests,
-and opens the circuit if it reaches the configured error threshold or volume threshold.
-If the circuit is open, subsequent requests fail fast without executing the code block, and the circuit breaker periodically allows a small number of requests through to test the service's availability before closing the circuit again.
-
-### Configuration
-
-The circuit breaker is configured with two constants which control the number of errors and requests at which the circuit will open:
-
-- `ERROR_THRESHOLD`
-- `VOLUME_THRESHOLD`
-
-You can adjust these values as needed for the specific service and usage pattern.
-`InternalServerError` is the exception class that is counted towards the error threshold and triggers the circuit breaker when raised by the code that interacts with the external service.
-
-NOTE:
-The `CircuitBreaker` module depends on the `Circuitbox` gem to provide the circuit breaker implementation. By default, the service name is inferred from the class name where the concern module is included. Override the `service_name` method if the name needs to be different.
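-
-For example, a hypothetical override:
-
-```ruby
-# Overrides the circuit name inferred from the including class.
-def service_name
-  'my_external_ai_service'
-end
-```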
-
-### Testing
-
-To test code that uses the `CircuitBreaker` concern, you can use `RSpec` shared examples and pass the `service` and `subject` variables:
-
-```ruby
-it_behaves_like 'has circuit breaker' do
- let(:service) { dummy_class.new }
- let(:subject) { service.dummy_method }
-end
-```
-
-## How to implement a new action
-
-### Register a new method
-
-Go to `Llm::ExecuteMethodService` and register the new method, mapping it to the new service class you will create:
-
-```ruby
-class ExecuteMethodService < BaseService
- METHODS = {
- # ...
- amazing_new_ai_feature: Llm::AmazingNewAiFeatureService
-  }.freeze
-
-  # ...
-end
-```
-
-### Create a Service
-
-1. Create a new service under `ee/app/services/llm/` and inherit it from the `BaseService`.
-1. The `resource` is the object we want to act on. It can be any object that includes the `Ai::Model` concern. For example, it could be a `Project`, `MergeRequest`, or `Issue`.
-
-```ruby
-# ee/app/services/llm/amazing_new_ai_feature_service.rb
-
-module Llm
- class AmazingNewAiFeatureService < BaseService
- private
-
- def perform
- ::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :amazing_new_ai_feature)
- success
- end
-
- def valid?
- super && Ability.allowed?(user, :amazing_new_ai_feature, resource)
- end
- end
-end
-```
-
-### Authorization
-
-We recommend using [policies](policies.md) to deal with authorization for a feature. Currently, we need to make sure to cover the following checks:
-
-1. General AI feature flag is enabled
-1. Feature specific feature flag is enabled
-1. The namespace has the required license for the feature
-1. User is a member of the group/project
-1. `experiment_features_enabled` and `third_party_ai_features_enabled` flags are set on the `Namespace`
-
-For our example, we need to implement the `allowed?(:amazing_new_ai_feature)` call. As an example, you can look at the [Issue Policy for the summarize comments feature](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb). In our example case, we want to implement the feature for Issues as well:
-
-```ruby
-# ee/app/policies/ee/issue_policy.rb
-
-module EE
- module IssuePolicy
- extend ActiveSupport::Concern
- prepended do
- with_scope :subject
- condition(:ai_available) do
- ::Feature.enabled?(:openai_experimentation)
- end
-
- with_scope :subject
- condition(:amazing_new_ai_feature_enabled) do
- ::Feature.enabled?(:amazing_new_ai_feature, subject_container) &&
- subject_container.licensed_feature_available?(:amazing_new_ai_feature)
- end
-
- rule do
- ai_available & amazing_new_ai_feature_enabled & is_project_member
- end.enable :amazing_new_ai_feature
- end
- end
-end
-```
-
-### Pairing requests with responses
-
-Because multiple users' requests can be processed in parallel, when receiving responses,
-it can be difficult to pair a response with its original request. The `requestId`
-field can be used for this purpose, because both the request and the response are guaranteed
-to have the same `requestId` UUID.
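-
-For example, a client can keep the `requestId` returned by the mutation and match it
-against incoming subscription messages (a sketch, assuming the mutation payload exposes
-`requestId`):
-
-```graphql
-mutation {
-  aiAction(input: {summarizeComments: {resourceId: "gid://gitlab/Issue/52"}}) {
-    requestId
-    errors
-  }
-}
-```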
-
-### Caching
-
-AI requests and responses can be cached. The cached conversation is used to
-display the user's interaction with AI features. In the current implementation, this cache
-is not used to skip consecutive calls to the AI service when a user repeats
-their requests.
-
-```graphql
-query {
- aiMessages {
- nodes {
- id
- requestId
- content
- role
- errors
- timestamp
- }
- }
-}
-```
-
-This cache is especially useful for chat functionality. For other services,
-caching is disabled. (It can be enabled for a service by using the `cache_response: true`
-option.)
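-
-A hypothetical sketch of opting a service into caching (`:my_feature` is illustrative;
-check where the option is read before relying on this):
-
-```ruby
-# Illustrative only: pass the caching option through to the completion worker.
-options = { cache_response: true }
-::Llm::CompletionWorker.perform_async(user.id, resource.id, resource.class.name, :my_feature, options)
-```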
-
-Caching has the following limitations:
-
-- Messages are stored in a Redis stream.
-- There is a single stream of messages per user. This means that all services
- currently share the same cache. If needed, this could be extended to multiple
- streams per user (after checking with the infrastructure team that Redis can handle
- the estimated amount of messages).
-- Only the last 50 messages (requests + responses) are kept.
-- The stream expires 3 days after the last message is added.
-- Users can access only their own messages. There is no authorization at the caching
-  level; any authorization (when a message is accessed by someone other than the
-  current user) is expected at the service layer.
-
-### Check if feature is allowed for this resource based on namespace settings
-
-There are two settings on the root namespace level that restrict the use of AI features:
-
-- `experiment_features_enabled`
-- `third_party_ai_features_enabled`
-
-To check if that feature is allowed for a given namespace, call:
-
-```ruby
-Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
-```
-
-Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate
-between experimental and beta features.
-
-This way we are ready for the following different cases:
-
-- If the feature is not in any array, the check will return `true`. For example, the feature was moved to GA and does not use a third-party setting.
-- If the feature is in GA but uses a third-party setting, the class returns the proper answer based on the namespace's third-party setting.
-
-To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
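-
-For example, promoting a hypothetical feature from experimental to beta is a matter of
-moving its symbol between the arrays (a sketch; see the class for the real structure):
-
-```ruby
-# ee/lib/gitlab/llm/stage_check.rb (illustrative)
-EXPERIMENTAL_FEATURES = [
-  :summarize_comments
-].freeze
-
-BETA_FEATURES = [
-  :amazing_new_ai_feature # moved here from EXPERIMENTAL_FEATURES
-].freeze
-```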
-
-### Implement calls to AI APIs and the prompts
-
-The `CompletionWorker` calls the `Completions::Factory`, which initializes the service and executes the actual call to the API.
-In our example, we will use OpenAI and implement two new classes:
-
-```ruby
-# /ee/lib/gitlab/llm/open_ai/completions/amazing_new_ai_feature.rb
-
-module Gitlab
- module Llm
- module OpenAi
- module Completions
- class AmazingNewAiFeature
- def initialize(ai_prompt_class)
- @ai_prompt_class = ai_prompt_class
- end
-
- def execute(user, issue, options)
- options = ai_prompt_class.get_options(options[:messages])
-
- ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)
-
- ::Gitlab::Llm::OpenAi::ResponseService.new(user, issue, ai_response, options: {}).execute(
- Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
- )
- end
-
- private
-
- attr_reader :ai_prompt_class
- end
- end
- end
- end
-end
-```
-
-```ruby
-# /ee/lib/gitlab/llm/open_ai/templates/amazing_new_ai_feature.rb
-
-module Gitlab
- module Llm
- module OpenAi
- module Templates
- class AmazingNewAiFeature
- TEMPERATURE = 0.3
-
- def self.get_options(messages)
- system_content = <<-TEMPLATE
- You are an assistant that writes code for the following input:
- """
- TEMPLATE
-
- {
- messages: [
- { role: "system", content: system_content },
- { role: "user", content: messages },
- ],
- temperature: TEMPERATURE
- }
- end
- end
- end
- end
- end
-end
-```
-
-Because we support multiple AI providers, you may also use those providers for the same example:
-
-```ruby
-Gitlab::Llm::VertexAi::Client.new(user)
-Gitlab::Llm::Anthropic::Client.new(user)
-```
-
-### Add Ai Action to GraphQL
-
-TODO
-
-## Security
-
-Refer to the [secure coding guidelines for Artificial Intelligence (AI) features](secure_coding_guidelines.md#artificial-intelligence-ai-features).
+<!-- This redirect file can be deleted after <2023-12-01>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->