author     GitLab Bot <gitlab-bot@gitlab.com>      2023-10-24 18:12:41 +0300
committer  GitLab Bot <gitlab-bot@gitlab.com>      2023-10-24 18:12:41 +0300
commit     40a4f37126bb1a1dd6b6f4b3c0ebb414a3e3908a (patch)
tree       ff6b0774cbd1ab71b69d9e9bf9fa0e0b3d1ad799 /doc/development
parent     a19e3ec8e8545d5a6b275bab3e5ea8b0cc707449 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/development')

-rw-r--r--  doc/development/ai_features/duo_chat.md   21
-rw-r--r--  doc/development/code_review.md             2
-rw-r--r--  doc/development/fe_guide/graphql.md       37

3 files changed, 17 insertions, 43 deletions
diff --git a/doc/development/ai_features/duo_chat.md b/doc/development/ai_features/duo_chat.md
index 103801a5e0b..2a624d4b830 100644
--- a/doc/development/ai_features/duo_chat.md
+++ b/doc/development/ai_features/duo_chat.md
@@ -86,14 +86,25 @@ gdk start
 tail -f log/llm.log
 ```
 
-## Testing GitLab Duo Chat with predefined questions
+## Testing GitLab Duo Chat against real LLMs
 
-Because success of answers to user questions in GitLab Duo Chat heavily depends on toolchain and prompts of each tool, it's common that even a minor change in a prompt or a tool impacts processing of some questions. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following rspecs to validate answers to some predefined questions:
+Because success of answers to user questions in GitLab Duo Chat heavily depends
+on toolchain and prompts of each tool, it's common that even a minor change in a
+prompt or a tool impacts processing of some questions.
+
+To make sure that a change in the toolchain doesn't break existing
+functionality, you can use the following RSpec tests to validate answers to some
+predefined questions when using real LLMs:
 
 ```ruby
-export OPENAI_API_KEY='<key>'
-export ANTHROPIC_API_KEY='<key>'
-REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
+export OPENAI_EMBEDDINGS='true' # if using OpenAI embeddings
+export VERTEX_AI_EMBEDDINGS='true' # if using Vertex embeddings
+export ANTHROPIC_API_KEY='<key>' # can use dev value of Gitlab::CurrentSettings.anthropic_api_key
+export OPENAI_API_KEY='<key>' # can use dev value of Gitlab::CurrentSettings.openai_api_key
+export VERTEX_AI_CREDENTIALS='<vertex-ai-credentials>' # can set as dev value of Gitlab::CurrentSettings.vertex_ai_credentials
+export VERTEX_AI_PROJECT='<vertex-project-name>' # can use dev value of Gitlab::CurrentSettings.vertex_ai_project
+
+REAL_AI_REQUEST=1 bundle exec rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_real_requests_spec.rb
 ```
 
 When you need to update the test questions that require documentation embeddings,
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 8e6ea3d68e9..e5f79a55a06 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -764,7 +764,7 @@ A merge request may benefit from being considered a customer critical priority b
 
 Properties of customer critical merge requests:
 
-- The [VP of Development](https://about.gitlab.com/job-families/engineering/development/management/vp/) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the approver for deciding if a merge request qualifies as customer critical. Also, if two of his direct reports approve, that can also serve as approval.
+- A senior director or higher in Development must approve that a merge request qualifies as customer-critical. Alternatively, if two of their direct reports approve, that can also serve as approval.
 - The DRI applies the `customer-critical-merge-request` label to the merge request.
 - It is required that the reviewers and maintainers involved with a customer critical merge request are engaged as soon as this decision is made.
 - It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 99070f3d31c..5807c9c5621 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -974,28 +974,6 @@ const data = store.readQuery({
 
 Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/caching/advanced-topics/#the-connection-directive).
 
-### Managing performance
-
-The Apollo client batches queries by default. Given 3 deferred queries,
-Apollo groups them into one request, sends the single request to the server, and
-responds after all 3 queries have completed.
-
-If you need to have queries sent as individual requests, additional context can be provided
-to tell Apollo to do this.
-
-```javascript
-export default {
-  apollo: {
-    user: {
-      query: QUERY_IMPORT,
-      context: {
-        isSingleRequest: true,
-      }
-    }
-  },
-};
-```
-
 #### Polling and Performance
 
 While the Apollo client has support for simple polling, for performance reasons, our [ETag-based caching](../polling.md) is preferred to hitting the database each time.
@@ -1081,21 +1059,6 @@ await this.$apollo.mutate({
 });
 ```
 
-ETags depend on the request being a `GET` instead of GraphQL's usual `POST`. Our default link library does not support `GET` requests, so we must let our default Apollo client know to use a different library. Keep in mind, this means your app cannot batch queries.
-
-```javascript
-/* componentMountIndex.js */
-
-const apolloProvider = new VueApollo({
-  defaultClient: createDefaultClient(
-    {},
-    {
-      useGet: true,
-    },
-  ),
-});
-```
-
 Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
 
 ```javascript
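The graphql.md hunk above ends at the guide's visibility check, whose code block is truncated in this excerpt. As a rough, hypothetical sketch of that idea (not GitLab's actual implementation — the `user` query name and the 10-second interval are assumptions), a component can pause a vue-apollo smart query's polling while the browser tab is hidden and resume it when it becomes visible again:

```javascript
// Pure helper so the polling decision is testable without a DOM.
function shouldPoll(visibilityState) {
  return visibilityState === 'visible';
}

const POLL_INTERVAL_MS = 10000; // hypothetical interval

// Vue mixin wiring the helper to the Page Visibility API.
// startPolling/stopPolling are vue-apollo smart-query methods;
// `$apollo.queries.user` assumes a smart query named `user`.
const pollingVisibilityMixin = {
  mounted() {
    this.onVisibilityChange = () => {
      if (shouldPoll(document.visibilityState)) {
        this.$apollo.queries.user.startPolling(POLL_INTERVAL_MS);
      } else {
        this.$apollo.queries.user.stopPolling();
      }
    };
    document.addEventListener('visibilitychange', this.onVisibilityChange);
  },
  beforeDestroy() {
    document.removeEventListener('visibilitychange', this.onVisibilityChange);
  },
};
```

A component would opt in via `mixins: [pollingVisibilityMixin]`; keeping the `shouldPoll` decision separate from the event wiring makes the behavior unit-testable without a browser environment.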