author    Patrick Steinhardt <psteinhardt@gitlab.com>    2023-01-04 14:54:32 +0300
committer Patrick Steinhardt <psteinhardt@gitlab.com>    2023-01-06 16:51:20 +0300
commit    b6f5e9537dbc9b43c0130e8372408e4aa7bb5355 (patch)
tree      d3ca37b4dc8d31458f280f85c23606d373d49b51
parent    43dec12d09afae20b7f0bb3cab4b3cfd80ffabfd (diff)
limithandler: Fix flaky test caused by stream being closed async
When rate-limiting streams, the limiting only kicks in upon receiving
the first Protobuf message so that we can derive the limiting key based
on some parameters. This means that the client can already start sending
requests even though the server side has not yet decided whether it
wants to allow the RPC call or rate-limit it.
This causes one of our tests for a full-duplex call to be flaky, as we
sometimes see an `io.EOF` when sending requests. The reason is that we
send multiple requests to the server: when it receives the first request
and closes the stream before we have sent all 10 of our requests, the
client will indeed get an early `io.EOF`. This is entirely expected
though, given the async nature of the rate-limiting for full-duplex
calls.
Fix the test so that it gracefully handles an early `io.EOF`.
-rw-r--r--    internal/middleware/limithandler/middleware_test.go | 15 +++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/internal/middleware/limithandler/middleware_test.go b/internal/middleware/limithandler/middleware_test.go
index c78cfa9b2..3fc545c76 100644
--- a/internal/middleware/limithandler/middleware_test.go
+++ b/internal/middleware/limithandler/middleware_test.go
@@ -201,7 +201,20 @@ func TestStreamLimitHandler(t *testing.T) {
 	// id, but subsequent requests in a stream, even with the same
 	// id, should bypass the concurrency limiter
 	for i := 0; i < 10; i++ {
-		require.NoError(t, stream.Send(&grpc_testing.StreamingOutputCallRequest{}))
+		// Rate-limiting the stream is happening asynchronously when
+		// the server-side receives the first message. When the rate
+		// limiter then decides that the RPC call must be limited,
+		// it will close the stream.
+		//
+		// It may thus happen that we already see an EOF here in
+		// case the closed stream is received on the client-side
+		// before we have sent all requests. We thus need to special
+		// case this specific error code and will just stop sending
+		// requests in that case.
+		if err := stream.Send(&grpc_testing.StreamingOutputCallRequest{}); err != nil {
+			require.Equal(t, io.EOF, err)
+			break
+		}
 	}
 	require.NoError(t, stream.CloseSend())