gitlab.xiph.org/xiph/opus.git
author     Kat Walsh <kat@wikimedia.org>           2011-08-30 04:47:33 +0400
committer  Jean-Marc Valin <jmvalin@jmvalin.ca>    2011-08-31 08:40:23 +0400
commit     b4f308a2f9b4be46e0993fb765f428d3f1d8ffa3 (patch)
tree       f2bd76ef708c250940935bbc9c1f9de0a9b86fe5
parent     d6335abedc33283bbebf04db166b312771f64da5 (diff)

Further copyediting of draft.

 doc/draft-ietf-codec-opus.xml | 72 ++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 36 deletions(-)
diff --git a/doc/draft-ietf-codec-opus.xml b/doc/draft-ietf-codec-opus.xml
index 6f7591e2..7fa5984b 100644
--- a/doc/draft-ietf-codec-opus.xml
+++ b/doc/draft-ietf-codec-opus.xml
@@ -1638,7 +1638,7 @@ It is omitted when there are no stereo weights, i.e., unless the SILK frame
for an LBRR frame when the corresponding LBRR flags indicate the side channel
is present.
When present, the decoder reads a single value using the PDF in
- <xref target="silk_mid_only"/>, as implemented in
+ <xref target="silk_mid_only_pdf"/>, as implemented in
silk_stereo_decode_mid_only() (silk_decode_stereo_pred.c).
If the flag is set, then there is no corresponding SILK frame for the side
channel, the entire decoding process for the side channel is skipped, and
@@ -4452,7 +4452,7 @@ used in three different ways, to encode:
<t>
The range encoder maintains an internal state vector composed of the
-four-tuple (low,rng,rem,ext), representing the low end of the current
+four-tuple (low,rng,rem,ext) representing the low end of the current
range, the size of the current range, a single buffered output octet,
and a count of additional carry-propagating output octets. Both rng
and low are 32-bit unsigned integer values, rem is an octet value or
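For illustration, the state described above can be sketched in C (field names follow the four-tuple in the text; the reference implementation's actual structure may differ):

    #include <stdint.h>

    /* Range coder state: low end of the current range, size of the current
     * range, one buffered output octet (or -1 when none is buffered yet),
     * and a count of additional carry-propagating output octets. */
    typedef struct {
        uint32_t low;
        uint32_t rng;
        int      rem;
        uint32_t ext;
    } range_coder_state;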
@@ -4526,7 +4526,7 @@ fl=sum(f(i),i<k), fh=fl+f(i), and ft=sum(f(i)).
The raw bits are packed at the end of the packet, starting by storing the
least significant bit of the value to be packed in the least significant bit
of the last byte, filling up to the most significant bit in
- the last byte, and the continuing in the least significant bit of the
+ the last byte, and then continuing in the least significant bit of the
penultimate byte, and so on.
This packing may continue into the last byte output by the range coder,
though the format should render it impossible to overwrite any set bit
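This backward packing can be sketched as follows (a hypothetical helper, not the reference API; it assumes buf holds the whole packet with the tail bytes initially zero):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        unsigned char *buf;   /* whole packet buffer */
        size_t         size;  /* packet size in bytes */
        size_t         used;  /* raw bits already packed at the tail */
    } raw_bit_writer;

    /* Append `bits` raw bits of `value`, least significant bit first,
     * starting at the least significant free bit of the last byte and
     * continuing backwards into earlier bytes. */
    static void push_raw_bits(raw_bit_writer *w, uint32_t value, unsigned bits)
    {
        for (unsigned i = 0; i < bits; i++) {
            size_t bitpos = w->used + i;              /* 0 = LSB of last byte */
            size_t byte   = w->size - 1 - bitpos / 8; /* move toward the front */
            w->buf[byte] |= ((value >> i) & 1u) << (bitpos & 7);
        }
        w->used += bits;
    }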
@@ -4662,7 +4662,7 @@ fl=sum(f(i),i<k), fh=fl+f(i), and ft=sum(f(i)).
<section title='Voice Activity Detection'>
<t>
- The input signal is processed by a VAD (Voice Activity Detector) to produce a measure of voice activity, and also spectral tilt and signal-to-noise estimates, for each frame. The VAD uses a sequence of half-band filterbanks to split the signal in four subbands: 0 - Fs/16, Fs/16 - Fs/8, Fs/8 - Fs/4, and Fs/4 - Fs/2, where Fs is the sampling frequency, that is, 8, 12, 16, or 24&nbsp;kHz. The lowest subband, from 0 - Fs/16 is high-pass filtered with a first-order MA (Moving Average) filter (with transfer function H(z) = 1-z**(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and an SNR (Signal-to-Noise Ratio) value is computed as the logarithm of the ratio of energy to noise level. Using these intermediate variables, the following parameters are calculated for use in other SILK modules:
+ The input signal is processed by a VAD (Voice Activity Detector) to produce a measure of voice activity, spectral tilt, and signal-to-noise estimates for each frame. The VAD uses a sequence of half-band filterbanks to split the signal into four subbands: 0 - Fs/16, Fs/16 - Fs/8, Fs/8 - Fs/4, and Fs/4 - Fs/2, where Fs is the sampling frequency (8, 12, 16, or 24&nbsp;kHz). The lowest subband, from 0 - Fs/16, is high-pass filtered with a first-order MA (Moving Average) filter (with transfer function H(z) = 1-z**(-1)) to reduce the energy at the lowest frequencies. For each frame, the signal energy per subband is computed. In each subband, a noise level estimator tracks the background noise level and an SNR (Signal-to-Noise Ratio) value is computed as the logarithm of the ratio of energy to noise level. Using these intermediate variables, the following parameters are calculated for use in other SILK modules:
<list style="symbols">
<t>
Average SNR. The average of the subband SNR values.
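Two of the operations above are simple enough to sketch directly (illustrative C, not SILK's code; the log base here is arbitrary):

    #include <math.h>

    /* First-order MA high-pass for the lowest subband: H(z) = 1 - z**-1. */
    static void highpass_ma(const float *in, float *out, int n)
    {
        float prev = 0.0f;
        for (int i = 0; i < n; i++) {
            out[i] = in[i] - prev;   /* y[n] = x[n] - x[n-1] */
            prev   = in[i];
        }
    }

    /* Per-subband SNR: logarithm of the ratio of energy to noise level. */
    static float subband_snr(float energy, float noise_level)
    {
        return log2f(energy / noise_level);
    }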
@@ -4734,7 +4734,7 @@ fl=sum(f(i),i<k), fh=fl+f(i), and ft=sum(f(i)).
<t>In the first stage, the whitened signal is downsampled to 4&nbsp;kHz (from 8&nbsp;kHz) and the current frame is correlated to a signal delayed by a range of lags, starting from a shortest lag corresponding to 500&nbsp;Hz, to a longest lag corresponding to 56&nbsp;Hz.</t>
<t>
- The second stage operates on a 8&nbsp;kHz signal ( downsampled from 12, 16, or 24&nbsp;kHz ) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on:
+ The second stage operates on an 8&nbsp;kHz signal (downsampled from 12, 16, or 24&nbsp;kHz) and measures time correlations only near the lags corresponding to those that had sufficiently high correlations in the first stage. The resulting correlations are adjusted for a small bias towards short lags to avoid ending up with a multiple of the true pitch lag. The highest adjusted correlation is compared to a threshold depending on:
<list style="symbols">
<t>
Whether the previous frame was classified as voiced
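The first stage can be sketched as follows (illustrative C; the lag bounds follow the 500 Hz and 56 Hz limits above, so at the 4 kHz rate min_lag = 4000/500 = 8 and max_lag = 4000/56, about 71 samples):

    #include <math.h>

    /* Correlation of the frame with a delayed copy of itself, normalized by
     * the energy of the delayed signal. Evaluated for each candidate lag. */
    static float lag_correlation(const float *x, int n, int lag)
    {
        float num = 0.0f, den = 1e-9f;
        for (int i = lag; i < n; i++) {
            num += x[i] * x[i - lag];
            den += x[i - lag] * x[i - lag];
        }
        return num / sqrtf(den);
    }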
@@ -4837,7 +4837,7 @@ Wsyn(z) = (1 - \ (a_syn(k) * z )*(1 - z * \ b_syn(k) * z ).
</figure>
</t>
<t>
- All noise shaping parameters are computed and applied per subframe of 5 milliseconds. First, an LPC analysis is performed on a windowed signal block of 15 milliseconds. The signal block has a look-ahead of 5 milliseconds relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of 16 for best quality or 12 in low complexity operation. The quantization gain is found as the square-root of the residual energy from the LPC analysis, multiplied by a value inversely proportional to the coding quality control parameter and the pitch correlation.
+ All noise shaping parameters are computed and applied per subframe of 5&nbsp;ms. First, an LPC analysis is performed on a windowed signal block of 15&nbsp;ms. The signal block has a look-ahead of 5&nbsp;ms relative to the current subframe, and the window is an asymmetric sine window. The LPC analysis is done with the autocorrelation method, with an order of 16 for best quality or 12 in low complexity operation. The quantization gain is found by taking the square root of the residual energy from the LPC analysis and multiplying it by a value inversely proportional to the coding quality control parameter and the pitch correlation.
</t>
<t>
Next we find the two sets of short-term noise shaping coefficients a_ana(k) and a_syn(k), by applying different amounts of bandwidth expansion to the coefficients found in the LPC analysis. This bandwidth expansion moves the roots of the LPC polynomial towards the origin, using the formulas
@@ -4852,7 +4852,7 @@ Wsyn(z) = (1 - \ (a_syn(k) * z )*(1 - z * \ b_syn(k) * z ).
]]>
</artwork>
</figure>
- where a(k) is the k'th LPC coefficient and the bandwidth expansion factors g_ana and g_syn are calculated as
+ where a(k) is the k'th LPC coefficient, and the bandwidth expansion factors g_ana and g_syn are calculated as
<figure align="center">
<artwork align="center">
<![CDATA[
@@ -4893,7 +4893,7 @@ c_tilt = 0.04 + 0.06 * C
for voiced frames, where C again is the coding quality control parameter and is between 0 and 1.
</t>
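The bandwidth expansion above is a single pass over the coefficients; a sketch (the formula's k is 1-based, the array here 0-based):

    /* Scale the k'th LPC coefficient by g**k, moving the roots of the LPC
     * polynomial toward the origin. Run once with g = g_ana and once with
     * g = g_syn to obtain a_ana(k) and a_syn(k). */
    static void bandwidth_expand(const float *a, float *a_exp, int order, float g)
    {
        float gk = g;                  /* g**1 for the first coefficient */
        for (int k = 0; k < order; k++) {
            a_exp[k] = a[k] * gk;
            gk *= g;
        }
    }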
<t>
- The adjustment gain G serves to correct any level mismatch between original and decoded signal that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square-root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is by first computing the reflection coefficients from the LPC coefficients through the step-down algorithm, and extracting the prediction gain from the reflection coefficients as
+ The adjustment gain G serves to correct any level mismatch between the original and decoded signals that might arise from the noise shaping and de-emphasis. This gain is computed as the ratio of the prediction gain of the short-term analysis and synthesis filter coefficients. The prediction gain of an LPC synthesis filter is the square root of the output energy when the filter is excited by a unit-energy impulse on the input. An efficient way to compute the prediction gain is by first computing the reflection coefficients from the LPC coefficients through the step-down algorithm, and extracting the prediction gain from the reflection coefficients as
<figure align="center">
<artwork align="center">
<![CDATA[
@@ -4914,35 +4914,35 @@ c_tilt = 0.04 + 0.06 * C
<section title='Prefilter'>
<t>
- In the prefilter the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis, see <xref target='noise_shaping_analysis_overview_section' />. By applying only the noise shaping analysis filter to the input signal, it provides the input to the noise shaping quantizer.
+ In the prefilter the input signal is filtered using the spectral valley de-emphasis filter coefficients from the noise shaping analysis (see <xref target='noise_shaping_analysis_overview_section'/>). By applying only the noise shaping analysis filter to the input signal, it provides the input to the noise shaping quantizer.
</t>
</section>
<section title='Prediction Analysis' anchor='pred_ana_overview_section'>
<t>
- The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech are described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. Inputs to this function include the pre-whitened signal from the pitch estimator, see <xref target='pitch_estimator_overview_section' />.
+ The prediction analysis is performed in one of two ways depending on how the pitch estimator classified the frame. The processing for voiced and unvoiced speech is described in <xref target='pred_ana_voiced_overview_section' /> and <xref target='pred_ana_unvoiced_overview_section' />, respectively. Inputs to this function include the pre-whitened signal from the pitch estimator (see <xref target='pitch_estimator_overview_section'/>).
</t>
<section title='Voiced Speech' anchor='pred_ana_voiced_overview_section'>
<t>
- For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bitrate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth order LTP filter for each of four subframes. The LTP coefficients are used to find an LTP residual signal with the simulated output signal as input to obtain better modeling of the output signal. This LTP residual signal is the input to an LPC analysis where the LPCs are estimated using Burgs method, such that the residual energy is minimized. The estimated LPCs are converted to a Line Spectral Frequency (LSF) vector, and quantized as described in <xref target='lsf_quantizer_overview_section' />. After quantization, the quantized LSF vector is converted to LPC coefficients and hence by using these quantized coefficients the encoder remains fully synchronized with the decoder. The LTP coefficients are quantized using a method described in <xref target='ltp_quantizer_overview_section' />. The quantized LPC and LTP coefficients are now used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
+ For a frame of voiced speech the pitch pulses will remain dominant in the pre-whitened input signal. Further whitening is desirable as it leads to higher quality at the same available bitrate. To achieve this, a Long-Term Prediction (LTP) analysis is carried out to estimate the coefficients of a fifth-order LTP filter for each of four subframes. The LTP coefficients are used to find an LTP residual signal with the simulated output signal as input to obtain better modeling of the output signal. This LTP residual signal is the input to an LPC analysis where the LPCs are estimated using Burg's method, such that the residual energy is minimized. The estimated LPCs are converted to a Line Spectral Frequency (LSF) vector and quantized as described in <xref target='lsf_quantizer_overview_section' />. After quantization, the quantized LSF vector is converted to LPC coefficients. By using these quantized coefficients, the encoder remains fully synchronized with the decoder. The LTP coefficients are quantized using a method described in <xref target='ltp_quantizer_overview_section' />. The quantized LPC and LTP coefficients are then used to filter the high-pass filtered input signal and measure residual energy for each of the four subframes.
</t>
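The per-subframe LTP filtering can be sketched as follows (tap layout and names are illustrative, not SILK's exact code; x must provide at least lag+2 samples of history before index 0):

    /* Fifth-order LTP: predict each sample from five taps centered on the
     * pitch lag, and form the LTP residual as input minus prediction. */
    static void ltp_residual(const float *x, float *res, int n,
                             int lag, const float b[5])
    {
        for (int i = 0; i < n; i++) {
            float pred = 0.0f;
            for (int k = 0; k < 5; k++)
                pred += b[k] * x[i - lag + 2 - k];
            res[i] = x[i] - pred;
        }
    }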
</section>
<section title='Unvoiced Speech' anchor='pred_ana_unvoiced_overview_section'>
<t>
- For a speech signal that has been classified as unvoiced there is no need for LTP filtering as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for an LTP analysis to be worth-while the cost in terms of complexity and rate. Therefore, the pre-whitened input signal is discarded and instead the high-pass filtered input signal is used for LPC analysis using Burgs method. The resulting LPC coefficients are converted to an LSF vector, quantized as described in the following section and transformed back to obtain quantized LPC coefficients. The quantized LPC coefficients are used to filter the high-pass filtered input signal and measure a residual energy for each of the four subframes.
+ For a speech signal that has been classified as unvoiced, there is no need for LTP filtering, as it has already been determined that the pre-whitened input signal is not periodic enough within the allowed pitch period range for LTP analysis to be worth the cost in terms of complexity and rate. The pre-whitened input signal is therefore discarded, and instead the high-pass filtered input signal is used for LPC analysis using Burg's method. The resulting LPC coefficients are converted to an LSF vector and quantized as described in the following section. They are then transformed back to obtain quantized LPC coefficients, which are then used to filter the high-pass filtered input signal and measure residual energy for each of the four subframes.
</t>
</section>
</section>
<section title='LSF Quantization' anchor='lsf_quantizer_overview_section'>
- <t>The purpose of quantization in general is to significantly lower the bit rate at the cost of some introduced distortion. A higher rate should always result in lower distortion, and lowering the rate will generally lead to higher distortion. A commonly used but generally sub-optimal approach is to use a quantization method with a constant rate where only the error is minimized when quantizing.</t>
+ <t>In general, the purpose of quantization is to significantly lower the bitrate at the cost of introducing some distortion. A higher rate should always result in lower distortion, and lowering the rate will generally lead to higher distortion. A commonly used but generally suboptimal approach is to use a quantization method with a constant rate, where only the error is minimized when quantizing.</t>
<section title='Rate-Distortion Optimization'>
- <t>Instead, we minimize an objective function that consists of a weighted sum of rate and distortion, and use a codebook with an associated non-uniform rate table. Thus, we take into account that the probability mass function for selecting the codebook entries are by no means guaranteed to be uniform in our scenario. The advantage of this approach is that it ensures that rarely used codebook vector centroids, which are modeling statistical outliers in the training set can be quantized with a low error but with a relatively high cost in terms of a high rate. At the same time this approach also provides the advantage that frequently used centroids are modeled with low error and a relatively low rate. This approach will lead to equal or lower distortion than the fixed rate codebook at any given average rate, provided that the data is similar to the data used for training the codebook.</t>
+ <t>Instead, we minimize an objective function that consists of a weighted sum of rate and distortion, and use a codebook with an associated non-uniform rate table. Thus, we take into account that the probability mass function for selecting the codebook entries is by no means guaranteed to be uniform in our scenario. This approach has several advantages. It ensures that rarely used codebook vector centroids, which are modeling statistical outliers in the training set, are quantized with low error at the expense of a high rate. At the same time, it allows modeling frequently used centroids with low error and a relatively low rate. This approach leads to equal or lower distortion than the fixed-rate codebook at any given average rate, provided that the data is similar to that used for training the codebook.</t>
</section>
<section title='Error Mapping' anchor='lsf_error_mapping_overview_section'>
<t>
- Instead of minimizing the error in the LSF domain, we map the errors to better approximate spectral distortion by applying an individual weight to each element in the error vector. The weight vectors are calculated for each input vector using the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al., see <xref target="laroia-icassp" />.
+ Instead of minimizing the error in the LSF domain, we map the errors to better approximate spectral distortion by applying an individual weight to each element in the error vector. The weight vectors are calculated for each input vector using the Inverse Harmonic Mean Weighting (IHMW) function proposed by Laroia et al. (see <xref target="laroia-icassp" />).
Consequently, we solve the following minimization problem, i.e.,
<figure align="center">
<artwork align="center">
@@ -4957,7 +4957,7 @@ LSF_q = argmin { (LSF - c)' * W * (LSF - c) + mu * rate },
</section>
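A direct instantiation of this minimization, with the IHMW weights applied as a diagonal W (a sketch only; the reference search is structured as described in the following sections):

    #include <float.h>

    /* Evaluate (LSF - c)' * W * (LSF - c) + mu * rate for every codebook
     * vector c and return the index of the minimizer. */
    static int lsf_rd_search(const float *lsf, const float *w, int dim,
                             const float *cb, const float *rate,
                             int cb_size, float mu)
    {
        int   best = 0;
        float best_cost = FLT_MAX;
        for (int i = 0; i < cb_size; i++) {
            const float *c = cb + i * dim;
            float cost = mu * rate[i];
            for (int k = 0; k < dim; k++) {
                float e = lsf[k] - c[k];
                cost += w[k] * e * e;
            }
            if (cost < best_cost) { best_cost = cost; best = i; }
        }
        return best;
    }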
<section title='Multi-Stage Vector Codebook'>
<t>
- We arrange the codebook in a multiple stage structure to achieve a quantizer that is both memory efficient and highly scalable in terms of computational complexity, see e.g. <xref target="sinervo-norsig" />. In the first stage the input is the LSF vector to be quantized, and in any other stage s > 1, the input is the quantization error from the previous stage, see <xref target='lsf_quantizer_structure_overview_figure' />.
+ We arrange the codebook in a multiple-stage structure to achieve a quantizer that is both memory efficient and highly scalable in terms of computational complexity (see, e.g., <xref target="sinervo-norsig"/>). In the first stage the input is the LSF vector to be quantized, and in any other stage s > 1, the input is the quantization error from the previous stage (see <xref target='lsf_quantizer_structure_overview_figure'/>).
</t>
<figure align="center" anchor="lsf_quantizer_structure_overview_figure">
<artwork align="center">
@@ -4980,7 +4980,7 @@ LSF +----------+ res_1 +----------+ res_{S-1} +----------+
</figure>
<t>
- By storing total of M codebook vectors, i.e.,
+ By storing a total of M codebook vectors, i.e.,
<figure align="center">
<artwork align="center">
<![CDATA[
@@ -5003,16 +5003,16 @@ T = | | Ms
]]>
</artwork>
</figure>
- possible combinations for generating the quantized vector. It is for example possible to represent 2**36 uniquely combined vectors using only 216 vectors in memory, as done in SILK for voiced speech at all sample frequencies above 8&nbsp;kHz.
+ possible combinations for generating the quantized vector. It is, for example, possible to represent 2**36 uniquely combined vectors using only 216 vectors in memory, as is done in SILK for voiced speech at all sample frequencies above 8&nbsp;kHz.
</t>
</section>
<section title='Survivor Based Codebook Search'>
<t>
- This number of possible combinations is far too high for a full search to be carried out for each frame so for all stages but the last, i.e., s smaller than S, only the best min( L, Ms ) centroids are carried over to stage s+1. In each stage the objective function, i.e., the weighted sum of accumulated bitrate and distortion, is evaluated for each codebook vector entry and the results are sorted. Only the best paths and the corresponding quantization errors are considered in the next stage. In the last stage S the single best path through the multistage codebook is determined. By varying the maximum number of survivors from each stage to the next L, the complexity can be adjusted in real-time at the cost of a potential increase when evaluating the objective function for the resulting quantized vector. This approach scales all the way between the two extremes, L=1 being a greedy search, and the desirable but infeasible full search, L=T/MS. In fact, a performance almost as good as what can be achieved with the infeasible full search can be obtained at a substantially lower complexity by using this approach, see e.g. <xref target='leblanc-tsap' />.
+ This number of possible combinations is far too high to carry out a full search for each frame, so for all stages but the last (i.e., s smaller than S), only the best min(L, Ms) centroids are carried over to stage s+1. In each stage, the objective function (i.e., the weighted sum of accumulated bitrate and distortion) is evaluated for each codebook vector entry and the results are sorted. Only the best paths and their corresponding quantization errors are considered in the next stage. In the last stage, S, the single best path through the multistage codebook is determined. By varying the maximum number of survivors from each stage to the next, L, the complexity can be adjusted in real time, at the cost of a potential increase when evaluating the objective function for the resulting quantized vector. This approach scales all the way between the two extremes, L=1 being a greedy search, and the desirable but infeasible full search, L=T/MS. Performance almost as good as that of the infeasible full search can be obtained at substantially lower complexity by using this approach (see, e.g., <xref target='leblanc-tsap'/>).
</t>
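One stage of the survivor search can be sketched as follows (simplified layout; cand_cost[i*m + j] is the objective increment for extending survivor i with entry j, which depends on i because each survivor leaves a different residual):

    #include <stdlib.h>

    typedef struct {
        float cost;      /* accumulated weighted rate-distortion objective */
        int   index[8];  /* chosen codebook entry per stage (assumes S <= 8) */
    } path;

    static int cmp_path(const void *a, const void *b)
    {
        float ca = ((const path *)a)->cost, cb = ((const path *)b)->cost;
        return (ca > cb) - (ca < cb);
    }

    /* Extend n_in survivors by all m entries of stage `stage`, sort by cost,
     * and return how many paths (at most L) survive into the next stage. */
    static int extend_stage(const path *in, int n_in, path *out,
                            const float *cand_cost, int m, int stage, int L)
    {
        int n_out = 0;
        for (int i = 0; i < n_in; i++) {
            for (int j = 0; j < m; j++) {
                out[n_out] = in[i];
                out[n_out].cost += cand_cost[i * m + j];
                out[n_out].index[stage] = j;
                n_out++;
            }
        }
        qsort(out, n_out, sizeof(path), cmp_path);
        return n_out < L ? n_out : L;
    }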
</section>
<section title='LSF Stabilization' anchor='lsf_stabilizer_overview_section'>
- <t>If the input is stable, finding the best candidate will usually result in the quantized vector also being stable, but due to the multi-stage approach it could in theory happen that the best quantization candidate is unstable and because of this there is a need to explicitly ensure that the quantized vectors are stable. Therefore we apply a LSF stabilization method which ensures that the LSF parameters are within valid range, increasingly sorted, and have minimum distances between each other and the border values that have been pre-determined as the 0.01 percentile distance values from a large training set.</t>
+ <t>If the input is stable, finding the best candidate usually results in a quantized vector that is also stable. Due to the multi-stage approach, however, it is theoretically possible that the best quantization candidate is unstable. Because of this, it is necessary to explicitly ensure that the quantized vectors are stable. Therefore we apply an LSF stabilization method which ensures that the LSF parameters are within valid range, increasingly sorted, and have minimum distances between each other and the border values that have been predetermined as the 0.01 percentile distance values from a large training set.</t>
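A sketch of such a stabilization pass (thresholds min_dist[0..d] would come from the training data; min_dist[0] guards the lower border, min_dist[d] the upper):

    /* Force lsf[0..d-1] to be increasingly sorted with at least min_dist[k]
     * between neighbors and the borders lo and hi: a forward pass pushes
     * values up from the lower border, a backward pass pushes them down
     * from the upper border. */
    static void lsf_stabilize(float *lsf, const float *min_dist, int d,
                              float lo, float hi)
    {
        float floor_v = lo + min_dist[0];
        for (int k = 0; k < d; k++) {
            if (lsf[k] < floor_v) lsf[k] = floor_v;
            floor_v = lsf[k] + min_dist[k + 1];
        }
        float ceil_v = hi - min_dist[d];
        for (int k = d - 1; k >= 0; k--) {
            if (lsf[k] > ceil_v) lsf[k] = ceil_v;
            ceil_v = lsf[k] - min_dist[k];
        }
    }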
</section>
<section title='Off-Line Codebook Training'>
<t>
@@ -5023,7 +5023,7 @@ T = | | Ms
<section title='LTP Quantization' anchor='ltp_quantizer_overview_section'>
<t>
- For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. Also, the LTP coefficients for each subframe are quantized using entropy constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20 and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by
+ For voiced frames, the prediction analysis described in <xref target='pred_ana_voiced_overview_section' /> resulted in four sets (one set per subframe) of five LTP coefficients, plus four weighting matrices. The LTP coefficients for each subframe are quantized using entropy constrained vector quantization. A total of three vector codebooks are available for quantization, with different rate-distortion trade-offs. The three codebooks have 10, 20, and 40 vectors and average rates of about 3, 4, and 5 bits per vector, respectively. Consequently, the first codebook has larger average quantization distortion at a lower rate, whereas the last codebook has smaller average quantization distortion at a higher rate. Given the weighting matrix W_ltp and LTP vector b, the weighted rate-distortion measure for a codebook vector cb_i with rate r_i is given by
<figure align="center">
<artwork align="center">
<![CDATA[
@@ -5032,11 +5032,11 @@ T = | | Ms
</artwork>
</figure>
where u is a fixed, heuristically-determined parameter balancing the distortion and rate. Which codebook gives the best performance for a given LTP vector depends on the weighting matrix for that LTP vector. For example, for a low valued W_ltp, it is advantageous to use the codebook with 10 vectors as it has a lower average rate. For a large W_ltp, on the other hand, it is often better to use the codebook with 40 vectors, as it is more likely to contain the best codebook vector.
- The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Both aspects do not fluctuate very fast which causes the W_ltp matrices for different subframes of one frame often to be similar. As a result, one of the three codebooks typically gives good performance for all subframes. Therefore the codebook search for the subframe LTP vectors is constrained to only allow codebook vectors to be chosen from the same codebook, resulting in a rate reduction.
+ The weighting matrix W_ltp depends mostly on two aspects of the input signal. The first is the periodicity of the signal; the more periodic, the larger W_ltp. The second is the change in signal energy in the current subframe, relative to the signal one pitch lag earlier. A decaying energy leads to a larger W_ltp than an increasing energy. Both aspects fluctuate relatively slowly, which causes the W_ltp matrices for different subframes of one frame often to be similar. Because of this, one of the three codebooks typically gives good performance for all subframes, and therefore the codebook search for the subframe LTP vectors is constrained to only allow codebook vectors to be chosen from the same codebook, resulting in a rate reduction.
</t>
<t>
- To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook and the vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder.
+ To find the best codebook, each of the three vector codebooks is used to quantize all subframe LTP vectors and produce a combined weighted rate-distortion measure for each vector codebook. The vector codebook with the lowest combined rate-distortion over all subframes is chosen. The quantized LTP vectors are used in the noise shaping quantizer, and the index of the codebook plus the four indices for the four subframe codebook vectors are passed on to the range encoder.
</t>
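The selection can be sketched as follows (quantize_ltp_vector() is a hypothetical helper returning the best weighted rate-distortion cost of one subframe vector in one codebook, along with the chosen entry):

    #include <float.h>

    extern float quantize_ltp_vector(const float *b, const float *W,
                                     int codebook, float u, int *index);

    /* Try all three codebooks on all four subframe LTP vectors and keep the
     * codebook with the lowest total rate-distortion measure. */
    static int choose_ltp_codebook(const float b[4][5],
                                   const float W_ltp[4][25],
                                   float u, int indices[4])
    {
        int   best_cb = 0;
        float best_total = FLT_MAX;
        for (int cb = 0; cb < 3; cb++) {     /* 10-, 20-, and 40-entry books */
            int   idx[4];
            float total = 0.0f;
            for (int sf = 0; sf < 4; sf++)
                total += quantize_ltp_vector(b[sf], W_ltp[sf], cb, u, &idx[sf]);
            if (total < best_total) {
                best_total = total;
                best_cb    = cb;
                for (int sf = 0; sf < 4; sf++) indices[sf] = idx[sf];
            }
        }
        return best_cb;   /* this index and indices[] go to the range encoder */
    }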
</section>
@@ -5053,7 +5053,7 @@ T = | | Ms
<section title='Range Encoder'>
<t>
- Range encoding is a well known method for entropy coding in which a bitstream sequence is continually updated with every new symbol, based on the probability for that symbol. It is similar to arithmetic coding but rather than being restricted to generating binary output symbols, it can generate symbols in any chosen number base. In SILK all side information is range encoded. Each quantized parameter has its own cumulative density function based on histograms for the quantization indices obtained by running a training database.
+ Range encoding is a well known method for entropy coding in which a bitstream sequence is continually updated with every new symbol, based on the probability for that symbol. It is similar to arithmetic coding, but rather than being restricted to generating binary output symbols, it can generate symbols in any chosen number base. In SILK all side information is range encoded. Each quantized parameter has its own cumulative density function based on histograms for the quantization indices obtained by running a training database.
</t>
<section title='Bitstream Encoding Details'>
@@ -5098,7 +5098,7 @@ The low-overlap window is created by zero-padding the basic window and inserting
<t>
The MDCT output is divided into bands that are designed to match the ear's critical
bands for the smallest (2.5&nbsp;ms) frame size. The larger frame sizes use integer
-multiplies of the 2.5&nbsp;ms layout. For each band, the encoder
+multiples of the 2.5&nbsp;ms layout. For each band, the encoder
computes the energy that will later be encoded. Each band is then normalized by the
square root of the <spanx style="strong">unquantized</spanx> energy, such that each band now forms a unit vector X.
The energy and the normalization are computed by compute_band_energies()
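The normalization step can be sketched as follows (illustrative only; the reference code computes the energies and the normalization in separate passes):

    #include <math.h>

    /* Compute a band's energy and divide by its square root, leaving the
     * band as a unit vector X; the energy itself is coded separately. */
    static float normalize_band(float *x, int n)
    {
        float energy = 1e-27f;            /* tiny floor avoids divide-by-zero */
        for (int i = 0; i < n; i++)
            energy += x[i] * x[i];
        float g = 1.0f / sqrtf(energy);
        for (int i = 0; i < n; i++)
            x[i] *= g;
        return energy;
    }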
@@ -5147,7 +5147,7 @@ limited dynamic range do not suffer desynchronization. Identical prediction
clamping must be implemented in all encoders and decoders.
We approximate the ideal
probability distribution of the prediction error using a Laplace distribution
-with separate parameters for each frame size in intra and inter-frame modes. The
+with separate parameters for each frame size in intra- and inter-frame modes. The
coarse energy quantization is performed by quant_coarse_energy() and
quant_coarse_energy_impl() (quant_bands.c). The encoding of the Laplace-distributed values is
implemented in ec_laplace_encode() (laplace.c).
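Conceptually, the model assigns a prediction error k a probability that decays geometrically with |k|; a stand-in sketch (not the actual table construction in ec_laplace_encode()):

    #include <math.h>

    /* Fill p[0..2*kmax] with a Laplace-shaped PMF over k in [-kmax, kmax]:
     * p(k) proportional to r**|k|, with the decay r chosen per frame size
     * and intra/inter mode. */
    static void laplace_pmf(float *p, int kmax, float r)
    {
        float sum = 0.0f;
        for (int k = -kmax; k <= kmax; k++) {
            int a = k < 0 ? -k : k;
            p[k + kmax] = powf(r, (float)a);
            sum += p[k + kmax];
        }
        for (int i = 0; i <= 2 * kmax; i++)
            p[i] /= sum;
    }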
@@ -5207,7 +5207,7 @@ L2 norm.
<t>
The search for the best codevector y is performed by alg_quant()
(vq.c). There are several possible approaches to the
-search with a trade-off between quality and complexity. The method used in the reference
+search, with a trade-off between quality and complexity. The method used in the reference
implementation computes an initial codeword y1 by projecting the residual signal
R = X - p' onto the codebook pyramid of K-1 pulses:
</t>
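The projection can be sketched as follows (initialization only; the greedy placement of the remaining pulses is omitted):

    #include <math.h>
    #include <stdlib.h>

    /* Scale R to an L1 norm of K-1 and truncate toward zero, so at most
     * K-1 pulses are placed; returns how many pulses remain to be added
     * by the greedy search. */
    static int project_to_pyramid(const float *r, int n, int K, int *y)
    {
        float l1 = 0.0f;
        for (int i = 0; i < n; i++)
            l1 += fabsf(r[i]);
        float g = l1 > 0.0f ? (float)(K - 1) / l1 : 0.0f;
        int placed = 0;
        for (int i = 0; i < n; i++) {
            y[i] = (int)(g * r[i]);      /* truncation keeps sum|y| <= K-1 */
            placed += abs(y[i]);
        }
        return K - placed;
    }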
@@ -5314,10 +5314,10 @@ Compliance with this specification means that a decoder's output MUST be
To complement the Opus specification, the "Opus Custom" codec is defined to
handle special sampling rates and frame rates that are not supported by the
main Opus specification. Use of Opus Custom is discouraged for all but very
-special applications for which a frame size different from 2.5, 5, 10, 20 ms is
+special applications for which a frame size different from 2.5, 5, 10, or 20&nbsp;ms is
needed (for either complexity or latency reasons). Such applications will not
be compatible with the "main" Opus codec. In Opus Custom operation,
-only the CELT later is available, which is available using the celt_* function
+only the CELT layer is available, which is accessed using the celt_* function
calls in celt.h.
</t>
@@ -5334,7 +5334,7 @@ Malicious payloads must not cause the decoder to overrun its allocated memory
or to take an excessive amount of resources to decode.
Although problems
in encoders are typically rarer, the same applies to the encoder. Malicious
-audio stream must not cause the encoder to misbehave because this would
+audio streams must not cause the encoder to misbehave because this would
allow an attacker to attack transcoding gateways.
</t>
<t>
@@ -5342,11 +5342,11 @@ The reference implementation contains no known buffer overflow or cases where
a specially crafted packet or audio segment could cause a significant increase
in CPU load.
However, on certain CPU architectures where denormalized floating-point
- operations are much slower than normal floating-point operations it is
+ operations are much slower than normal floating-point operations, it is
possible for some audio content (e.g., silence or near-silence) to cause such
an increase in CPU load.
Denormals can be introduced by reordering operations in the compiler and depend
- on the target architecture, so it is difficult to guarantee an implementation
+ on the target architecture, so it is difficult to guarantee that an implementation
avoids them.
For such architectures, it is RECOMMENDED that one add very small
floating-point offsets to prevent significant numbers of denormalized
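The offset trick can be sketched as follows (the constant is illustrative; the reference implementation uses a similarly tiny value):

    #define VERY_SMALL 1e-30f

    /* Add a tiny offset to a buffer so that subsequent recursive filters
     * stay out of the denormal range on architectures where denormal
     * arithmetic is slow. */
    static void add_antidenormal_offset(float *x, int n)
    {
        for (int i = 0; i < n; i++)
            x[i] += VERY_SMALL;
    }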
@@ -5367,9 +5367,9 @@ This document has no actions for IANA.
<t>
Thanks to all other developers, including Raymond Chen, Soeren Skak Jensen, Gregory Maxwell,
Christopher Montgomery, and Karsten Vandborg Soerensen. We would also
-like to thank Igor Dyakonov, Jan Skoglund for their help with subjective testing of the
-Opus codec. Thanks to John Ridges, Keith Yan and many others on the Opus and CELT mailing lists
-for their bug reports and feeback, as well as Ralph Giles, Christian Hoene, and
+like to thank Igor Dyakonov and Jan Skoglund for their help with subjective testing of the
+Opus codec. Thanks to John Ridges, Keith Yan, and many others on the Opus and CELT mailing lists
+for their bug reports and feedback, as well as Ralph Giles, Christian Hoene, and
Kat Walsh, for their feedback on the draft.
</t>
</section>
@@ -5610,7 +5610,7 @@ It is RECOMMENDED that a transport layer choose exactly one framing scheme,
For example, although a regular Opus stream does not support more than two
channels, a multi-channel Opus stream may be formed from several one- and
two-channel streams.
-To pack an Opus packets from each of these streams together in a single packet
+To pack an Opus packet from each of these streams together in a single packet
at the transport layer, one could use the self-delimiting framing for all but
the last stream, and then the regular, undelimited framing for the last one.
Reverting to the undelimited framing for the last stream saves overhead
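The length field used by the self-delimiting framing follows the frame length coding of this specification; a sketch of writing it (lengths up to 251 take one byte, larger lengths up to 1275 take two, decoded as len = 4*second + first):

    #include <stddef.h>

    /* Write a frame length in the one- or two-byte code described in the
     * framing section; returns the number of bytes written. */
    static size_t write_length(unsigned char *out, size_t len)
    {
        if (len < 252) {
            out[0] = (unsigned char)len;
            return 1;
        }
        out[0] = (unsigned char)(252 + (len & 3));       /* first: 252..255 */
        out[1] = (unsigned char)((len - (252 + (len & 3))) / 4);
        return 2;
    }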