github.com/nextcloud/spreed.git
author     robotboyfriend <56538508+robotboyfriend@users.noreply.github.com>  2021-10-06 11:27:48 +0300
committer  GitHub <noreply@github.com>  2021-10-06 11:27:48 +0300
commit     830a9c2271456f2c3d53b8adef376e159c05f293 (patch)
tree       d11a6541fbc51a46231cc4ccf25c17126062a711 /README.md
parent     c15391c0d78a13e8b0629059d89becfd75e9728e (diff)
Update README.md
Improved spelling in README.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  6
1 file changed, 3 insertions, 3 deletions
diff --git a/README.md b/README.md
index 694a1975c..e2d31b237 100644
--- a/README.md
+++ b/README.md
@@ -48,11 +48,11 @@ A single video stream currently uses about 1 Mbit/sec and the total required ban
![](https://github.com/nextcloud/spreed/raw/e419b79819963a631ce811ffed432853ec4723c2/docs/HPB-P2P.png)
-This means that in a call with 5 participants, each has to send and receive about 4 Mbit/sec. Given the asymetric nature of most typical broadband connections, it's sending video that quickly becomes the bottleneck. Moreover, decoding all those video streams puts a big strain on the system of each participant.
+This means that in a call with 5 participants, each has to send and receive about 4 Mbit/sec. Given the asymmetric nature of most typical broadband connections, it's sending video that quickly becomes the bottleneck. Moreover, decoding all those video streams puts a big strain on the system of each participant.
To limit CPU and bandwidth usage, participants can disable video. This will drop the bandwidth use to audio only, about 50 kbit/sec (about 1/20th of the bandwidth of video), eliminating most decoding work. When all participants are on a fast network, a call with 20 people without video could be doable.
-Still a call creates a load on the participants' browsers (decoding streams) and on the server as it handles signaling. This, for example, also has consequences for devices that support calls. Mobile device browsers will sooner run out of compute capacity and cause issues to the call. While we continously work to optimize Talk for performance, there is still work to be done so it is not unlikely that the bottleneck will be there for the time being. We very much welcome help in optimization of calls!
+Still, a call creates a load on the participants' browsers (decoding streams) and on the server as it handles signaling. This also has consequences for which devices can handle calls: mobile device browsers will run out of compute capacity sooner and cause issues in the call. While we continuously work to optimize Talk for performance, there is still work to be done, so the bottleneck is likely to remain there for the time being. We very much welcome help in optimizing calls!
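
To make the mesh arithmetic above concrete, here is a minimal Python sketch (the constants and the `per_participant_kbit` helper are illustrative assumptions for this example, not part of the Talk codebase): in a full-mesh P2P call, every participant exchanges one stream with each of the other N-1 participants, so per-participant bandwidth grows linearly with call size and the call's total bandwidth grows quadratically.

```python
# Illustrative sketch of full-mesh P2P call bandwidth; the constants and
# helper below are assumptions for this example, not Talk code.

VIDEO_KBIT = 1000  # ~1 Mbit/sec per video stream (figure from the README)
AUDIO_KBIT = 50    # ~50 kbit/sec per audio-only stream (figure from the README)

def per_participant_kbit(participants: int, stream_kbit: int) -> int:
    """Upload (and equally download) bandwidth one participant needs in a mesh."""
    return (participants - 1) * stream_kbit

for n in (5, 10, 20):
    video = per_participant_kbit(n, VIDEO_KBIT)
    audio = per_participant_kbit(n, AUDIO_KBIT)
    print(f"{n} participants: ~{video / 1000:g} Mbit/s with video, ~{audio} kbit/s audio-only")
```

For 5 participants this reproduces the ~4 Mbit/sec figure above; at 20 participants with video it would already be ~19 Mbit/sec in each direction, which is why disabling video matters.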
### How to have the maximum number of participants in a call
@@ -68,7 +68,7 @@ With this setup, 20 users should be possible in a typical setup.
### Scaling beyond 5-20 users in a call
-Nextcloud offers a partner product, the Talk High Performance Back-end, which deals with this scalability issue by including a Selective Forwarding Unit (SFU). Each participant sends one stream to the SFU which distributes it under the participants. This typically scales to 30-50 or even more active participants. Further more, the HPB setup also allows calls with hundreds of passive participants. With this number of participants is only limited by the bandwidth of the SFU setup. This is ideal for one-to-many streaming like webinars or remote teaching lessons.
+Nextcloud offers a partner product, the Talk High Performance Back-end, which deals with this scalability issue by including a Selective Forwarding Unit (SFU). Each participant sends one stream to the SFU, which distributes it among the participants. This typically scales to 30-50 or even more active participants. Furthermore, the HPB setup also allows calls with hundreds of passive participants. With this, the number of participants is only limited by the bandwidth of the SFU setup. This is ideal for one-to-many streaming like webinars or remote teaching lessons.
The HPB also takes care of signaling, decreasing the load that many calls put on the Talk server, and offers optional SIP integration so users can dial in to calls by phone.
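
As a rough illustration of why the SFU changes the scaling (a sketch under the per-stream figure stated above, not the HPB's actual implementation), each participant uploads a single stream to the SFU regardless of call size, and the quadratic cost moves to the SFU, which can be provisioned with server-grade bandwidth:

```python
# Illustrative comparison of mesh vs. SFU bandwidth; the constants and
# helpers are assumptions for this example, not HPB code.

VIDEO_KBIT = 1000  # ~1 Mbit/sec per video stream

def mesh_upload_kbit(n: int) -> int:
    return (n - 1) * VIDEO_KBIT   # one stream to each other participant

def sfu_upload_kbit(n: int) -> int:
    return VIDEO_KBIT             # always one stream, sent to the SFU

def sfu_server_kbit(n: int) -> int:
    # The SFU receives n streams and forwards each one to the other n-1 peers.
    return n * VIDEO_KBIT + n * (n - 1) * VIDEO_KBIT

for n in (5, 20, 50):
    print(f"{n} participants: mesh upload ~{mesh_upload_kbit(n) / 1000:g} Mbit/s, "
          f"SFU upload ~{sfu_upload_kbit(n) / 1000:g} Mbit/s, "
          f"SFU server total ~{sfu_server_kbit(n) / 1000:g} Mbit/s")
```

At 50 participants, each participant still uploads only ~1 Mbit/sec, while the aggregate quadratic cost lands on the SFU; this is what makes calls with many active participants and one-to-many streaming feasible.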