github.com/diaspora/diaspora.git
author     Benjamin Neff <benjamin@coding4coffee.ch>  2022-09-10 02:18:29 +0300
committer  Benjamin Neff <benjamin@coding4coffee.ch>  2022-09-10 02:20:34 +0300
commit     ae4cbb18f7a05859698deb4f63fc769ea1dcf6a5 (patch)
tree       12aae1b807cd8cb5fb19e6868dcc9f66001df5ab
parent     1c72dcc412ad0ab4c865005ec787b6199e5d48bd (diff)
parent     97cfc80a1fe6b712d15c13081cc938d3650fbdb1 (diff)
Merge pull request #8392 from denschub/unicorn-dust
Replace Unicorn with Puma
-rw-r--r--  Changelog.md                      15
-rw-r--r--  FederationProcfile                 6
-rw-r--r--  Gemfile                            3
-rw-r--r--  Gemfile.lock                      15
-rw-r--r--  Procfile                           2
-rw-r--r--  app/workers/archive_base.rb        9
-rwxr-xr-x  bin/puma                          27
-rwxr-xr-x  bin/pumactl                       27
-rw-r--r--  config.ru                          8
-rw-r--r--  config/database.yml.example        6
-rw-r--r--  config/defaults.yml               28
-rw-r--r--  config/diaspora.toml.example      34
-rw-r--r--  config/eye.rb                     37
-rw-r--r--  config/initializers/sidekiq.rb    10
-rw-r--r--  config/puma.rb                    47
-rw-r--r--  config/unicorn.rb                 48
-rw-r--r--  redis-integration1.conf          486
-rw-r--r--  redis-integration2.conf          486
-rwxr-xr-x  script/server                     71
-rw-r--r--  spec/workers/export_user_spec.rb  14
20 files changed, 161 insertions, 1218 deletions
diff --git a/Changelog.md b/Changelog.md
index 2e8cc0dd0..2d23759b6 100644
--- a/Changelog.md
+++ b/Changelog.md
@@ -16,6 +16,20 @@ After [a discussion with our community on Discourse](https://discourse.diasporaf
Although the chat was never enabled per default and was marked as experimental, some production pods did set up the integration and offered an XMPP service to their users. After this release, diaspora\* will no longer contain a chat applet, so users will no longer be able to use the webchat inside diaspora\*. The existing module that is used to enable users to authenticate to Prosody using their diaspora\* credentials will continue to work, but contact list synchronization might not work without further changes to the Prosody module, which is developed independently from this project.
+## Changes around the appserver and related configuration
+
+With this release, we switched from `unicorn` to `puma` as the application server. For podmins running the default setup, this should significantly reduce memory usage, with similar or even better frontend performance! However, as great as this change is, some configuration changes are required.
+
+- The `single_process_mode` and `embed_sidekiq_worker` configurations have been removed. This mode was never truly a "single-process" mode, as it just spawned the background workers inside the appserver process. If you're using `script/server` to start your pod, this change does not impact you, but if you're running diaspora\* using other means and you relied on this "single"-process mode, please ensure that Sidekiq workers get started.
+- The format of the `listen` configuration has changed. If you have not set that field in your configuration, you can skip this. Otherwise, make sure to adjust your configuration accordingly:
+ - Listening to Unix sockets with a relative path has changed from `unix:tmp/diaspora.sock` into `unix://tmp/diaspora.sock`.
+ - Listening to Unix sockets with an absolute path has changed from `unix:/run/diaspora/diaspora.sock` to `unix:///run/diaspora/diaspora.sock`.
+ - Listening to a local port has changed from `127.0.0.1:3000` to `tcp://127.0.0.1:3000`.
+- The `PORT` environment variable and the `-p` parameter to `script/server` have been removed. If you used that to run diaspora\* on a non-standard port, please use the `listen` configuration.
+- The `unicorn_worker` configuration has been dropped. With Puma, there should be no need to run more than a single worker on a pod of any size.
+- The `unicorn_timeout` configuration has been renamed to `web_timeout`.
+- **If you don't run your pod with `script/server`**, you have to update your setup. If you previously called `bin/bundle exec unicorn -c config/unicorn.rb` to run diaspora\*, you now have to run `bin/puma -C config/puma.rb`! Please update your systemd units or similar accordingly.
+
## Yarn for frontend dependencies
We use yarn to install the frontend dependencies now, so you need to have that installed. See here for how to install it: https://yarnpkg.com/en/docs/install
@@ -31,6 +45,7 @@ We use yarn to install the frontend dependencies now, so you need to have that i
* Use yarn to manage the frontend dependencies [#8364](https://github.com/diaspora/diaspora/pull/8364)
* Upgrade to latest `diaspora_federation`, remove support for old federation protocol [#8368](https://github.com/diaspora/diaspora/pull/8368)
* Remove support for `therubyracer` [#8337](https://github.com/diaspora/diaspora/issues/8337)
+* Replace `unicorn` with `puma` [#8392](https://github.com/diaspora/diaspora/pull/8392)
## Bug fixes
* Fix multiple photos upload progress bar [#7655](https://github.com/diaspora/diaspora/pull/7655)
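As a quick reference for the `listen` migration described in the changelog hunk above, here is a before/after sketch for a pod's `config/diaspora.toml` (the socket paths and port are illustrative; pick whichever of the three forms matches your setup):

```toml
# Old (unicorn-era) values:
#listen = "unix:tmp/diaspora.sock"            # relative Unix socket
#listen = "unix:/run/diaspora/diaspora.sock"  # absolute Unix socket
#listen = "127.0.0.1:3000"                    # local TCP port

# New (puma) values -- note the URI-style schemes:
listen = "unix://tmp/diaspora.sock"
#listen = "unix:///run/diaspora/diaspora.sock"
#listen = "tcp://127.0.0.1:3000"
```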
diff --git a/FederationProcfile b/FederationProcfile
deleted file mode 100644
index aabfd7335..000000000
--- a/FederationProcfile
+++ /dev/null
@@ -1,6 +0,0 @@
-web1: env RAILS_ENV=integration1 bundle exec rails s -p 3001
-worker1: env RAILS_ENV=integration1 VVERBOSE=1 QUEUE=* bundle exec rake resque:work
-redis1: env RAILS_ENV=integration1 redis-server ./redis-integration1.conf
-web2: env RAILS_ENV=integration2 bundle exec rails s -p 3002
-worker2: env RAILS_ENV=integration2 VVERBOSE=1 QUEUE=* bundle exec rake resque:work
-redis2: env RAILS_ENV=integration2 redis-server ./redis-integration2.conf \ No newline at end of file
diff --git a/Gemfile b/Gemfile
index 48619214d..3a1a2dd91 100644
--- a/Gemfile
+++ b/Gemfile
@@ -10,8 +10,7 @@ gem "responders", "3.0.1"
# Appserver
-gem "unicorn", "6.1.0", require: false
-gem "unicorn-worker-killer", "0.4.5"
+gem "puma", "5.6.5", require: false
# Federation
diff --git a/Gemfile.lock b/Gemfile.lock
index 554a667cb..a45b16517 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -310,8 +310,6 @@ GEM
fuubar (2.5.1)
rspec-core (~> 3.0)
ruby-progressbar (~> 1.4)
- get_process_mem (0.2.7)
- ffi (~> 1.0)
gitlab (4.18.0)
httparty (~> 0.18)
terminal-table (>= 1.5.1)
@@ -398,7 +396,6 @@ GEM
jsonpath (1.1.2)
multi_json
jwt (2.4.1)
- kgio (2.11.4)
kostya-sigar (2.0.10)
leaflet-rails (1.7.0)
rails (>= 4.2.0)
@@ -520,6 +517,8 @@ GEM
byebug (~> 11.0)
pry (~> 0.10)
public_suffix (4.0.7)
+ puma (5.6.5)
+ nio4r (~> 2.0)
raabro (1.4.0)
racc (1.6.0)
rack (2.2.4)
@@ -581,7 +580,6 @@ GEM
rake (>= 12.2)
thor (~> 1.0)
rainbow (3.1.1)
- raindrops (0.20.0)
rake (12.3.3)
rash_alt (0.4.12)
hashie (>= 3.4)
@@ -735,12 +733,6 @@ GEM
unf_ext
unf_ext (0.0.8.2)
unicode-display_width (1.8.0)
- unicorn (6.1.0)
- kgio (~> 2.6)
- raindrops (~> 0.7)
- unicorn-worker-killer (0.4.5)
- get_process_mem (~> 0)
- unicorn (>= 4, < 7)
uuid (2.3.9)
macaddr (~> 1.0)
valid (1.2.0)
@@ -848,6 +840,7 @@ DEPENDENCIES
pronto-scss (= 0.11.0)
pry
pry-byebug
+ puma (= 5.6.5)
rack-cors (= 1.1.1)
rack-google-analytics (= 1.2.0)
rack-piwik (= 0.3.0)
@@ -885,8 +878,6 @@ DEPENDENCIES
twitter (= 7.0.0)
twitter-text (= 3.1.0)
typhoeus (= 1.4.0)
- unicorn (= 6.1.0)
- unicorn-worker-killer (= 0.4.5)
uuid (= 2.3.9)
versionist (= 2.0.1)
webmock (= 3.14.0)
diff --git a/Procfile b/Procfile
index 627365c9e..33aecbd98 100644
--- a/Procfile
+++ b/Procfile
@@ -1,2 +1,2 @@
-web: bin/bundle exec unicorn -c config/unicorn.rb -p $PORT
+web: bin/puma -C config/puma.rb
sidekiq: bin/bundle exec sidekiq
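For podmins who start diaspora\* from systemd instead of this Procfile or `script/server`, the changelog's note about updating units might translate to a fragment like the following sketch (the unit name, user, and paths are assumptions, not part of this change):

```ini
# /etc/systemd/system/diaspora-web.service (hypothetical)
[Service]
User=diaspora
WorkingDirectory=/home/diaspora/diaspora
# old: ExecStart=/home/diaspora/diaspora/bin/bundle exec unicorn -c config/unicorn.rb
ExecStart=/home/diaspora/diaspora/bin/puma -C config/puma.rb
Restart=always
```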
diff --git a/app/workers/archive_base.rb b/app/workers/archive_base.rb
index e1b641ae6..6b8b38a82 100644
--- a/app/workers/archive_base.rb
+++ b/app/workers/archive_base.rb
@@ -27,12 +27,17 @@ module Workers
end
def currently_running_archive_jobs
- return 0 if AppConfig.environment.single_process_mode?
-
Sidekiq::Workers.new.count do |process_id, thread_id, work|
!(Process.pid.to_s == process_id.split(":")[1] && Thread.current.object_id.to_s(36) == thread_id) &&
ArchiveBase.subclasses.map(&:to_s).include?(work["payload"]["class"])
end
+ rescue Redis::CannotConnectError
+ # If code gets to this point and there is no Redis connection, we're
+ # running in a Test environment and have not mocked Sidekiq::Workers, so
+ # we're not testing the concurrency-limiting behavior.
+ # There is no way a production pod will run into this code, as diaspora*
+ # refuses to start without redis.
+ 0
end
end
end
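For context on the hunk above: `Sidekiq::Workers` enumerates in-progress jobs across all Sidekiq processes, which is what makes the self-exclusion check on `process_id`/`thread_id` possible. A minimal sketch of the data it yields (values are illustrative):

```ruby
require "sidekiq/api"

# Each entry is (process_id, thread_id, work); process_id embeds the PID,
# e.g. "hostname:1234:abcdef", which is why the hunk splits it on ":".
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts process_id
  puts thread_id
  puts work["payload"]["class"] # e.g. "Workers::ExportUser"
end
```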
diff --git a/bin/puma b/bin/puma
new file mode 100755
index 000000000..316845be6
--- /dev/null
+++ b/bin/puma
@@ -0,0 +1,27 @@
+#!/usr/bin/env ruby
+# frozen_string_literal: true
+
+#
+# This file was generated by Bundler.
+#
+# The application 'puma' is installed as part of a gem, and
+# this file is here to facilitate running it.
+#
+
+ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)
+
+bundle_binstub = File.expand_path("bundle", __dir__)
+
+if File.file?(bundle_binstub)
+ if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/
+ load(bundle_binstub)
+ else
+ abort("Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.
+Replace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.")
+ end
+end
+
+require "rubygems"
+require "bundler/setup"
+
+load Gem.bin_path("puma", "puma")
diff --git a/bin/pumactl b/bin/pumactl
new file mode 100755
index 000000000..75ffb108a
--- /dev/null
+++ b/bin/pumactl
@@ -0,0 +1,27 @@
+#!/usr/bin/env ruby
+# frozen_string_literal: true
+
+#
+# This file was generated by Bundler.
+#
+# The application 'pumactl' is installed as part of a gem, and
+# this file is here to facilitate running it.
+#
+
+ENV["BUNDLE_GEMFILE"] ||= File.expand_path("../Gemfile", __dir__)
+
+bundle_binstub = File.expand_path("bundle", __dir__)
+
+if File.file?(bundle_binstub)
+ if File.read(bundle_binstub, 300) =~ /This file was generated by Bundler/
+ load(bundle_binstub)
+ else
+ abort("Your `bin/bundle` was not generated by Bundler, so this binstub cannot run.
+Replace `bin/bundle` by running `bundle binstubs bundler --force`, then run this command again.")
+ end
+end
+
+require "rubygems"
+require "bundler/setup"
+
+load Gem.bin_path("puma", "pumactl")
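Both binstubs are the standard Bundler wrappers. With the default pidfile from `config/defaults.yml`, usage looks roughly like this (sketch, run from the pod's root directory):

```sh
bin/puma -C config/puma.rb              # start the appserver
bin/pumactl -P tmp/pids/web.pid status  # check the running server
bin/pumactl -P tmp/pids/web.pid stop    # graceful shutdown
```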
diff --git a/config.ru b/config.ru
index 99723cf22..118c99f09 100644
--- a/config.ru
+++ b/config.ru
@@ -8,14 +8,6 @@
require_relative "config/environment"
-# Kill unicorn workers really aggressively (at 300mb)
-if defined?(Unicorn)
- require "unicorn/worker_killer"
- oom_min = (280) * (1024**2)
- oom_max = (300) * (1024**2)
- # Max memory size (RSS) per worker
- use Unicorn::WorkerKiller::Oom, oom_min, oom_max
-end
use Rack::Deflater
run Rails.application
diff --git a/config/database.yml.example b/config/database.yml.example
index f71f860ab..5e98ad6aa 100644
--- a/config/database.yml.example
+++ b/config/database.yml.example
@@ -45,9 +45,3 @@ production:
test:
<<: *combined
database: diaspora_test
-integration1:
- <<: *combined
- database: diaspora_integration1
-integration2:
- <<: *combined
- database: diaspora_integration2
diff --git a/config/defaults.yml b/config/defaults.yml
index 3b3919f00..9680f358c 100644
--- a/config/defaults.yml
+++ b/config/defaults.yml
@@ -11,7 +11,6 @@ defaults:
certificate_authorities:
redis:
require_ssl: true
- single_process_mode: false
sidekiq:
concurrency: 5
retry: 10
@@ -40,14 +39,12 @@ defaults:
sql: false
federation: false
server:
- listen: '0.0.0.0:3000'
+ listen: "tcp://127.0.0.1:3000"
rails_environment: 'development'
pid: "tmp/pids/web.pid"
stderr_log:
stdout_log:
- unicorn_worker: 2
- unicorn_timeout: 90
- embed_sidekiq_worker: false
+ web_timeout: 90
sidekiq_workers: 1
map:
mapbox:
@@ -179,23 +176,19 @@ development:
environment:
assets:
serve: true
- single_process_mode: true
require_ssl: false
logging:
debug:
sql: true
- server:
- unicorn_worker: 1
settings:
autofollow_on_join: false
autofollow_on_join_user: ''
production:
server:
- listen: 'unix:tmp/diaspora.sock'
+ listen: 'unix://tmp/diaspora.sock'
test:
environment:
url: 'http://localhost:9887/'
- single_process_mode: true
require_ssl: false
assets:
serve: true
@@ -211,18 +204,3 @@ test:
secret: 'sdoigjosdfijg'
mail:
enable: true
-integration1:
- environment:
- url: 'http://localhost:45789/'
- single_process_mode: true
- assets:
- serve: true
- require_ssl: false
-integration2:
- environment:
- url: 'http://localhost:34658/'
- redis: 'redis://localhost:6380'
- single_process_mode: true
- assets:
- serve: true
- require_ssl: false
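diaspora\* reads the same settings from `config/diaspora.yml` for podmins who prefer YAML over TOML; a sketch mirroring the new server keys above (the key layout follows the `[configuration.server]` section of the TOML example, and the socket path is illustrative):

```yaml
# config/diaspora.yml (sketch)
configuration:
  server:
    listen: "unix:///run/diaspora/diaspora.sock"
    web_timeout: 90
    sidekiq_workers: 1
```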
diff --git a/config/diaspora.toml.example b/config/diaspora.toml.example
index 6304fb3fe..6401fc043 100644
--- a/config/diaspora.toml.example
+++ b/config/diaspora.toml.example
@@ -54,14 +54,6 @@
## Do not change this default unless you are sure!
#require_ssl = true
-## Single-process mode (default=false).
-## If set to true, Diaspora will work with just the appserver (Unicorn by
-## default) running. However, this makes it quite slow as intensive jobs
-## must be run all the time inside the request cycle. We strongly
-## recommended you leave this disabled for production setups.
-## Set to true to enable.
-#single_process_mode = false
-
## Set redirect URL for an external image host (Amazon S3 or other).
## If hosting images for your pod on an external server (even your own),
## add its URL here. All requests made to images under /uploads/images
@@ -162,12 +154,12 @@
## Settings affecting how ./script/server behaves.
[configuration.server]
-## Where the appserver should listen to (default="unix:tmp/diaspora.sock")
-#listen = "unix:tmp/diaspora.sock"
-#listen = "unix:/run/diaspora/diaspora.sock"
-#listen = "127.0.0.1:3000"
+## Where the appserver should listen to (default="unix://tmp/diaspora.sock")
+#listen = "unix://tmp/diaspora.sock"
+#listen = "unix:///run/diaspora/diaspora.sock"
+#listen = "tcp://127.0.0.1:3000"
-## Set the path for the PID file of the unicorn master process (default=tmp/pids/web.pid)
+## Set the path for the PID file of the web master process (default=tmp/pids/web.pid)
#pid = "tmp/pids/web.pid"
## Rails environment (default="development").
@@ -175,23 +167,15 @@
## Change this to "production" if you wish to run a production environment.
#rails_environment = "production"
-## Write unicorn stderr and stdout log.
-#stderr_log = "log/unicorn-stderr.log"
-#stdout_log = "log/unicorn-stdout.log"
-
-## Number of Unicorn worker processes (default=2).
-## Increase this if you have many users.
-#unicorn_worker = 2
+## Write web stderr and stdout log.
+#stderr_log = "log/web-stderr.log"
+#stdout_log = "log/web-stdout.log"
## Number of seconds before a request is aborted (default=90).
## Increase if you get empty responses, or if large image uploads fail.
## Decrease if you're under heavy load and don't care if some
## requests fail.
-#unicorn_timeout = 90
-
-## Embed a Sidekiq worker inside the unicorn process (default=false).
-## Useful for minimal Heroku setups.
-#embed_sidekiq_worker = false
+#web_timeout = 90
## Number of Sidekiq worker processes (default=1).
## In most cases it is better to
diff --git a/config/eye.rb b/config/eye.rb
index 20dfe9c86..94ded11c5 100644
--- a/config/eye.rb
+++ b/config/eye.rb
@@ -14,39 +14,30 @@ Eye.application("diaspora") do
stderr "log/eye_processes_stderr.log"
process :web do
- unicorn_command = "bin/bundle exec unicorn -c config/unicorn.rb"
-
- if rails_env == "production"
- start_command "#{unicorn_command} -D"
- daemonize false
- restart_command "kill -USR2 {PID}"
- restart_grace 10.seconds
- else
- start_command unicorn_command
- daemonize true
- end
+ web_command = "bin/puma -C config/puma.rb"
+
+ start_command web_command
+ daemonize true
+ restart_command "kill -USR2 {PID}"
+ restart_grace 10.seconds
pid_file AppConfig.server.pid.get
stop_signals [:TERM, 10.seconds]
- env "PORT" => ENV["PORT"]
-
monitor_children do
stop_command "kill -QUIT {PID}"
end
end
group :sidekiq do
- with_condition(!AppConfig.environment.single_process_mode?) do
- AppConfig.server.sidekiq_workers.to_i.times do |i|
- i += 1
-
- process "sidekiq#{i}" do
- start_command "bin/bundle exec sidekiq"
- daemonize true
- pid_file "tmp/pids/sidekiq#{i}.pid"
- stop_signals [:USR1, 0, :TERM, 10.seconds, :KILL]
- end
+ AppConfig.server.sidekiq_workers.to_i.times do |i|
+ i += 1
+
+ process "sidekiq#{i}" do
+ start_command "bin/bundle exec sidekiq"
+ daemonize true
+ pid_file "tmp/pids/sidekiq#{i}.pid"
+ stop_signals [:USR1, 0, :TERM, 10.seconds, :KILL]
end
end
end
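The `restart_command "kill -USR2 {PID}"` kept above relies on Puma's signal handling: SIGUSR2 asks the Puma master process to restart in place. Outside of eye, the same restart can be triggered by hand (assuming the default pidfile):

```sh
kill -USR2 "$(cat tmp/pids/web.pid)"
```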
diff --git a/config/initializers/sidekiq.rb b/config/initializers/sidekiq.rb
index 104e94a82..9770db3ee 100644
--- a/config/initializers/sidekiq.rb
+++ b/config/initializers/sidekiq.rb
@@ -3,16 +3,6 @@
require "sidekiq_middlewares"
require "sidekiq/middleware/i18n"
-# Single process-mode
-if AppConfig.environment.single_process_mode? && !Rails.env.test?
- if Rails.env.production?
- warn "WARNING: You are running Diaspora in production without Sidekiq"
- warn " workers turned on. Please set single_process_mode to false in"
- warn " config/diaspora.toml."
- end
- require "sidekiq/testing/inline"
-end
-
Sidekiq.configure_server do |config|
config.redis = AppConfig.get_redis_options
diff --git a/config/puma.rb b/config/puma.rb
new file mode 100644
index 000000000..a11c0c0f1
--- /dev/null
+++ b/config/puma.rb
@@ -0,0 +1,47 @@
+# frozen_string_literal: true
+
+require_relative "load_config"
+
+pidfile AppConfig.server.pid.get
+bind AppConfig.server.listen.get
+
+worker_timeout AppConfig.server.web_timeout.to_i
+
+if AppConfig.server.stdout_log? || AppConfig.server.stderr_log?
+ stdout_redirect AppConfig.server.stdout_log? ? AppConfig.server.stdout_log.get : "/dev/null",
+ AppConfig.server.stderr_log? ? AppConfig.server.stderr_log.get : "/dev/null"
+end
+
+# In general, running Puma in cluster-mode is one of those very rare setups
+# that's only relevant in *huge* scale. However, starting 1 worker runs Puma in
+# cluster mode, with a single worker. This means you get to pay all the memory
+# overhead of spawning in "cluster mode", but you don't get any performance
+# benefits. This makes no sense. Setting "workers = 0" explicitly turns off
+# cluster mode.
+#
+# For more details and further references, see
+# https://github.com/puma/puma/commit/81d26e91b777ab120e8f52d45385f0e018438ba4
+workers 0
+
+preload_app!
+
+before_fork do
+ # we're preloading the app, so force-disconnect the DB before forking
+ ActiveRecord::Base.connection_pool.disconnect!
+
+ # drop the Redis connection
+ Sidekiq.redis {|redis| redis.client.disconnect }
+end
+
+on_worker_boot do
+ # reopen logfiles to obtain a new file descriptor
+ Logging.reopen
+
+ ActiveSupport.on_load(:active_record) do
+ # we're preloading app in production, so reconnect to DB
+ ActiveRecord::Base.establish_connection
+ end
+
+ # We don't generate uuids in the frontend, but let's be on the safe side
+ UUID.generator.next_sequence
+end
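Once Puma is up on the production default `unix://tmp/diaspora.sock`, a quick smoke test is curl's Unix-socket support (sketch; the URL host is only used for the Host header):

```sh
curl --unix-socket tmp/diaspora.sock http://localhost/
```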
diff --git a/config/unicorn.rb b/config/unicorn.rb
deleted file mode 100644
index ac2120b65..000000000
--- a/config/unicorn.rb
+++ /dev/null
@@ -1,48 +0,0 @@
-# frozen_string_literal: true
-
-require_relative "load_config"
-
-port = ENV["PORT"]
-port = port && !port.empty? ? port.to_i : nil
-
-listen port || AppConfig.server.listen.get unless RACKUP[:set_listener]
-pid AppConfig.server.pid.get
-worker_processes AppConfig.server.unicorn_worker.to_i
-timeout AppConfig.server.unicorn_timeout.to_i
-stderr_path AppConfig.server.stderr_log.get if AppConfig.server.stderr_log?
-stdout_path AppConfig.server.stdout_log.get if AppConfig.server.stdout_log?
-
-preload_app true
-@sidekiq_pid = nil
-
-before_fork do |_server, _worker|
- ActiveRecord::Base.connection.disconnect! # preloading app in master, so reconnect to DB
-
- # disconnect redis if in use
- Sidekiq.redis(&:close) unless AppConfig.environment.single_process_mode?
-
- @sidekiq_pid ||= spawn("bin/bundle exec sidekiq") if AppConfig.server.embed_sidekiq_worker?
-end
-
-after_fork do |server, worker|
- Logging.reopen # reopen logfiles to obtain a new file descriptor
-
- ActiveRecord::Base.establish_connection # preloading app in master, so reconnect to DB
-
- # We don't generate uuids in the frontend, but let's be on the safe side
- UUID.generator.next_sequence
-
- # Check for an old master process from a graceful restart
- old_pid = "#{AppConfig.server.pid.get}.oldbin"
-
- if File.exist?(old_pid) && server.pid != old_pid
- begin
- # Remove a worker from the old master when we fork a new one (TTOU)
- # Except for the last worker forked by this server, which kills the old master (QUIT)
- signal = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
- Process.kill(signal, File.read(old_pid).to_i)
- rescue Errno::ENOENT, Errno::ESRCH
- # someone else did our job for us
- end
- end
-end
diff --git a/redis-integration1.conf b/redis-integration1.conf
deleted file mode 100644
index 877925868..000000000
--- a/redis-integration1.conf
+++ /dev/null
@@ -1,486 +0,0 @@
-# Redis configuration file example
-
-# Note on units: when memory size is needed, it is possible to specifiy
-# it in the usual form of 1k 5GB 4M and so forth:
-#
-# 1k => 1000 bytes
-# 1kb => 1024 bytes
-# 1m => 1000000 bytes
-# 1mb => 1024*1024 bytes
-# 1g => 1000000000 bytes
-# 1gb => 1024*1024*1024 bytes
-#
-# units are case insensitive so 1GB 1Gb 1gB are all the same.
-
-# By default Redis does not run as a daemon. Use 'yes' if you need it.
-# Note that Redis will write a pid file in /usr/local/var/run/redis.pid when daemonized.
-daemonize no
-
-# When running daemonized, Redis writes a pid file in /usr/local/var/run/redis.pid by
-# default. You can specify a custom pid file location here.
-pidfile /usr/local/var/run/redis.pid
-
-# Accept connections on the specified port, default is 6379.
-# If port 0 is specified Redis will not listen on a TCP socket.
-port 6379
-
-# If you want you can bind a single interface, if the bind option is not
-# specified all the interfaces will listen for incoming connections.
-#
-# bind 127.0.0.1
-
-# Specify the path for the unix socket that will be used to listen for
-# incoming connections. There is no default, so Redis will not listen
-# on a unix socket when not specified.
-#
-# unixsocket /tmp/redis.sock
-# unixsocketperm 755
-
-# Close the connection after a client is idle for N seconds (0 to disable)
-timeout 0
-
-# Set server verbosity to 'debug'
-# it can be one of:
-# debug (a lot of information, useful for development/testing)
-# verbose (many rarely useful info, but not a mess like the debug level)
-# notice (moderately verbose, what you want in production probably)
-# warning (only very important / critical messages are logged)
-loglevel verbose
-
-# Specify the log file name. Also 'stdout' can be used to force
-# Redis to log on the standard output. Note that if you use standard
-# output for logging but daemonize, logs will be sent to /dev/null
-logfile stdout
-
-# To enable logging to the system logger, just set 'syslog-enabled' to yes,
-# and optionally update the other syslog parameters to suit your needs.
-# syslog-enabled no
-
-# Specify the syslog identity.
-# syslog-ident redis
-
-# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
-# syslog-facility local0
-
-# Set the number of databases. The default database is DB 0, you can select
-# a different one on a per-connection basis using SELECT <dbid> where
-# dbid is a number between 0 and 'databases'-1
-databases 16
-
-################################ SNAPSHOTTING #################################
-#
-# Save the DB on disk:
-#
-# save <seconds> <changes>
-#
-# Will save the DB if both the given number of seconds and the given
-# number of write operations against the DB occurred.
-#
-# In the example below the behaviour will be to save:
-# after 900 sec (15 min) if at least 1 key changed
-# after 300 sec (5 min) if at least 10 keys changed
-# after 60 sec if at least 10000 keys changed
-#
-# Note: you can disable saving at all commenting all the "save" lines.
-
-save 900 1
-save 300 10
-save 60 10000
-
-# Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
-# If you want to save some CPU in the saving child set it to 'no' but
-# the dataset will likely be bigger if you have compressible values or keys.
-rdbcompression yes
-
-# The filename where to dump the DB
-dbfilename dump_integration1.rdb
-
-# The working directory.
-#
-# The DB will be written inside this directory, with the filename specified
-# above using the 'dbfilename' configuration directive.
-#
-# Also the Append Only File will be created inside this directory.
-#
-# Note that you must specify a directory here, not a file name.
-dir tmp/
-
-################################# REPLICATION #################################
-
-# Master-Slave replication. Use slaveof to make a Redis instance a copy of
-# another Redis server. Note that the configuration is local to the slave
-# so for example it is possible to configure the slave to save the DB with a
-# different interval, or to listen to another port, and so on.
-#
-# slaveof <masterip> <masterport>
-
-# If the master is password protected (using the "requirepass" configuration
-# directive below) it is possible to tell the slave to authenticate before
-# starting the replication synchronization process, otherwise the master will
-# refuse the slave request.
-#
-# masterauth <master-password>
-
-# When a slave lost the connection with the master, or when the replication
-# is still in progress, the slave can act in two different ways:
-#
-# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
-# still reply to client requests, possibly with out of data data, or the
-# data set may just be empty if this is the first synchronization.
-#
-# 2) if slave-serve-stale data is set to 'no' the slave will reply with
-# an error "SYNC with master in progress" to all the kind of commands
-# but to INFO and SLAVEOF.
-#
-slave-serve-stale-data yes
-
-# Slaves send PINGs to server in a predefined interval. It's possible to change
-# this interval with the repl_ping_slave_period option. The default value is 10
-# seconds.
-#
-# repl-ping-slave-period 10
-
-# The following option sets a timeout for both Bulk transfer I/O timeout and
-# master data or ping response timeout. The default value is 60 seconds.
-#
-# It is important to make sure that this value is greater than the value
-# specified for repl-ping-slave-period otherwise a timeout will be detected
-# every time there is low traffic between the master and the slave.
-#
-# repl-timeout 60
-
-################################## SECURITY ###################################
-
-# Require clients to issue AUTH <PASSWORD> before processing any other
-# commands. This might be useful in environments in which you do not trust
-# others with access to the host running redis-server.
-#
-# This should stay commented out for backward compatibility and because most
-# people do not need auth (e.g. they run their own servers).
-#
-# Warning: since Redis is pretty fast an outside user can try up to
-# 150k passwords per second against a good box. This means that you should
-# use a very strong password otherwise it will be very easy to break.
-#
-# requirepass foobared
-
-# Command renaming.
-#
-# It is possilbe to change the name of dangerous commands in a shared
-# environment. For instance the CONFIG command may be renamed into something
-# of hard to guess so that it will be still available for internal-use
-# tools but not available for general clients.
-#
-# Example:
-#
-# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
-#
-# It is also possilbe to completely kill a command renaming it into
-# an empty string:
-#
-# rename-command CONFIG ""
-
-################################### LIMITS ####################################
-
-# Set the max number of connected clients at the same time. By default there
-# is no limit, and it's up to the number of file descriptors the Redis process
-# is able to open. The special value '0' means no limits.
-# Once the limit is reached Redis will close all the new connections sending
-# an error 'max number of clients reached'.
-#
-# maxclients 128
-
-# Don't use more memory than the specified amount of bytes.
-# When the memory limit is reached Redis will try to remove keys with an
-# EXPIRE set. It will try to start freeing keys that are going to expire
-# in little time and preserve keys with a longer time to live.
-# Redis will also try to remove objects from free lists if possible.
-#
-# If all this fails, Redis will start to reply with errors to commands
-# that will use more memory, like SET, LPUSH, and so on, and will continue
-# to reply to most read-only commands like GET.
-#
-# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
-# 'state' server or cache, not as a real DB. When Redis is used as a real
-# database the memory usage will grow over the weeks, it will be obvious if
-# it is going to use too much memory in the long run, and you'll have the time
-# to upgrade. With maxmemory after the limit is reached you'll start to get
-# errors for write operations, and this may even lead to DB inconsistency.
-#
-# maxmemory <bytes>
-
-# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
-# is reached? You can select among five behavior:
-#
-# volatile-lru -> remove the key with an expire set using an LRU algorithm
-# allkeys-lru -> remove any key accordingly to the LRU algorithm
-# volatile-random -> remove a random key with an expire set
-# allkeys->random -> remove a random key, any key
-# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
-# noeviction -> don't expire at all, just return an error on write operations
-#
-# Note: with all the kind of policies, Redis will return an error on write
-# operations, when there are not suitable keys for eviction.
-#
-# At the date of writing this commands are: set setnx setex append
-# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
-# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
-# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
-# getset mset msetnx exec sort
-#
-# The default is:
-#
-# maxmemory-policy volatile-lru
-
-# LRU and minimal TTL algorithms are not precise algorithms but approximated
-# algorithms (in order to save memory), so you can select as well the sample
-# size to check. For instance for default Redis will check three keys and
-# pick the one that was used less recently, you can change the sample size
-# using the following configuration directive.
-#
-# maxmemory-samples 3
-
-############################## APPEND ONLY MODE ###############################
-
-# By default Redis asynchronously dumps the dataset on disk. If you can live
-# with the idea that the latest records will be lost if something like a crash
-# happens this is the preferred way to run Redis. If instead you care a lot
-# about your data and don't want to that a single record can get lost you should
-# enable the append only mode: when this mode is enabled Redis will append
-# every write operation received in the file appendonly.aof. This file will
-# be read on startup in order to rebuild the full dataset in memory.
-#
-# Note that you can have both the async dumps and the append only file if you
-# like (you have to comment the "save" statements above to disable the dumps).
-# Still if append only mode is enabled Redis will load the data from the
-# log file at startup ignoring the dump.rdb file.
-#
-# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
-# log file in background when it gets too big.
-
-appendonly no
-
-# The name of the append only file (default: "appendonly.aof")
-# appendfilename appendonly.aof
-
-# The fsync() call tells the Operating System to actually write data on disk
-# instead to wait for more data in the output buffer. Some OS will really flush
-# data on disk, some other OS will just try to do it ASAP.
-#
-# Redis supports three different modes:
-#
-# no: don't fsync, just let the OS flush the data when it wants. Faster.
-# always: fsync after every write to the append only log . Slow, Safest.
-# everysec: fsync only if one second passed since the last fsync. Compromise.
-#
-# The default is "everysec" that's usually the right compromise between
-# speed and data safety. It's up to you to understand if you can relax this to
-# "no" that will will let the operating system flush the output buffer when
-# it wants, for better performances (but if you can live with the idea of
-# some data loss consider the default persistence mode that's snapshotting),
-# or on the contrary, use "always" that's very slow but a bit safer than
-# everysec.
-#
-# If unsure, use "everysec".
-
-# appendfsync always
-appendfsync everysec
-# appendfsync no
-
-# When the AOF fsync policy is set to always or everysec, and a background
-# saving process (a background save or AOF log background rewriting) is
-# performing a lot of I/O against the disk, in some Linux configurations
-# Redis may block too long on the fsync() call. Note that there is no fix for
-# this currently, as even performing fsync in a different thread will block
-# our synchronous write(2) call.
-#
-# In order to mitigate this problem it's possible to use the following option
-# that will prevent fsync() from being called in the main process while a
-# BGSAVE or BGREWRITEAOF is in progress.
-#
-# This means that while another child is saving the durability of Redis is
-# the same as "appendfsync none", that in pratical terms means that it is
-# possible to lost up to 30 seconds of log in the worst scenario (with the
-# default Linux settings).
-#
-# If you have latency problems turn this to "yes". Otherwise leave it as
-# "no" that is the safest pick from the point of view of durability.
-no-appendfsync-on-rewrite no
-
-# Automatic rewrite of the append only file.
-# Redis is able to automatically rewrite the log file implicitly calling
-# BGREWRITEAOF when the AOF log size will growth by the specified percentage.
-#
-# This is how it works: Redis remembers the size of the AOF file after the
-# latest rewrite (or if no rewrite happened since the restart, the size of
-# the AOF at startup is used).
-#
-# This base size is compared to the current size. If the current size is
-# bigger than the specified percentage, the rewrite is triggered. Also
-# you need to specify a minimal size for the AOF file to be rewritten, this
-# is useful to avoid rewriting the AOF file even if the percentage increase
-# is reached but it is still pretty small.
-#
-# Specify a precentage of zero in order to disable the automatic AOF
-# rewrite feature.
-
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 64mb
-
-################################## SLOW LOG ###################################
-
-# The Redis Slow Log is a system to log queries that exceeded a specified
-# execution time. The execution time does not include the I/O operations
-# like talking with the client, sending the reply and so forth,
-# but just the time needed to actually execute the command (this is the only
-# stage of command execution where the thread is blocked and can not serve
-# other requests in the meantime).
-#
-# You can configure the slow log with two parameters: one tells Redis
-# what is the execution time, in microseconds, to exceed in order for the
-# command to get logged, and the other parameter is the length of the
-# slow log. When a new command is logged the oldest one is removed from the
-# queue of logged commands.
-
-# The following time is expressed in microseconds, so 1000000 is equivalent
-# to one second. Note that a negative number disables the slow log, while
-# a value of zero forces the logging of every command.
-slowlog-log-slower-than 10000
-
-# There is no limit to this length. Just be aware that it will consume memory.
-# You can reclaim memory used by the slow log with SLOWLOG RESET.
-slowlog-max-len 1024
-
-################################ VIRTUAL MEMORY ###############################
-
-### WARNING! Virtual Memory is deprecated in Redis 2.4
-### The use of Virtual Memory is strongly discouraged.
-
-# Virtual Memory allows Redis to work with datasets bigger than the actual
-# amount of RAM needed to hold the whole dataset in memory.
-# In order to do so very used keys are taken in memory while the other keys
-# are swapped into a swap file, similarly to what operating systems do
-# with memory pages.
-#
-# To enable VM just set 'vm-enabled' to yes, and set the following three
-# VM parameters accordingly to your needs.
-
-vm-enabled no
-# vm-enabled yes
-
-# This is the path of the Redis swap file. As you can guess, swap files
-# can't be shared by different Redis instances, so make sure to use a swap
-# file for every redis process you are running. Redis will complain if the
-# swap file is already in use.
-#
-# The best kind of storage for the Redis swap file (that's accessed at random)
-# is a Solid State Disk (SSD).
-#
-# *** WARNING *** if you are using a shared hosting the default of putting
-# the swap file under /tmp is not secure. Create a dir with access granted
-# only to Redis user and configure Redis to create the swap file there.
-vm-swap-file /tmp/redis.swap
-
-# vm-max-memory configures the VM to use at max the specified amount of
-# RAM. Everything that deos not fit will be swapped on disk *if* possible, that
-# is, if there is still enough contiguous space in the swap file.
-#
-# With vm-max-memory 0 the system will swap everything it can. Not a good
-# default, just specify the max amount of RAM you can in bytes, but it's
-# better to leave some margin. For instance specify an amount of RAM
-# that's more or less between 60 and 80% of your free RAM.
-vm-max-memory 0
-
-# Redis swap files is split into pages. An object can be saved using multiple
-# contiguous pages, but pages can't be shared between different objects.
-# So if your page is too big, small objects swapped out on disk will waste
-# a lot of space. If you page is too small, there is less space in the swap
-# file (assuming you configured the same number of total swap file pages).
-#
-# If you use a lot of small objects, use a page size of 64 or 32 bytes.
-# If you use a lot of big objects, use a bigger page size.
-# If unsure, use the default :)
-vm-page-size 32
-
-# Number of total memory pages in the swap file.
-# Given that the page table (a bitmap of free/used pages) is taken in memory,
-# every 8 pages on disk will consume 1 byte of RAM.
-#
-# The total swap size is vm-page-size * vm-pages
-#
-# With the default of 32-bytes memory pages and 134217728 pages Redis will
-# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
-#
-# It's better to use the smallest acceptable value for your application,
-# but the default is large in order to work in most conditions.
-vm-pages 134217728
-
-# Max number of VM I/O threads running at the same time.
-# This threads are used to read/write data from/to swap file, since they
-# also encode and decode objects from disk to memory or the reverse, a bigger
-# number of threads can help with big objects even if they can't help with
-# I/O itself as the physical device may not be able to couple with many
-# reads/writes operations at the same time.
-#
-# The special value of 0 turn off threaded I/O and enables the blocking
-# Virtual Memory implementation.
-vm-max-threads 4
-
-############################### ADVANCED CONFIG ###############################
-
-# Hashes are encoded in a special way (much more memory efficient) when they
-# have at max a given numer of elements, and the biggest element does not
-# exceed a given threshold. You can configure this limits with the following
-# configuration directives.
-hash-max-zipmap-entries 512
-hash-max-zipmap-value 64
-
-# Similarly to hashes, small lists are also encoded in a special way in order
-# to save a lot of space. The special representation is only used when
-# you are under the following limits:
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-
-# Sets have a special encoding in just one case: when a set is composed
-# of just strings that happens to be integers in radix 10 in the range
-# of 64 bit signed integers.
-# The following configuration setting sets the limit in the size of the
-# set in order to use this special memory saving encoding.
-set-max-intset-entries 512
-
-# Similarly to hashes and lists, sorted sets are also specially encoded in
-# order to save a lot of space. This encoding is only used when the length and
-# elements of a sorted set are below the following limits:
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-
-# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
-# order to help rehashing the main Redis hash table (the one mapping top-level
-# keys to values). The hash table implementation redis uses (see dict.c)
-# performs a lazy rehashing: the more operation you run into an hash table
-# that is rhashing, the more rehashing "steps" are performed, so if the
-# server is idle the rehashing is never complete and some more memory is used
-# by the hash table.
-#
-# The default is to use this millisecond 10 times every second in order to
-# active rehashing the main dictionaries, freeing memory when possible.
-#
-# If unsure:
-# use "activerehashing no" if you have hard latency requirements and it is
-# not a good thing in your environment that Redis can reply form time to time
-# to queries with 2 milliseconds delay.
-#
-# use "activerehashing yes" if you don't have such hard requirements but
-# want to free memory asap when possible.
-activerehashing yes
-
-################################## INCLUDES ###################################
-
-# Include one or more other config files here. This is useful if you
-# have a standard template that goes to all redis server but also need
-# to customize a few per-server settings. Include files can include
-# other files, so use this wisely.
-#
-# include /path/to/local.conf
-# include /path/to/other.conf
diff --git a/redis-integration2.conf b/redis-integration2.conf
deleted file mode 100644
index bb16fed8d..000000000
--- a/redis-integration2.conf
+++ /dev/null
@@ -1,486 +0,0 @@
-# Redis configuration file example
-
-# Note on units: when memory size is needed, it is possible to specifiy
-# it in the usual form of 1k 5GB 4M and so forth:
-#
-# 1k => 1000 bytes
-# 1kb => 1024 bytes
-# 1m => 1000000 bytes
-# 1mb => 1024*1024 bytes
-# 1g => 1000000000 bytes
-# 1gb => 1024*1024*1024 bytes
-#
-# units are case insensitive so 1GB 1Gb 1gB are all the same.
-
-# By default Redis does not run as a daemon. Use 'yes' if you need it.
-# Note that Redis will write a pid file in /usr/local/var/run/redis.pid when daemonized.
-daemonize no
-
-# When running daemonized, Redis writes a pid file in /usr/local/var/run/redis.pid by
-# default. You can specify a custom pid file location here.
-pidfile /usr/local/var/run/redis.pid
-
-# Accept connections on the specified port, default is 6379.
-# If port 0 is specified Redis will not listen on a TCP socket.
-port 6380
-
-# If you want you can bind a single interface, if the bind option is not
-# specified all the interfaces will listen for incoming connections.
-#
-# bind 127.0.0.1
-
-# Specify the path for the unix socket that will be used to listen for
-# incoming connections. There is no default, so Redis will not listen
-# on a unix socket when not specified.
-#
-# unixsocket /tmp/redis.sock
-# unixsocketperm 755
-
-# Close the connection after a client is idle for N seconds (0 to disable)
-timeout 0
-
-# Set server verbosity to 'debug'
-# it can be one of:
-# debug (a lot of information, useful for development/testing)
-# verbose (many rarely useful info, but not a mess like the debug level)
-# notice (moderately verbose, what you want in production probably)
-# warning (only very important / critical messages are logged)
-loglevel verbose
-
-# Specify the log file name. Also 'stdout' can be used to force
-# Redis to log on the standard output. Note that if you use standard
-# output for logging but daemonize, logs will be sent to /dev/null
-logfile stdout
-
-# To enable logging to the system logger, just set 'syslog-enabled' to yes,
-# and optionally update the other syslog parameters to suit your needs.
-# syslog-enabled no
-
-# Specify the syslog identity.
-# syslog-ident redis
-
-# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
-# syslog-facility local0
-
-# Set the number of databases. The default database is DB 0, you can select
-# a different one on a per-connection basis using SELECT <dbid> where
-# dbid is a number between 0 and 'databases'-1
-databases 16
-
-################################ SNAPSHOTTING #################################
-#
-# Save the DB on disk:
-#
-# save <seconds> <changes>
-#
-# Will save the DB if both the given number of seconds and the given
-# number of write operations against the DB occurred.
-#
-# In the example below the behaviour will be to save:
-# after 900 sec (15 min) if at least 1 key changed
-# after 300 sec (5 min) if at least 10 keys changed
-# after 60 sec if at least 10000 keys changed
-#
-# Note: you can disable saving at all commenting all the "save" lines.
-
-save 900 1
-save 300 10
-save 60 10000
-
-# Compress string objects using LZF when dump .rdb databases?
-# For default that's set to 'yes' as it's almost always a win.
-# If you want to save some CPU in the saving child set it to 'no' but
-# the dataset will likely be bigger if you have compressible values or keys.
-rdbcompression yes
-
-# The filename where to dump the DB
-dbfilename dump_integration2.rdb
-
-# The working directory.
-#
-# The DB will be written inside this directory, with the filename specified
-# above using the 'dbfilename' configuration directive.
-#
-# Also the Append Only File will be created inside this directory.
-#
-# Note that you must specify a directory here, not a file name.
-dir tmp/
-
-################################# REPLICATION #################################
-
-# Master-Slave replication. Use slaveof to make a Redis instance a copy of
-# another Redis server. Note that the configuration is local to the slave
-# so for example it is possible to configure the slave to save the DB with a
-# different interval, or to listen to another port, and so on.
-#
-# slaveof <masterip> <masterport>
-
-# If the master is password protected (using the "requirepass" configuration
-# directive below) it is possible to tell the slave to authenticate before
-# starting the replication synchronization process, otherwise the master will
-# refuse the slave request.
-#
-# masterauth <master-password>
-
-# When a slave lost the connection with the master, or when the replication
-# is still in progress, the slave can act in two different ways:
-#
-# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
-# still reply to client requests, possibly with out of data data, or the
-# data set may just be empty if this is the first synchronization.
-#
-# 2) if slave-serve-stale data is set to 'no' the slave will reply with
-# an error "SYNC with master in progress" to all the kind of commands
-# but to INFO and SLAVEOF.
-#
-slave-serve-stale-data yes
-
-# Slaves send PINGs to server in a predefined interval. It's possible to change
-# this interval with the repl_ping_slave_period option. The default value is 10
-# seconds.
-#
-# repl-ping-slave-period 10
-
-# The following option sets a timeout for both Bulk transfer I/O timeout and
-# master data or ping response timeout. The default value is 60 seconds.
-#
-# It is important to make sure that this value is greater than the value
-# specified for repl-ping-slave-period otherwise a timeout will be detected
-# every time there is low traffic between the master and the slave.
-#
-# repl-timeout 60
-
-################################## SECURITY ###################################
-
-# Require clients to issue AUTH <PASSWORD> before processing any other
-# commands. This might be useful in environments in which you do not trust
-# others with access to the host running redis-server.
-#
-# This should stay commented out for backward compatibility and because most
-# people do not need auth (e.g. they run their own servers).
-#
-# Warning: since Redis is pretty fast an outside user can try up to
-# 150k passwords per second against a good box. This means that you should
-# use a very strong password otherwise it will be very easy to break.
-#
-# requirepass foobared
-
-# Command renaming.
-#
-# It is possilbe to change the name of dangerous commands in a shared
-# environment. For instance the CONFIG command may be renamed into something
-# of hard to guess so that it will be still available for internal-use
-# tools but not available for general clients.
-#
-# Example:
-#
-# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
-#
-# It is also possilbe to completely kill a command renaming it into
-# an empty string:
-#
-# rename-command CONFIG ""
-
-################################### LIMITS ####################################
-
-# Set the max number of connected clients at the same time. By default there
-# is no limit, and it's up to the number of file descriptors the Redis process
-# is able to open. The special value '0' means no limits.
-# Once the limit is reached Redis will close all the new connections sending
-# an error 'max number of clients reached'.
-#
-# maxclients 128
-
-# Don't use more memory than the specified amount of bytes.
-# When the memory limit is reached Redis will try to remove keys with an
-# EXPIRE set. It will try to start freeing keys that are going to expire
-# in little time and preserve keys with a longer time to live.
-# Redis will also try to remove objects from free lists if possible.
-#
-# If all this fails, Redis will start to reply with errors to commands
-# that will use more memory, like SET, LPUSH, and so on, and will continue
-# to reply to most read-only commands like GET.
-#
-# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
-# 'state' server or cache, not as a real DB. When Redis is used as a real
-# database the memory usage will grow over the weeks, it will be obvious if
-# it is going to use too much memory in the long run, and you'll have the time
-# to upgrade. With maxmemory after the limit is reached you'll start to get
-# errors for write operations, and this may even lead to DB inconsistency.
-#
-# maxmemory <bytes>
-
-# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
-# is reached? You can select among five behavior:
-#
-# volatile-lru -> remove the key with an expire set using an LRU algorithm
-# allkeys-lru -> remove any key accordingly to the LRU algorithm
-# volatile-random -> remove a random key with an expire set
-# allkeys->random -> remove a random key, any key
-# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
-# noeviction -> don't expire at all, just return an error on write operations
-#
-# Note: with all the kind of policies, Redis will return an error on write
-# operations, when there are not suitable keys for eviction.
-#
-# At the date of writing this commands are: set setnx setex append
-# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
-# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
-# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
-# getset mset msetnx exec sort
-#
-# The default is:
-#
-# maxmemory-policy volatile-lru
-
-# LRU and minimal TTL algorithms are not precise algorithms but approximated
-# algorithms (in order to save memory), so you can select as well the sample
-# size to check. For instance for default Redis will check three keys and
-# pick the one that was used less recently, you can change the sample size
-# using the following configuration directive.
-#
-# maxmemory-samples 3
-
-############################## APPEND ONLY MODE ###############################
-
-# By default Redis asynchronously dumps the dataset on disk. If you can live
-# with the idea that the latest records will be lost if something like a crash
-# happens this is the preferred way to run Redis. If instead you care a lot
-# about your data and don't want to that a single record can get lost you should
-# enable the append only mode: when this mode is enabled Redis will append
-# every write operation received in the file appendonly.aof. This file will
-# be read on startup in order to rebuild the full dataset in memory.
-#
-# Note that you can have both the async dumps and the append only file if you
-# like (you have to comment the "save" statements above to disable the dumps).
-# Still if append only mode is enabled Redis will load the data from the
-# log file at startup ignoring the dump.rdb file.
-#
-# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
-# log file in background when it gets too big.
-
-appendonly no
-
-# The name of the append only file (default: "appendonly.aof")
-# appendfilename appendonly.aof
-
-# The fsync() call tells the Operating System to actually write data on disk
-# instead to wait for more data in the output buffer. Some OS will really flush
-# data on disk, some other OS will just try to do it ASAP.
-#
-# Redis supports three different modes:
-#
-# no: don't fsync, just let the OS flush the data when it wants. Faster.
-# always: fsync after every write to the append only log . Slow, Safest.
-# everysec: fsync only if one second passed since the last fsync. Compromise.
-#
-# The default is "everysec" that's usually the right compromise between
-# speed and data safety. It's up to you to understand if you can relax this to
-# "no" that will will let the operating system flush the output buffer when
-# it wants, for better performances (but if you can live with the idea of
-# some data loss consider the default persistence mode that's snapshotting),
-# or on the contrary, use "always" that's very slow but a bit safer than
-# everysec.
-#
-# If unsure, use "everysec".
-
-# appendfsync always
-appendfsync everysec
-# appendfsync no
-
-# When the AOF fsync policy is set to always or everysec, and a background
-# saving process (a background save or AOF log background rewriting) is
-# performing a lot of I/O against the disk, in some Linux configurations
-# Redis may block too long on the fsync() call. Note that there is no fix for
-# this currently, as even performing fsync in a different thread will block
-# our synchronous write(2) call.
-#
-# In order to mitigate this problem it's possible to use the following option
-# that will prevent fsync() from being called in the main process while a
-# BGSAVE or BGREWRITEAOF is in progress.
-#
-# This means that while another child is saving the durability of Redis is
-# the same as "appendfsync none", that in pratical terms means that it is
-# possible to lost up to 30 seconds of log in the worst scenario (with the
-# default Linux settings).
-#
-# If you have latency problems turn this to "yes". Otherwise leave it as
-# "no" that is the safest pick from the point of view of durability.
-no-appendfsync-on-rewrite no
-
-# Automatic rewrite of the append only file.
-# Redis is able to automatically rewrite the log file implicitly calling
-# BGREWRITEAOF when the AOF log size will growth by the specified percentage.
-#
-# This is how it works: Redis remembers the size of the AOF file after the
-# latest rewrite (or if no rewrite happened since the restart, the size of
-# the AOF at startup is used).
-#
-# This base size is compared to the current size. If the current size is
-# bigger than the specified percentage, the rewrite is triggered. Also
-# you need to specify a minimal size for the AOF file to be rewritten, this
-# is useful to avoid rewriting the AOF file even if the percentage increase
-# is reached but it is still pretty small.
-#
-# Specify a precentage of zero in order to disable the automatic AOF
-# rewrite feature.
-
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 64mb
-
-################################## SLOW LOG ###################################
-
-# The Redis Slow Log is a system to log queries that exceeded a specified
-# execution time. The execution time does not include the I/O operations
-# like talking with the client, sending the reply and so forth,
-# but just the time needed to actually execute the command (this is the only
-# stage of command execution where the thread is blocked and can not serve
-# other requests in the meantime).
-#
-# You can configure the slow log with two parameters: one tells Redis
-# the execution time, in microseconds, that a command must exceed in order
-# to get logged, and the other is the maximum length of the slow log.
-# When a new command is logged the oldest one is removed from the
-# queue of logged commands.
-
-# The following time is expressed in microseconds, so 1000000 is equivalent
-# to one second. Note that a negative number disables the slow log, while
-# a value of zero forces the logging of every command.
-slowlog-log-slower-than 10000
-
-# There is no limit to this length. Just be aware that it will consume memory.
-# You can reclaim memory used by the slow log with SLOWLOG RESET.
-slowlog-max-len 1024
-
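To watch the slow log in action, you can temporarily log every command (threshold 0), run something, and read the log back; a sketch against a local instance, using a throwaway demo:key (remember to restore the threshold afterwards):

    redis-cli CONFIG SET slowlog-log-slower-than 0    # log every command
    redis-cli SET demo:key demo-value
    redis-cli SLOWLOG GET 5                           # show the last five entries
    redis-cli SLOWLOG LEN
    redis-cli SLOWLOG RESET                           # clear the log and reclaim its memory
    redis-cli CONFIG SET slowlog-log-slower-than 10000
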
-################################ VIRTUAL MEMORY ###############################
-
-### WARNING! Virtual Memory is deprecated in Redis 2.4
-### The use of Virtual Memory is strongly discouraged.
-
-# Virtual Memory allows Redis to work with datasets bigger than the actual
-# amount of RAM needed to hold the whole dataset in memory.
-# In order to do so, frequently used keys are kept in memory while the other keys
-# are swapped into a swap file, similarly to what operating systems do
-# with memory pages.
-#
-# To enable VM just set 'vm-enabled' to yes, and set the following three
-# VM parameters according to your needs.
-
-vm-enabled no
-# vm-enabled yes
-
-# This is the path of the Redis swap file. As you can guess, swap files
-# can't be shared by different Redis instances, so make sure to use a separate
-# swap file for every redis process you are running. Redis will complain if the
-# swap file is already in use.
-#
-# The best kind of storage for the Redis swap file (that's accessed at random)
-# is a Solid State Disk (SSD).
-#
-# *** WARNING *** if you are using shared hosting, the default of putting
-# the swap file under /tmp is not secure. Create a dir with access granted
-# only to Redis user and configure Redis to create the swap file there.
-vm-swap-file /tmp/redis.swap
-
-# vm-max-memory configures the VM to use at max the specified amount of
-# RAM. Everything that does not fit will be swapped to disk *if* possible, that
-# is, if there is still enough contiguous space in the swap file.
-#
-# With vm-max-memory 0 the system will swap everything it can. Not a good
-# default: just specify the max amount of RAM you can spare, in bytes, but it's
-# better to leave some margin. For instance, specify an amount of RAM
-# that's more or less between 60 and 80% of your free RAM.
-vm-max-memory 0
-
-# The Redis swap file is split into pages. An object can be saved using multiple
-# contiguous pages, but pages can't be shared between different objects.
-# So if your page is too big, small objects swapped out to disk will waste
-# a lot of space. If your page is too small, there is less space in the swap
-# file (assuming you configured the same number of total swap file pages).
-#
-# If you use a lot of small objects, use a page size of 64 or 32 bytes.
-# If you use a lot of big objects, use a bigger page size.
-# If unsure, use the default :)
-vm-page-size 32
-
-# Number of total memory pages in the swap file.
-# Given that the page table (a bitmap of free/used pages) is taken in memory,
-# every 8 pages on disk will consume 1 byte of RAM.
-#
-# The total swap size is vm-page-size * vm-pages
-#
-# With the default of 32-byte memory pages and 134217728 pages Redis will
-# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
-#
-# It's better to use the smallest acceptable value for your application,
-# but the default is large in order to work in most conditions.
-vm-pages 134217728
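The 4 GB / 16 MB figures follow directly from these defaults; a quick shell check (the divisions come first so the arithmetic stays within 32-bit range):

    # swap file size: vm-pages * vm-page-size = 134217728 pages * 32 bytes
    echo "$((134217728 / 1048576 * 32)) MB swap file"   # 4096 MB = 4 GB
    # page table: 1 byte of RAM per 8 pages
    echo "$((134217728 / 8 / 1048576)) MB page table"   # 16 MB
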
-
-# Max number of VM I/O threads running at the same time.
-# These threads are used to read/write data from/to the swap file. Since they
-# also encode and decode objects from disk to memory or the reverse, a bigger
-# number of threads can help with big objects even if they can't help with
-# I/O itself, as the physical device may not be able to cope with many
-# read/write operations at the same time.
-#
-# The special value of 0 turns off threaded I/O and enables the blocking
-# Virtual Memory implementation.
-vm-max-threads 4
-
-############################### ADVANCED CONFIG ###############################
-
-# Hashes are encoded in a special way (much more memory efficient) when they
-# have at most a given number of elements, and the biggest element does not
-# exceed a given threshold. You can configure these limits with the following
-# configuration directives.
-hash-max-zipmap-entries 512
-hash-max-zipmap-value 64
-
-# Similarly to hashes, small lists are also encoded in a special way in order
-# to save a lot of space. The special representation is only used when
-# you are under the following limits:
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-
-# Sets have a special encoding in just one case: when a set is composed
-# of just strings that happen to be integers in radix 10 in the range
-# of 64 bit signed integers.
-# The following configuration setting sets the limit on the size of the
-# set in order to use this special memory saving encoding.
-set-max-intset-entries 512
-
-# Similarly to hashes and lists, sorted sets are also specially encoded in
-# order to save a lot of space. This encoding is only used when the length and
-# elements of a sorted set are below the following limits:
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-
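Whether a value is currently stored in one of these compact encodings can be checked with OBJECT ENCODING; a sketch against a local instance using throwaway demo:* keys (the reported names vary by Redis version, e.g. zipmap vs. ziplist for small hashes):

    redis-cli DEL demo:hash demo:intset
    redis-cli HSET demo:hash field value
    redis-cli OBJECT ENCODING demo:hash     # compact encoding while the hash is small
    redis-cli SADD demo:intset 1 2 3
    redis-cli OBJECT ENCODING demo:intset   # "intset" for an all-integer set
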
-# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
-# order to help rehashing the main Redis hash table (the one mapping top-level
-# keys to values). The hash table implementation redis uses (see dict.c)
-# performs a lazy rehashing: the more operations you run on a hash table
-# that is rehashing, the more rehashing "steps" are performed, so if the
-# server is idle the rehashing never completes and some more memory is used
-# by the hash table.
-#
-# The default is to use this millisecond 10 times every second in order to
-# actively rehash the main dictionaries, freeing memory when possible.
-#
-# If unsure:
-# use "activerehashing no" if you have hard latency requirements and it is
-# not a good thing in your environment that Redis can reply form time to time
-# to queries with 2 milliseconds delay.
-#
-# use "activerehashing yes" if you don't have such hard requirements but
-# want to free memory asap when possible.
-activerehashing yes
-
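If unsure which way a running instance is configured, the current value is one query away:

    redis-cli CONFIG GET activerehashing
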
-################################## INCLUDES ###################################
-
-# Include one or more other config files here. This is useful if you
-# have a standard template that goes to all redis servers but also need
-# to customize a few per-server settings. Include files can include
-# other files, so use this wisely.
-#
-# include /path/to/local.conf
-# include /path/to/other.conf
diff --git a/script/server b/script/server
index 3fcbcc895..aee6cf4f5 100755
--- a/script/server
+++ b/script/server
@@ -22,23 +22,6 @@ on_failure()
fi
}
-# Check if already running/port blocked
-chk_service()
-{
- port=${1:?Missing port}
- case $os in
- *[Bb][Ss][Dd]*|Darwin)
- ## checks ipv[46]
- netstat -anL | awk '{print $2}' | grep "\.$1$"
- ;;
- *)
- # Is someone listening on the ports already? (ipv4 only test ?)
- netstat -nl | grep '[^:]:'$port'[ \t]'
- ;;
- esac
-}
-
-
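With chk_service gone, the script no longer refuses to start when the port is already taken. If you want to check by hand before starting, the removed netstat pattern still works; a sketch for Linux, substituting the port from your listen setting:

    # Is anything already listening on port 3000?
    netstat -nl | grep '[^:]:3000[ \t]'
    # or, with iproute2:
    ss -tln | grep ':3000 '
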
# ensure right directory
realpath=$( ruby -e "puts File.expand_path(\"$0\")")
cd $(dirname $realpath)/..
@@ -106,8 +89,6 @@ fi
os=$(uname -s)
vars=$(bin/bundle exec ruby ./script/get_config.rb \
- single_process_mode=environment.single_process_mode? \
- embed_sidekiq_worker=server.embed_sidekiq_worker \
workers=server.sidekiq_workers \
redis_url=environment.redis \
| grep -vE "is not writable|as your home directory temporarily"
@@ -115,24 +96,6 @@ vars=$(bin/bundle exec ruby ./script/get_config.rb \
on_failure "Couldn't parse $CONFIG_FILE!"
eval "$vars"
-args="$@"
-for arg in $(echo $args | awk '{ for (i = 1; i <= NF; i++) print $i}')
-do
- [ "$prev_arg" = '-p' ] && PORT="$arg"
- prev_arg="$arg"
-done
-
-if [ -n "$PORT" ]
-then
- export PORT
-
- services=$(chk_service $PORT)
- if [ -n "$services" ]
- then
- fatal "Port $PORT is already in use.\n\t$services"
- fi
-fi
-
# Force AGPL
if [ -w "public" -a ! -e "public/source.tar.gz" ]
then
@@ -161,16 +124,13 @@ application, run:
fi
# Check if redis is running
-if [ "$single_process_mode" = "false" ]
+if [ -n "$redis_url" ]
then
- if [ -n "$redis_url" ]
- then
- redis_param="url: '$redis_url'"
- fi
- if [ "$(bin/bundle exec ruby -e "require 'redis'; puts Redis.new($redis_param).ping" 2> /dev/null | grep -vE "is not writable|as your home directory temporarily" )" != "PONG" ]
- then
- fatal "Can't connect to redis. Please check if it's running and if environment.redis is configured correctly in $CONFIG_FILE."
- fi
+ redis_param="url: '$redis_url'"
+fi
+if [ "$(bin/bundle exec ruby -e "require 'redis'; puts Redis.new($redis_param).ping" 2> /dev/null | grep -vE "is not writable|as your home directory temporarily" )" != "PONG" ]
+then
+ fatal "Can't connect to redis. Please check if it's running and if environment.redis is configured correctly in $CONFIG_FILE."
fi
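The same connectivity check can be run by hand when a pod fails at this step; a sketch, where the URL is only an example and should match your environment.redis setting:

    bin/bundle exec ruby -e "require 'redis'; puts Redis.new(url: 'redis://127.0.0.1:6379').ping"
    # prints PONG when the connection works
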
# Check for old curl versions (see https://github.com/diaspora/diaspora/issues/4202)
@@ -201,22 +161,5 @@ if [ -n "${ldconfig}" ]; then
fi
# Start Diaspora
-printf "Starting Diaspora in $RAILS_ENV mode "
-if [ -n "$PORT" ]
-then
- printf "on port $PORT "
-fi
-if [ "$embed_sidekiq_worker" = "true" ]
-then
- echo "with a Sidekiq worker embedded into Unicorn."
- workers=0
-elif [ "$single_process_mode" = "true" ]
-then
- echo "with job processing inside the request cycle."
- workers=0
-else
- echo "with $workers Sidekiq worker(s)."
-fi
-echo ""
-
+printf "Starting Diaspora in $RAILS_ENV mode with $workers Sidekiq worker(s)."
exec bin/bundle exec loader_eye --stop_all -c config/eye.rb
diff --git a/spec/workers/export_user_spec.rb b/spec/workers/export_user_spec.rb
index 657b801d8..0fe454201 100644
--- a/spec/workers/export_user_spec.rb
+++ b/spec/workers/export_user_spec.rb
@@ -25,14 +25,9 @@ describe Workers::ExportUser do
context "concurrency" do
before do
- AppConfig.environment.single_process_mode = false
AppConfig.settings.archive_jobs_concurrency = 1
end
- after :all do
- AppConfig.environment.single_process_mode = true
- end
-
let(:pid) { "#{Socket.gethostname}:#{Process.pid}:#{SecureRandom.hex(6)}" }
it "schedules a job for later when already another parallel export job is running" do
@@ -76,14 +71,5 @@ describe Workers::ExportUser do
Workers::ExportUser.new.perform(alice.id)
end
-
- it "runs the export when diaspora is in single process mode" do
- AppConfig.environment.single_process_mode = true
- expect(Sidekiq::Workers).not_to receive(:new)
- expect(Workers::ExportUser).not_to receive(:perform_in).with(kind_of(Integer), alice.id)
- expect(alice).to receive(:perform_export!)
-
- Workers::ExportUser.new.perform(alice.id)
- end
end
end