Merge pull request #886 from matrix-org/dinsic-release-v1.14.x
Merge mainline release v1.14.0 into dinsic
anoadragon453 authored Jun 17, 2020
2 parents 21b81f7 + eef7a44 commit 2da8293
Showing 29 changed files with 781 additions and 164 deletions.
5 changes: 5 additions & 0 deletions README.rst
@@ -70,6 +70,11 @@ SyTest requires a number of dependencies that are easiest installed from CPAN.
Synapse does not need to be installed, as SyTest will run it directly from
its source code directory.

Additionally, a number of native dependencies are required. To install these
dependencies on an Ubuntu/Debian-derived Linux distribution, run the following::

sudo apt install libpq-dev build-essential

Installing on OS X
------------------
Dependencies can be installed on OS X in the same manner, except that packages
33 changes: 33 additions & 0 deletions docker/README.md
@@ -55,10 +55,43 @@ Synapse:
* `BLACKLIST`: set non-empty to change the default blacklist file to the
specified path relative to the Synapse directory

Some examples of running Synapse in different configurations:

* Running Synapse in worker mode using
[TCP-replication](https://github.com/matrix-org/synapse/blob/master/docs/tcp_replication.md):

```
docker run --rm -it -e POSTGRES=1 -e WORKERS=1 -v /path/to/synapse\:/src:ro \
-v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py35
```

* Running Synapse in worker mode using redis:

```
docker network create testfoobar
docker run --network testfoobar --name testredis -d redis:5.0
docker run --network testfoobar --rm -it -e POSTGRES=1 -e WORKERS=1 \
-v /path/to/synapse\:/src:ro \
-v /path/to/where/you/want/logs\:/logs \
matrixdotorg/sytest-synapse:py35 --redis-host testredis
# Use `docker start/stop testredis` if you want to explicitly kill redis or start it again after reboot
```

Dendrite:

Dendrite does not currently make use of any environment variables.
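
For reference, a minimal run against a local Dendrite checkout might look like the
following sketch (it assumes the companion `matrixdotorg/sytest-dendrite` image and
illustrative host paths):

```
docker run --rm -it -v /path/to/dendrite\:/src:ro \
    -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-dendrite
```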

## Using the local checkout of Sytest

If you would like to run tests with a custom checkout of Sytest, add a volume
to the docker command mounting the checkout to the `/sytest` folder in the
container:

```
docker run --rm -it -v /path/to/synapse\:/src:ro -v /path/to/where/you/want/logs\:/logs \
-v /path/to/code/sytest\:/sytest:ro matrixdotorg/sytest-synapse:py35
```

## Building the containers

The containers are built by executing `./build.sh`. You will then have to push
8 changes: 6 additions & 2 deletions docs/dendrite-setup.md
@@ -41,14 +41,18 @@ SyTest will expect Dendrite to be at `../dendrite` relative to Sytest's root dir
Simply run the following to execute tests:

```
./run-tests.pl -I Dendrite::Monolith -W ../dendrite/testfile
./run-tests.pl -I Dendrite::Monolith -W ../dendrite/sytest-whitelist -B ../dendrite/sytest-blacklist
```

## Useful flags

* `-W` applies a test whitelist file, one of which is currently kept up to date
with which sytests Dendrite passes
[here](https://github.com/matrix-org/dendrite/blob/master/testfile)
[here](https://github.com/matrix-org/dendrite/blob/master/sytest-whitelist)

* `-B` applies a test blacklist file, one of which is currently kept up to date
with which sytests are currently flaky (fail *sometimes*) with Dendrite
[here](https://github.com/matrix-org/dendrite/blob/master/sytest-blacklist)

* `-d` lets you set the path to Dendrite's `bin/` directory, in case it's
somewhere other than `../dendrite/bin` (a combined invocation is shown below)
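
Putting these flags together, a typical local run against a Dendrite checkout could
look like this (paths are illustrative and assume the default layout described above):

```
./run-tests.pl -I Dendrite::Monolith \
    -W ../dendrite/sytest-whitelist \
    -B ../dendrite/sytest-blacklist \
    -d ../dendrite/bin
```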
5 changes: 5 additions & 0 deletions lib/SyTest/Homeserver/Dendrite.pm
@@ -282,10 +282,15 @@ sub _start_monolith
'--tls-key', $self->{paths}{tls_key},
);

push(@command, '-api') if $ENV{'API'} == '1';
$output->diag( "Starting Dendrite with: @command" );

return $self->_start_process_and_await_connectable(
setup => [
env => {
LOG_DIR => $self->{hs_dir},
DENDRITE_TRACE_SQL => $ENV{'DENDRITE_TRACE_SQL'},
DENDRITE_TRACE_HTTP => $ENV{'DENDRITE_TRACE_HTTP'},
},
],
command => [ @command ],
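
Because these values are copied from SyTest's own environment, SQL/HTTP tracing can
be switched on for a run simply by exporting the variables before starting the test
runner. A sketch (assuming Dendrite honours `DENDRITE_TRACE_SQL` and
`DENDRITE_TRACE_HTTP` as trace switches):

```
# Both variables are passed through to the Dendrite process by SyTest
DENDRITE_TRACE_SQL=1 DENDRITE_TRACE_HTTP=1 \
    ./run-tests.pl -I Dendrite::Monolith -W ../dendrite/sytest-whitelist
```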
7 changes: 7 additions & 0 deletions lib/SyTest/Homeserver/Synapse.pm
@@ -33,6 +33,7 @@ sub _init

$self->{paths} = {};
$self->{dendron} = '';
$self->{redis_host} = '';

$self->SUPER::_init( $args );

@@ -291,6 +292,11 @@ sub start
limit_usage_by_mau => "true",
max_mau_value => 50000000,

redis => {
enabled => $self->{redis_host} ne '',
host => $self->{redis_host},
},

map {
defined $self->{$_} ? ( $_ => $self->{$_} ) : ()
} qw(
@@ -662,6 +668,7 @@ sub _init
$self->SUPER::_init( @_ );

$self->{dendron} = delete $args->{dendron_binary};
$self->{redis_host} = delete $args->{redis_host};

if( my $level = delete $args->{torture_replication} ) {
# torture the replication protocol a bit, to replicate bugs.
6 changes: 5 additions & 1 deletion lib/SyTest/HomeserverFactory/Synapse.pm
@@ -101,15 +101,17 @@ sub _init
$self->{impl} = "SyTest::Homeserver::Synapse::ViaDendron";
$self->{args}{dendron_binary} = "";
$self->{args}{torture_replication} = 0;
$self->{args}{redis_host} = "";
}

sub get_options
{
my $self = shift;

return (
'dendron-binary=s' => \$self->{args}{dendron_binary},
'torture-replication:50' => \$self->{args}{torture_replication},
'redis-host=s' => \$self->{args}{redis_host},
$self->SUPER::get_options(),
);
}
@@ -125,6 +127,8 @@ sub print_usage
--dendron-binary PATH - path to the 'dendron' binary
--torture-replication[=LEVEL] - enable torturing of the replication protocol
--redis-host HOST - if set then use redis for replication
EOF
}

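
In the docker images the new flag is forwarded verbatim to `run-tests.pl` (see the
redis example in `docker/README.md` above). Outside docker, a worker-mode run might
pass it alongside the existing dendron options, roughly as follows (the `-I`
implementation name, binary path and host are illustrative assumptions):

```
./run-tests.pl -I Synapse::ViaDendron \
    --dendron-binary=/path/to/dendron-or-worker-wrapper \
    --redis-host localhost
```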
18 changes: 14 additions & 4 deletions scripts/dendrite_sytest.sh
@@ -9,7 +9,7 @@ set -ex

cd /sytest

mkdir /work
mkdir -p /work

# Make sure all Perl deps are installed -- this is done in the docker build so will only install packages added since the last Docker build
./install-deps.pl
@@ -23,6 +23,7 @@ su -c 'for i in pg1 pg2 sytest_template; do psql -c "CREATE DATABASE $i;"; done'
export PGUSER=postgres
export POSTGRES_DB_1=pg1
export POSTGRES_DB_2=pg2
export GOBIN=/tmp/bin

# Write out the configuration for a PostgreSQL Dendrite
# Note: Dendrite can run entirely within a single database as all of the tables have
@@ -32,15 +33,17 @@
# Build dendrite
echo >&2 "--- Building dendrite from source"
cd /src
./build.sh
mkdir -p $GOBIN
go install -v ./cmd/dendrite-monolith-server
go install -v ./cmd/generate-keys
cd -

# Run the tests
echo >&2 "+++ Running tests"

TEST_STATUS=0
mkdir -p /logs
./run-tests.pl -I Dendrite::Monolith -d /src/bin -W /src/sytest-whitelist -O tap --all \
./run-tests.pl -I Dendrite::Monolith -d $GOBIN -W /src/sytest-whitelist -O tap --all \
--work-directory="/work" \
"$@" > /logs/results.tap || TEST_STATUS=$?

@@ -52,7 +55,7 @@

# Check for new tests to be added to the test whitelist
/src/show-expected-fail-tests.sh /logs/results.tap /src/sytest-whitelist \
/src/sytest-blacklist || TEST_STATUS=$?
/src/sytest-blacklist > /work/show_expected_fail_tests_output.txt || TEST_STATUS=$?

echo >&2 "--- Copying assets"

@@ -62,6 +65,13 @@ rsync -r --ignore-missing-args --min-size=1B -av /work/server-0 /work/server-1 /
if [ $TEST_STATUS -ne 0 ]; then
# Build the annotation
perl /sytest/scripts/format_tap.pl /logs/results.tap "$BUILDKITE_LABEL" >/logs/annotate.md
# If show-expected-fail-tests logged something, put it into the annotation
# Annotations from a failed build show at the top of buildkite, alerting
# developers quickly as to what needs to change in the black/whitelist.
cat /work/show_expected_fail_tests_output.txt >> /logs/annotate.md
fi

echo >&2 "--- Sytest compliance report"
(cd /src && ./are-we-synapse-yet.py /logs/results.tap) || true

exit $TEST_STATUS
36 changes: 25 additions & 11 deletions scripts/synapse_sytest.sh
@@ -9,7 +9,7 @@

# Run the sytests.

set -ex
set -e

cd "$(dirname $0)/.."

@@ -25,11 +25,6 @@ if [ -n "$MULTI_POSTGRES" ] || [ -n "$POSTGRES" ]; then

# Start the database
su -c 'eatmydata /usr/lib/postgresql/*/bin/pg_ctl -w -D $PGDATA start' postgres

su -c psql postgres <<< "show config_file"
su -c psql postgres <<< "show max_connections"
su -c psql postgres <<< "show full_page_writes"
su -c psql postgres <<< "show fsync"
fi

# Now create the databases
@@ -110,18 +105,37 @@ elif [ -n "$POSTGRES" ]; then

fi

# default value for SYNAPSE_SOURCE
: ${SYNAPSE_SOURCE:=/src}

# if we're running against a source directory, turn it into a tarball. pip
# will then unpack it to a temporary location, and build it. (As of pip 20.1,
# it will otherwise try to build it in-tree, which means writing changes to the
# source volume outside the container.)
#
if [ -d "$SYNAPSE_SOURCE" ]; then
echo "Creating tarball from synapse source"
tar -C "$SYNAPSE_SOURCE" -czf /tmp/synapse.tar.gz \
synapse scripts setup.py README.rst synctl MANIFEST.in
SYNAPSE_SOURCE="/tmp/synapse.tar.gz"
elif [ ! -r "$SYNAPSE_SOURCE" ]; then
echo "Unable to read synapse source at $SYNAPSE_SOURCE" >&2
exit 1
fi

if [ -n "$OFFLINE" ]; then
# if we're in offline mode, just put synapse into the virtualenv, and
# hope that the deps are up-to-date.
#
# (`pip install -e` likes to reinstall setuptools even if it's already installed,
# so we just run setup.py explicitly.)
#
(cd /src && /venv/bin/python setup.py -q develop)
# --no-use-pep517 works around what appears to be a pip issue
# (https://github.com/pypa/pip/issues/5402 possibly) where pip wants
# to reinstall any requirements for the build system, even if they are
# already installed.
/venv/bin/pip install --no-index --no-use-pep517 "$SYNAPSE_SOURCE"
else
# We've already created the virtualenv, but let's double-check we have all
# deps.
/venv/bin/pip install -q --upgrade --no-cache-dir /src
/venv/bin/pip install -q --upgrade --no-cache-dir "$SYNAPSE_SOURCE"[redis]
/venv/bin/pip install -q --upgrade --no-cache-dir \
lxml psycopg2 coverage codecov tap.py coverage_enable_subprocess

30 changes: 24 additions & 6 deletions tests/10apidoc/02login.pl
@@ -82,7 +82,10 @@

content => {
type => "m.login.password",
user => $user_id,
identifier => {
type => "m.id.user",
user => $user_id,
},
password => $password,
},
)->then( sub {
@@ -114,7 +117,10 @@

content => {
type => "m.login.password",
user => $user_id,
identifier => {
type => "m.id.user",
user => $user_id,
},
password => $password,
device_id => $device_id,
},
@@ -146,7 +152,10 @@

content => {
type => "m.login.password",
user => $user_localpart,
identifier => {
type => "m.id.user",
user => $user_localpart,
},
password => $password,
},
)->then( sub {
@@ -174,7 +183,10 @@

content => {
type => "m.login.password",
user => "i-ought-not-to-exist",
identifier => {
type => "m.id.user",
user => "i-ought-not-to-exist",
},
password => "XXX",
},
)->main::expect_http_403;
@@ -193,7 +205,10 @@

content => {
type => "m.login.password",
user => $user_id,
identifier => {
type => "m.id.user",
user => $user_id,
},
password => "${password}wrong",
},
)->main::expect_http_403->then( sub {
@@ -225,7 +240,10 @@ sub matrix_login_again_with_user
uri => "/r0/login",
content => {
type => "m.login.password",
user => $user->user_id,
identifier => {
type => "m.id.user",
user => $user->user_id,
},
password => $user->password,
%args,
},
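
The hunks above move the password-login tests from the deprecated top-level `user`
field to the client-server spec's `identifier` object. For orientation, the
equivalent raw request against a homeserver looks roughly like this (hostname, port
and credentials are illustrative):

```
curl -X POST http://localhost:8008/_matrix/client/r0/login \
    -H 'Content-Type: application/json' \
    -d '{
          "type": "m.login.password",
          "identifier": { "type": "m.id.user", "user": "alice" },
          "password": "s3kr3t"
        }'
```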