Integrating Nextcloud + Collabora behind a reverse proxy

So you have a setup where Nextcloud runs on a host with an internal IP and is exposed to the public network through a reverse proxy such as Nginx. The public URLs are not accessible from the internal network because they do not resolve to anything reachable there. Judging by what a bit of research turns up, this setup is quite common.

Next you want to integrate Collabora Office into Nextcloud to allow editing of office documents. The first steps are to install the Nextcloud app "richdocuments" and then the coolwsd service, which provides Collabora Office. We switch off SSL in coolwsd because TLS is already handled by the reverse proxy; internally plain HTTP can be used.
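
Switching SSL off means setting ssl.enable to false (and, since TLS is terminated at the proxy, ssl.termination to true) in /etc/coolwsd/coolwsd.xml. Recent packages ship the coolconfig helper which should achieve the same; a sketch, assuming that helper is available (it may be called loolconfig on older installations):

sudo coolconfig set ssl.enable false
sudo coolconfig set ssl.termination true
sudo systemctl restart coolwsd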

Now you have two separate services that send requests to each other, so each has to be able to reach the other, but we can't use the public URLs as they are not reachable internally.

Here is a quick overview of how to set up this scenario:

  • There are three URLs in Nextcloud's richdocuments app which have to be set via the occ console, as two of them are not available in the UI. For internal communication we use HTTP so we don't have to deal with certificates.
    • public_wopi_url is the public URL of your Nextcloud instance, e.g. https://nextcloud.example.com
      • sudo -u www-data php ./occ config:app:set richdocuments public_wopi_url --value https://nextcloud.example.com
    • wopi_callback_url is the internal URL of your Nextcloud instance, e.g. http://192.168.201.102
      • sudo -u www-data php ./occ config:app:set richdocuments wopi_callback_url --value http://192.168.201.102
    • wopi_url is the internal URL of coolwsd, e.g. http://192.168.201.102:9980
      • sudo -u www-data php ./occ config:app:set richdocuments wopi_url --value http://192.168.201.102:9980

This way both services can reach each other internally.
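
A quick sanity check that the internal URLs work is to fetch coolwsd's WOPI discovery document from the Nextcloud host (IP and port as configured above):

curl http://192.168.201.102:9980/hosting/discovery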

Integration works by embedding Collabora in an iframe inside our Nextcloud instance. So we have to extend coolwsd's Content Security Policy (CSP): its frame-ancestors directive by default only includes the wopi_callback_url, which is not publicly reachable. We can set it in /etc/coolwsd/coolwsd.xml:

<frame_ancestors desc="OBSOLETE: Use content_security_policy. Specify who is allowed to embed the Collabora Online iframe (coolwsd and WOPI host are always allowed). Separate multiple hosts by space.">https://nextcloud.example.com 'self'</frame_ancestors>

The hint about frame_ancestors being obsolete may be correct, but I found that only frame_ancestors gave me a correct CSP. Using content_security_policy instead produced two policies, which prevented the iframe from loading. I will open an issue on GitHub.

Mock NTP time

For device testing, a client needed a way to reliably set devices to a time in the future in order to trigger certificate renewal and similar processes.

We agreed that providing a mock time via an NTP service would be a good solution. One requirement was that the NTP service had to run in a Docker container.

I explored the available NTP servers to see whether any of them let you set the served time freely, but had no luck finding such an option. What I did find is that some NTP servers can use the local clock as their time source. But since we wanted to run the NTP server in a Docker container, changing the local time was not an option.

After considering different approaches I eventually arrived at a working solution, put it all together, and published it on GitHub as a Docker setup for running an NTP service with fake time.
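
Once such a container is running, the served (fake) time can be checked from a client without touching the client's clock, e.g. with ntpdate in query-only mode (192.168.1.50 is a placeholder for the container's address):

ntpdate -q 192.168.1.50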

Improving the click-to-deploy solution for WordPress on Google Cloud Platform

If you're using this recipe to run WordPress on a virtual machine on Google Cloud Platform (GCP), chances are high you're experiencing problems. Here's how to solve them.

My example is a customer's website running on a small instance of type e2-small. This instance type has only 2 GB of RAM and 2 vCPUs.

The main problem with this setup is that it runs the Apache web server with the mpm_prefork process management model and PHP integrated as an Apache module (mod_php). This way Apache spawns multiple processes, each with its own memory area and its own copy of the PHP interpreter plus all the PHP code needed to execute WordPress. That adds up quickly: for the website I was analyzing, Apache had a resident set size of 1.75 GB without much traffic. There wasn't much RAM left for MySQL, and once some requests came in, mysqld was killed due to out-of-memory errors.

One could argue that running a website on such a small instance was bound to run into resource problems from the beginning. But in my experience it pays to increase the efficiency of a given runtime environment before adding resources, because otherwise the underlying problems persist and the added resources will eventually be maxed out as well.

So how do we improve this setup? Switch Apache to the event-based process management model mpm_event and run PHP separately as PHP-FPM, with FastCGI as the interface between Apache and PHP. This way Apache's memory footprint decreases dramatically and PHP's resource usage can be managed independently through its own configuration.

So here is a quick recipe:

Activate Apache mpm_event and PHP-FPM

sudo su -
systemctl stop apache2
a2dismod php7.4
a2dismod mpm_prefork
a2enmod mpm_event
apt install php7.4-fpm libapache2-mod-fcgid
a2enconf php7.4
a2enconf php7.4-fpm
a2enmod proxy
a2enmod proxy_fcgi
apachectl configtest && systemctl restart apache2
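
With PHP-FPM in place, PHP's memory usage can now be capped independently of Apache via the pool configuration in /etc/php/7.4/fpm/pool.d/www.conf. The values below are only illustrative for a 2 GB instance and need to be tuned against the actual memory consumption per PHP worker:

; limit the number of PHP workers so their combined memory fits into RAM
pm = dynamic
pm.max_children = 8
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4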

Using this recipe the website I was analyzing went from having no RAM left to roughly 800 MB of free RAM. Monitoring will show how it keeps up with traffic.

JWT implementation details

I recently implemented JWT (JSON Web Token) based authentication on an embedded device using nginx with embedded Lua. The implementation details follow; a small code sketch of the token verification comes after the list.

  • refresh and access tokens are created as JWTs (signed with the variable JWT_SECRET)
  • the refresh token is delivered as a cookie and stored on the client. It is valid for 24 hours and only for the path /jwt/refresh.
  • the access token is delivered in responses and requests via the Authentication header.
  • the refresh token is also persisted on the device in /media/persistency/www/refresh_token.txt
    • /media/persistency/www/ is created and chowned to the web user at nginx startup by the systemd service file.
  • refreshing the access and refresh tokens is done via the route /jwt/refresh by verifying the refresh token, which is invalidated after a single use.
  • the refresh token can be verified in two ways:
    1) by checking the signature against JWT_SECRET
    2) if that fails because JWT_SECRET has changed (e.g. due to a device reboot), the token value is compared to the value stored in /media/persistency/www/refresh_token.txt. If both are equal, the token is considered verified.
  • additional validations of the tokens:
    • the token type (refresh/access) is validated, so each token can only be used for its intended purpose.
    • the IP of the client presenting a refresh token has to be identical to the IP that obtained it, which is stored inside the token.
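
As a sketch of the two-step refresh-token verification described above (assuming the lua-resty-jwt library, which may not be the exact library used on the device; names and paths follow the list):

-- Sketch: verify a refresh token, falling back to the copy persisted on the
-- device if JWT_SECRET has changed (e.g. after a reboot). The IP check from
-- the list above is omitted for brevity; the file I/O is blocking, which is
-- acceptable here only because refreshes are rare.
local jwt = require "resty.jwt"

local REFRESH_TOKEN_FILE = "/media/persistency/www/refresh_token.txt"

local function read_stored_token()
    local f = io.open(REFRESH_TOKEN_FILE, "r")
    if not f then return nil end
    local token = f:read("*l")
    f:close()
    return token
end

local function verify_refresh_token(token, secret)
    -- variant 1: signature check against the current JWT_SECRET
    local obj = jwt:verify(secret, token)
    if obj.verified and obj.payload and obj.payload.type == "refresh" then
        return true
    end
    -- variant 2: compare against the token persisted on the device
    local stored = read_stored_token()
    return stored ~= nil and stored == token
end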

Tricks for Proxmox VE

Free repo for testing servers

To get updates for your testing servers running Proxmox VE you can use the following repository, which is not meant for production use but works well for testing purposes. Just add the following lines to /etc/apt/sources.list:

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

Comment out the enterprise repo in /etc/apt/sources.list.d/pve-enterprise.list and run apt-get update afterwards. You can now run apt-get upgrade to get package upgrades.
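
Disabling the enterprise repo and pulling the updates can also be scripted, assuming the stock file layout:

sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt-get update
apt-get upgrade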

Resizing disks of Ubuntu VMs

If your VM runs an Ubuntu cloud image you can use the following simple procedure. This probably applies to other Linux distributions as well.

Just resize the VM's block device on the host using the following command, e.g. growing it by 20 GB (106 is the VM ID):

qm resize 106 scsi0 +20G

This should be visible in dmesg of that VM:

sda: detected capacity change from 45311066112 to 66785902592

Afterwards just reboot that VM and the filesystem will be expanded automatically, again visible in dmesg:

EXT4-fs (sda1): resizing filesystem from 11033851 to 16276731 blocks
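
If a reboot is not convenient, the same result can usually be achieved online from inside the VM, assuming an ext4 root filesystem on /dev/sda1 and the cloud-guest-utils package being installed:

growpart /dev/sda 1
resize2fs /dev/sda1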

Finished!

Extract JSON attributes as columns in MySQL

So you have been lured into storing attribute data as JSON in MySQL to avoid defining a proper schema from the beginning. Now, for some report, you need a certain attribute as a column. Here is how to solve this task.

We have the following simplified schema:

CREATE TABLE `store` (
  `storeId` int(11) NOT NULL,
  `attributes` json DEFAULT NULL
);

With data as in the following example:

   storeId: 4644
attributes: [
  {
    "value": "",
    "attributeId": "representedBy"
  },
  {
    "value": "abc_de",
    "locale": "de_DE",
    "attributeId": "terms"
  },
  {
    "value": false,
    "attributeId": "hidden"
  }
]

We want to extract the value of the attribute representedBy as a column in our report. To do that we use the following trick:

select storeId,
cast(
  json_unquote(
    json_extract(attributes, 
      json_unquote(
        replace(
          json_search(attributes, 'one', 'representedBy'),
          'attributeId', 'value'
        )
      )
    )
) as char) as representedBy
from store

How does it work?

  • json_search(attributes, 'one', 'representedBy') searches for the first occurrence of 'representedBy' and returns the path to the key under which it is stored
  • using replace we modify that path to point to the value instead of the attributeId
  • after json_unquote we can use the path in json_extract. The return value is unquoted again and cast to char to get a proper string in the report

Why do we have to use that trick?

We can’t extract the attribute value by using the function json_extract directly because

  • JSON paths, which json_extract uses for selection, don't allow selecting by key value – you can only select by key name
  • json_extract doesn't separate selection from evaluation – whatever is selected is evaluated.
  • Feature request: with both features added we could extract using json_extract(attributes, '$[*].attributeId="representedBy"', '$.value'). Until then, MySQL 8.0's json_table offers an alternative, sketched below.
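
As a side note: on MySQL 8.0 and later, json_table gets reasonably close to that by turning the JSON array into rows first. A sketch, not tested against this exact schema:

select s.storeId,
       jt.attrValue as representedBy
from store s,
     json_table(
       s.attributes, '$[*]'
       columns (
         attributeId varchar(64)  path '$.attributeId',
         attrValue   varchar(255) path '$.value'
       )
     ) as jt
where jt.attributeId = 'representedBy';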

Using imapsync to migrate mailboxes

Recently I had to migrate some mailboxes from one IMAP server to another. After a bit of research into IMAP migration tools that run on Linux, I decided to use imapsync. It has all the relevant features and is maintained, and running it showed that it tries to do the right things by default. As I will have to do such a migration again in the future, I'm documenting my setup here.

Installation

imapsync is a great tool but somehow doesn't come prepared for perlbrew.

imapsync has a lot of dependencies (mostly Perl libraries) which are installed via the system's package manager. But sometimes you just want to run a migration and not have all those libraries on the system afterwards. A Docker setup is provided which solves this easily, but Perl has its own established solution for this problem, so that is what we want to use: perlbrew.

So I prepared a cpanfile, which will hopefully be included upstream in the future.

requires 'App::cpanminus';
requires 'Authen::NTLM';
requires 'CGI';
requires 'Compress::Zlib';
requires 'Crypt::OpenSSL::RSA';
requires 'Data::Dumper';
requires 'Data::Uniqid';
requires 'Digest::HMAC';
requires 'Digest::HMAC_MD5';
requires 'Digest::MD5';
requires 'Dist::CheckConflicts';
requires 'Encode';
requires 'Encode::Byte';
requires 'Encode::IMAPUTF7';
requires 'File::Copy::Recursive';
requires 'File::Tail';
requires 'IO::Socket::INET';
requires 'IO::Socket::INET6';
requires 'IO::Socket::SSL';
requires 'IO::Tee';
requires 'JSON';
requires 'JSON::WebToken';
requires 'JSON::WebToken::Crypt::RSA';
requires 'HTML::Entities';
requires 'LWP::UserAgent';
requires 'Mail::IMAPClient';
requires 'MIME::Base64';
requires 'Module::Implementation';
requires 'Module::Runtime';
requires 'Module::ScanDeps';
requires 'Net::SSLeay';
requires 'Package::Stash';
requires 'Package::Stash::XS';
requires 'PAR::Packer';
requires 'Parse::RecDescent';
requires 'Pod::Usage';
requires 'Readonly';
requires 'Regexp::Common';
requires 'Sys::MemInfo';
requires 'Term::ReadKey';
requires 'Test::Fatal';
requires 'Test::Mock::Guard';
requires 'Test::MockObject';
requires 'Test::More';
requires 'Test::Pod';
requires 'Test::Requires';
requires 'Test::Deep';
requires 'Text::ParseWords';
requires 'Try::Tiny';
requires 'Unicode::String';
requires 'URI::Escape';

Just save it as cpanfile and run cpanm --installdeps .
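
The perlbrew setup around it could look roughly like this (the Perl version is just an example):

\curl -L https://install.perlbrew.pl | bash
source ~/perl5/perlbrew/etc/bashrc
perlbrew install perl-5.36.0
perlbrew use perl-5.36.0
perlbrew install-cpanm
cpanm --installdeps .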

Running

There are good examples for testing the migration of a single mailbox (https://imapsync.lamiral.info/examples/imapsync_example.sh) and for doing it in bulk (https://imapsync.lamiral.info/examples/sync_loop_unix.sh). From there you can craft your own scripts.
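
For reference, a typical single-mailbox run looks something like this (hosts, users and password files are placeholders; drop --dry for the real run):

imapsync \
  --host1 imap.old.example.com --user1 alice --passfile1 /etc/imapsync/alice.old \
  --host2 imap.new.example.com --user2 alice --passfile2 /etc/imapsync/alice.new \
  --ssl1 --ssl2 --automap --dry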

Don’t forget to test your new mail setup using https://www.mail-tester.com/ 😉

WordPress upload permissions

Here is a quick fix for a WordPress problem that I found documented nowhere:

I have a WordPress instance running on shared hosting where uploaded files need the group read bit set in order to be readable by the web server. But uploaded files were created with mode 0600 instead of 0640 and were therefore not viewable on the website.

First I tried setting FS_CHMOD_FILE, but this didn't work because it is not applied to uploaded files.

Then I found that WordPress determines the permission bits of uploaded files by taking the permissions of the target directory (or, if needed, of its parent directory) and removing the execute bits. So to solve the problem, just set the permissions of wp-content/uploads and all its subdirectories to 0750. This way WordPress sets the file permissions of uploads to 0640, as required.
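
In practice that boils down to something like the following, run from the WordPress root (adjust the path if your uploads directory lives elsewhere):

find wp-content/uploads -type d -exec chmod 0750 {} +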

Upgrading an application on Elastic Beanstalk from Amazon Linux 1 to 2

These days I have the pleasure of upgrading an application running on AWS Elastic Beanstalk from the platform "Multi-container Docker" to the new platform "Docker". The former is based on Amazon Linux 1 (AL1) while the latter runs on Amazon Linux 2 (AL2). I am following this guide.

I have encountered the following advantages so far:

  • links between containers are now fully bidirectional: on AL1 those links could only be defined and used in one direction. If you wanted bidirectional communication you had to implement service discovery yourself.
  • the limit of 10 containers per app is gone: in my tests I could easily spin up 12 containers.
    • apparently this limitation came from the automatic translation of an Elastic Beanstalk app into a task definition. The limitation is documented as task definition max containers.
    • on AL1 one got the following error:
frontend-staging - ERROR: Service:AmazonECS, Code:ClientException, Message:Too many containers., Class:com.amazonaws.services.ecs.model.ClientException
  • I could restart single containers without affecting the other running containers: on AL1 this would cause a restart of all containers, for reasons unknown to me.
  • simplified logging – all services log to /var/log/eb-docker/containers/eb-current-app/eb-stdouterr.log which can be streamed to CloudWatch.

Magento and timezones

Today I had the issue that, after dumping a Magento database from one MySQL instance and importing it into another, the order dates had moved two hours into the future. How come?!

Investigation and testing showed that the source database was running with time zone "GMT" aka "UTC" aka +00:00. The target database was running in the system time zone, which was CEST aka "Europe/Berlin" aka +02:00. Still no problem in sight, since Magento stores all values in GMT anyway!

It turns out Magento uses the data type TIMESTAMP for order dates and the like. The MySQL documentation states:

MySQL converts TIMESTAMP values from the current time zone to UTC for storage, and back from UTC to the current time zone for retrieval. (This does not occur for other types such as DATETIME.) By default, the current time zone for each connection is the server’s time. The time zone can be set on a per-connection basis. As long as the time zone setting remains constant, you get back the same value you store. If you store a TIMESTAMP value, and then change the time zone and retrieve the value, the retrieved value is different from the value you stored. This occurs because the same time zone was not used for conversion in both directions. The current time zone is available as the value of the time_zone system variable. For more information, see Section 5.1.13, “MySQL Server Time Zone Support”.

https://dev.mysql.com/doc/refman/5.7/en/datetime.html

Exactly this happened: If you store a TIMESTAMP value, and then change the time zone and retrieve the value, the retrieved value is different from the value you stored.

What was the solution? At first I wanted to set the target time zone in the dump file. That way the target MySQL would convert the values to GMT during import and back again when the shop reads them. This works, except that all values get converted using summer time, as it is currently summer.

The alternative and better solution: just set the time zone for each of the shop's connections by adding the following to app/etc/local.xml:

<initStatements><![CDATA[SET NAMES utf8;SET time_zone = '+00:00';]]></initStatements>

This way MySQL doesn't convert TIMESTAMP values on those connections and Magento gets GMT values from the database, as expected.
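
To double-check what a connection actually uses, you can compare the session and global time zone settings with the current local and UTC time:

select @@session.time_zone, @@global.time_zone, now(), utc_timestamp();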

So in my view there are some interesting observations:

  • Why does Magento 1, by design, perform a double conversion of date values by storing them as TIMESTAMP in MySQL (except when the server's system time zone is GMT)?
    • Magento creates GMT dates
    • when storing such a value, MySQL interprets it as being in the connection's time zone and converts it to GMT for storage.
    • reading works the other way around: the value is read as GMT and converted into the connection's time zone.
  • datetime would be a more fitting data type in my view
  • Magento 2 consequently fixes this by executing the following in web/vendor/magento/framework/DB/Adapter/Pdo/Mysql.php:
$this->_connection->query("SET time_zone = '+00:00'");
  • MySQL automagically converts each timestamp using the correct time zone offset valid for that individual timestamp in the connection's time zone