Development Log 2023-10-14

2023 October 14

Began the process of migrating the SCANALYST site to Amazon Linux 2023.

Created a new instance with:
    AMI:            Amazon Linux 2023 AMI 2023.2.20231011.0 x86_64 HVM kernel-6.1
                    ami-065ab11fbd3d0323d
Clicked "Launch instance from AMI":
    Instance type:  t3.medium
    Key pair name:  Scanalyst
    Instance details:
        Network:    (default)
        Subnet:     eu-central-1b
        Auto-assign IPv6 IP: Enable
        Firewall: Existing security group, Scanalyst, sg-049b61db659446aab
        All other: (default)
    Storage:
        Root    /dev/xvda   snap-0b108c803dcd979d5  96 GB   No delete on termination
        Server  /dev/sdb    No snapshot             2 GB    No delete on termination
    Tags:
        Name:   Scanalyst L2023
Selected Launch.

New instance was created as i-092217f3135c36549 with:
    IPv4 public address:    3.122.192.181
    IPv6 address:           2a05:d014:d43:3101:d577:c011:85c2:e9db
    /dev/xvda   vol-0cf623d818601d186       Scanalyst root L2023
    /dev/sdb    vol-03cabd4e67ff7ffc8       Scanalyst server L2023

Made an /etc/hosts entry on Hayek:
    3.122.192.181   sc2
to reduce the amount of typing in system configuration.

Logged in with:
    $ ssh -i Scanalyst.pem ec2-user@sc2
    X11 forwarding request failed on channel 0
       ,     #_
       ~\_  ####_        Amazon Linux 2023
      ~~  \_#####\
      ~~     \###|
      ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
       ~~       V~' '->
        ~~~         /
          ~~._.   _/
             _/ _/
           _/m/'

Ran:
    sudo su
    dnf update
which reported nothing to update.

uname -a reports:
    Linux ip-172-31-19-123.eu-central-1.compute.internal 6.1.55-75.123.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 26 20:06:16 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Created a file system on /dev/sdb, made a mount point, and mounted it.
    sudo su
    mkdir /server
    mkfs -t ext4 /dev/sdb
    fsck -f /dev/sdb
    mount /dev/sdb /server

Added:
    /dev/sdb   /server     ext4    defaults        1   2
to /etc/fstab.  Note that the file system on the root device is
now xfs, not ext4 as it was on the previous Linux 2 AMI.  Someday
we might want to migrate /server to xfs, but this is not that
day.
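The trailing 1 and 2 in the fstab entry are the dump flag and the fsck
pass number.  A quick way to double-check the fields before trusting
them to a reboot (the field-splitting below is just a sketch; the
mount commands in the comments must run as root on the instance):

```shell
# Split the new fstab line into its six fields and name them
entry="/dev/sdb /server ext4 defaults 1 2"
set -- $entry
device=$1; mountpoint=$2; fstype=$3; options=$4; dump=$5; passno=$6
echo "$fstype filesystem, fsck pass $passno"    # ext4 filesystem, fsck pass 2

# On the instance itself:
#   umount /server && mount -a    # any typo in /etc/fstab shows up here,
#   findmnt /server               # not at the next boot
```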

Set /etc/hostname to "scanalyst".

Rebooted to make sure the changes persisted.  The system came up with
the hostname changed and /server mounted.

Added accounts to /etc/passwd:
    kelvin:x:500:500:John Walker:/server/home/kelvin:/bin/bash

Added corresponding entries to /etc/shadow:
    kelvin:!!:18902:0:99999:7:::
then /etc/group:
    kelvin:x:500:
and /etc/gshadow:
    kelvin:!::
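For the record, the third field in the shadow entry (18902) is the date
of the last password change, expressed in days since the Unix epoch.
It can be decoded with GNU date:

```shell
# shadow field 3 is days since 1970-01-01; convert to a calendar date
lastchg=18902
date -u -d "@$((lastchg * 86400))" +%F    # 2021-10-02
```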

Transferred the current contents of the /server file system from the
production Scanalyst site to /server on sc2.  This transfers my
/server/home/kelvin directory under which everything I need to log in
and build Discourse should be installed.

Indeed, I can now log in to sc2 via ssh without a password.

Added kelvin to /etc/sudoers.d/90-cloud-init-users to permit
sudo without a password.
    # User rules for kelvin
    kelvin   ALL=(ALL) NOPASSWD:ALL
Tested: it works.

Installed our magic /bin/super utility.  I simply copied the binary 
from the production Fourmilab AWS server.

Transferred the /root/.ssh/authorized_keys file from production 
Scanalyst to sc2, saving the original as authorized_keys_ORIGINAL.

Edited /etc/ssh/sshd_config and set:
    PermitRootLogin yes

Restarted:
    systemctl restart sshd

Now I can log in as root from local machines without a
password.  Verified that regular user logins continue to work.
This will allow a mirror backup from Juno.
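Since all of these root logins are key-based anyway, the same access
with less exposure would come from the key-only setting (a suggestion,
not what was done here).  The sshd_config fragment would be:

```
PermitRootLogin prohibit-password
```

In either case, "sshd -t" checks the configuration for syntax errors
and is worth running before any restart, since a broken sshd_config
can lock you out of a remote machine.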

Rebooted to confirm that all of the configuration and
permission changes so far persist.

I can now log in with my regular account and use super when I get
there.  From now on, we shouldn't need to use ec2-user, but it's there
if necessary.

Configured AWS command line:
    aws configure
    AWS Access Key ID [None]: REDACTED
    AWS Secret Access Key [None]: REDACTED
    Default region name [None]: eu-central-1
Tested with:
    aws s3 ls
and it seems to be working.
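aws configure stores those answers in two INI files under ~/.aws,
which is worth knowing if the credentials ever have to be moved by
hand (layout as written by the CLI; keys redacted):

```
# ~/.aws/credentials
[default]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED

# ~/.aws/config
[default]
region = eu-central-1
```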

Installed:
    super
    dnf install git
and tested: it works.

Installed:
    dnf install docker

Installed:
    dnf install nmap-ncat

Set up Docker to start at boot time.
    systemctl start docker
    systemctl status docker
        #   Looks OK
    systemctl enable docker
    systemctl is-enabled docker
        #   enabled
Now reboot again to make sure it comes up after a boot.

Installed:
    dnf install xauth
in the hope this will allow X11 forwarding on SSH logins.  After
logging out and back in, it re-created ~/.Xauthority and now X11
tunnelling works.

Installed:
    dnf install gtk3-devel
    # This installed 147 dependencies.
    dnf install gcc
    dnf install gcc-c++
    dnf install intltool

This permitted re-building and installing Geany, which had previously
been downloaded from:
    https://www.geany.org/download/releases/
into ~/linuxtools/geany-1.38 on the production Scanalyst.  I re-built
it with:
    cd ~/linuxtools/geany-1.38
    make distclean
    ./configure
    make
    super
    make install
This installs in the /usr/local directory tree.  Since Geany is
not available as an AWS package, this locally-built version will
not be automatically updated by the dnf package manager
(although all of its dependencies will be).  But then Geany is a
stable package which changes only very slowly.  I chose to rebuild
from source to avoid library version dependency Hell later on as
the system is updated.  I tested it after installation and it works,
which also confirmed that X11 forwarding is working.

Installed:
    super
    dnf install "perl(JSON)"
    dnf install "perl(CGI)"
This is required by "credit", which now works.

Rebuilt the Bacula file daemon in /server/src/bacula to verify all
dependencies present and library compatibility.  The daemon source was
copied over from production Scanalyst with the rest of /server.
    cd /server/src/bacula/bacula-5.2.10
    ./BuildFourmilabClient
This does the complete configuration, make, and installation process
and worked with no problems.

Started the Bacula file daemon:
    super
    /server/init/bacula start
and verified it is running.

Discovered that when I set up the server, I neglected to add a line in 
a newly created /etc/rc.d/rc.local to start our local services (which 
consist only of Bacula).  Consequently, Bacula was not restarted 
automatically after a reboot.  Added:
    /server/init/servers $1
to /etc/rc.d/rc.local (who knew it still existed in these dark days of
systemd?) to start and stop our local servers at boot and shutdown.
Set the file executable, which systemd requires:
    chmod 755 /etc/rc.d/rc.local
Now it dies with:
    systemctl start rc-local
    Job for rc-local.service failed because the control process exited with error code.
    See "systemctl status rc-local.service" and "journalctl -xeu rc-local.service" for details.
And "systemctl status rc-local" says:
    scanalyst (rc.local)[93002]: rc-local.service: Failed to execute /etc/rc.d/rc.local: Exec format error
    scanalyst (rc.local)[93002]: rc-local.service: Failed at step EXEC spawning /etc/rc.d/rc.local: Exec format error
After further psychoanalysis, I determined it needs a "shebang" line at
the start of rc.local to believe it's really executable.  I added:
    #! /bin/bash
and now systemctl start and stop work.
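The underlying cause: a file whose first bytes are not "#!" (and which
is not a recognised binary) is rejected by the kernel's execve() with
ENOEXEC.  Interactive shells quietly fall back to running such a file
with /bin/sh, which is why it may seem to work when run by hand, but
systemd does not.  A two-byte check catches it (sketch using
throw-away files in /tmp):

```shell
# A script's first two bytes must be "#!" for the kernel to run it directly.
printf '/server/init/servers "$1"\n' > /tmp/rc.local.bad      # no shebang
printf '#! /bin/bash\n/server/init/servers "$1"\n' > /tmp/rc.local.good

head -c 2 /tmp/rc.local.bad    # "/s" -- execve() fails with ENOEXEC
head -c 2 /tmp/rc.local.good   # "#!" -- runs under systemd
```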

But (and there's always a but, isn't there?) "systemctl enable"
continued to fail with an eight-line error message worthy of Microsoft.
According to:
    https://www.linuxbabe.com/linux-server/how-to-enable-etcrc-local-with-systemd
the way around this is to create a /etc/systemd/system/rc-local.service
file containing the following incantations to demonic systemd (example
updated per comments below):
    [Unit]
     Description=/etc/rc.d/rc.local Compatibility
     ConditionPathExists=/etc/rc.d/rc.local

    [Service]
     Type=forking
     ExecStart=/etc/rc.d/rc.local start
     TimeoutSec=0
     StandardOutput=tty
     RemainAfterExit=yes
     SysVStartPriority=99

    [Install]
     WantedBy=multi-user.target
Bit happens, and who ya gonna call?
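One nit in that recipe: SysVStartPriority= was dropped from systemd
long ago, so on AL2023's systemd it accomplishes nothing beyond an
"Unknown key" warning in the journal.  A trimmed version of the same
unit (my reading, not tested beyond the above) would be:

```
[Unit]
Description=/etc/rc.d/rc.local Compatibility
ConditionPathExists=/etc/rc.d/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.d/rc.local start
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```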

With this file in place, I can now perform:
    systemctl enable rc-local
    Created symlink /etc/systemd/system/multi-user.target.wants/rc-local.service → /etc/systemd/system/rc-local.service.
and "systemctl status rc-local" reports:
     Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; preset: disabled)
enabled ... disabled.  (The "preset" is merely the distribution's
default policy for the unit, not its current state, but still.)  What
the Hell, it's systemd.

All right, it's time to once again reboot and see if all of this stuff
works after a reboot.

Nope.  Bacula was not started.  systemctl status explains:
    ConditionPathExists=/etc/rc.local was not met
so I guess we need to modify that /etc/systemd/system/rc-local.service
file to set:
    Description=/etc/rc.d/rc.local Compatibility
    ConditionPathExists=/etc/rc.d/rc.local
    ExecStart=/etc/rc.d/rc.local start
instead.  (I have modified the model file above accordingly to avoid
repeating this problem if it is copied blindly.)

Let's reboot again!

And, finally, this time it started the Bacula file daemon.  The
systemd status report looks normal.

And, with that, I'm going to call it a night (or, more precisely,
not-so-early morning).  You may have noticed I haven't yet done a
single thing about bringing up Discourse, its automatic installation of
Nginx, or the forbidding tower of Let's Encrypt, not to mention
how we synchronise this new, pristine Discourse installation with the
database of the production site.

That is a matter for another day.

I know little/nothing about code. What is this figure, please?


I’m not a programmer, but it looks like an ASCII art depiction of the Amazon Linux logo. The picture is created using text symbols but (to my knowledge) it doesn’t represent any code.


It is an ASCII art image of the Bezos Bird of Monopoly. It adorns the windowless 200 storey Amazon Ministry of Cloud Computing bestriding the Columbia river and using its waters to cool the Amazon Web Services machines. You can see the Bird right above the banner that reads “COMPATIBILITY IS UNPROFITABLE" outside the 196th floor.
