2023 October 8
Began campaign to migrate Agora to Amazon Linux 2023. The Getting
Started document is:
https://docs.aws.amazon.com/linux/al2023/ug/ec2.html
Made a backup AMI.
Agora Backup 2023-10-08 ami-0292585a6324f6fb0
/ snap-09a895039c55900ac
/server snap-0d939f4a884ba9c18
/vault snap-0e1ae02b85f83f4ee (Encrypted)
The snapshots of /server and /vault from this AMI will be used to create
the corresponding file systems of the Linux 2023 system.
Made volumes from the /server and /vault snapshots above. All created
by selecting the snapshot, then Actions/Create volume from snapshot.
Choose Availability Zone eu-central-1b.
/server snap-0d939f4a884ba9c18 vol-0535b25530efd1498 Agora server L2023
/vault snap-0e1ae02b85f83f4ee vol-03633dd175f06efa4 Agora vault L2023
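The same console steps can be scripted. A sketch of the CLI equivalent for the /server volume, using the snapshot ID and availability zone from above (not actually run here):

```
# Sketch only: create a volume from the /server snapshot, in the same
# availability zone as the target instance. IDs are from this log.
aws ec2 create-volume \
    --snapshot-id snap-0d939f4a884ba9c18 \
    --availability-zone eu-central-1b \
    --tag-specifications \
        'ResourceType=volume,Tags=[{Key=Name,Value=Agora server L2023}]'
```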
For /vault, the ARN for the encryption key is:
arn:aws:kms:eu-central-1:812306733122:key/REDACTED
and the KMS key is:
REDACTED
Created a new instance with:
AMI: Amazon Linux 2023 AMI 2023.2.20231002.0 x86_64 HVM kernel-6.1
ami-088e71edb8795252f
Clicked "Launch instance from AMI":
Instance type: t3.medium
Key pair name: Agora
Instance details:
Network: (default)
Subnet: eu-central-1b
Auto-assign IPv6 IP: Enable
Firewall: Existing security group, Agora, sg-06373ae6aba10e811
All other: (default)
Storage:
Root /dev/xvda snap-0371fdb2a84a9fb84 8 GB No delete on termination
(Will add volumes from snapshots after launch. Cannot add at
launch time.)
Tags:
Name: Agora L2023
Selected Launch.
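For the record, the launch could also be done from the CLI. A sketch of the equivalent command, built from the console selections above (the subnet ID is a placeholder, since the log records only the availability zone):

```
# Sketch only: CLI equivalent of the console launch above.
# subnet-XXXXXXXX is a placeholder for the eu-central-1b subnet ID.
aws ec2 run-instances \
    --image-id ami-088e71edb8795252f \
    --instance-type t3.medium \
    --key-name Agora \
    --security-group-ids sg-06373ae6aba10e811 \
    --subnet-id subnet-XXXXXXXX \
    --ipv6-address-count 1 \
    --tag-specifications \
        'ResourceType=instance,Tags=[{Key=Name,Value=Agora L2023}]'
```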
New instance was created as i-0f273900c3dd3a6e3 with:
IPv4 public address: 18.197.109.52
IPv6 address: 2a05:d014:d43:3101:f550:4e24:1c7f:c7f2
/dev/xvda vol-06308ea129026e41d Agora root L2023
Made an /etc/hosts entry on Hayek:
18.197.109.52 ag2
to reduce the amount of typing in system configuration.
Changed root volume to not delete on termination with:
cd ~/w/Agora_AWS
# No super
aws ec2 modify-instance-attribute --instance-id i-0f273900c3dd3a6e3 --block-device-mappings file://map.json
where the file map.json contained:
[
{
"DeviceName": "/dev/xvda",
"Ebs": {
"DeleteOnTermination": false
}
}
]
Confirmed the mode had changed in the instance page (had to refresh to
see it).
Logged in with:
$ ssh -i Agora.pem ec2-user@ag2
X11 forwarding request failed on channel 0
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Ran:
sudo su
yum update
which reported nothing to update.
uname -a reports:
Linux ip-172-31-19-99.eu-central-1.compute.internal 6.1.55-75.123.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 26 20:06:16 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Attached the /server volume created from the production Agora system.
Go to Volumes, select "Agora server L2023". Actions/Attach volume.
Select instance Agora L2023 (i-0f273900c3dd3a6e3). Select device
name "/dev/sdb" (same as on Agora).
Click Attach, and it reports successfully attached.
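The attachment can also be done from the CLI; a sketch with the volume and instance IDs recorded above (not run here):

```
# Sketch only: attach the /server volume as /dev/sdb, matching the
# device name used on the old Agora system.
aws ec2 attach-volume \
    --volume-id vol-0535b25530efd1498 \
    --instance-id i-0f273900c3dd3a6e3 \
    --device /dev/sdb
```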
Back on the console, running as root:
fsck -f /dev/sdb
/dev/nvme1n1: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (229710002, counted=229268449).
Fix? yes
Free inodes count wrong (98042691, counted=98042796).
Fix? yes to all
/dev/nvme1n1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/nvme1n1: 261204/98304000 files (3.5% non-contiguous), 163947551/393216000 blocks
fsck -f /dev/sdb
fsck from util-linux 2.37.4
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/nvme1n1: 261204/98304000 files (3.5% non-contiguous), 163947551/393216000 blocks
Mounted file system with:
mkdir /server
mount /dev/sdb /server
and it looks OK.
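Note that fsck reports /dev/nvme1n1 even though it was invoked on /dev/sdb: on Nitro instances EBS volumes show up as NVMe devices, and the console device name is just a udev-created symlink to the NVMe device node. A quick way to see the mapping, assuming the stock AL2023 udev rules created the symlink:

```
# Resolve the console device name to the actual NVMe device node,
# then list all block devices with their mount points.
readlink -f /dev/sdb
lsblk -o NAME,SIZE,MOUNTPOINTS
```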
Now let's try the same with the encrypted /vault file system. This
ought to be fun.
Go to Volumes, select "Agora vault L2023". Actions/Attach volume.
Select instance Agora L2023 (i-0f273900c3dd3a6e3). Select device
name "/dev/sdc" (same as on Agora).
Click Attach, and it reports successfully attached.
Back on the console, running as root:
fsck -f /dev/sdc
e2fsck 1.46.5 (30-Dec-2021)
/dev/nvme2n1: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/nvme2n1: 16/524288 files (0.0% non-contiguous), 75215/2097152 blocks
Mounted file system with:
mkdir /vault
mount /dev/sdc /vault
and it looks OK. Whew! That was a lot easier than I feared. It looks
like once you specify the encryption key with the file system, all you
have to do is mount it.
Confirmed /dev/sdb and /dev/sdc mounts shown in the Storage tab of the
Instances page for Agora L2023. The KMS Key ID is shown for the /vault
file system.
Added:
/dev/sdb /server ext4 defaults 1 2
/dev/sdc /vault ext4 defaults 1 2
to /etc/fstab to mount the file systems on reboot.
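A malformed fstab line can drop the next boot into emergency mode, which on a remote instance with no console keyboard is no fun at all, so the new lines are worth a local check before rebooting (and `nofail` is an option worth considering for non-root volumes). A minimal field-count check:

```shell
# Sanity-check the new fstab lines: each must have exactly six fields
# (device, mount point, type, options, dump, fsck pass).
printf '%s\n' \
    '/dev/sdb /server ext4 defaults 1 2' \
    '/dev/sdc /vault  ext4 defaults 1 2' |
awk 'NF != 6 { bad = 1 } END { print (bad ? "BAD" : "OK") }'
```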
Rebooted to make sure they were re-mounted. They were.
Migrated from the default ec2-user account to kelvin as follows:
Edited /etc/passwd and /etc/shadow entries and added lines:
passwd: kelvin:x:500:500:John Walker:/server/home/kelvin:/bin/bash
shadow: kelvin:!!:17509:0:99999:7:::
Edited /etc/group and /etc/gshadow entries and changed lines for
ec2-user/500 to:
group: kelvin:x:500:
gshadow: kelvin:!::
Confirmed user name and group shown correctly in:
ls -l /server/home
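Hand-editing the four files works, but the stock tools make the same edits with locking and consistency checks. A sketch of the equivalent (untested here, run as root), reusing UID/GID 500 and the existing home directory from the log above:

```
# Equivalent to the manual passwd/shadow/group/gshadow edits.
# -M: do not create the home directory (it already exists on /server).
groupadd -g 500 kelvin
useradd -u 500 -g 500 -c "John Walker" -M \
    -d /server/home/kelvin -s /bin/bash kelvin
```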
Installed my ~/.ssh and ~/bin directories as well as other .config
files from those on SC.
Installed /usr/bin/super so I can get to root from my login.
Added to /etc/sudoers.d/90-cloud-init-users:
# User rules for kelvin
kelvin ALL=(ALL) NOPASSWD:ALL
so sudo works from my login.
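A syntax error in a sudoers fragment can lock sudo out entirely, so the edited file should be validated:

```
# Check the sudoers fragment for syntax errors.
# -c: check only, -f: check this file instead of /etc/sudoers.
visudo -cf /etc/sudoers.d/90-cloud-init-users
```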
Changed host name in prompt in ~/.bash_profile to @ag23.
Now I can log in directly with:
ssh ag2
using my own private key.
Backed up /root/.ssh/authorized_keys as authorized_keys_ORIG
and installed the copy from AG. I can now ssh in directly as
root from Hayek.
Now it's time to take the big gulp and reboot, then see if I can still
get into the system with my regular login and private key.
It worked!
Installed:
yum install gtk3-devel
This installed 147 dependencies.
Installed:
yum install gcc
yum install gcc-c++
yum install intltool
This permitted building and installing Geany, which had been downloaded
from:
https://www.geany.org/download/releases/
into ~/linuxtools/geany-1.38 and I re-built with:
make distclean
./configure
make
super
make install
This installs in the /usr/local directory tree. Since Geany is
not available as an AWS package, this locally-built version will
not be automatically updated by the package manager (dnf, for
which yum is an alias on Amazon Linux 2023), although all of its
dependencies will be. But then Geany is a stable package which
changes only very slowly.
Installed:
yum install xauth
Logged out and logged back in again, which created a ~/.Xauthority
file. Geany now works and can pop up a window on the machine from
which I logged in with SSH.
Installed:
super
yum install "perl(JSON)"
yum install "perl(CGI)"
This is required by "credit", which is not presently working:
Parameter validation failed:
Invalid length for parameter Dimensions[0].Value, value: 0, valid min length: 1
malformed JSON string, neither array, object, number, string or atom, at character offset 0 (before "(end of string)") at /server/home/kelvin/bin/aws_stats.pl line 124.
Maybe we haven't been up long enough to initialise the data. I'll try
later.
The Motif emulator from AWS Linux 2 doesn't seem to be available on
2023, which torpedoes nedit below the water line. We may have to just
bid farewell to it and move entirely to Geany.
Installed the following packages to support Bitcoin Core.
super
yum install xcb-util
yum install xcb-util-wm
yum install xcb-util-image
yum install xcb-util-keysyms
yum install xcb-util-renderutil
yum install libxkbcommon-x11
Updated the ~/.bitcoin/bitcoin.conf for the private and (ephemeral)
public IP addresses of AG2:
server=1
txindex=1
rpcbind=127.0.0.1
rpcbind=172.31.19.99
rpcallowip=193.8.230.147
rpcallowip=18.197.109.52
rpcallowip=127.0.0.1
rpcport=8332
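Since the public address is ephemeral, it will change on the next stop/start, and this file will need re-editing. The current address can be read from the EC2 instance metadata service rather than copied by hand; a sketch using an IMDSv2 session token, assuming it runs on the instance itself:

```
# Fetch the instance's current public IPv4 address from the EC2
# instance metadata service (IMDSv2: get a session token first).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/public-ipv4
```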
Let's try starting Bitcoin Core. Note that this is the whacko patched
version to run with locally-built libraries on AWS Linux 2.
bitcoin -server
OK, that didn't work because we haven't installed the whacko GLIBC in
/opt. Let's try running the vanilla Bitcoin Core from Satoshi's
treasure chest.
https://bitcoin.org/en/download
https://bitcoin.org/bin/bitcoin-core-25.0/bitcoin-25.0-x86_64-linux-gnu.tar.gz
Cazart! It works. Or, at least it loads, pops up its start window,
and goes into "Verifying blocks", which seems to proceed very slowly.
Is it, for some reason, re-verifying the whole blockchain? That would
be interesting. (Note that I brought over all of its files from AG.
Why would it be doing that?)
It's only using around 0.3% of the CPU. So, whatever it's doing, it
isn't CPU bound doing it. After about 5 minutes, just jumped to 16%
done verifying blocks.
Made a symbolic link:
super
ln -s /server/home/kelvin /home/kelvin
This allows shell scripts to run in either the local Juno environment
or from an AWS /server file system.
And, finally, it came up and is catching up, from eight hours behind
the inexorably growing blockchain.
It's caught up, and it seems to have loaded the wallets from the
encrypted /vault file system with no problems. Let's see if it can
handle CLI status requests.
bitcoin-cli -getinfo
Chain: main
Blocks: 811290
Headers: 811290
Verification progress: 99.9996%
Difficulty: 57321508229258.04
Network: in 4, out 10, total 14
Version: 250000
Time offset (s): -6
Proxies: n/a
Min tx relay fee rate (BTC/kvB): 0.00001000
bitcoin-cli -netinfo 4
Bitcoin Core client v25.0.0 - server 70016/Satoshi:25.0.0/
<-> type net mping ping send recv txn blk hb addrp addrl age id address version
in ipv4 6 6 4 10 3 25 129.13.189.202:45178 70002/dsn.tm.kit.edu/bitcoin:0.9.99/
in ipv4 20 20 4 74 1 32 147.229.8.240:49418 70016/bitcoinj:0.16.2/Bitcoin Wallet:9.26/
in ipv4 29 29 4 64 1 33 141.20.33.66:52984 70002/dsn.tm.kit.edu/bitcoin:0.9.99/
in ipv4 57 57 4 54 3 23 129.13.189.200:48002 70002/dsn.tm.kit.edu/bitcoin:0.9.99/
out full ipv4 5 6 1 7 0 2 . 1005 7 3 176.9.2.175:8333 70016/Satoshi:25.0.0/
out full ipv6 7 7 2 6 0 1002 2 27 [2001:7c0:2310:0:f816:3eff:fe52:6059]:8333 70016/Satoshi:22.0.0/
out block ipv4 16 16 10 10 * . 2 30 217.182.200.177:8333 70016/Satoshi:0.21.1/
out full ipv4 28 28 1 1 0 4 . 1004 6 6 65.108.138.106:8333 70016/Satoshi:22.0.0/
out full ipv4 93 93 0 5 0 1002 3 26 34.95.4.198:8333 70015/Satoshi:0.20.1/
out full ipv4 114 114 1 1 0 1004 6 8 69.135.23.213:8333 70016/Satoshi:22.0.0/
out full ipv4 117 117 4 5 0 1001 3 19 159.2.191.175:8333 70016/Satoshi:0.21.1/
out full ipv4 160 160 8 3 0 1007 5 11 138.68.8.225:30200 70016/Satoshi:24.0.0/
out full ipv4 225 226 3 13 0 1001 4 12 49.67.175.121:8333 70016/Satoshi:0.21.0/
out block ipv4 265 265 104 104 * . 1 31 172.14.127.191:8333 70016/Satoshi:22.0.0/
ms ms sec sec min min min
ipv4 ipv6 total block
in 4 0 4
out 9 1 10 2
total 13 1 14
Local addresses
2a05:d014:d43:3101:f550:4e24:1c7f:c7f2 port 8333 score 1
Looks good, and we're receiving blocks from the network and serving
them to others.
There is a great deal that remains to be done, but this is much farther
than I expected to get in the first assault on the summit, so I'm going
to call it a night and see how it runs overnight.