Linux Notes - Setting up an Ubuntu server
- Logging in
- Users and Groups
- Security
- Configuring Vim
- Managing Linux
- Managing Jobs
- Copying files
- Zipping files
- Cron jobs and logging
- Hosting Express application with Nginx
- Further notes on nginx
- Installing and running PostgreSQL
- Hosting Docker applications
- Azure DevOps
- Databases
- .Net Development
- OpenSSL and encryption
Logging in
From a terminal, simply enter the following command to log in as root:
ssh root@<ip-address>
Note: This assumes a valid public key has already been set up on the server, and if so it will prompt for a passphrase.
The above command obviously requires a user to remember usernames, IP addresses, etc., which can prove challenging. There are two ways to simplify this: first, create an alias within the file ~/.ssh/config (see the ssh_config man page for the format of this file); secondly, use ssh-agent, which can be achieved simply by storing the private key via the command ssh-add <privatekeyfile>.
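For reference, a minimal ~/.ssh/config entry might look like the following (the host alias, IP address, user, and key path are all illustrative):

```
Host myserver
    HostName 203.0.113.10
    User admin
    IdentityFile ~/.ssh/id_ed25519
```

After which logging in is simply ssh myserver.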
Users and Groups
Creating a new user
Once logged in, enter the following commands to add a new user:
sudo adduser <username>
sudo passwd <username>
This will prompt for a password for <username> (and will require it to be entered twice), and then the new user account will exist.
To switch to a new user after logging in as root (or any other user) simply enter the following: su <username> (you may be prompted for the user's password).
To allow ssh access for this new user, copy /root/.ssh/authorized_keys to /home/<username>/.ssh/authorized_keys, then change the owner of the file to the new user and set the relevant permissions. The following commands should serve as a useful reference:
chown <username> /home/<username>/.ssh
chmod 700 /home/<username>/.ssh
chown <username> /home/<username>/.ssh/authorized_keys
chmod 644 /home/<username>/.ssh/authorized_keys
As an alternative to using the same public key (i.e. copying /root/.ssh/authorized_keys to the new user's directory), use the ssh-keygen utility to create a new public/private key pair and copy this public key to the aforementioned directory.
To do this, create a new key file, then copy it to the server (though you will first need to have set up some method of authentication to allow this in the first place! - one strategy is to follow the above step, i.e. mimicking root's key, then using that to authorise a new key, then deleting root's key). Sample commands are as follows:
ssh-keygen -f ~/.ssh/<newkeyname>
ssh-copy-id -i ~/.ssh/<newkeyname>.pub <username>@hostip
Remember that when creating a new key you will be prompted to set a passphrase; this will encrypt the private key so it should be safe even if it falls into the wrong hands.
Note: To SSH into the server using this new key file, you will need to specifically tell the ssh utility to use it as follows:
ssh -i ~/.ssh/<newkeyname> <username>@hostip
Creating a new group and adding users
To create a new group and add a user to it:
sudo groupadd <groupname>
sudo usermod -aG <groupname> <username>
Viewing users and groups
To view users and groups respectively:
cat /etc/passwd
cat /etc/group
To view members of a particular group:
getent group <groupname>
To view groups that the active user is part of:
groups
Switching users
Rather than using su to switch to the root user whenever elevated permissions are required, use the sudo command instead. For this to work the user will need to be given sudo permissions. The easiest way to achieve this is to add the user to the relevant group (sudo on Ubuntu), via the following command:
sudo usermod -aG sudo <username>
Checking logins
To view a list of all logins: last
To view the most recent login for each user: lastlog
Security
Enable firewall
ufw is installed by default on Ubuntu. Basic usage is as follows:
sudo ufw enable
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
Disabling Root login and password authentication
To disable root login, edit the file /etc/ssh/sshd_config, by setting the property PermitRootLogin to no, then restart sshd via the command sudo systemctl restart sshd.
To disable password-based login (i.e. limiting login options to ssh only), edit the file /etc/ssh/sshd_config by setting the property PasswordAuthentication to no. This will require restarting sshd as per the above.
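Taken together, the two settings in /etc/ssh/sshd_config would read:

```
PermitRootLogin no
PasswordAuthentication no
```

Remember to restart sshd afterwards (sudo systemctl restart sshd), and be sure that key-based login works for a non-root user before disabling password authentication.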
Configuring Vim
A Vim configuration file lives within a user's home directory, i.e. /home/<username>/.vimrc. All configuration settings should be placed in here. If there is reliance on vim-plug, then first copy https://github.com/junegunn/vim-plug/blob/master/plug.vim into /home/<username>/.vim/autoload and within Vim execute the command PlugInstall.
Alternatively, to configure Vim for all users, place configuration settings in /etc/vim/vimrc.local, and copy the above-referenced plug.vim file to /etc/vim/autoload.
Note: If using neovim instead, then the configuration file should live at /home/<username>/.config/nvim/init.vim, and the vim-plug configuration file should live at /home/<username>/.local/share/nvim/site/autoload/plug.vim.
Enabling TLS
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d <domain>
This will place the relevant certificates in /etc/letsencrypt/live/<domain>/.
Don't forget to update the firewall rules to allow traffic on port 443! Also, you may need to review and edit whichever nginx conf file certbot updated, as it may not have done so without errors!
Fail2ban
sudo apt install fail2ban
The program should be enabled by default, but if not run sudo systemctl enable fail2ban.
A jail can be inspected using the client application, e.g. sudo fail2ban-client status sshd.
Managing Linux
System Information
To check what Linux distribution is in use:
less /etc/os-release
To see information on a machine's capabilities:
less /proc/cpuinfo
Alternatively:
uname -a
Disk Usage
To view partitions: sudo fdisk -l
To view disk usage: df -h
A more useful utility is du - this shows the size used by each file/folder in a hierarchy. Use as follows: du -hd 1 <directory>
The -h option is to present the output in human-readable format. The -d 1 option controls the depth of the hierarchy scanned.
Memory / CPU Usage
To view memory usage: free -h
The -h option is to present the output in human-readable format.
To see a list of running processes ranked by those using most memory / CPU respectively:
ps -e u --sort -%mem
ps -e u --sort -%cpu
The -e option shows all processes; the u option shows output in user-oriented format.
To see a running list of all processes use the top command. Interactive options can be used to sort by memory or CPU, limit the number of processes shown, and control the refresh rate - amongst other options.
Managing Jobs
Foreground and background jobs
Foreground jobs block the terminal, background jobs do not. To pause a foreground job press Ctrl + Z; this will suspend the job. Typing bg will resume the job in the background, whereas typing fg will resume it in the foreground. To see all jobs simply enter the jobs command - this will show all jobs, their status, and a job id.
Useful job management commands
To schedule a process as a background process append an &, e.g. ping www.google.com &
To move a foreground process to the background, press CTRL + Z to suspend it, then run bg
To view all background processes within a terminal window run jobs
To continue a backgrounded process run bg [%n], where n is the job number (omit this to operate on the most recently backgrounded process).
To return a backgrounded process to the foreground run fg [%n], where n is the job number (omit this to operate on the most recently backgrounded process).
To kill a foreground process press CTRL + C
To kill a background process run kill %n, where n is the job number.
Copying files
To copy files from one computer to another, one of either scp or sftp can be used.
scp copies a file/directory via command line arguments, whilst sftp creates an interactive session.
Copying a file using scp can be done as follows (pass the -r flag if a directory should be recursively copied in its entirety):
scp -i ~/.ssh/<keyname> <local-filepath> <username>@host:/<remote-filepath>
Zipping files
Zip a file / list of files
To zip one or more files run the following command: tar -czf output.tar.gz file1[.ext] [file2[.ext]...]
List files inside of a zipped directory
To list files inside of a zipped directory run the following command: tar -tf output.tar.gz
Unzip a zipped directory
To unzip a zipped directory run the following command: tar -xf output.tar.gz
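The three commands above can be exercised end to end as follows (filenames are illustrative):

```shell
# Create a couple of files to archive
echo 'hello' > file1.txt
echo 'world' > file2.txt

# Zip: -c create, -z gzip-compress, -f archive filename
tar -czf output.tar.gz file1.txt file2.txt

# List the archive contents without extracting
tar -tf output.tar.gz

# Unzip: -x extract (compression is auto-detected by modern tar)
rm file1.txt file2.txt
tar -xf output.tar.gz
```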
Cron jobs and logging
Cron jobs are jobs that are automatically run on a schedule, according to a cron expression.
To edit running cron jobs for a particular user run crontab -e.
To redirect the output of a cron job into the system's standard logging (which on CentOS 8 is recorded at /var/log/messages), pipe the output of the cron job into the logger utility. To also capture the output of stderr (which just happens to be the default logging stream for the Python logging module...) redirect stderr to stdout using the operator 2>&1.
An example cron job running every minute based on a correctly configured Python file would therefore look like the following:
* * * * * /<fully-qualified-filepath>/<filename>.py 2>&1 | logger
Note: An alternative shorthand for redirecting stderr to stdout is the &> operator.
Hosting Express application with Nginx
Prerequisites - an Express app, configured to listen on port 3000. For the purposes of this example it is assumed that the app has one endpoint at /foo, and is run from bin/www, i.e. as per the default configuration for Express-Generator.
Steps and commands followed to date (may not be perfect and instructions may need to be updated):
Enable Nginx
sudo apt install nginx
sudo systemctl enable nginx
Configure Nginx
First, comment out the 'server' section in /etc/nginx/nginx.conf. This will disable the default configuration for HTTP requests on port 80.
Next, add a file (e.g. /etc/nginx/conf.d/<appname>.conf - the name is illustrative) containing the following:
server {
    listen 80;
    listen [::]:80 default_server;

    location /foo {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000/foo;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        #proxy_cache_bypass $http_upgrade;
    } # end location

    location / {
        deny all;
    }
} # end server
This configuration will listen on port 80 and forward all requests for /foo to the running Express application. All other endpoints will be disabled and will receive a 403 response.
For the above to take effect enter the following:
sudo systemctl restart nginx
Viewing logs
sudo tail -f /var/log/nginx/access.log
sudo tail -f /var/log/nginx/error.log
Running the Express application
To run the application we use a program specifically for Node called pm2.
sudo npm install -g pm2
pm2 start bin/www
You may need to edit /etc/environment to set NODE_ENV="production", and restart the
application with a flag to update environment variables: pm2 restart all --update-env
Viewing Node application logs
pm2 logs
pm2 monit
Further notes on nginx
Restricting sites with basic authentication
Full instructions can be found at https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-nginx-on-ubuntu-14-04.
Within /etc/nginx create a new file called .htpasswd.
For a given username and password, first create a hash of the password via the command openssl passwd -apr1 <password>.
OPTIONAL - The following command can be used to generate a random password: openssl rand -base64 8
Then, combine the username and hashed password (joining them with a ":") and append the result to .htpasswd.
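As a worked sketch of the above (the username is illustrative, and the entry is written to a local file here to avoid needing sudo):

```shell
# Generate a random password and its apr1 (Apache MD5) hash
PASSWORD=$(openssl rand -base64 8)
HASH=$(openssl passwd -apr1 "$PASSWORD")

# Combine username and hash with a ":" and append to the password file
echo "alice:${HASH}" >> .htpasswd
cat .htpasswd
```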
Finally, edit the relevant nginx configuration file by adding the following lines in the appropriate location block:
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
Reload and/or restart nginx.
Installing and running PostgreSQL
To install PostgreSQL on CentOS 8 follow the instructions at https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-centos-8.
A broad summary of the steps is as follows:
sudo dnf module enable postgresql:12
sudo dnf install postgresql-server
sudo postgresql-setup --initdb
sudo systemctl start postgresql
sudo systemctl enable postgresql
This will set up a new user called postgres (note: no password; you will need to su from a superuser). Switch to this user with the command sudo -i -u postgres, or directly enter a psql terminal by entering sudo -u postgres psql.
Enter the command sudo -u postgres createuser --interactive at the command line to step through a wizard to create a new user. Alternatively, to specify options manually (respectively: create databases, create roles, act as superuser, and require a password) enter the following: sudo -u postgres createuser -drsP <username>
To ensure that passwords are required on login, edit the file /var/lib/pgsql/data/pg_hba.conf and change authentication options from peer and ident to md5 throughout. Then, enter a psql console and enter the following command: SELECT pg_reload_conf();. Optionally, retain authentication as peer for local connections for user postgres in order to always be able to log in as postgres simply via the command sudo -u postgres psql.
Useful PostgreSQL commands
To list all available databases from the command line: psql -l
Once logged in, you can check your current connection information by typing: \conninfo
To add/change a password for an existing user (e.g. for a user created via the createuser --interactive command), simply log into a shell and execute the command ALTER USER <username> WITH PASSWORD '<new_password>';.
Hosting Docker applications
Sockets
A note on sockets - When hosting Docker applications, we can check what ports are publicly exposed via the command ss -lnt. The ss program is a utility to investigate sockets. The -lnt flags show listening ports, numeric values (instead of service names), and TCP connections respectively.
Azure DevOps
git repositories - SSH authentication
Azure DevOps provides git repositories, and to connect to these on macOS or Linux it is recommended to use SSH. The steps are as follows:
- Create a public/private SSH key pair: ssh-keygen -f ~/.ssh/<newkeyname> (create a passphrase when prompted)
- Ensure that this key is added to your SSH Agent: ssh-add ~/.ssh/<newkeyname> (enter the passphrase when prompted)
- Copy the newly created public key into the "SSH public keys" section of the Azure DevOps user settings
- Set the relevant git configuration settings at the local repository level: git config --add --local core.sshCommand 'ssh -i ~/.ssh/<newkeyname>'
Full instructions can be found at https://docs.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops.
Service Principals
Sometimes when executing scripts remotely it is necessary to have access to certain Azure resources (e.g. access to KeyVault in order to get a connection string for a database to carry out a deploy as part of a release pipeline). When these automated scripts are not themselves Azure resources, they need to be run under a "Service Principal" in order to be authenticated.
Create and use a Service Principal - Password-based authentication
In a command line, assuming the Azure CLI is installed and you have logged into it, execute the following command: az ad sp create-for-rbac -n <SERVICE_PRINCIPAL_NAME>. This will create a Service Principal, and provide details of the client ID, client secret, and tenant ID that will be required when actually authenticating as the Service Principal. Then, in the Azure Portal, assign the relevant permissions to this newly created Service Principal.
To authenticate as the Service Principal from a dotnet executable using Azure.Identity.DefaultAzureCredential, the shell executing the script / executable must have the following environment variables set, with values as per the newly created Service Principal:
- AZURE_CLIENT_ID
- AZURE_TENANT_ID
- AZURE_CLIENT_SECRET
Create and use a Service Principal - Certificate-based authentication
In a command line, assuming the Azure CLI is installed and you have logged into it, execute the following command: az ad sp create-for-rbac -n <SERVICE_PRINCIPAL_NAME> --create-cert. This will create a Service Principal, and provide details of the client ID, certificate location, and tenant ID that will be required when actually authenticating as the Service Principal. Then, in the Azure Portal, assign the relevant permissions to this newly created Service Principal.
To authenticate as the Service Principal from a dotnet executable using Azure.Identity.DefaultAzureCredential, the shell executing the script / executable must have the following environment variables set, with values as per the newly created Service Principal:
- AZURE_CLIENT_ID
- AZURE_TENANT_ID
- AZURE_CLIENT_CERTIFICATE_PATH
Creating and consuming Python packages
To build and publish a package:
- Create a feed under Azure DevOps Artifacts, making a note of the assigned name
- Ensure your Python project is in a fit state to be packaged :)
- Create a YAML pipeline for the repository, and edit it to look (broadly) like the following:
- Note that the following assumes (a) that tests are configured to run using tox, and (b) that a setup.py file exists with its version property set to '__BUILDNUMBER__', so that the pipeline can replace this with a configured build number (otherwise the publish action will fail if the version number is not updated with each build).
variables:
  python.version: '3.10'
  major.version: '0'
  minor.version: '0'

name: $(major.version).$(minor.version).$(Rev:r)

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      with open("setup.py", "rt") as fin:
          data = fin.read()
      data = data.replace('__BUILDNUMBER__', '$(Build.BuildNumber)')
      with open("setup.py", "wt") as fout:
          fout.write(data)
  displayName: 'Update Version Number'

- script: pip install build twine tox
  displayName: 'Install dependencies'

- script: tox
  displayName: 'Run tests'

- script: python -m build
  displayName: 'Build distribution'

- task: TwineAuthenticate@1
  inputs:
    artifactFeed: '<PROJECT_NAME/FEED_NAME>'
  displayName: 'Twine Authenticate'

- script: python -m twine upload -r <FEED_NAME> --config-file $(PYPIRC_PATH) dist/*.whl
  displayName: 'Upload package'
To consume a published package:
- Create a Personal Access Token ("PAT") under the user's security settings within Azure DevOps
- pip install the package and pass the PAT embedded into the repository URL through the --index-url or --extra-index-url arguments, where the URL takes the following form:
https://<feedname>:<pattoken>@pkgs.dev.azure.com/<org>/<project>/_packaging/<feed>/pypi/simple/
Databases
SQL Server
Users and Logins
The SQL Server access model uses the concepts of logins for authentication (at the server level) and users for authorization (at the database level).
To create a login: CREATE LOGIN <login> WITH PASSWORD = '<password>'[, CHECK_POLICY = ON|OFF];
To create a user and associate it to a login: USE <database>; CREATE USER <username> FOR LOGIN <login>;
To achieve anything with the newly created user, permissions will need to be granted. A simple way to achieve this is to make use of SQL Server's in-built 'Fixed-database roles'. There are a number of roles; some of the most useful are db_ddladmin (allows a user to run any DDL command), db_datawriter (allows a user to add, delete, or change data in all user tables), and db_datareader (allows a user to read all data from all user tables and views). The syntax to add a user to a role is as follows:
ALTER ROLE <role_name> ADD MEMBER <username>;
Note that executing the above will require a user to themselves have permission to execute the ALTER ROLE command.
.Net Development
Running services "in production" from Docker containers with TLS
Generate a self-signed certificate
To enable HTTPS in local web app development, use the dotnet dev-certs command.
To create a simple self-signed certificate, run the following command: dotnet dev-certs https --trust. This will create a pfx file, typically within ~/.aspnet/dev-certs/https (on a Mac).
For greater control, and to allow other applications to be able to make use of the created certificate, create a self-signed certificate in a specified location and protected with a password with the following command: dotnet dev-certs https --trust --export-path <PATH> --password <PASSWORD>.
Pass self-signed certificate to Docker container
Assuming a Docker image encapsulating a .Net application, this can be run securely over TLS (only locally, since that is the only place the self-signed certificate will be trusted), via a run command similar to the following:
docker run \
    --rm \
    -p <PUBLISHED PORT>:443 \
    -e ASPNETCORE_URLS="https://*" \
    -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/<CERTIFICATE NAME>.pfx \
    -e ASPNETCORE_Kestrel__Certificates__Default__Password=<CERTIFICATE PASSWORD> \
    -v ~/.aspnet/dev-certs/https:/https/ \
    <IMAGE>:<TAG>
The above assumes that the self-signed certificate exists as a pfx file within ~/.aspnet/dev-certs/https.
The ASPNETCORE_URLS environment variable tells the application to listen to all IP addresses on the https protocol on the default port of 443.
The ASPNETCORE_Kestrel__Certificates__Default__Password environment variable is only required if the pfx certificate is password-protected.
OpenSSL and encryption
- Hashing
- Password-Based Key Derivation Functions
- Symmetric Encryption
- Asymmetric Encryption
- Combining symmetric and asymmetric encryption
- Digital Signatures
Hashing
Basic commands and documentation
Basic hashing commands are run using openssl dgst, and documentation can be viewed via man openssl-dgst.
A list of available hashing algorithms can be viewed via openssl dgst -list.
A sensible starting point is to use the sha256 hashing algorithm.
Hashing Files
To hash the file foo.txt run the command: openssl dgst -sha256 foo.txt
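For example, with a small file of known contents the digest can be checked directly (the output label varies slightly by OpenSSL version, e.g. SHA256(...) vs SHA2-256(...)):

```shell
# Create a file with known contents
printf 'hello\n' > foo.txt

# Default output format: "SHA256(foo.txt)= <digest>"
openssl dgst -sha256 foo.txt

# -r gives coreutils-style "digest *filename" output, handy for scripting
openssl dgst -sha256 -r foo.txt
```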
Password-Based Key Derivation Functions
Generating a key from a password using PBKDF2
Using password as a password, we can generate a key as follows:
openssl kdf \
    -keylen 32 \
    -kdfopt pass:password \
    -kdfopt salt:$(openssl rand -hex 16) \
    PBKDF2
In the example above note the use of the salt parameter to ensure that the same password produces different keys when used multiple times.
Symmetric Encryption
Basic commands and documentation
Basic encryption commands are run using openssl enc, and documentation can be viewed via man openssl-enc.
A list of available encryption ciphers can be viewed via openssl enc -list.
For symmetric encryption a sensible starting point is to use the aes-256-cbc cipher.
Encrypting Files with self-generated Keys and Initialization Vectors
First generate a key and initialization vector via the openssl rand command as follows:
export K=$(openssl rand -hex 32)
export IV=$(openssl rand -hex 16)
Next, for a given file (say foo.txt) encrypt as follows:
openssl enc \
-aes-256-cbc \
-base64 \
-e \
-in foo.txt \
-K $K \
-iv $IV \
-out foo.txt.encrypted
Note: The -base64 flag simply base64 encodes the encrypted output for readability (but is not required).
To decrypt the encrypted file simply reverse the above command (by swapping the -in and -out parameters, and replacing the -e encrypt flag with a -d decrypt flag).
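A full encrypt/decrypt round trip therefore looks like the following (file contents are illustrative):

```shell
# Generate a random key and IV
export K=$(openssl rand -hex 32)
export IV=$(openssl rand -hex 16)

echo 'some secret text' > foo.txt

# Encrypt
openssl enc -aes-256-cbc -base64 -e -in foo.txt -K $K -iv $IV -out foo.txt.encrypted

# Decrypt: swap -in/-out and replace -e with -d
openssl enc -aes-256-cbc -base64 -d -in foo.txt.encrypted -K $K -iv $IV -out foo.txt.decrypted

cmp foo.txt foo.txt.decrypted && echo 'round trip OK'
```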
Encrypting Files with Keys and Initialization Vectors generated from user-supplied passwords
For a given file (say foo.txt) and password (say password) encrypt as follows:
openssl enc \
-aes-256-cbc \
-base64 \
-e \
-pbkdf2 \
-pass pass:password \
-salt \
-in foo.txt \
-out foo.txt.encrypted
Note: The -salt flag is "on" by default and so is technically not required but can be provided for visibility.
Decrypting works as follows:
openssl enc \
-aes-256-cbc \
-base64 \
-d \
-pbkdf2 \
-pass pass:password \
-in foo.txt.encrypted
Note: The password can be supplied in a number of different ways. As above, passing -pass pass:<PASSWORD> will pass a password directly from the command line. Similarly, passing -pass env:<ENV_VAR> will use the named environment variable. Finally, simply omit the -pass flag altogether to enter interactive mode whereby openssl will prompt the user for a password to encrypt/decrypt as required.
A minimal example to encrypt a file would look as follows (with the CLI prompting the user for a password):
openssl enc -aes-256-cbc -pbkdf2 -in foo.txt -out foo.txt.encrypted
Asymmetric Encryption
Basic commands and documentation
Documentation can be viewed via the following man pages:
- man openssl-genpkey (generate a private key)
- man openssl-pkey (public or private key processing command)
- man openssl-pkeyutl (public key algorithm command)
Generating and inspecting a keypair
A keypair derived using the RSA algorithm can be generated via the following command:
openssl genpkey -algorithm RSA
The above command generates a 2,048-bit private key by default. The number of bits can be specified by appending -pkeyopt rsa_keygen_bits:4096 to the command (replacing 4,096 with the requisite number of bits).
For a keypair, say rsa_keypair.pem, we can view the structure via the following command:
openssl pkey -in rsa_keypair.pem -text -noout
If we wish to restrict the above to only the public component we can use the following command:
openssl pkey -in rsa_keypair.pem -pubout
Note: As always, simply append -out <FILENAME> to the above command to save the output to a specified file.
Encrypting and decrypting using a keypair
RSA is limited to encrypting data no longer than the key length in a single operation, so we typically apply RSA encryption to session keys rather than bulk data. The following commands generate an arbitrary session key and encrypt it, assuming a public key saved at rsa_public_key.pem:
openssl rand -out session_key.bin 32
openssl pkeyutl \
-encrypt \
-in session_key.bin \
-out session_key.bin.encrypted \
-pubin \
-inkey rsa_public_key.pem \
-pkeyopt rsa_padding_mode:oaep
We can decrypt the above output assuming a private key saved at rsa_keypair.pem:
openssl pkeyutl \
-decrypt \
-in session_key.bin.encrypted \
-inkey rsa_keypair.pem \
-pkeyopt rsa_padding_mode:oaep
Combining symmetric and asymmetric encryption
Assume we wish to encrypt the contents of foo.txt.
- Generate a symmetric encryption key (hex output, so the key file is a single line of text - -pass file: reads only the first line): openssl rand -hex -out symmetric_keyfile.key 64
- Encrypt the file using the symmetric encryption key: openssl enc -aes-256-cbc -in foo.txt -out foo.txt.enc -pbkdf2 -pass file:symmetric_keyfile.key
- Generate a keypair: openssl genpkey -algorithm RSA -out private_key.pem
- Generate a public key: openssl pkey -pubout -in private_key.pem -out public_key.pem
- Encrypt the symmetric encryption key: openssl pkeyutl -encrypt -in symmetric_keyfile.key -out symmetric_keyfile.key.enc -pubin -inkey public_key.pem -pkeyopt rsa_padding_mode:oaep
We can now distribute both the encrypted file foo.txt.enc and the encrypted key symmetric_keyfile.key.enc. Decryption requires knowledge of private_key.pem.
To decrypt the file:
- Decrypt the symmetric encryption key: openssl pkeyutl -decrypt -in symmetric_keyfile.key.enc -out symmetric_keyfile.key -inkey private_key.pem -pkeyopt rsa_padding_mode:oaep
- Decrypt the file: openssl enc -d -aes-256-cbc -in foo.txt.enc -out foo.txt -pbkdf2 -pass file:symmetric_keyfile.key
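The full sequence can be run end to end as follows (note the symmetric key is generated as hex text rather than raw bytes, since -pass file: reads only the first line of the file; filenames are as above):

```shell
echo 'top secret contents' > foo.txt

# 1. Generate a symmetric key (hex text, so the file is a single line)
openssl rand -hex 64 > symmetric_keyfile.key

# 2. Encrypt the file with the symmetric key
openssl enc -aes-256-cbc -in foo.txt -out foo.txt.enc -pbkdf2 -pass file:symmetric_keyfile.key

# 3. Generate an RSA keypair and extract the public key
openssl genpkey -algorithm RSA -out private_key.pem
openssl pkey -pubout -in private_key.pem -out public_key.pem

# 4. Encrypt the symmetric key with the public key
openssl pkeyutl -encrypt -in symmetric_keyfile.key -out symmetric_keyfile.key.enc \
    -pubin -inkey public_key.pem -pkeyopt rsa_padding_mode:oaep

# 5. Recipient: decrypt the symmetric key, then the file
openssl pkeyutl -decrypt -in symmetric_keyfile.key.enc -out symmetric_keyfile.recovered.key \
    -inkey private_key.pem -pkeyopt rsa_padding_mode:oaep
openssl enc -d -aes-256-cbc -in foo.txt.enc -out foo.txt.recovered -pbkdf2 \
    -pass file:symmetric_keyfile.recovered.key

cmp foo.txt foo.txt.recovered && echo 'decrypted successfully'
```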
Digital Signatures
We can use Elliptic Curves to digitally sign a file. We generate a keypair, and use the private key to sign a file, and the corresponding public key to verify the signature.
View the available curves: openssl ecparam -list_curves
Generate a keypair: openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:secp521r1 -out ec_keypair.pem
Generate a public key: openssl pkey -in ec_keypair.pem -pubout -out ec_public_key.pem
Sign the contents of somefile.txt using the private key ec_keypair.pem:
openssl pkeyutl \
-sign \
-digest sha3-512 \
-rawin \
-in somefile.txt \
-inkey ec_keypair.pem \
-out somefile.txt.signature
Verify the contents of somefile.txt.signature using the public key ec_public_key.pem:
openssl pkeyutl \
-verify \
-digest sha3-512 \
-rawin \
-in somefile.txt \
-pubin \
-inkey ec_public_key.pem \
-sigfile somefile.txt.signature