# Introduction to Baremetal
Once you've grown beyond the confines and limitations of the cloud deployment providers, it's time to get serious: hosting your own code on big iron. Prepare for performance like you've only dreamed of! Also be prepared for IT and infrastructure responsibilities like you've only had nightmares of.
With Redwood's Baremetal deployment option, the source (like your dev machine) will SSH into one or more remote machines and execute commands in order to update your codebase, run any database migrations and restart services.
Deploying from a client (like your own development machine) consists of running a single command:
First time deploy:

```bash
yarn rw deploy baremetal production --first-run
```

Subsequent deploys:

```bash
yarn rw deploy baremetal production
```
If you haven't done any kind of remote server work before, you may be in a little over your head to start with. But don't worry: until relatively recently (cloud computing, serverless, lambda functions) this is how all websites were deployed, so we've got a good 30 years of experience getting this working!
If you're new to connecting to remote servers, check out the Intro to Servers guide we wrote just for you.
## Deployment Lifecycle
The Baremetal deploy runs several commands in sequence. These can be customized, to an extent, and some of them skipped completely:
1. `df` to make sure there is enough free disk space on the server
2. `git clone --depth=1` to retrieve the latest code
3. Create a `.env` symlink to the shared `.env` in the app dir
4. `yarn install` to install dependencies
5. Run Prisma DB migrations
6. Generate the Prisma client libs
7. Run any data migrations
8. Build the web and/or api sides
9. Symlink the latest deploy dir to `current` in the app dir
10. Restart the serving process(es)
11. Remove older deploy directories

### First Run Lifecycle

If the `--first-run` flag is specified, the restart step (10 above) will execute the following commands instead:

- `pm2 start [service]` - starts the serving process(es)
- `pm2 save` - saves the running services to the deploy user's config file for future startup. See Starting on Reboot for further information
## Directory Structure
Once you're deployed and running, you'll find a directory structure that looks like this:
```text
└── var
    └── www
        └── myapp
            ├── .env <────────────────┐
            ├── current ───symlink──┐ │
            └── releases            │ │
                └── 20220420120000 <┘ │
                    ├── .env ─symlink─┘
                    ├── api
                    ├── web
                    ├── ...
```
There's a symlink `current` pointing to a directory named for a timestamp (the timestamp of the last deploy) and within that is your codebase, the latest revision having been `clone`d. The `.env` file in that directory is then symlinked back out to the one in the root of your app path, so that it can be shared across deployments.
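The symlink swap is what makes each deploy take effect atomically. Here's a minimal local sketch of the technique in a throwaway directory (all paths and timestamps are made up for the demo):

```bash
# Demo of the releases/current symlink technique; names are hypothetical.
APP="$(mktemp -d)"
mkdir -p "$APP/releases/20220420120000" "$APP/releases/20220421120000"

# Point current at the first release...
ln -nsf "$APP/releases/20220420120000" "$APP/current"

# ...then "deploy" by re-pointing it at the new release in one step.
# -n treats an existing symlink as a plain file so it's replaced,
# not descended into.
ln -nsf "$APP/releases/20220421120000" "$APP/current"

readlink "$APP/current"   # now resolves to the 20220421120000 release
```

Anything reading through `current` picks up the new release the instant the symlink changes, with no partial state in between.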
So a reference to `/var/www/myapp/current` will always be the latest deployed version of your codebase. If you wanted to set up nginx to serve your web side, you would point it to `/var/www/myapp/current/web/dist` as the `root` and it will always be serving the latest code: a new deploy will change the `current` symlink and nginx will start serving the new files instantaneously.
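As a sketch, a minimal nginx server block for the web side might look like the following (the `server_name`, port for the api side, and proxy location are hypothetical placeholders; adjust to your own setup):

```nginx
server {
  listen 80;
  server_name myapp.example.com;

  # Serve the latest deploy; the `current` symlink is swapped on each deploy
  root /var/www/myapp/current/web/dist;

  location / {
    try_files $uri /index.html;
  }

  # Hypothetical pass-through to an api side served by pm2 on port 8911
  location /api/ {
    proxy_pass http://127.0.0.1:8911/;
  }
}
```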
## App Setup
Run the following to add the required config files to your codebase:
```bash
yarn rw setup deploy baremetal
```
This will add dependencies to your `package.json` and create two files:

- `deploy.toml` contains server config for knowing which machines to connect to and which commands to run
- `ecosystem.config.js` for PM2 to know what service(s) to monitor
If you see an error from `gyp` you may need to add some additional dependencies before `yarn install` will be able to complete. See the README for `node-gyp` for more info: https://github.com/nodejs/node-gyp#installation
## Configuration
Before your first deploy you'll need to add some configuration.
### ecosystem.config.js
By default, Baremetal assumes you want to run the `yarn rw serve` command, which provides both the web and api sides. The web side will be available on port 8910 unless you update your `redwood.toml` file to make it available on another port. The default generated `ecosystem.config.js` will contain only this config, within a service called "serve":
```js
module.exports = {
  apps: [
    {
      name: 'serve',
      cwd: 'current',
      script: 'node_modules/.bin/rw',
      args: 'serve',
      instances: 'max',
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 10000,
    },
  ],
}
```
If you follow our recommended config below, you could update this to only serve the api side, because the web side will be handled by nginx. That could look like:
```js
module.exports = {
  apps: [
    {
      name: 'api',
      cwd: 'current',
      script: 'node_modules/.bin/rw',
      args: 'serve api',
      instances: 'max',
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 10000,
    },
  ],
}
```
### deploy.toml
This file contains your server configuration: which servers to connect to and which commands to run on them.
```toml
[[production.servers]]
host = "server.com"
username = "user"
agentForward = true
sides = ["api","web"]
packageManagerCommand = "yarn"
monitorCommand = "pm2"
path = "/var/www/app"
processNames = ["serve"]
repo = "git@github.com:myorg/myapp.git"
branch = "main"
keepReleases = 5
```
This lists a single server in the `production` environment, providing the hostname and connection details (`username` and `agentForward`), which `sides` are hosted on this server (by default it's both web and api sides), the `path` to the app code, and then which PM2 service names should be (re)started on this server.
#### Config Options
- `host` - hostname to the server
- `port` - [optional] ssh port for server connection, defaults to 22
- `username` - the user to login as
- `password` - [optional] if you are using password authentication, include that here
- `privateKey` - [optional] if you connect with a private key, include the content of the key here, as a buffer: `privateKey: Buffer.from('...')`. Use this or `privateKeyPath`, not both.
- `privateKeyPath` - [optional] if you connect with a private key, include the path to the key here: `privateKeyPath: path.join('path','to','key.pem')`. Use this or `privateKey`, not both.
- `passphrase` - [optional] if your private key contains a passphrase, enter it here
- `agentForward` - [optional] if you have agent forwarding enabled, set this to `true` and your own credentials will be used for further SSH connections from the server (like when connecting to GitHub)
- `sides` - an array of sides that will be built on this server
- `packageManagerCommand` - the package manager bin to call, defaults to `yarn` but could be updated to be prefixed with another command first, for example: `doppler run -- yarn`
- `monitorCommand` - the monitor bin to call, defaults to `pm2` but could be updated to be prefixed with another command first, for example: `doppler run -- pm2`
- `path` - the absolute path to the root of the application on the server
- `migrate` - [optional] whether or not to run migration processes on this server, defaults to `true`
- `processNames` - an array of service names from `ecosystem.config.js` which will be (re)started on a successful deploy
- `repo` - the path to the git repo to clone
- `branch` - [optional] the branch to deploy (defaults to `main`)
- `keepReleases` - [optional] the number of previous releases to keep on the server, including the one currently being served (defaults to 5)
- `freeSpaceRequired` - [optional] the amount of free space required on the server in MB (defaults to 2048 MB). You can set this to `0` to skip checking.
The easiest connection method is generally to include your own public key in the server's `~/.ssh/authorized_keys` (manually, or by running `ssh-copy-id user@server.com` from your local machine), enable agent forwarding, and then set `agentForward = true` in `deploy.toml`. This will allow you to use your own credentials when pulling code from GitHub (required for private repos). Otherwise you can create a deploy key and keep it on the server.
#### Using Environment Variables in deploy.toml
Similarly to `redwood.toml`, `deploy.toml` supports interpolation of environment variables. For more details on how to use environment variable interpolation, see Using Environment Variables in redwood.toml.
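For example, assuming the same `${ENV_VAR}` interpolation syntax that `redwood.toml` uses, you could keep connection details out of the repo (the variable names here are hypothetical):

```toml
[[production.servers]]
host = "${DEPLOY_HOST}"
username = "${DEPLOY_USER}"
agentForward = true
sides = ["api", "web"]
path = "/var/www/app"
processNames = ["serve"]
```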
### Multiple Servers
If you start horizontally scaling your application you may find it necessary to have the web and api sides served from different servers. The configuration files can accommodate this:
```toml
[[production.servers]]
host = "api.server.com"
username = "user"
agentForward = true
sides = ["api"]
path = "/var/www/app"
processNames = ["api"]

[[production.servers]]
host = "web.server.com"
username = "user"
agentForward = true
sides = ["web"]
path = "/var/www/app"
migrate = false
processNames = ["web"]
```
And the matching `ecosystem.config.js`:

```js
module.exports = {
  apps: [
    {
      name: 'api',
      cwd: 'current',
      script: 'node_modules/.bin/rw',
      args: 'serve api',
      instances: 'max',
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 10000,
    },
    {
      name: 'web',
      cwd: 'current',
      script: 'node_modules/.bin/rw',
      args: 'serve web',
      instances: 'max',
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 10000,
    },
  ],
}
```
Note the inclusion of `migrate = false` so that migrations are not run again on the web server (they only need to run once, and it makes sense to keep them with the api side).
You can add as many `[[servers]]` blocks as you need.
### Multiple Environments
You can deploy to multiple environments from a single `deploy.toml` by including servers grouped by environment name:
```toml
[[production.servers]]
host = "prod.server.com"
username = "user"
agentForward = true
sides = ["api", "web"]
path = "/var/www/app"
processNames = ["serve"]

[[staging.servers]]
host = "staging.server.com"
username = "user"
agentForward = true
sides = ["api", "web"]
path = "/var/www/app"
processNames = ["serve", "stage-logging"]
```
At deploy time, include the environment in the command:
```bash
yarn rw deploy baremetal staging
```
Note that the codebase shares a single `ecosystem.config.js` file. If you need a different set of services running in different environments, give them unique names and reference them in the `processNames` option of `deploy.toml` (see the additional `stage-logging` process in the above example).
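For instance, a staging-only `stage-logging` service could be declared alongside `serve` in the shared `ecosystem.config.js`; only the names listed in each environment's `processNames` will be started. The script path below is purely hypothetical:

```js
module.exports = {
  apps: [
    {
      name: 'serve',
      cwd: 'current',
      script: 'node_modules/.bin/rw',
      args: 'serve',
      instances: 'max',
      exec_mode: 'cluster',
      wait_ready: true,
      listen_timeout: 10000,
    },
    {
      // Only started in staging, via processNames in deploy.toml
      name: 'stage-logging',
      cwd: 'current',
      script: './scripts/stage-logging.js', // hypothetical path
    },
  ],
}
```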
## Server Setup
You will need to create the directory in which your app code will live. This path will be the `path` var in `deploy.toml`. Make sure the username you will connect as in `deploy.toml` has permission to read/write/execute files in this directory. For example, if your `/var` dir is owned by `root`, but you're going to deploy with a user named `deploy`:
```bash
sudo mkdir -p /var/www/myapp
sudo chown deploy:deploy /var/www/myapp
```
You'll want to create a `.env` file in this directory containing any environment variables that are needed by your app (like `DATABASE_URL` at a minimum). This will be symlinked to each release directory so that it's available as the app expects (in the root directory of the codebase).
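To see how the shared `.env` is reused, here's a throwaway local sketch of the symlink the deploy creates (the directory and values are placeholders, not your real config):

```bash
# Stand-in for /var/www/myapp, using placeholder values
APP="$(mktemp -d)"
printf 'DATABASE_URL=postgresql://localhost:5432/myapp\n' > "$APP/.env"

# Each release gets a relative symlink back to the shared .env
mkdir -p "$APP/releases/20220422214218"
ln -s ../../.env "$APP/releases/20220422214218/.env"

# Reading through the symlink resolves to the shared file in the app root
cat "$APP/releases/20220422214218/.env"
```

Because the symlink is relative (`../../.env`), it keeps working no matter which release directory `current` points at.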
The deployment process uses a non-interactive SSH session to run commands on the remote server. A non-interactive session often loads a minimal set of settings, for better compatibility and speed. In some Linux distros, `.bashrc` by design does not load in a non-interactive session. This can lead to `yarn` (or other commands) not being found by the deployment script, even though they are in your path, because the additional ENV vars that provide things like NPM paths and setup are only set in `~/.bashrc`.

A quick fix on some distros is to edit the deployment user's `~/.bashrc` file and comment out the lines that stop non-interactive processing:
```diff
  # If not running interactively, don't do anything
- case $- in
-   *i*) ;;
-   *) return;;
- esac
+ # case $- in
+ #   *i*) ;;
+ #   *) return;;
+ # esac
```
This may also be a one-liner like:
```diff
- [ -z "$PS1" ] && return
+ # [ -z "$PS1" ] && return
```
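You can see the guard in action locally: `$-` contains `i` only in an interactive shell, which is exactly what those commented-out lines test for:

```bash
# A shell started with -c is non-interactive, so $- contains no "i"
bash -c 'case $- in *i*) echo interactive;; *) echo non-interactive;; esac'
# prints "non-interactive"
```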
There are techniques for getting `node`, `npm` and `yarn` to be available without loading everything in `.bashrc`. See this comment for some ideas.
## First Deploy
Back on your development machine, enter your details in `deploy.toml`, commit it and push it up, and then try a first deploy:

```bash
yarn rw deploy baremetal production --first-run
```
If there are any issues the deploy should stop and you'll see the error message printed to the console.
If it worked, hooray! You're deployed to BAREMETAL. If not, read on...
## Troubleshooting
On the server you should see a new directory inside the `path` you defined in `deploy.toml`. It should be named with the timestamp of the deploy, like:
```text
drwxrwxr-x  7 ubuntu ubuntu  4096 Apr 22 23:00 ./
drwxr-xr-x  7 ubuntu ubuntu  4096 Apr 22 22:46 ../
-rw-rw-r--  1 ubuntu ubuntu  1167 Apr 22 20:49 .env
drwxrwxr-x 10 ubuntu ubuntu  4096 Apr 22 21:43 20220422214218/
```
You may or may not also have a `current` symlink in the app directory pointing to that timestamped directory (whether you have the symlink depends on how far the deploy script got before it failed).
`cd` into that timestamped directory and check that you have a `.env` symlink pointing back to the app directory's `.env` file.
Next, try performing all of the steps yourself that would happen during a deploy:
```bash
yarn install
yarn rw prisma migrate deploy
yarn rw prisma generate
yarn rw dataMigrate up
yarn rw build
ln -nsf "$(pwd)" ../current
```
If those all worked, the deploy process should have no problem either: after all, it just connects via SSH and runs the same commands you did!
Next we can check that the site is being served correctly. Run `yarn rw serve` and make sure your processes start and are accessible (by default on port 8910):
```bash
curl http://localhost:8910
# or
wget http://localhost:8910
```
If you don't see the content of your `web/src/index.html` file then something isn't working. You'll need to fix those issues before you can deploy. Next, verify that the api side is responding:
```bash
curl "http://localhost:8910/.redwood/functions/graphql?query={redwood{version}}"
# or
wget "http://localhost:8910/.redwood/functions/graphql?query={redwood{version}}"
```

(The quotes keep the shell from interpreting the `?` and `{}` in the URL.)
You should see something like:
```json
{
  "data": {
    "redwood": {
      "version": "1.0.0"
    }
  }
}
```
If so then your API side is up and running! The only thing left to test is that the api side has access to the database. This call would be pretty specific to your app, but assuming you have port 8910 open to the world you could simply open a browser to click around to find a page that makes a database request.
Was the problem with starting your PM2 process? That will be harder to debug here in this doc, but visit us in the forums or Discord and we'll try to help!
If your processes are up and running in pm2 you can monitor their log output. Run `pm2 monit` to get a nice graphical interface for watching the logs on your processes. Press the up/down arrows to move through the processes and left/right to switch panes.
Sometimes the log messages are too long to read in the pane at the right. In that case you can watch them live by "tailing" them right in the terminal. pm2 logs are written to `~/.pm2/logs` and are named after the process name and id, and whether they are standard output or error messages. Here's an example directory listing:
```text
ubuntu@ip-123-45-67-89:~/.pm2/logs$ ll
total 116
drwxrwxr-x 2 ubuntu ubuntu  4096 Jan 20 17:58 ./
drwxrwxr-x 5 ubuntu ubuntu  4096 Jan 20 17:40 ../
-rw-rw-r-- 1 ubuntu ubuntu     0 Jan 20 17:58 api-error-0.log
-rw-rw-r-- 1 ubuntu ubuntu     0 Jan 20 17:58 api-error-1.log
-rw-rw-r-- 1 ubuntu ubuntu 27788 Jan 20 18:11 api-out-0.log
-rw-rw-r-- 1 ubuntu ubuntu 21884 Jan 20 18:11 api-out-1.log
```
To watch a log live, run:
```bash
tail -f ~/.pm2/logs/api-out-0.log
```
Note that if you have more than one process running, like we do here, requesting a page on the website will send the request to one of the available processes at random, so you may not see your request show up unless you refresh a few times. Or you can open two separate SSH sessions and tail both of the log files at the same time.