Dead Simple Docker Development with Laravel
Leo Sjöberg • May 11, 2017
After recently talking about Docker and Vagrant on an episode of Larachat Live, I realised it'd be nice to put together a short guide on getting started with Docker environments in Laravel projects. No fuss about what Docker is, no super-cool, special configuration, just a simple setup for developing locally.
Docker Compose
Before getting started, I want to make a quick note on something called Docker Compose, which we'll be using throughout. Docker Compose is a way to manage a set of Docker containers for a single project, without complicated `docker` commands and per-container management.
Setup
For this setup, we will have:

- A `docker-compose.yml` to declare our containers
- An nginx config file
- A PHP container definition (defined by a `Dockerfile`)
So first, let's get the boilerplate out of the way: the `docker-compose.yml` file, usually stored in the root of your project. This file holds information about which containers we are using in our project:
```yaml
version: '3'

services:
  nginx:
    image: nginx:latest
    volumes:
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./:/var/www/html
    ports:
      - 80:80
      - 443:443
  fpm:
    build: docker/php-fpm
    volumes:
      - ./:/var/www/html
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_DATABASE: homestead
    volumes:
      - /var/lib/mysql
    ports:
      - 3306:3306
```
This is all the config you'll need. Well, this, the PHP setup, and an nginx config. Let's quickly go through it to make sure it's not all too confusing.
First in the `docker-compose` file, you declare the syntax version. Right now, both 2.x and 3.x are supported (and most people still use 2.x). After that, you declare `services`. These are what most people call containers. They're declared by a name at the top level, and then use either an `image` or a `build` configuration for the container.
You might notice that the `fpm` service has the following block:

```yaml
build: docker/php-fpm
```
`build` specifies which folder should be used as the template to build our container. Docker will look for a file called `Dockerfile` in that directory, so let's have a look at that. To create the `Dockerfile`, simply run
```sh
mkdir -p docker/php-fpm && touch docker/php-fpm/Dockerfile
```
That `Dockerfile` is where we put all the configuration you would usually apply when setting up a new server:
```dockerfile
FROM php:7.1-fpm

RUN apt-get update && apt-get install -y \
    curl \
    libssl-dev \
    zlib1g-dev \
    libicu-dev \
    libmcrypt-dev
RUN docker-php-ext-configure intl
RUN docker-php-ext-install pdo_mysql mbstring intl opcache mcrypt

# Install xdebug
RUN pecl install xdebug \
    && docker-php-ext-enable xdebug

RUN usermod -u 1000 www-data

WORKDIR /var/www/html

CMD ["php-fpm"]

EXPOSE 9000
```
So what we're doing here is using the official php-fpm 7.1 image as our base (if you want a smaller footprint, feel free to use Alpine). We then run `apt-get install` to install various libraries that are needed by Laravel, just like you would on a regular OS. After that, you'll see the unusual commands `docker-php-ext-configure` and `docker-php-ext-install`. These are commands provided specifically by the PHP image that make container configuration a lot easier. The story is similar with installing xdebug. We then change the `www-data` user's UID to 1000, so it matches your host user and file permissions work out on the mounted code (in the PHP container, `www-data` is the user with access to `/var/www`). Then we set the working directory to `/var/www/html`, making it our "default", so to speak, from which php-fpm starts any action.
Last but not least, we run the `php-fpm` command to start FPM, and expose port 9000. `EXPOSE` is a Docker directive that means other containers can connect to this container's exposed port. It does not automatically publish the port to your machine (that's what the `ports` section, which we'll get to, is all about).
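To make the distinction concrete, here's a rough sketch in docker-compose terms. The `expose` key below is purely illustrative; our setup doesn't actually need it, since `EXPOSE` in the Dockerfile already covers it:

```yaml
services:
  fpm:
    build: docker/php-fpm
    expose:
      - "9000"   # reachable from other containers only (nginx can hit fpm:9000)
  nginx:
    image: nginx:latest
    ports:
      - "80:80"  # published on the host: your port 80 -> container port 80
```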
Phew, that was lengthy, but the good news is we're done with a lot of the legwork.
So back to our `docker-compose.yml`, you might notice the `fpm` and `nginx` containers both have a `volumes` key. `volumes` is the way in which you bind your local directory to the container, so that any changes you make locally also end up in the container, sort of like how you connect your local directory to a VM through Vagrant with NFS. The line `- ./:/var/www/html` binds our current directory (`./`) on the host to the `/var/www/html` directory in the container.
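As an aside, the `mysql` service uses a slightly different volume form, which is worth a quick comparison (the comments are my own gloss on what each form does):

```yaml
services:
  fpm:
    volumes:
      - ./:/var/www/html   # bind mount: a host path mapped into the container
  mysql:
    volumes:
      - /var/lib/mysql     # anonymous volume: Docker-managed storage kept
                           # outside the container's writable layer
```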
You might also have noticed that we have another volume declaration on the `nginx` service: `./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf`. This mounts our own configuration file over Nginx's default configuration, and avoids the hassle of building a custom Nginx image. We don't yet have that file, so let's make it!
Simply run
```sh
mkdir -p docker/nginx && touch docker/nginx/default.conf
```
As for the content of your newly created `default.conf`:
```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```
For the most part, this looks like any old default Nginx configuration. There is one subtle difference: if you look at our `location ~ \.php$` block, you can see that we use `fastcgi_pass fpm:9000;`. Whereas in a VM you might pass requests to `127.0.0.1:9000` or similar, in Docker you simply use the name of your container (as specified in the `docker-compose` file) as the host. IP numbers be gone!
Alright, there we are, almost done!
Let's jump back to our `docker-compose.yml` yet again and look for more things we don't understand... How about the mysterious `ports`? `ports` does a very simple thing: it binds a host port to a container port. So `- 80:80` means you can access the container's port 80 on your port 80. Your port is to the left of the colon (`:`), the container's port is to the right. We do that to be able to hit our webserver by just accessing `localhost`: since our local port 80 is bound to the nginx container's port 80, it's basically like having nginx run locally, but without you having gone through the messy installation. WIN! We do the same thing for the `mysql` service, so that we can connect to our database from our own machine as well. This lets us connect to our database on `127.0.0.1:3306`, which means it's super easy to set up Sequel Pro too.
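With those ports and the credentials from our compose file, a Laravel `.env` excerpt might look like the following. Note the hostname: from inside the `fpm` container you connect via the service name `mysql`, while `127.0.0.1` only works from your host machine (e.g. in Sequel Pro):

```ini
DB_CONNECTION=mysql
DB_HOST=mysql        # the compose service name, resolved inside the Docker network
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
```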
Last but not least, we have all those gnarly `environment` variables. Those are used by the `mysql` Docker image when you first start your project's mysql service, to set up some configuration. It will automatically
- Create a root user with the root password you specified,
- Create a database with the name you specified,
- Create a user with the username and password you specified that only has access to the specified database.
This is all without you doing any configuration at all!
Pulling it together
Now that everything is configured, to actually build the containers and run our environment, simply run `docker-compose up -d`, and once the containers are built, you should be able to access your Laravel project on `localhost`. Then use `docker-compose stop` to stop the running containers (using `down` will remove/destroy the containers as opposed to stopping them).
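For quick reference, the everyday lifecycle commands look like this (all run from the directory containing your `docker-compose.yml`):

```sh
docker-compose up -d    # build (if needed) and start all services in the background
docker-compose ps       # list this project's containers and their state
docker-compose logs -f  # follow the log output of all services
docker-compose stop     # stop the containers, keeping them for a quick restart
docker-compose down     # stop and remove the containers (and the default network)
```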
Running Commands
The straightforward way to run commands inside a container is with `docker exec`. So the way you would execute an Artisan command would be

```sh
docker exec -it myproject_fpm_1 php artisan migrate
```
Note here that I used `myproject_fpm_1`, rather than just `fpm`. That's because we're using plain `docker` rather than `docker-compose`, so we need to use the complete container name. You can find out the exact name by using `docker ps`. The default naming scheme is `{folder}_{service}_1`. In a similar manner, if you desperately need to get into your container, you can actually access bash interactively inside it:
```sh
docker exec -it myproject_fpm_1 bash
```
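That naming scheme is easy to reconstruct yourself. A minimal sketch, assuming your project directory is called `myproject`:

```shell
# Sketch of Compose's default (v1) container naming: the project name
# (which defaults to the directory holding docker-compose.yml), the
# service name from the compose file, and an instance index.
project="myproject"   # assumed project/directory name
service="fpm"
container="${project}_${service}_1"
echo "$container"     # the full name you pass to plain `docker` commands
```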
Wrap-Up and More Containers
So that's the quickstart on Docker with Laravel. It might seem like a lot of work, but you quite quickly realise that the vast majority of this is reusable, and you actually end up saving both time and resources by using Docker for local development.
While this is all I would like to officially include in this little quickstart, feel free to read on if you'd like to learn about setting up a container for scheduled jobs with cron, and a container for running artisan commands in a more convenient manner!
Thanks for reading!
Aside: More Containers!
Alright, you've decided you just can't get enough of Docker! Good, you've been converted.
So let's set up a couple more containers that may prove useful! Our first one will be an artisan container, whose only purpose will be to make running artisan commands less of a pain for you. Below you can see an excerpt of the `docker-compose.yml`, with the relevant pieces to add.
```yaml
services:
  # ...
  artisan:
    build:
      context: .
      dockerfile: docker/artisan/Dockerfile
    volumes:
      - ./:/var/www/html
  scheduler:
    build:
      context: .
      dockerfile: docker/scheduler/Dockerfile
    volumes:
      - ./:/var/www/html
      - ./docker/scheduler/crontab:/etc/crontab
```
Artisan
So let's start with the Artisan container. Firstly, you know the drill...
```sh
mkdir -p docker/artisan && touch docker/artisan/Dockerfile
```
Now, for the `Dockerfile`, it's pretty similar to our fpm container:
```dockerfile
FROM php:7.1-cli

RUN apt-get update && apt-get install -y \
    libmcrypt-dev
RUN docker-php-ext-install pdo_mysql mbstring mcrypt

RUN usermod -u 1000 www-data
WORKDIR /var/www/html
ENTRYPOINT ["php", "artisan"]
```
The main differences are that we use the PHP CLI Docker image rather than the FPM image, and that we install fewer packages from `apt-get` because we generally don't need them all. Feel free to add whatever you need though! As you can see, we do the same thing with regards to adjusting the `www-data` user and setting the working directory. However, the last line is the most important here:

```dockerfile
ENTRYPOINT ["php", "artisan"]
```
This tells Docker that all commands executed in the container by `docker run` actually start with `php artisan`. This is used to make a container built around single, short-lived executions, handled in the same manner as an executable file, rather than being long-lived (like a server or database). Now, I said `docker run`; what's that? Well, what you will care about is "what's `docker-compose run`?". `docker-compose run` is a command that takes the name of a service and a command to be executed inside it, and runs it. Internally, it's converted to a `docker run` call; it's just a nice wrapper so you don't need to write out the entire container name. At this point, I should also mention that when using `docker-compose run`, we often attach the `--rm` option. This will remove the container after use (don't worry, it won't take any extra time).
So now, we can run the following:
```sh
docker-compose run --rm artisan migrate
```
`artisan` here is the name of our service, as you may recall, but since our entrypoint is specified to be `php artisan`, we don't need to type that out. And because developers are lazy, here, throw this in your bash/zsh config:
```sh
alias dcr="docker-compose run --rm"
```
Great, you can now just run `dcr artisan migrate`. It's just as short as `php artisan migrate`, but without any hassle of SSHing into a VM.
Scheduler
Next, let's look at cron. First, you know the drill:
```sh
mkdir -p docker/scheduler && touch docker/scheduler/{Dockerfile,crontab}
```
For the `Dockerfile`:
```dockerfile
FROM php:7.1-cli

RUN apt-get update \
    && apt-get install -y --no-install-recommends runit \
    && apt-get install -y --no-install-recommends cron \
    && mkdir /etc/service/cron \
    && echo '#!/bin/sh' > /etc/service/cron/run \
    && echo 'exec /usr/sbin/cron -f' >> /etc/service/cron/run \
    && chmod -R 700 /etc/service/cron/ \
    && chmod 600 /etc/crontab \
    && rm -f /etc/cron.daily/standard \
    && rm -f /etc/cron.daily/upstart \
    && rm -f /etc/cron.daily/dpkg \
    && rm -f /etc/cron.daily/password \
    && rm -f /etc/cron.weekly/fstrim \
    && apt-get purge -y --auto-remove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

CMD ["runsv", "/etc/service/cron"]
```
The only difference between this container and the previous ones is that we install what's required for `cron` to work. We `chmod` a couple of files, remove some defaults, and then run `/etc/service/cron` under runit. Our `crontab` file is gracefully handled through a docker-compose volume declared in the compose file:
```yaml
- ./docker/scheduler/crontab:/etc/crontab
```
And, finally, our `docker/scheduler/crontab` is simply
```
* * * * * php /var/www/html/artisan schedule:run >> /dev/null 2>&1
```
This just runs `php artisan schedule:run` every minute, piping the output to `/dev/null`.
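For reference, the five crontab fields are minute, hour, day-of-month, month, and day-of-week. A hypothetical extra entry, say for restarting your queue workers nightly at 3am, would look like this alongside the scheduler line:

```
* * * * * php /var/www/html/artisan schedule:run >> /dev/null 2>&1
0 3 * * * php /var/www/html/artisan queue:restart >> /dev/null 2>&1
```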
That's all for now. I hope you've enjoyed the read!