Dockerfile: a series of instructions used to create an image. Below is an explanation of some of the basic commands you can use inside:
# an existing docker image that contains some of the functionality you need (like node.js, or java dropwizard, etc):
FROM node:latest
# the image author (note: newer Docker versions deprecate MAINTAINER in favor of LABEL):
MAINTAINER your name <firstname.lastname@example.org>
# any commands you need to run as part of your image build (each run will create an image that is cacheable)
RUN apt-get update
RUN some other command
# notice how we are passing -y to avoid the y/n question at install time:
RUN apt-get install -y some_package
# example of creating a config file via echo; this also makes the mongodb inside the image listen for external connections:
RUN echo "bind_ip = 0.0.0.0" >> /etc/mongodb.conf
# Including files from our local host into the image:
ADD some_local_file_path some_path_inside_docker_img
# important! this is how you expose ports from inside the image:
EXPOSE 27017
# this command runs after the image starts (it is the default, and can be overwritten on the command line):
CMD ["mongod"]
ENTRYPOINT can be used instead of CMD; the difference is that ENTRYPOINT will always execute, whereas CMD can be overwritten by the arguments passed on the command line
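A quick sketch of that difference (the image name my_img is just a placeholder):

```dockerfile
# with this pair in the Dockerfile:
ENTRYPOINT ["echo", "hello"]
CMD ["world"]
# docker run my_img         -> prints "hello world"
# docker run my_img docker  -> prints "hello docker" (the argument replaces CMD, but ENTRYPOINT still runs)
```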
Once your Dockerfile is ready, you are ready to build your image:
docker build -t your_docker_namespace/some_tag:latest .
The . indicates you want to use the local folder to run your build. This will execute each command in that Dockerfile at build time.
What build does is execute the local Dockerfile instructions to produce an image, marked with the tag name provided (some_tag:latest), so you can run it locally afterwards.
Once you build successfully, you are ready to push to the docker hub repo, so you can download and use this new image from anywhere:
docker tag your_new_img_tag your_dockerhub_namespace/your_img_name
docker push your_dockerhub_namespace/your_img_name
# pulling images from the docker hub to your local env:
docker pull postgres:latest
You will notice how several "things" are downloaded. This is because images are composed of several layers, some of them shareable between images. The idea is to enable better caching and reuse.
By default, you are pulling from the dockerhub repo.
# running docker images:
(remember to build first, if you want to run your local version)
docker run docker_img_name /path/to/command command_args
docker run --name dockerimgname -it -v /src:/somedirinsideimg/src -p 9000:9000 docker_img_name
# running the ubuntu image locally, and then interact with it (-it) by opening a bash session to it:
docker run -it ubuntu /bin/bash
# exposing ports in a running docker container:
docker run -d -p 8000:80 --name some_name atbaker/nginx-example
Notes: option -d is so we run in detached mode (in the background). For the ports, it takes port 80 in the docker container, and makes it accessible on port 8000 on the host machine. The --name option is to avoid the default name docker gives to the running containers (you can pass any string to it). To get the actual ip address you need to hit on your machine (in the browser, for example), you need to run:
docker-machine ip
(that applies when using Docker Machine / Docker Toolbox; on a native Linux install the ports are published on localhost)
So the actual url you will be looking at (for the example above) would be something like:
http://192.168.99.100:8000
# tailing logs on a running docker container:
docker logs -f some_name
# see what has changed on a docker container since we started it:
docker diff some_name
# check the history of commands run to produce a docker image:
docker history docker_img_name
# inspect low level information about our container:
docker inspect some_name
# live resource usage statistics (similar to top) for our container:
docker stats some_name
# remove all docker containers (running or not):
docker rm --force `docker ps -qa`
# creating new docker images:
pull and run a base docker image as instructed above, and then go ahead and go inside the image:
docker run -it image_name_here bash
inside the image, do whatever modifications you need to do for the base image, then you can commit your changes as follows:
docker commit -m "Some description of the changes here" docker_id_here docker_tag_here
the docker tag at the end is just any descriptor of your new image version. To push the changes to dockerhub you need to login first, and then push:
docker tag docker_tag_here your_dockerhub_namespace/name_of_docker_repository
docker push your_dockerhub_namespace/name_of_docker_repository
Mounting external volumes inside docker images
docker run \
-ti -v `pwd`/testdir:/root \
ubuntu /bin/bash
we are running an image, and attaching the testdir folder under our current directory (via pwd) to the /root folder inside the docker image,
so whatever files we create inside /root in that image will also show up in testdir on the host.
Example of how to persist data between stops and starts of a docker image:
docker run -it --name somedockerimgname -v /root ubuntu /bin/bash
so now, when you stop and restart somedockerimgname, the files you created inside the /root folder will still be there. Destroying the container will still remove the data though!
A very simple example of running a local node.js based app via a docker file:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
FROM takes the latest node image
RUN makes a dir inside that image
WORKDIR makes that dir the current directory
COPY copies your local files into the image
EXPOSE tells what ports will be open
CMD tells what will run once all is set up
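Since COPY . brings the whole local directory into the image, you may also want a .dockerignore file next to the Dockerfile to keep the build context small (a sketch; the entries depend on your project):

```
node_modules
npm-debug.log
.git
```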
docker-compose
a wrapper for docker commands, and also a way to keep all your docker-run services in one place / file
Sample of a simple docker-compose.yml setup:
links will tell where to connect to the other related services
volumes are the local mounted data on the services
notice the dot in the "build:" part; that tells docker where to look for the Dockerfile with instructions on what to do for this service (in this case the local directory)
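A minimal sketch of a docker-compose.yml along those lines (service names, image, ports, and paths are placeholders):

```yaml
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: mongo
  volumes:
    - ./data:/data/db
```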
note: in development mode, you may want to mount your node app directory as a volume, instead of using copy / add, so when you make changes you don't have to rebuild and restart manually:
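A hedged sketch of that, assuming the app lives in /usr/src/app inside the image (as in the node example above):

```yaml
web:
  build: .
  volumes:
    - .:/usr/src/app
```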
Differences between docker and Vagrant
Vagrant is meant to spawn and manage entire Virtual Machines. Docker is more a series of files and executables packed into an image, so when programs run, they are directed to that set of files. When initialized, we are not booting a full-fledged VM, just the set of files needed to run as one.
Docker’s goal is to run the fewest services per image, so you may need multiple to run your app.
The advantage of docker is that it gives you more flexibility, as you can swap services as modules more easily. Also, it requires fewer resources than running full blown Virtual Machines.
Docker also has its own internal network service. You can control the ports that the outside world uses to communicate with your image.