
When I first started to learn Docker, I struggled to figure out where to start and what the benefits of such a technology were. There were a lot of keywords to learn (image, container, Dockerfile, docker-compose, networks, …), and all these keywords were driving me crazy, but after reading a lot of articles and watching many tutorials I was able to understand and connect everything together. In this article I will try to explain the main concepts that will help you learn Docker faster.


Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

Mmm, looking good.

Let's imagine that you are working on a simple web project on your localhost using a PHP environment. To run PHP on your server you need to install a web server (Nginx, Apache, …) and a PHP interpreter, and you may also need a database engine to store the website's data.

In the traditional way you would have to install each of these components yourself and keep an eye on each component's version to ensure that nothing breaks due to an update.

You have finished your fantastic project and now you want to deploy it to your production server, but first you have to duplicate your exact localhost environment on the production server by installing each component manually. Now your server is hosting your PHP project and everything looks fine, but then your website starts to grow and the traffic becomes too big for one server to handle. You now need to create another instance of your server behind a load balancer to split the traffic among your instances, and of course you need to install each component again on each instance. What if you need to update the PHP version? You will have to update it manually on each instance. What if you need to use another PHP version on the same server for another project?

Docker to the rescue

By using Docker containers we can duplicate our environment easily and create as many identical instances as we want.

We can update our environment on all servers by updating a single file.

We can use multiple versions of the same library on the same server without any problem.
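As a sketch of that last point: you could run two different PHP versions side by side simply by starting two containers from different image tags. The container names and host ports here are illustrative, and this assumes Docker is installed on the host:

```shell
# Run one project on PHP 7.4 and another on PHP 8.2, on the same host.
# Each container carries its own PHP version; only the host ports differ.
docker run -d --name legacy-site -p 8074:80 php:7.4-apache
docker run -d --name new-site    -p 8082:80 php:8.2-apache
```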

All that and much more makes Docker a great tool to create, deploy, and run applications using containers.

How does Docker work?

VMs include the application, the required libraries and binaries, and a full guest OS. Full virtualization is much heavier than containerization.

Containers include the application and all of its dependencies but share the host OS kernel with other containers, running as isolated processes in user space on the host OS. Containers require far fewer resources (for example, they don't need a full OS), they are easy to deploy, and they start fast. This allows you to achieve higher density, meaning you can run more services on the same hardware unit, thereby reducing costs.
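You can see the shared kernel for yourself: a container reports the host's kernel version, because there is no guest OS in between. A quick sketch, assuming Docker is installed:

```shell
# Both commands print the same kernel release: the container has its own
# filesystem (Alpine here) but shares the host's kernel.
uname -r
docker run --rm alpine uname -r
```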

VM vs Docker

How to use Docker?

The official node image is based on Debian, which uses apt as its package manager. That means if you want to install Python 3 in your environment, you install it by issuing apt install python3, so you need to include this command in your Dockerfile.
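You can verify the base image's distribution yourself with a quick check (assuming Docker is installed):

```shell
# Print the OS the official node image is built on; for node:10.15.0
# this is Debian, which is why apt is available inside the image.
docker run --rm node:10.15.0 cat /etc/os-release
```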

Let's assume that we are working on a project where we need Node.js and Python, with the following project structure:

- Dockerfile
- node-app.js
- package.json
- requirements.txt



The content of the Dockerfile will be as follows:

FROM node:10.15.0
WORKDIR /app
COPY . /app/
RUN yarn install
RUN apt update && apt install python3 -y && apt install python3-pip -y
RUN pip3 install -r requirements.txt
EXPOSE 3000
CMD ["node","node-app.js"]

The first command, FROM node:10.15.0, will fetch the official Node.js image from Docker Hub.

The next command, WORKDIR /app, will create a new directory called app inside the container and set it as the current working directory.

COPY . /app/ copies all the content of the directory where the Dockerfile exists into the Docker image.

RUN yarn install uses the package.json file of the Node.js project to install the dependencies it needs.

RUN apt update && apt install python3 -y && apt install python3-pip -y updates the apt cache, then installs python3 and pip3 inside the image.

RUN pip3 install -r requirements.txt installs all the dependencies needed by the Python app.

EXPOSE 3000 documents that our application listens on port 3000 inside the container. Note that EXPOSE does not publish the port by itself; that is done with the -p flag at run time.

CMD ["node","node-app.js"] is the command that will be executed when we run our image inside a container.

Now that we have figured out what each command does, it's time to create (build) our image.

Build Image

docker build -t node-python:1.0.0 . — that's it: Docker will download all the files and binaries needed and run all the instructions in the Dockerfile to produce the image. The -t node-python:1.0.0 part tags (names) the resulting image, and the dot at the end tells Docker that the Dockerfile is in the current directory.

List images

docker images

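Running docker images should now show our freshly built image. The IMAGE ID and SIZE below are placeholders; your exact values will differ:

```shell
$ docker images
REPOSITORY    TAG     IMAGE ID       CREATED          SIZE
node-python   1.0.0   <image-id>     2 minutes ago    <size>
```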

Run the image in a container

The general form of the command is docker run [OPTIONS] IMAGE [COMMAND] [ARG...].

In our case, to run the image we will use a simpler command:

docker run -p 6000:3000 node-python:1.0.0

-p 6000:3000 maps port 6000 of the host OS to port 3000 of the container. That means when we visit http://localhost:6000, the request will be passed to our app inside the container.
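With the container running, you can check the mapping from the host. This assumes curl is installed and that node-app.js responds on port 3000:

```shell
# The request hits host port 6000 and is forwarded
# to port 3000 inside the container.
curl http://localhost:6000
```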

We can also add the -d argument to run the container as a daemon in the background: docker run -p 6000:3000 -d node-python:1.0.0
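When the container runs in the background, a few standard commands help you keep track of it. The container ID here is a placeholder that you get from the docker ps output:

```shell
docker ps                   # list running containers and their IDs
docker logs <container-id>  # view the app's stdout/stderr
docker stop <container-id>  # stop the container gracefully
```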

That's it, we just containerized our application successfully. I hope that after this article you will be more comfortable with Docker and the concepts behind it.

In the next article I will explain more advanced topics in Docker and how to use docker-compose to connect multiple containers using Docker networks. Follow me to stay updated, and clap if you liked this article. Thanks for reading.

I am a highly motivated software engineer with a passion for web development. I have a keen interest in technology and problem solving.