https://aws.amazon.com/getting-started/container-microservices-tutorial/module-four/
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
using docker locally,
then at AWS:
- 1 ECS cluster with all the microservices inside, each with individual auto-scaling
- 1 Application Load Balancer in front, assigning traffic according to path
Playing with Docker in AWS
To try Docker, I created a simple EC2 server and installed the Docker packages:
# yum update
# yum install docker -y
# usermod -a -G docker ec2-user
# service docker start
# docker info
The commands above install Docker, add ec2-user to the docker group, start the Docker service and print out Docker information.
----------------------------------------------------
Managing the container
here are the basic commands for docker
!!! build
# docker build -t gab-test .
!!! run
# docker run gab-test
!!! run and map server port 80 to container port 80
# docker run -p 80:80 gab-test
!!! if you don't specify the server port, Docker will pick a random port from the server
# docker run -ti -p 45678 -p 45679 --name gab-test centos:latest bash
then you will need the docker port command to find out which dynamic ports it uses
# docker port gab-test
this is useful when the container has a fixed listen port but you don't want to conflict with ports on the server when running multiple containers, since each one gets a random port from the server.
but then you will need some service discovery to find out the ports and join the containers into a group so they work together as a cluster / pool
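A minimal sketch of looking the port up for scripting (32768 is just example output, yours will differ):
# docker port gab-test 45678
0.0.0.0:32768
or pull out just the host port with an inspect format string:
# docker inspect --format '{{(index (index .NetworkSettings.Ports "45678/tcp") 0).HostPort}}' gab-test
32768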
!!! exposing UDP ports
# docker run -p <server_port>:<container_port>/udp
the mapping is always <server_port>:<container_port>, so traffic hitting the server port is forwarded into the container
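For example, to publish UDP port 53 for a hypothetical DNS image (dns-image is just a placeholder name):
# docker run -p 53:53/udp dns-image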
!!! delete container
# docker rm gab-test
you can run a container and have it automatically deleted upon exit
# docker run --rm gab-test
!!! ssh into container
# docker exec -it gab-test /bin/bash
or
# docker exec -it gab-test bash
you can also run a container detached in the background
# docker run -d -ti gab-test bash
then you can connect back to it by
# docker attach <container_name>
--------------------------------------------
Managing the images
!!! save as image and name it "img" with "latest" tag
# docker commit gab-test img:latest
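you can then start a new container from the freshly committed image:
# docker run -ti img:latest bash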
!!! list images
# docker images
!!! list images with filter
# docker images --filter reference=gab-test
!!! list current running container
# docker ps
A container stops when its main process stops
example: docker run -ti centos:latest bash
bash becomes the main process,
so when you exit from bash, the container stops
use the command below to list all containers, including stopped ones
# docker ps -a
!!! clean up unused docker images and container
# docker system prune
!!! delete all unused images as well, not just dangling ones
# docker system prune -a
All containers start from an image
since I do not have my own image, I will be using a public image from the Docker repo
# docker pull centos
list the images to confirm the pull was successful
# docker images
--------------------------------------------------
Managing network & port
now we create our first container
the command below will create a container named "gab-test" + map port 80 on the server to port 80 in the container + use the image repo named "centos" with the "latest" tag
# docker create --name gab-test -p 80:80 centos:latest
find out the port used for certain container
# docker port gab-test
!!! you can link 2 containers together so one can connect directly to the other
first create the 1st container
# docker run -ti --name server centos:latest bash
create the 2nd container and link it back to the 1st container
# docker run -ti --link server --name client centos:latest bash
by using --link, the client can nc to the server container and pass data
at server container
# nc -lp 1234
at client container
# nc server 1234
but the link will break after the server is stopped and started.
this is because the server's IP gets changed
and the client's hosts file doesn't get updated with the new IP
!!! use a private network instead of legacy linking
you can create a private network for Docker
# docker network create <network_name>
then you can run containers inside that network
# docker run -it --net=<network_name> --name gab-test centos:latest
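A minimal sketch of the earlier nc test on such a network (gab-net is just an example name):
# docker network create gab-net
# docker run -ti --net=gab-net --name server centos:latest bash
# docker run -ti --net=gab-net --name client centos:latest bash
on a user-defined network Docker provides built-in DNS, so the client can reach the server by name even after restarts:
# nc server 1234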
----------------------------------------
images
list all the images
# docker images
image layers are actually shared between images, so the sum of the listed sizes does not equal the total space used on the server
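you can check the real disk usage with docker system df, which breaks down the space used by images, containers and volumes:
# docker system df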
once you have created an image, push it to a repo;
you can then use pull to run the image again later
images easily build up and consume space,
so use the commands below to clear images from the server
# docker rmi <image_name>:tag
# docker rmi <image_id>
---------------------------------------
Volume
there are 2 varieties
- Persistent (data remains even after the container stops)
- Ephemeral (data is lost once no container is using it)
these volumes are not part of the image, so using a persistent volume won't change your image
create a shared folder
# mkdir /opt/example
then create a container and bind the shared folder into it
# docker run -it -v /opt/example:/shared-folder --name gab-test centos:latest bash
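a quick sanity check that the bind mount works (the file name is just an example):
on the server
# touch /opt/example/hello.txt
inside the container
# ls /shared-folder
hello.txt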
you can also share data between 2 containers directly
at the 1st container
# docker run -ti -v /shared-data --name gab-test centos:latest bash
put some files into /shared-data
then create the 2nd container and attach it to the 1st container's volumes
# docker run -ti --volumes-from gab-test centos:latest bash
this attaches the volumes of the container named gab-test
the shared volume remains available even after the 1st container stops, as long as another container is still using it.
this is the nature of an ephemeral volume: the data exists as long as some container is using it,
but once all containers stop using it, it is gone
so you can create a 3rd container and attach it back to the 2nd shared container (see the sketch below),
and so on, to keep the data alive
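A minimal sketch of that chain, assuming the 2nd container was started with --name gab-test2:
# docker run -ti --volumes-from gab-test2 --name gab-test3 centos:latest bash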
------------------------------------------
Images
there is a repo maintained by Docker itself,
and from there you can use a command to search for images
# docker search ubuntu
this will list all ubuntu-related images
do notice the "Stars" and "Official" columns
when looking for reliable images
Stars = popularity, like "likes" / fame
Official = maintained directly by the OS distributor
for more info on an image, I suggest using a browser, as the page will show how to use the image
and what things to take note of
you have to log in to Docker if you want to push images back to the Docker repo
# docker login
-----------------------------------------------
Usage of Dockerfile
A Dockerfile is a small "program", a list of commands, used to build an image
below is the command to build with a Dockerfile
# docker build -t <image_name> .
Notice the " . " at the end of the command; it indicates the location of the Dockerfile
if the Dockerfile is at a different location
# docker build -t <image_name> /path/
The image is stored locally on your server; use the push command to push it to a public repo if needed.
Do take note: if you save a big file inside and commit it as an image,
the image size will be very big.
It is suggested to delete the file during the build process if it is no longer needed, before saving,
so the image will be smaller
Statements
FROM = which image to download and start from (must be the first command in your Dockerfile)
MAINTAINER = author of this Dockerfile (example: MAINTAINER Firstname Lastname <email> )
RUN = run a command line, wait for it to finish & save the result
ADD = add files from the server into the image. it can also be used to uncompress archives into it
Example: ADD project.tar.gz /data/
ADD script.sh /script.sh
ADD https://example.com/project.rpm /project/
ENV = set environment variables
Example: ENV parameter=example
EXPOSE = declare which ports the container listens on
VOLUME = define a shared volume
WORKDIR = default directory to use when the container starts
USER = which user the container will run as
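A minimal sketch of a Dockerfile pulling these statements together (the package, file names and port are just examples):
FROM centos:latest
MAINTAINER Firstname Lastname <email@example.com>
# install a tool, then clean the yum cache so the layer stays small
RUN yum install -y wget && yum clean all
# copy a script from the build context into the image
ADD script.sh /script.sh
ENV LISTEN_PORT=1234
EXPOSE 1234
VOLUME /shared-data
WORKDIR /data
USER nobody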
reference:
https://docs.docker.com/engine/reference/builder/
-----------------------------------------
Resource limiting
- can be used to schedule CPU time
- can be used to limit memory allocation
- limitations and quotas are inherited (meaning a container cannot escape its limit by starting more processes; everything it spawns stays within the same quota), as shown in the sketch below
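A minimal sketch of the docker run flags involved (the values are just examples; --cpus needs a fairly recent Docker, older versions use --cpu-shares instead):
# docker run -ti --memory 512m --cpus 1 --name gab-test centos:latest bash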
Note: although I can't think of a situation where I need to limit my container resources, maybe it is useful if you have a limited server? But I'm not sure I would want to do that in a Production environment.
-------------------------------------------
Managing service
use --restart to automatically restart the container if it dies
# docker run -p 8080:8080 --restart=always --name gab-test centos:latest
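other restart policies exist besides always; for example, to retry at most 5 times when the container exits with an error:
# docker run -p 8080:8080 --restart=on-failure:5 --name gab-test centos:latest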
-------------------------------------------
Save & load Images
you can save images as a tar archive for backup purposes or to transfer to your customer
# docker save -o my-images.tar.gz gab-test:latest centos:latest
this will save the gab-test image + the centos image together in 1 file
after saving, even if you delete the images from your local machine,
you can always load them back from this archive
# docker load -i my-images.tar.gz
this command will load the 2 images and store them locally
useful for moving containers between servers
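a handy variation, assuming you have ssh access to the target server (user@target-server is just a placeholder): stream the image across without an intermediate file
# docker save gab-test:latest | ssh user@target-server docker load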
-------------------------------------------
Playing with AWS ECS
create a new repo named gab-test
click the "View push commands" button
and follow the steps shown to push the images to the gab-test repo
once you have built your image locally,
you need to tag it with the repo URI before pushing
# docker tag gab-test:latest 922322261069.dkr.ecr.us-east-1.amazonaws.com/gab-test:latest
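A sketch of the remaining steps, assuming the aws CLI is configured (newer CLI versions use get-login-password; older ones used aws ecr get-login):
# aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 922322261069.dkr.ecr.us-east-1.amazonaws.com
# docker push 922322261069.dkr.ecr.us-east-1.amazonaws.com/gab-test:latest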