https://aws.amazon.com/getting-started/container-microservices-tutorial/module-four/
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
using docker
then at AWS: 1 cluster with all the microservices inside, each with its own auto scaling
then 1 application load balancer, assigning traffic according to the request path
Playing with Docker in AWS
To play with Docker, I created a simple EC2 server and installed the Docker bits:
# yum update
# yum install docker -y
# usermod -a -G docker ec2-user
# service docker start
# docker info
The commands above update the system, install Docker, add ec2-user into the docker group, start the Docker service and print out the Docker information.
----------------------------------------------------
Managing the container
here are the basic Docker commands
!!! build
# docker build -t gab-test .
!!! run
# docker run gab-test
!!! run and link server port 80 to container port 80
# docker run -p 80:80 gab-test
!!! if you don't specify the host port, it will use a random port from the server
# docker run -ti -p 45678 -p 45679 --name gab-test centos:latest bash
then you will need the docker command below to find out which dynamic port it uses
# docker port gab-test
this is useful when the container has a fixed listen port but you run multiple containers on the same server and do not want to conflict on the server ports, since each one gets a random port from the server.
but then you will need some service discovery to find out the ports and join the containers into a group so they work together as a cluster / pool
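for example (a quick sketch; the output is only illustrative since the port is random), to get just the host port mapped to container port 45678:
# docker port gab-test 45678
0.0.0.0:32768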
!!! exposing UDP ports
# docker run -p <server_port>:<container_port>/udp
the port mapping is written <server_port>:<container_port>, forwarding traffic from the server (outside) into the container (inside)
!!! delete container
# docker rm gab-test
you can also run a container and have it auto deleted upon exit
# docker run --rm gab-test
!!! get a shell inside a running container
# docker exec -it gab-test /bin/bash
or
# docker exec -it gab-test bash
you can also run a container detached in the background
# docker run -tid gab-test bash
then you can connect back to it by
# docker attach <container_name>
--------------------------------------------
Managing the images
!!! save as image and name it "img" with "latest" tag
# docker commit gab-test img:latest
!!! list images
# docker images
!!! list images with filter
# docker images --filter reference=gab-test
!!! list current running container
# docker ps
A container stops when its main process stops.
example: docker run -ti centos:latest bash
here bash is the main process, so when you exit from bash, the container stops.
use the command below to list all containers, including stopped ones
# docker ps -a
!!! clean up unused docker images and container
# docker system prune
!!! delete everything unused, including all unused images
# docker system prune -a
All containers start from an image.
since I do not have my own image yet, I will be using a public image from the Docker repo
# docker pull centos
list the images to confirm the pull was successful
# docker images
--------------------------------------------------
Managing network & port
now we create our first container.
the command below creates a container named "gab-test", maps server port 80 to container port 80, and uses the image "centos" with the tag "latest"
# docker create --name gab-test -p 80:80 centos:latest
find out the port mapping for a certain container
# docker port gab-test
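note that docker create only creates the container without starting it; you still need to start it (same container name as above):
# docker start gab-test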
!!! you can link 2 containers together so they can connect to each other directly
first create 1st container
# docker run -ti --name server centos:latest bash
create 2nd container and link back to 1st container
# docker run -ti --link server --name client centos:latest bash
with the link in place, the client can nc to the server by name and pass data
at server container
# nc -lp 1234
at client container
# nc server 1234
but the link will break after the server container is stopped and started again.
this is because the server IP changes
and the client's hosts file does not get updated with the new IP
!!! use a private network instead of legacy linking
you can create a private network for docker
# docker network create <network_name>
then you can run containers inside that network
# docker run -it --net=<network_name> --name gab-test centos:latest
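a quick sketch (network and container names here are just for illustration): containers on the same user-defined network can reach each other by name, so the nc test above keeps working even after a restart
# docker network create gab-net
# docker run -ti --net=gab-net --name server centos:latest bash
at the server container
# nc -lp 1234
# docker run -ti --net=gab-net --name client centos:latest bash
at the client container
# nc server 1234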
----------------------------------------
Images
list all the images
# docker images
the size shown for images is shared between layers, so the sum of the sizes does not equal the total space used on the server
once you have created an image, you can push it to a repo
and pull it later to run the image again
images build up easily and consume space
so you can use the commands below to clear images on the server
# docker rmi <image_name>:tag
# docker rmi <image_id>
---------------------------------------
Volume
there are 2 varieties
- persistent (data remains even after the container stops)
- ephemeral (data is lost when no container is using it)
these volumes are not part of the image, so using a persistent volume won't change your image
create shared folder
# mkdir /opt/example
then create container and bind the shared folder into it
# docker run -it -v /opt/example:/shared-folder --name gab-test centos:latest bash
you can also share data between 2 containers directly
at the 1st container
# docker run -ti -v /shared-data --name gab-test centos:latest bash
put some file into the /shared-data
then at the 2nd container, create and link to 1st container
# docker run -ti --volumes-from gab-test centos:latest bash
this links to the volume of the container named gab-test
a volume shared directly between containers like this remains available even after the 1st container stops.
this is an example of an ephemeral volume: the data exists as long as some container is using it,
but once all containers stop using it, it is gone
so you can create a 3rd container and link it back to the 2nd shared container
and so on
------------------------------------------
Images
there is a repo maintained by Docker itself (Docker Hub)
and from there, you can use a command to search for images
# docker search ubuntu
this will list all ubuntu related images
do notice the "Stars" and "Official" columns
when picking reliable images
Stars = popularity, similar to likes / fame
Official = published directly by the OS / software distributor
for more info on an image, it is better to use the browser, as the page shows how to use the image
and what things to take note of
you have to log in to Docker if you want to push images back to the Docker repo
# docker login
-----------------------------------------------
Usage of Dockerfile
A Dockerfile is a small "program" or "list of commands" used to build an image
below is the command to build with a Dockerfile
# docker build -t <image_name> .
Notice the " . " at the end of the command; it indicates the location of the Dockerfile (the build context)
if the Dockerfile is at a different location
# docker build -t <image_name> /path/
The image is stored locally on your server; use the push command to push it to a public repo if needed.
do take note: if you put a big file inside and save it as an image,
the image size will be very big.
it is suggested to delete files that are no longer needed during the build process before saving,
so the image ends up smaller
Statements
FROM = which image to download and start from (must be the first command in your Dockerfile)
MAINTAINER = Author of this Dockerfile (example: MAINTAINER Firstname Lastname <email> )
RUN = run a command line, wait for it to finish & save the result
ADD = add a file from the server into the image. It can also be used to uncompress an archive into it
Example: ADD project.tar.gz /data/
ADD script.sh /script.sh
ADD https://example.com/project.rpm /project/
ENV = set environment variables
Example: ENV parameter=example
EXPOSE = declare which container port the image listens on
VOLUME = define shared volume
WORKDIR = default working directory to use whenever the container starts
USER = which user the container will run as
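putting the statements together, a minimal example Dockerfile could look like the below (a sketch only; the package, file names and values are for illustration and assume a project.tar.gz exists next to the Dockerfile)
FROM centos:latest
MAINTAINER Firstname Lastname <email@example.com>
# install a web server and clean the yum cache to keep the image small
RUN yum install -y httpd && yum clean all
# ADD unpacks the archive into /data/
ADD project.tar.gz /data/
ENV APP_ENV=example
EXPOSE 80
VOLUME /data
WORKDIR /data
USER apache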
reference:
https://docs.docker.com/engine/reference/builder/
-----------------------------------------
Resource limiting
- can be used to schedule CPU time
- can be used to limit memory allocation
- limits and quotas are inherited ( meaning a container cannot escape the limit by starting more processes; it can only play around within that quota )
Note: although I can't think of a situation where I need to limit my container resources, maybe it is useful if you have a limited server? I am not sure I would want to do that in a Production environment though.
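for reference, a quick sketch of how the limits are set at run time (values are just for illustration; --cpus needs Docker 1.13 or newer, older versions use --cpu-shares instead)
# docker run -d --cpus="0.5" --memory="256m" --name gab-test centos:latest sleep infinity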
-------------------------------------------
Managing service
use --restart to restart the container if it dies
# docker run -p 8080:8080 --restart=always --name gab-test centos:latest
-------------------------------------------
Save & load Images
you can save images as a tar archive for backup purposes or to transfer to your customer
# docker save -o my-images.tar.gz gab-test:latest centos:latest
this will save the gab-test image + the centos image together in 1 file
after saving, even if you delete the images from your local machine,
you can always load them back from this archive
# docker load -i my-images.tar.gz
this command will load the 2 images back into the local store
useful for moving images between servers
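for example (hostname is a placeholder), moving the archive to another server and loading it there:
# scp my-images.tar.gz ec2-user@other-server:/tmp/
# ssh ec2-user@other-server "docker load -i /tmp/my-images.tar.gz"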
-------------------------------------------
Playing with AWS ECS
create a new ECR repo named gab-test
click "View push commands"
and follow the steps to push the image to the gab-test repo.
once you have built your image locally
you need to tag the image with the repo URI before pushing
docker tag gab-test:latest 922322261069.dkr.ecr.us-east-1.amazonaws.com/gab-test:latest
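the full push flow usually looks like the below (a sketch based on the account/region in the tag command above; the get-login command is for AWS CLI v1)
# $(aws ecr get-login --no-include-email --region us-east-1)
# docker build -t gab-test .
# docker tag gab-test:latest 922322261069.dkr.ecr.us-east-1.amazonaws.com/gab-test:latest
# docker push 922322261069.dkr.ecr.us-east-1.amazonaws.com/gab-test:latest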
Monday, July 15, 2019
Jboss 7 EAP cluster in AWS using RHEL 8
Environment
server: Red Hat Enterprise Linux release 8.0 (Ootpa)
Jboss: 7.2.2
Java: jdk-8u211-linux-x64
AWS services used: EC2, S3, EFS
I downloaded all the JBoss and Java installation files to /home/ec2-user/jboss
Install Java JDK
# yum localinstall jdk-8u211-linux-x64.rpm -y
Setup Jboss
I setup everything inside /opt
# cd /opt/
# unzip /home/ec2-user/jboss/jboss-eap-7.2.0.zip
go to the standalone configuration folder and copy in the needed config file
# cd /opt/jboss-eap-7.2/standalone/configuration
# cp /opt/jboss-eap-7.2/docs/examples/configs/standalone-ec2-ha.xml .
go to bin/init.d and edit the file below so it uses the new config file
# cd /opt/jboss-eap-7.2/bin/init.d
# vim jboss-eap.conf
below is the config; the values I changed are the uncommented lines
===========================START===============================
[root@ip-172-31-46-162 init.d]# cat jboss-eap.conf
# General configuration for the init.d scripts,
# not necessarily for JBoss EAP itself.
# default location: /etc/default/jboss-eap
## Location of JDK
# JAVA_HOME="/usr/lib/jvm/default-java"
JAVA_HOME="/usr/java/default"
## Location of JBoss EAP
# JBOSS_HOME="/opt/jboss-eap"
JBOSS_HOME="/opt/jboss-eap-7.2"
## The username who should own the process.
# JBOSS_USER=jboss-eap
JBOSS_USER=jboss
## The mode JBoss EAP should start, standalone or domain
JBOSS_MODE=standalone
## Configuration for standalone mode
JBOSS_CONFIG=standalone-ec2-ha.xml
## Configuration for domain mode
# JBOSS_DOMAIN_CONFIG=domain.xml
# JBOSS_HOST_CONFIG=host-master.xml
## The amount of time to wait for startup
# STARTUP_WAIT=60
## The amount of time to wait for shutdown
# SHUTDOWN_WAIT=60
## Location to keep the console log
# JBOSS_CONSOLE_LOG="/var/log/jboss-eap/console.log"
JBOSS_CONSOLE_LOG="/opt/jboss-eap-7.2/standalone/log/console.log"
## Additionals args to include in startup
# JBOSS_OPTS="--admin-only -b 127.0.0.1"
===========================END=================================
edit this file so it uses the config we just edited
# vim jboss-eap-rhel.sh
find and edit this part (the JBOSS_CONF path)
============================START================================
# Load JBoss EAP init.d configuration.
if [ -z "$JBOSS_CONF" ]; then
JBOSS_CONF="/opt/jboss-eap-7.2/bin/init.d/jboss-eap.conf"
fi
[ -r "$JBOSS_CONF" ] && . "${JBOSS_CONF}"
============================END================================
now edit standalone.conf to input the S3 bucket details
# cd /opt/jboss-eap-7.2/bin
# vim standalone.conf
add the below into it
=============================START===============================
ACCESS_KEY_ID=YOUR_ACCESS
SECRET_ACCESS_KEY=YOUR_SECRET
S3_PING_BUCKET=gab-jboss
NODE_NAME=`hostname`
INTERNAL_IP_ADDRESS=`ip addr show | grep eth0 -A 2 | head -n 3 | tail -n 1 | awk '{ print $2 }' | sed "s-/24--g" | cut -d'/' -f1`
=============================END==================================
then find and edit this
=============================START================================
JAVA_OPTS="-Xms1303m -Xmx1303m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.jgroups.s3_ping.access_key='$ACCESS_KEY_ID' -Djboss.jgroups.s3_ping.secret_access_key='$SECRET_ACCESS_KEY' -Djboss.jgroups.s3_ping.bucket='$S3_PING_BUCKET' -Djboss.jvmRoute=$NODE_NAME"
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=$INTERNAL_IP_ADDRESS -Djboss.bind.address.private=$INTERNAL_IP_ADDRESS"
else
echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi
=============================END===================================
Setup S3 bucket
go to S3 and create a bucket named gab-jboss
if you use a different bucket name, do remember to update your standalone.conf so it points to the correct bucket
Test the cluster
download this war file if you have a Red Hat account
https://access.redhat.com/solutions/46373
deploy it, then check the log to see if the cluster forms, and check the S3 bucket to see if a file gets created there
PS: it won't form a cluster if no deployed war file is marked as distributable
Auto Scale
The configuration above supports auto scaling, meaning a new instance will join the cluster upon launching.
remove the testing war file and save the server as an AMI, then configure your auto scaling group based on this AMI
the node name is configured to follow the server hostname, which I set in standalone.conf
Centralize log files
for this, I am using EFS storage
follow the AWS guide to create a new file system and install the NFS client on your RHEL server
# yum install -y nfs-utils
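then mount the EFS file system to the log location (a sketch; the file system ID and mount point are placeholders, take the exact DNS name from the EFS console)
# mkdir -p /mnt/jboss-logs
# mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/jboss-logs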
Friday, March 9, 2018
apache mod_rewrite
^ signifies the start of an URL, under the current directory. This directory is whatever directory the .htaccess file is in. You'll start almost all matches with a caret.
$ signifies the end of the string to be matched. You should add this in to stop your rules matching the first part of longer URLs.
example
RewriteCond %{QUERY_STRING} ^resid=ID
RewriteRule ^/eid? http://store.datascrip.com/? [R=301,L]
Force SSL
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI}
Redirect everything to a new URL
RewriteRule /.* http://www.new-domain.com/ [R]
redirect a certain query string match + sub path to a new URL
RewriteCond %{QUERY_STRING} ^resid=ID$
RewriteRule ^/eid? http://new.com/ [R,L]
this will redirect testing.com/eid?resid=ID to http://new.com
the $ symbol indicates the end of the string
RewriteCond %{HTTP_HOST} ^gab.com
RewriteRule "^(.*)/p/7074689$" "https://gab.com/p/7075394" [R=301,L]
this will rewrite requests that go to gab.com/* and end with that path, redirecting them to a different place
Block all access to /test/* except /test/gab/*
RewriteCond %{HTTP_HOST} ^abc.com
RewriteCond %{REQUEST_URI} ^/test
RewriteRule "!^/test/gab(.*)" "-" [F]
# rewrite rule based on the akamai country code for maintenance pages
RewriteCond %{HTTP:X-Akamai-Edgescape} code=MY
RewriteCond %{REQUEST_URI} !^/maintenance
RewriteRule ^(.*)$ /maintenance/ES/maintenance.html [R=301,L]
https://www.cheatography.com/davechild/cheat-sheets/mod-rewrite/
Friday, October 20, 2017
nagios passive check + custom script at remote host
This is the 2nd post for the custom script.
In my new environment I have limited access, yet I still want to pass some server information back to Nagios for monitoring and alerting, without compromising security.
Nagios server || Firewall || production server
the only port open between these 2 servers is the SSH port, so I will need to utilize it to send data from my server back to the Nagios server.
This post contains all my notes on my custom scripts to check disk space, memory, CPU load and whether certain services are running / stopped.
Setup
I assume the Nagios server is already set up and running fine. Otherwise, please check the URL below for how to set up Nagios
http://gab-tech.blogspot.my/2012/08/setup-nagios.html
although the post is kinda old, the setup should be the same.
for this post, i am using Nagios Core 4.2.4
Now you need to create the entries on the Nagios server
you can edit your current config or create a new config.
for mine, I create a new config for every project group for easier management
# vim Hybris.cfg
define hostgroup{
hostgroup_name HYBRIS-DEV
alias HYBRIS-DEV
members HYBRIS-APP-D01
}
define host{
use linux-server
host_name HYBRIS-APP-D01
alias HYBRIS-APP-D01
address HYBRIS-APP-D01.gab.com
notification_interval 0
}
define service{
use local-service
host_name HYBRIS-APP-D01
service_description /home
check_command check_log
notifications_enabled 1
notification_interval 0
passive_checks_enabled 1
}
Dummy script
from the Nagios setup above, you can see the check_command I use points to check_log
there is no plugin called check_log actually; it is just a dummy script to satisfy Nagios, because if I don't set a check_command, Nagios will give an error.
my custom scripts are on a different server.
open and edit this file
# vim command.cfg
put this into it
# 'fake command' command definition
define command{
command_name check_log
command_line /bin/bash /usr/local/nagios/script/check_passive
}
then in the nagios folder create a script directory
and create check_passive with 770 permission
put this into it
#!/bin/sh
echo "please disable active check and use passive"
exit 1
restart nagios server
# /etc/init.d/nagios restart
you can run the configtest to check the configuration for errors before restarting
# /etc/init.d/nagios configtest
Manual push result to Nagios from remote host
From the Nagios documentation, we can use this command format to push a result into Nagios:
[<timestamp>] PROCESS_SERVICE_CHECK_RESULT;<host_name>;<svc_description>;<return_code>;<plugin_output>
Example of mine:
echo "[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;GAB-APP-P01;/home;1;test output" >> nagios.cmd
host_name = GAB-APP-P01
svc_description = /home
return_code = 1
plugin_output = test output
using this command, we can manually push the results from our custom scripts back to Nagios.
to test if it is working, you can run this command from the remote host
ssh -t nagios@nagios_server_IP "
cd /usr/local/nagios/var/rw
echo '[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;GAB-APP-D01;/home;1;test' >> nagios.cmd"
you should be able to see the result in your nagios server
Setup Remote host script
because I don't want all the checks in 1 script, I separated them out into a few scripts:
1. check storage script
2. check cpu script
3. check memory script
4. check service running script
then, to avoid duplicating the code that pushes data back to the Nagios server, I separated out another script purely for sending data back to Nagios
5. push data to nagios server script
NOTE: For security reasons
I am not going to use root to push data back to Nagios; I created a user called nagios.
then I generated an SSH key for the nagios user and put the public key on the Nagios server, so every time it pushes data back to the Nagios server it can skip the password authentication part.
For how to set up the SSH key, please refer to the link below
http://gab-tech.blogspot.my/2011/03/incremental-backup.html
here are the example scripts I use on the remote host
PS: in the nagios user's home dir, I created a script directory and store all my scripts there
5. Push data to nagios server script
edit the NAGIOS_SERVER_IP and the host name (GAB-APP-D01) to suit your server
---------- nagios.sh ----------
#!/bin/bash
ssh -t nagios@NAGIOS_SERVER_IP "
cd /usr/local/nagios/var/rw
echo '[`date +%s`] PROCESS_SERVICE_CHECK_RESULT;GAB-APP-D01;$1;$2;$3' >> nagios.cmd"
--------- END ----------
1. Check Storage Script
In order to avoid repeatedly issuing the df -h command for each check, I set a cronjob to record the df -h result to a file
# record every 5 minute to df-result
*/5 * * * * df -h > /home/nagios/script/df-result
---------- check_storage.sh ----------
#!/bin/bash
# all script located here
cd /home/nagios/script
# delay 30 sec before starting the check so it won't clash with the cronjob writing the result
sleep 30s
store1="/"
result1=$(grep -w "/" df-result | awk '{print $4}')
status1=$(bash status.sh $result1)
/bin/bash nagios.sh $store1 $status1 $result1
store2="/boot"
result2=$(grep -w "/boot" df-result | awk '{print $5}')
status2=$(bash status.sh $result2)
/bin/bash nagios.sh $store2 $status2 $result2
store3="/home"
result3=$(grep -w "/home" df-result | awk '{print $4}')
status3=$(bash status.sh $result3)
/bin/bash nagios.sh $store3 $status3 $result3
---------- END ----------
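the scripts above call status.sh, which is not shown in this note. Below is a minimal sketch of what it could look like, assuming it receives a usage percentage like 85% and prints a Nagios return code:
---------- status.sh (sketch) ----------
#!/bin/bash
# strip a trailing % sign then map the value to a nagios return code
value=${1%\%}
if [ "$value" -lt 80 ]; then
    echo 0    # OK
elif [ "$value" -lt 90 ]; then
    echo 1    # WARNING
else
    echo 2    # CRITICAL
fi
---------- END ----------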
2. check cpu script
---------- cpu_load.sh ----------
#!/bin/bash
sar=$(sar 1 1 | tail -n 1 | awk '{print $8}')
load=`echo "100.00-$sar" | bc`
if [[ $load == .* ]]
then load=$(echo "0$load")
fi
if (( $(echo "$load < 80" | bc -l) )); then
status=0
elif (( $(echo "$load > 90" |bc -l) )); then
status=2
elif (( $(echo "$load > 80" | bc -l) )); then
status=1
else
status=3
fi
load=$(echo $load%)
cd /home/nagios/script
/bin/bash nagios.sh cpu $status $load
---------- END ----------
3. check memory script
This check memory script is only for redhat/centos 6 and above
---------- memory_V6.sh ----------
#!/bin/bash
total=$(free -m | grep "Mem:" | awk '{print $2}')
used=$(free -m | grep "buffers/cache" | awk '{print $3}')
#echo $total
#echo $used
percentage100=$[$used*100]
percentage=$[percentage100/$total]
result=$(echo $percentage%)
#echo $result
cd /home/nagios/script
status=$(bash status.sh $percentage)
/bin/bash nagios.sh memory $status $result
---------- END ----------
4. check service running script
---------- hybris_service.sh ----------
#!/bin/bash
sleep 15s
cd /var/log/nagios/script
HYBRUNNING=`ps auxwww | grep hybris | grep "jmxremote" | grep -v grep | wc -l`
if [ ${HYBRUNNING} -ne 0 ]; then
result=running
status=0
else
result=stop
status=2
fi
/bin/bash nagios.sh Hybris-service $status $result
---------- END ----------
CRONJOB
set cronjobs to run the scripts (every 5 minutes for the storage and service checks, every minute for memory and CPU)
*/5 * * * * /home/nagios/script/check_storage.sh > /dev/null 2>&1
*/5 * * * * /home/nagios/script/hybris_service.sh > /dev/null 2>&1
*/1 * * * * /home/nagios/script/memory_V6.sh > /dev/null 2>&1
*/1 * * * * /home/nagios/script/cpu_load.sh > /dev/null 2>&1
reference:
https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/passivechecks.html
https://somoit.net/nagios/nagios-using-passive-checks-without-agent
Thursday, August 3, 2017
7 layer model
Layer 1: Physical The Physical layer consists of the physical media and dumb devices that make up the infrastructure of our networks. This pertains to the cabling and connections such as Category 5e and RJ-45 connectors. Note that this layer also includes light and rays, which pertain to media such as fiber optics and microwave transmission equipment. Attack considerations are aligned with the physical security of site resources. Although not flashy, physical security still bears much fruit in penetration (pen) testing and real-world scenarios.
Layer 2: Data Link The Data Link layer works to ensure that the data it transfers is free of errors. At this layer, data is contained in frames. Functions such as media access control and link establishment occur at this layer. This layer encompasses basic protocols such as 802.3 for Ethernet and 802.11 for Wi-Fi.
Layer 3: Network The Network layer determines the path of data packets based on different factors as defined by the protocol used. At this layer we see IP addressing for routing of data packets. This layer also includes routing protocols such as the Routing Information Protocol (RIP) and the Interior Gateway Routing Protocol (IGRP). This is the know-where-to-go layer.
Layer 4: Transport The Transport layer ensures the transport or sending of data is successful. This function can include error-checking operations as well as working to keep data messages in sequence. At this layer we find the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
Layer 5: Session The Session layer identifies established system sessions between different network entities. When you access a system remotely, for example, you are creating a session between your computer and the remote system. The Session layer monitors and controls such connections, allowing multiple, separate connections to different resources. Common use includes NetBIOS and RPC.
Layer 6: Presentation The Presentation layer provides a translation of data that is understandable by the next receiving layer. Traffic flow is presented in a format that can be consumed by the receiver and can optionally be encrypted with protocols such as Secure Sockets Layer (SSL).
Layer 7: Application The Application layer functions as a user platform in which the user and the software processes within the system can operate and access network resources. Applications and software suites that we use on a daily basis are under this layer. Common examples include protocols we interact with on a daily basis, such as FTP and HTTP.
Friday, July 28, 2017
jboss eap 7 standalone setup Database
This note is an example guide for adding MySQL, Oracle, MariaDB and MS SQL Server drivers to a JBoss EAP 7 standalone setup.
MySql
1. download Mysql jdbc from this URL
https://dev.mysql.com/downloads/connector/j/
2. then at jboss directory, create this path
<jboss>/modules/com/mysql/main
3. upload the jdbc into the main directory and create module.xml.
copy and paste below into module.xml
change the resource-root path below to match your driver file name
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="com.mysql">
<resources>
<resource-root path="mysql-connector-java-5.1.43-bin.jar"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
</module>
4. change permission of the newly created file and directory to jboss
5. start jboss service
6. go to jboss bin directory and connect to jboss-cli
# ./jboss-cli.sh -c --controller=<server-IP>
7. input this to add Mysql driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
8. to add the database connection details, I use the admin page as it is much easier
in a browser, access your admin page at <server_ip>:9990
then follow the steps below (originally shown as screenshots)
go to Configuration > Subsystems > Datasources > Non-Xa
and click add
Choose MySQL and click Next
Enter your JNDI name
go to Detected Driver and choose mysql
edit your connection URL according to your DB, with the username and password
confirm the details are correct and click Finish
Restart your jboss,
then go back to admin > configuration again, click your newly created MySQL datasource and click test connection
you are done once the test is successful
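if you prefer the CLI over the admin console, something like the below (a sketch; the datasource name, host, DB name and credentials are placeholders) should create the same non-XA datasource from jboss-cli:
/subsystem=datasources/data-source=MysqlDS:add(jndi-name=java:/MysqlDS,driver-name=mysql,connection-url=jdbc:mysql://DB_HOST:3306/DB_NAME,user-name=DB_USER,password=DB_PASS)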
Oracle
1. download your Oracle JDBC driver, choosing the one that matches your DB version
2. create path for module/com/oracle/main
3. upload the driver to main folder and create module.xml
copy and paste below into module.xml
change the resource-root path below to match your driver file name
<module xmlns="urn:jboss:module:1.1" name="com.oracle">
<resources>
<resource-root path="ojdbc6.jar"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
</module>
4. change ownership to jboss for newly create dir and file
# chown -R jboss:jboss module
5. start jboss and use jboss-cli to add the driver information
# ./jboss-cli.sh -c --controller=<server-IP>
6. copy paste below to setup the driver
/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)
7. to add the DB details, go to the admin site and add it like the MySQL example above (repeat step 8 of the MySQL section), just changing the driver from mysql to oracle
MariaDB
1. download your MariaDB JDBC driver, choosing the one that matches your DB version
2. create path for module/com/mariadb/main
3. upload the driver to main folder and create module.xml
copy and paste below into module.xml
change the resource-root path below to match your driver file name
<module xmlns="urn:jboss:module:1.1" name="com.mariadb">
<resources>
<resource-root path="mariadb-java-client-1.3.3.jar"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
</module>
4. change ownership to jboss for newly create dir and file
# chown -R jboss:jboss module
5. start jboss and use jboss-cli to add the driver information
# ./jboss-cli.sh -c --controller=<server-IP>
6. copy and paste the below to add the driver information
/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=com.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MariaDbDataSource)
7. to add the DB details, go to the admin site and add it like the MySQL example above (repeat step 8 of the MySQL section), just changing the driver from mysql to mariadb
MsSql
1. download your MS SQL Server JDBC driver, choosing the one that matches your DB version
2. create path for module/com/microsoft/main
3. upload the driver to main folder and create module.xml
copy and paste below into module.xml
change the resource-root path below to match your driver file name
<module xmlns="urn:jboss:module:1.1" name="com.microsoft">
<resources>
<resource-root path=".jar"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
<module name="javax.xml.bind.api"/>
</dependencies>
</module>
4. change ownership to jboss for newly create dir and file
# chown -R jboss:jboss module
5. start jboss and use jboss-cli to add the driver information
# ./jboss-cli.sh -c --controller=<server-IP>
6. copy and paste the below to add the driver information
/subsystem=datasources/jdbc-driver=microsoft:add(driver-name=microsoft,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)
7. to add the DB details, go to the admin site and add it like the MySQL example above (repeat step 8 of the MySQL section), just changing the driver from mysql to microsoft
Friday, July 21, 2017
Jboss EAP 7 Standalone cluster - TCP
Tested environment
OS: Centos 7 / Rhel 7 (SELinux and firewall disabled )
Java: Oracle JDK 1.8
Jboss: Jboss EAP 7.0.0 (2016-05-10)
I started trying to set up a JBoss EAP 7 cluster using standalone mode, but failed with the default config, which uses UDP multicast. After some googling I found a working solution on the Red Hat portal that uses TCP for the cluster discovery instead.
below are the steps I took to set up my JBoss EAP 7 standalone cluster across 2 servers.
Please make sure you have set up both servers before you start, as the cluster config needs both server IPs.
Oracle JDK 1.8
1. download your oracle jdk 1.8 from this URL
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
download the "jdk-8u141-linux-x64.rpm"
2. transfer to your server and install it
# yum localinstall jdk-8u141-linux-x64.rpm
3. confirm your java with this command
# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
JBOSS EAP 7 setup
1. download your jboss from this URL:
https://developers.redhat.com/products/eap/download/
I use version 7.0.0 as currently that's the latest stable version.
2. transfer to your Centos 7 / RHEL 7 and unpack the package
# unzip jboss-eap-7.0.0.zip
3. move the folder to /opt
# mv jboss-eap-7.0 /opt/
4. Add a management user. You can skip this if you do not need it, but for me it is useful for monitoring and for giving developers access to deploy code and get logs.
# cd /opt/jboss-eap-7.0/bin
( NOTE: since I am only using this for testing and development, I edited the password config so it does not require me to set up a complicated password )
# vim add-user.properties
password.restriction=WARN
to
password.restriction=RELAX
# ./add-user.sh
What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): a
Enter the details of the new user to add.
Using realm 'ManagementRealm' as discovered from the existing property files.
Username : jboss-admin
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]:
About to add user 'jboss-admin' for realm 'ManagementRealm'
Is this correct yes/no? yes
Added user 'jboss-admin' to file '/opt/jboss-eap-7.0/standalone/configuration/mgmt-users.properties'
Added user 'jboss-admin' to file '/opt/jboss-eap-7.0/domain/configuration/mgmt-users.properties'
Added user 'jboss-admin' with groups to file '/opt/jboss-eap-7.0/standalone/configuration/mgmt-groups.properties'
Added user 'jboss-admin' with groups to file '/opt/jboss-eap-7.0/domain/configuration/mgmt-groups.properties'
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="amJvc3MtYWRtaW4=" />
5. go to init.d folder and update jboss config
# cd init.d/
# vim jboss-eap.conf
## Location of JDK
JAVA_HOME="/usr/java/default"
## Location of JBoss EAP
JBOSS_HOME="/opt/jboss-eap-7.0"
## The username who should own the process.
JBOSS_USER=jboss
## The mode JBoss EAP should start, standalone or domain
JBOSS_MODE=standalone
## Configuration for standalone mode
JBOSS_CONFIG=standalone-ha.xml
6. Edit the startup script to point to this config file
# vim jboss-eap-rhel.sh
# Load JBoss EAP init.d configuration.
if [ -z "$JBOSS_CONF" ]; then
JBOSS_CONF="/opt/jboss-eap-7.0/bin/init.d/jboss-eap.conf"
fi
7. add the jboss user, since in the config we set the process to run as the jboss user
# useradd jboss
8. change jboss ownership to jboss
# chown -R jboss:jboss /opt/jboss-eap-7.0
9. try starting up JBoss
# ./jboss-eap-rhel.sh start
Cluster setup
1. once JBoss is started, leave it running, as the next step needs it to apply the settings.
go to the bin directory and create a new file called tcp-cluster
# cd /opt/jboss-eap-7.0/bin
# touch tcp-cluster
2. open the tcp-cluster file and paste the below into it.
batch
# Add the tcpping stack
/subsystem=jgroups/stack=tcpping:add
/subsystem=jgroups/stack=tcpping/transport=TCP:add(socket-binding=jgroups-tcp)
/subsystem=jgroups/stack=tcpping/protocol=TCPPING:add
# Set the properties for the TCPPING protocol
/subsystem=jgroups/stack=tcpping/protocol=TCPPING:write-attribute(name=properties,value={initial_hosts="HOST_A[7600],HOST_B[7600]",port_range=0,timeout=3000})
/subsystem=jgroups/stack=tcpping/protocol=MERGE3:add
/subsystem=jgroups/stack=tcpping/protocol=FD_SOCK:add(socket-binding=jgroups-tcp-fd)
/subsystem=jgroups/stack=tcpping/protocol=FD:add
/subsystem=jgroups/stack=tcpping/protocol=VERIFY_SUSPECT:add
/subsystem=jgroups/stack=tcpping/protocol=pbcast.NAKACK2:add
/subsystem=jgroups/stack=tcpping/protocol=UNICAST3:add
/subsystem=jgroups/stack=tcpping/protocol=pbcast.STABLE:add
/subsystem=jgroups/stack=tcpping/protocol=pbcast.GMS:add
/subsystem=jgroups/stack=tcpping/protocol=MFC:add
/subsystem=jgroups/stack=tcpping/protocol=FRAG2:add
# Set tcpping as the stack for the ee channel
/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping)
run-batch
reload
Edit HOST_A and HOST_B in the initial_hosts line above to the IPs of both of your servers.
3. execute the script by using this command
# ./jboss-cli.sh --connect --file=tcp-cluster
4. stop your jboss service
# ./init.d/jboss-eap-rhel.sh stop
5. edit the standalone-ha config and update it to listen on your IP instead of localhost
# cd /opt/jboss-eap-7.0/standalone/configuration/
# vim standalone-ha.xml
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address.management:192.168.95.132}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address:192.168.95.132}"/>
</interface>
<interface name="private">
<inet-address value="${jboss.bind.address.private:192.168.95.132}"/>
</interface>
</interfaces>
6. you need to edit JAVA_OPTS to give your node a name
# cd /opt/jboss-eap-7.0/bin/
# vim standalone.conf
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms1350m -Xmx1350m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true -Djboss.node.name=node1"
else
echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi
please use a different node name for your 2nd server
7. you can start your JBoss and try accessing it at <server_IP>:8080
8. your cluster is done. For your war file, you need to code it to support clustering; at the end of this note I will write a testing section on how to make sure it is clustered and the jsession is transferred to the other member when one goes down.
Startup
for the startup, we will reuse the JBoss init.d script provided.
It is located at /opt/jboss-eap-7.0/bin/init.d/jboss-eap-rhel.sh
1. go to systemd and create jboss.service
# cd /usr/lib/systemd/system
# vim jboss.service
2. paste this into it and save it
[Unit]
Description=Jboss EAP 7
After=syslog.target
After=network.target
[Service]
Type=forking
ExecStart=/opt/jboss-eap-7.0/bin/init.d/jboss-eap-rhel.sh start
ExecStop=/opt/jboss-eap-7.0/bin/init.d/jboss-eap-rhel.sh stop
TimeoutStartSec=300
TimeoutStopSec=300
[Install]
WantedBy=multi-user.target
3. enable jboss to start during boot
# systemctl enable jboss.service
4. start jboss service to verify it is working
# systemctl start jboss
Apache Web Setup
I am going to use Apache with mod_jk for my web tier + load balancer.
1. install apache and the needed packages
# yum install httpd httpd-devel gcc
2. download mod_jk and build it. You can get it from this URL
http://tomcat.apache.org/download-connectors.cgi
get the JK 1.2.42 Source Release tar.gz (e.g. Unix, Linux, Mac OS)
3. unpack it, configure, make
# tar -zxvf tomcat-connectors-1.2.42-src.tar.gz
# cd tomcat-connectors-1.2.42-src/native/
# find / -iname apxs
# ./configure --with-apxs=/usr/bin/apxs
# make
# make install
4. setup web config file
# cd /etc/httpd/conf.d/
# vim workers.properties
worker.list=worker1,node1,node2,status
#the node names used here need to match jboss.node.name in standalone.conf
worker.status.type=status
#node1
worker.node1.port=8009
worker.node1.host=192.168.95.132
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.ping_mode=A
#worker.node1.cachesize=10
#node2
worker.node2.port=8009
worker.node2.host=192.168.95.135
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.ping_mode=A
#worker.node2.cachesize=10
# Load-balancing behaviour
worker.worker1.type=lb
worker.worker1.balance_workers=node1,node2
worker.worker1.sticky_session=1
# vim mod_jk.conf
LoadModule jk_module modules/mod_jk.so
<IfModule mod_jk.c>
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkShmFile /var/log/httpd/mod_jk.shm
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info,debug
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkMount /* worker1
#mount this URL, edit as necessary
</IfModule>
5. save it and start apache service
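for example:
# systemctl start httpd
# systemctl enable httpd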
Testing
By right, if you are using ESXi to host your servers, you should see the cluster member view in the log during JBoss startup. For mine, I am only using VMware Workstation on my laptop to host the 2 servers, so there is nothing to see in the log until a war file with the cluster setting is deployed.
So do not freak out if your log does not show the cluster members.
here is the step taken to test my cluster is working
1. download this war file, which i get it from RedHat solutions
https://drive.google.com/uc?export=download&id=0B04R1MEwmWozVTRaS1VxNGtJU1E
Unzip the file.
Inside it, go to counter/dist, download the counter.war
2. deploy it to both of your JBoss servers. You can either use the management console to deploy it or put the war file into the deployments folder
(NOTE: once you deploy the war file to both server, you should be able to see the cluster member info show up in your log )
3. go to <server-1_IP>/counter in browser
4. refresh few times, and you should see the counter increasing
5. now stop server-1's JBoss. Once it has fully stopped, even though we are still hitting server-1's web tier, the backend should be redirected to server-2's JBoss since server-1's JBoss is down.
6. Refresh the page a few times and you should see the counter continue to increase instead of resetting. This proves the jsession was passed to the other member of the cluster when one went down.