Monday, April 18, 2016

mariadb cluster

Environment used:
OS: RHEL7
Database: MariaDB 10.1.13
firewalld: off
SELinux: off

===== Install MariaDB =====

by default your OS already includes mariadb in the yum repo, but it is an old version.
add this repo to get the latest version officially from MariaDB

# vim /etc/yum.repos.d/mariadb.repo

---------- mariadb.repo ----------

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

---------- END ----------

for the baseurl, if you are using centos or another distro, check the MariaDB repository download page for the correct path


install mariadb using this command

# yum install mariadb-server
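if you prefer to script this step, here is a minimal sketch. REPO_DIR is a stand-in variable added here so the snippet can be dry-run anywhere; on the real host it would be /etc/yum.repos.d and you would run as root:

```shell
#!/bin/sh
# Write the MariaDB repo file non-interactively.
# On the real host: REPO_DIR=/etc/yum.repos.d (run as root);
# it defaults to the current directory here for a safe dry run.
REPO_DIR="${REPO_DIR:-.}"

cat > "$REPO_DIR/mariadb.repo" <<'EOF'
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

# then: yum install -y mariadb-server
```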



===== Setup MariaDB for Cluster =====

open and edit my.cnf
and add the [galera] section shown below

# vim /etc/my.cnf

---------- my.cnf ----------
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[galera]

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
wsrep_cluster_address='gcomm://'
wsrep_cluster_name='galera_cluster'
wsrep_node_name='node1'

---------- END ----------

for the 2nd database, just repeat the installation steps, but in my.cnf change these two lines to
wsrep_cluster_address='gcomm://<node 1 IP address>'
wsrep_node_name='node2'
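since only two values differ between the nodes, the [galera] section can be templated. a sketch, with CNF, NODE_NAME and CLUSTER_ADDR as illustrative variables (on the real host CNF would be /etc/my.cnf, or a file under /etc/my.cnf.d):

```shell
#!/bin/sh
# Generate the [galera] section for one node. Only NODE_NAME and
# CLUSTER_ADDR differ between node1 and node2. CNF defaults to a
# local file here; point it at the real config on the host.
CNF="${CNF:-galera.cnf}"
NODE_NAME="${NODE_NAME:-node1}"
CLUSTER_ADDR="${CLUSTER_ADDR:-gcomm://}"   # node2: gcomm://<node 1 IP address>

cat > "$CNF" <<EOF
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
wsrep_cluster_address='$CLUSTER_ADDR'
wsrep_cluster_name='galera_cluster'
wsrep_node_name='$NODE_NAME'
EOF
```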


Start both databases
# systemctl start mariadb

log in to mysql and you can check whether it was successful

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.01 sec)

the output above shows 2 nodes, which means the cluster was formed successfully.

you can check for other info using this command

MariaDB [(none)]> show global status like 'wsrep_%';
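the cluster-size check can also be scripted, e.g. for a monitoring cron job. a sketch; on a live node the status line would come from `mysql -u root -p -N -B -e "SHOW STATUS LIKE 'wsrep_cluster_size'"` (tab-separated, no headers), and is simulated here so the parsing can be demonstrated:

```shell
#!/bin/sh
# Parse the wsrep_cluster_size status line and decide whether the
# cluster has formed. status_line simulates the tab-separated output
# of: mysql -N -B -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
status_line="$(printf 'wsrep_cluster_size\t2')"

size=$(printf '%s\n' "$status_line" | awk '{print $2}')
if [ "$size" -ge 2 ]; then
    echo "cluster OK: $size nodes"
else
    echo "cluster NOT formed (size=$size)"
fi
```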

Monday, April 11, 2016

SSH Login Banner

there are 2 types of banner:
one is shown before you log in,
and the other is shown after a successful login

-------------------------------------------------------------------
Shown before login

by default, a banner is already prepared for us but not used.
it is located at /etc/issue.net
and shows the kernel version as the login banner.
you can use this or write your own.
just create a file, for example
# vim  /etc/ssh/banner
and put something like this

#########################
#                       #
#  Welcome to CentOS 7  #
#                       #
#########################

then enable the banner
# vim /etc/ssh/sshd_config
find and edit this

# no default banner path
#Banner none

to

# no default banner path
Banner /etc/ssh/banner

then restart the sshd service
# systemctl restart sshd
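the edit can also be done non-interactively with sed. a sketch; SSHD_CONFIG is a stand-in variable, and a demo file with the default commented-out directive is created here so the substitution can be shown safely (on the real host it is /etc/ssh/sshd_config):

```shell
#!/bin/sh
# Uncomment the Banner directive and point it at our banner file.
# SSHD_CONFIG defaults to a local demo file; on the real host use
# SSHD_CONFIG=/etc/ssh/sshd_config.
SSHD_CONFIG="${SSHD_CONFIG:-sshd_config}"

# Demo input reproducing the stock default:
printf '# no default banner path\n#Banner none\n' > "$SSHD_CONFIG"

# Replace the commented "#Banner none" with the real banner path.
sed -i 's|^#Banner none|Banner /etc/ssh/banner|' "$SSHD_CONFIG"

# On the real host, validate the config before restarting:
#   sshd -t && systemctl restart sshd
```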
-----------------------------------------------------------------

Shown after a successful login

# vim /etc/motd

and edit it to your liking

setup Liferay 7 tomcat bundle + cluster

OS = CentOS Linux release 7.2.1511 (Core)
Liferay version = liferay-portal-tomcat-7.0-ce-ga1-20160331161017956
Java = java version "1.7.0_79"

===== Liferay =====

1. download java and install it
    for mine, i downloaded Oracle JDK 7 at
    http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
    download the rpm for easy install and upgrade
    # yum localinstall jdk-7u79-linux-x64.rpm

2. download liferay and extract it.
    for mine, i extracted it and put it at /opt
    then i renamed it to liferay so it becomes /opt/liferay

3. go to /opt/liferay/tomcat-8.0.32/bin
    run it once to confirm it works with the default settings
    # ./startup.sh
    use a browser and try to access it at
    <server ip>:8080
    and stop it after confirming it works
    # ./shutdown.sh

4. install tomcat native for better performance
    in the bin directory, extract tomcat-native.tar.gz and navigate to the native directory inside it
    # ./configure --with-apr=/usr/bin/apr-1-config --with-java-home=/usr/java/default --with-ssl=/usr/bin/openssl --prefix=/usr
    # make
    # make install

5. go back to the bin directory and extract commons-daemon-native.tar.gz
    navigate into the unix folder
    # ./configure --with-java=/usr/java/default
    # make
    # cp jsvc ../..

6. add a tomcat user so liferay runs as it instead of root
    # useradd tomcat
    # chown -R tomcat: /opt/liferay

7. in the tomcat bin directory, edit setenv.sh and change the Xmx value to suit your server memory.
    for mine, i also manually set the Xms value
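as a sketch, a setenv.sh could look like the following. the heap values (2048m) and the flag set are only examples to illustrate the shape, not recommendations; -XX:MaxPermSize applies to Java 7 as used here. SETENV is a stand-in variable; on the real host this is <liferay>/tomcat/bin/setenv.sh:

```shell
#!/bin/sh
# Write an example setenv.sh with fixed heap sizes. SETENV defaults
# to a local file for demonstration; the real path is
# <liferay>/tomcat/bin/setenv.sh. Tune Xms/Xmx to your memory.
SETENV="${SETENV:-setenv.sh}"

cat > "$SETENV" <<'EOF'
CATALINA_OPTS="$CATALINA_OPTS -Dfile.encoding=UTF8 -Xms2048m -Xmx2048m -XX:MaxPermSize=384m"
EOF
```

setting Xms equal to Xmx avoids heap resizing pauses at the cost of claiming the memory up front.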

====== startup script =====

since centos7 uses systemd, below is the guide on how to add a unit file
# cd /etc/systemd/system
# vim tomcat.service

=== tomcat.service ===

# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking
#ExecStart=/etc/init.d/tomcat start
ExecStart=/opt/liferay/tomcat/bin/startup.sh
ExecStop=/opt/liferay/tomcat/bin/shutdown.sh
User=tomcat
Group=tomcat

TimeoutStartSec=0
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

=== END ===

note: the bundle's tomcat directory is tomcat-8.0.32, so either rename or symlink it to /opt/liferay/tomcat, or adjust the paths in the unit file above.

enable it to run at startup
# systemctl enable tomcat.service

now you can test use systemctl to start and stop to confirm it working
# systemctl start tomcat
# systemctl stop tomcat

monitor the log at /opt/liferay/tomcat/logs/catalina.out
to make sure it starts up fully without errors
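waiting for startup can be scripted by polling catalina.out for Tomcat's standard "Server startup in ... ms" line. a sketch; LOG is a stand-in variable, and the log content is simulated here so the loop can be demonstrated (on the real host LOG=/opt/liferay/tomcat/logs/catalina.out):

```shell
#!/bin/sh
# Poll catalina.out until Tomcat reports a clean startup, with a
# timeout. LOG defaults to a local demo file; on the real host use
# LOG=/opt/liferay/tomcat/logs/catalina.out
LOG="${LOG:-catalina.out}"

# Simulated log content for demonstration:
printf 'INFO: Server startup in 52337 ms\n' > "$LOG"

i=0
until grep -q 'Server startup in' "$LOG"; do
    i=$((i + 1))
    [ "$i" -ge 60 ] && { echo "timed out waiting for startup"; exit 1; }
    sleep 1
done
echo "Tomcat started"
```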


===== Apache =====

you can either use your firewall to redirect port 8080 to port 80
or
use mod_jk to forward port 80 to 8080




===== cluster =====

1. edit <liferay>/tomcat/conf/context.xml
     change <Context>
     to <Context distributable="true">

2. edit server.xml
    change <Engine name="Catalina" defaultHost="localhost">
    to <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
    then below it add this as well

=== server.xml ===

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
        channelSendOptions="6">

  <Manager className="org.apache.catalina.ha.session.BackupManager"
        expireSessionsOnShutdown="false"
        notifyListenersOnReplication="true"
        mapSendOptions="6"/>


  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
        address="228.0.0.4"
        port="45564"
        frequency="500"
        dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
      address="auto"
        port="5000"
        selectorTimeout="100"
        maxThreads="6"/>

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
=== end ===
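a hand-edited server.xml is easy to break, so it is worth checking it is still well-formed before restarting. a sketch using xmllint (ships with libxml2 on CentOS 7); SERVER_XML is a stand-in variable and a minimal demo document is created here, since the real file is <liferay>/tomcat/conf/server.xml:

```shell
#!/bin/sh
# Check that an edited server.xml is still well-formed XML.
# SERVER_XML defaults to a local demo file; on the real host use
# SERVER_XML=<liferay>/tomcat/conf/server.xml
SERVER_XML="${SERVER_XML:-server.xml}"

# Minimal demo document standing in for the edited file:
cat > "$SERVER_XML" <<'EOF'
<Server>
  <Service name="Catalina">
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
    </Engine>
  </Service>
</Server>
EOF

# Skip quietly if xmllint is not installed on this machine.
if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$SERVER_XML" && echo "server.xml is well-formed"
fi
```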

3. edit <liferay>/tomcat/conf/Catalina/localhost/ROOT.xml and add this into it

=== ROOT.xml ===

<Resource
        name="jdbc/LiferayPool"
        auth="Container"
        type="javax.sql.DataSource"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://<DB IP>/<DB name>?useUnicode=true&amp;characterEncoding=UTF-8"
        username="DB username"
        password="DB password"
        maxActive="100"
        maxIdle="30"
        maxWait="60000"
    />

=== end ===

4. then at <liferay>/tomcat/webapps/ROOT/WEB-INF/classes, create portal-ext.properties file and put this into it

=== portal-ext.properties ===

jdbc.default.jndi.name=jdbc/LiferayPool

=== end ===