
Enable Metrics Provider in Zookeeper


Zookeeper provides a “metrics provider” mechanism so that users can monitor Zookeeper. Prometheus is one of the monitoring services that can be used to consume these metrics.

 

The following are the steps to enable the metrics provider in Zookeeper.

 

Configure Metrics provider in Zookeeper configuration.

Install “Prometheus” service


Software and Tools


 

Windows 10

Java 1.8 or higher

Zookeeper 3.7.0

prometheus-2.28.1

 

 


Prerequisite


Setup a Zookeeper cluster

 

http://www.liferaysavvy.com/2021/07/setup-zookeeper-cluster.html

 

 

Configure Metrics provider in Zookeeper configuration.

 

Open the “zoo.cfg” file and add the metrics provider configuration.

 

Zookeeper Node1

 

 

## Metrics Providers

# https://prometheus.io Metrics Exporter

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider

metricsProvider.httpPort=7001

metricsProvider.exportJvmInfo=true

 

 

 

Zookeeper Node2

 

 

## Metrics Providers

# https://prometheus.io Metrics Exporter

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider

metricsProvider.httpPort=7002

metricsProvider.exportJvmInfo=true

 

 

 

Zookeeper Node3

 

 

## Metrics Providers

# https://prometheus.io Metrics Exporter

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider

metricsProvider.httpPort=7003

metricsProvider.exportJvmInfo=true

 

 


Start all the Zookeeper servers and make sure each one starts successfully.
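Before configuring Prometheus, you can optionally verify each metrics endpoint directly; the Prometheus metrics provider serves plain-text metrics under the /metrics path on the configured HTTP port, so for the ports above:

curl http://localhost:7001/metrics

curl http://localhost:7002/metrics

curl http://localhost:7003/metrics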

 

Install “Prometheus” service


Go to the Prometheus download page and download the latest version.

 

https://prometheus.io/download/

 

The direct link is as follows:

 

https://github.com/prometheus/prometheus/releases/download/v2.28.1/prometheus-2.28.1.windows-amd64.zip

 

Extract it to a local drive.





Configure Zookeeper cluster in Prometheus

 

Create a “metrics-zk.yaml” file in the root directory of “Prometheus” and point the Prometheus scraper at the Zookeeper cluster endpoints.

 

Add the following configuration to the “metrics-zk.yaml” file and make sure the targets property lists the Zookeeper cluster hosts with the metrics ports that were enabled in “zoo.cfg”.

 

 

# my global config

global:

  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

  # scrape_timeout is set to the global default (10s).

 

# Alertmanager configuration

alerting:

  alertmanagers:

  - static_configs:

    - targets:

      # - alertmanager:9093

 

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.

rule_files:

  # - "first_rules.yml"

  # - "second_rules.yml"

 

# A scrape configuration containing exactly one endpoint to scrape:

# Here it's Prometheus itself.

scrape_configs:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: 'zookeepermetrics'

    #metrics_path: '/zookeeper-metrics'

    # scheme defaults to 'http'.

 

    static_configs:

    - targets: ['localhost:7001','localhost:7002','localhost:7003']

 

 



Start “Prometheus”


Open a command prompt and navigate to the “Prometheus” root directory. Use the following start command, passing the web listen address and config file as options.


 

prometheus.exe --config.file metrics-zk.yaml --web.listen-address ":9090" --storage.tsdb.path "metrics-zk.data"

 

 


 

Verify Installation


Access the “Prometheus” web interface with the following URL; it runs on port 9090.

 

 

http://localhost:9090/

 

 

 



 

Targets Status


Go to the Status menu and click on Targets to see the health of each Zookeeper node in the cluster.





 

Zookeeper nodes’ health in the cluster.

 

 

http://localhost:9090/targets

 

 

 

 



Reference


https://github.com/apache/zookeeper/blob/master/zookeeper-docs/src/main/resources/markdown/zookeeperMonitor.md





Centralized Logging for Liferay Portal


Kafka is a distributed streaming system based on the publish/subscribe model. We can use Kafka as a centralized logging system for applications.


This article demonstrates a Kafka-based centralized logging implementation for Liferay Portal. It is useful in Liferay cluster environments for checking the logs of all Liferay nodes in one place.

 

Prerequisite.


Kafka Cluster with 3 Nodes

Liferay Cluster with 2 Nodes

 

Software and Tools



 

Windows 10

Kafka 2.8

Java 1.8 or higher

Zookeeper 3.5.9 (Embedded in Kafka)

Liferay 7.4 CE

 

 

 

Steps to implement Centralized Logging for Liferay


Setup a Kafka Cluster with 3 Brokers

Setup a Liferay Cluster with 2 Nodes

Configure Kafka Appender

Add required JAR’s

Add Liferay Node system property

Start Kafka Cluster

Create Kafka Topic

Start Kafka consumer

Start Liferay Cluster

View Liferay Cluster logs in Kafka Consumer

 

Architecture Diagram




 

Setup a Kafka Cluster with 3 Brokers


This demonstration requires a Kafka cluster; follow the article below to set up a Kafka cluster on Windows.


http://www.liferaysavvy.com/2021/07/setup-kafka-cluster.html


 

Setup a Liferay Cluster with 2 Nodes


We are setting up a centralized logging system for Liferay Portal, so we need a Liferay cluster up and running.


Follow the article below to set up a Liferay cluster.


http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html

 


Configure Kafka Appender



We need to configure the Kafka Appender in the portal Log4j configuration. Any customization to the Liferay Portal Log4j setup should be made in “portal-log4j-ext.xml”.


Create a META-INF directory in each Liferay Portal instance’s Tomcat lib directory.




 

Create a “portal-log4j-ext.xml” file in the META-INF directory.





Add Kafka Appender configuration in addition to existing Liferay Portal Log4J configuration.


The Kafka Appender configuration is as follows:

 


<Kafka name="Kafka" topic="liferay-kafka-logs">

            <PatternLayout pattern="${sys:liferay.node} %d{yyyy-MM-dd HH:mm:ss.SSS} %-5p [%t][%c{1}:%L] %m%n"/>

            <Property name="bootstrap.servers">localhost:9092,localhost:9092,localhost:9093</Property>

        </Kafka>

        <Async name="KafkaAsync">

            <AppenderRef ref="Kafka"/>

        </Async>

<Loggers>

                    <Root level="INFO">

                              <AppenderRef ref="KafkaAsync"/>

                   

                    </Root>

          </Loggers>


 

“bootstrap.servers” lists the Kafka cluster hosts and ports, comma separated.


The topic attribute is the Kafka topic name to which all the Liferay logs will be sent.


All Liferay Portal cluster logs flow to Kafka on the same topic, so the logs must be distinguishable by Liferay node name. The “${sys:liferay.node}” placeholder resolves to the node name, which is configured as a system property; we include it as part of the log pattern.
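With this pattern, every record arriving on the topic is prefixed with the node name. An illustrative line from Node1 (the class name and message here are made up; only the layout follows the configured pattern) would look like:

Liferay-Node1 2021-07-20 10:15:32.123 INFO  [main][PortalStartupUtil:45] Portal startup completed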

 

Use the configuration below in “portal-log4j-ext.xml”; it includes both the Liferay Log4j and Kafka Appender configuration.

 


 

<?xml version="1.0"?>

 

<Configuration strict="true">

          <Appenders>

                    <Appender name="CONSOLE" type="Console">

                              <Layout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p [%t][%c{1}:%L] %m%n" type="PatternLayout" />

                    </Appender>

 

                    <Appender filePattern="@liferay.home@/logs/liferay.%d{yyyy-MM-dd}.log" ignoreExceptions="false" name="TEXT_FILE" type="RollingFile">

                              <Layout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p [%t][%c{1}:%L] %m%n" type="PatternLayout" />

 

                              <TimeBasedTriggeringPolicy />

 

                              <DirectWriteRolloverStrategy />

                    </Appender>

 

                    <Appender filePattern="@liferay.home@/logs/liferay.%d{yyyy-MM-dd}.xml" ignoreExceptions="false" name="XML_FILE" type="RollingFile">

                              <Log4j1XmlLayout locationInfo="true" />

 

                              <TimeBasedTriggeringPolicy />

 

                              <DirectWriteRolloverStrategy />

                    </Appender>

                    <Kafka name="Kafka" topic="liferay-kafka-logs">

            <PatternLayout pattern="${sys:liferay.node} %d{yyyy-MM-dd HH:mm:ss.SSS} %-5p [%t][%c{1}:%L] %m%n"/>

            <Property name="bootstrap.servers">localhost:9092,localhost:9092,localhost:9093</Property>

        </Kafka>

        <Async name="KafkaAsync">

            <AppenderRef ref="Kafka"/>

        </Async>

 

        <Console name="stdout" target="SYSTEM_OUT">

            <PatternLayout pattern="%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"/>

        </Console>

 

          </Appenders>

 

          <Loggers>

                    <Root level="INFO">

                              <AppenderRef ref="KafkaAsync"/>

                              <AppenderRef ref="CONSOLE" />

                              <AppenderRef ref="TEXT_FILE" />

                              <AppenderRef ref="XML_FILE" />

                    </Root>

          </Loggers>

</Configuration>

 

 


The original configuration is available in “tomcat/webapps/ROOT/WEB-INF/lib/portal-impl.jar” under “META-INF/portal-log4j.xml”. We took the required configuration and updated it in “portal-log4j-ext.xml”.


https://github.com/liferay/liferay-portal/blob/7.4.x/portal-impl/src/META-INF/portal-log4j.xml

 


Add required JAR’s



The implementation requires a few JAR files, which should be added to the Tomcat global classpath.


The following JARs are required:


 

kafka-clients-2.8.0.jar

slf4j-api.jar

kafka-log4j-appender-2.8.0.jar

 

 

kafka-clients-2.8.0.jar contains the Kafka client API, such as producers and consumers, for interacting with Kafka brokers.


kafka-clients-2.8.0.jar is available in the Kafka server lib directory.


 



Copy the same JAR from the Kafka server lib directory to the Tomcat lib directory.




 

kafka-log4j-appender-2.8.0.jar contains the Log4j Appender implementation; internally it uses “kafka-clients”, employing a Kafka producer to send logs to the Kafka topic.

 

Download “kafka-log4j-appender” from the Maven central repository and add it to the Tomcat lib directory.


https://repo1.maven.org/maven2/org/apache/kafka/kafka-log4j-appender/2.8.0/kafka-log4j-appender-2.8.0.jar





 

 

slf4j-api.jar is required by the Kafka Appender, so it should be available in the portal Tomcat lib directory.


slf4j-api.jar is available in the Liferay Portal ROOT/WEB-INF/lib directory. Copy/move the slf4j-api.jar file to the Tomcat global lib directory.




 

Note:


All of the above configuration should be applied on each Liferay node in the cluster.

 


Add Liferay Node system property


 

We need to add a Liferay node system property in the Liferay Tomcat setenv.bat file.



Liferay Node1


Navigate to the Liferay Node1 Tomcat bin directory, open the setenv.bat file in an editor, and add the new Liferay node system property to the existing list.


 

-Dliferay.node=Liferay-Node1

 

 

 

set "CATALINA_OPTS=%CATALINA_OPTS% -Dfile.encoding=UTF-8 -Djava.locale.providers=JRE,COMPAT,CLDR -Djava.net.preferIPv4Stack=true -Duser.timezone=GMT -Xms2560m -Xmx2560m -XX:MaxNewSize=1536m -XX:MaxMetaspaceSize=768m -XX:MetaspaceSize=768m -XX:NewSize=1536m -XX:SurvivorRatio=7 -Dliferay.node=Liferay-Node1"

 

 






Liferay Node2


Navigate to the Liferay Node2 Tomcat bin directory, open the setenv.bat file in an editor, and add the new Liferay node system property to the existing list.

 

 

-Dliferay.node=Liferay-Node2

 

 

 

set "CATALINA_OPTS=%CATALINA_OPTS% -Dfile.encoding=UTF-8 -Djava.locale.providers=JRE,COMPAT,CLDR -Djava.net.preferIPv4Stack=true -Duser.timezone=GMT -Xms2560m -Xmx2560m -XX:MaxNewSize=1536m -XX:MaxMetaspaceSize=768m -XX:MetaspaceSize=768m -XX:NewSize=1536m -XX:SurvivorRatio=7 -Dliferay.node=Liferay-Node2"

 

 



 

Start Kafka Cluster


We already set up the Kafka cluster; follow the same article to start it.


http://www.liferaysavvy.com/2021/07/setup-kafka-cluster.html

 


Create Kafka Topic



Open a command prompt and navigate to the bin\windows directory of one of the Kafka brokers. Use the following create-topic command.


Topic: liferay-kafka-logs


This is the same topic we configured in the Kafka Appender Log4j configuration.


 

kafka-topics.bat --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 3 --partitions 3 --topic liferay-kafka-logs

 

 

We should pass all zookeeper cluster nodes in the options.


Make sure topic is created by using list command.


 

kafka-topics.bat --zookeeper localhost:2181,localhost:2182,localhost:2183 --list

 

 

 



 

Start Kafka consumer



Open a command prompt and navigate to the Kafka bin\windows directory. Use the following command to start a consumer. We have to start the consumer on “liferay-kafka-logs”.

 

 

kafka-console-consumer.bat --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic liferay-kafka-logs --from-beginning

 

 





Start Liferay Cluster



Now start each Liferay node in the cluster. We already set up the Liferay cluster; follow the same article to start the Liferay nodes.


http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html

 


View Liferay Cluster logs in Kafka Consumer


Observe the Kafka consumer console window, which streams the logs of both Liferay nodes.


The logs are distinguished by the Liferay node names in the cluster, which we configured as a system property and used as part of the Kafka Appender pattern.

 






This confirms that we successfully implemented a centralized logging system with the Kafka Log4j Appender.



Advantages



This implementation avoids storing logs on each server, which makes log management much easier since all logs are available in one central location.


It also resolves storage issues on the servers.


Integrate a Splunk or Kibana web UI with Kafka to monitor applications more efficiently.



Notes


For demonstration purposes we implemented all the clusters on a single machine. A real production setup has multiple servers for the Kafka and Liferay clusters.


Because everything runs on a single machine, we had to change port numbers for the Liferay and Kafka clusters; in a production environment that is not necessary.


For demonstration purposes we used a Kafka consumer to show the logs, but in a real-world environment we would use web UI tools such as Splunk or Kibana to monitor them.


We have not changed the existing Liferay Portal appenders, so logs are still written to files in each node’s logs directory. To avoid storing logs twice, we can remove the other appenders so that logs are no longer written locally.
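For example, a trimmed Loggers section (a sketch, not the configuration used above) that keeps only the Kafka and console output would look like:

<Loggers>
          <Root level="INFO">
                    <AppenderRef ref="KafkaAsync"/>
                    <AppenderRef ref="CONSOLE"/>
          </Root>
</Loggers>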


References


https://logging.apache.org/log4j/2.x/manual/appenders.html

 

 



 

 

 

 

 

Install Elastic Search Cluster


Elasticsearch is an open-source distributed search and analytics engine based on the Lucene search engine. It is a completely RESTful implementation and easy to use.


Elasticsearch is the core of the Elastic Stack, which includes many other products.


This example demonstrates a 3-node Elasticsearch cluster.

 



 

Software and Tools



 

Windows 10

Java 1.8 or higher

Elasticsearch-7.13.3

 

 

Download and Extract



Go to the Elasticsearch download page and use the links below to download “elasticsearch-7.13.3” to your local machine.


https://www.elastic.co/downloads/elasticsearch

 

Direct download link


https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.3-windows-x86_64.zip


Extract the downloaded Elasticsearch zip file to a local drive.




 

Elastic Search Cluster



Elastic Search Node1



Open a command prompt, navigate to the Elasticsearch bin directory, and use the start command below to start Elasticsearch Node1 in the cluster.

 

We need to provide the cluster name, node name, and data and logs paths as parameters.

 

 

elasticsearch.bat -Ecluster.name=elastic-search-cluster -Enode.name=node1  -Epath.data=data1 -Epath.logs=log1

 


 



 




The console logs confirm the Elasticsearch startup and its port numbers.

 




Since we started the first node, it was elected as the master node. It uses port 9300 for discovery in the cluster, and REST services can be accessed on port 9200.
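To check which node currently holds the master role, the _cat nodes API can be queried on any node once the cluster is up; the asterisk in the master column marks the elected master:

http://localhost:9200/_cat/nodes?v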



Elastic Search Node2



Open a second command prompt, navigate to the Elasticsearch bin directory, and use the following start command.

 

 

elasticsearch.bat -Ecluster.name=elastic-search-cluster -Enode.name=node2  -Epath.data=data2 -Epath.logs=log2

 


 



The console logs confirm the Elasticsearch startup and its port numbers.

 




Node2 uses port 9301 for discovery in the cluster. The master node identifies Node2 and it joins the cluster. REST services can be accessed on port 9201.

 

 

On Node1 (the master node) we can see log output confirming that Node2 joined the cluster.

 



 

Elastic Search Node3



Open a third command prompt and use the start command below to start Elasticsearch Node3.

 

 

elasticsearch.bat -Ecluster.name=elastic-search-cluster -Enode.name=node3  -Epath.data=data3 -Epath.logs=log3

 

 



The startup logs confirm the Node3 startup.




Node3 uses port 9302 for discovery in the cluster. The master node identifies Node3 and it joins the cluster. REST services can be accessed on port 9202.


On Node1 (the master node) we can see log output confirming that Node3 joined the cluster.




 

Cluster Information



Nodes            Discovery Port    Rest Access
Node1 (Master)   127.0.0.1:9300    127.0.0.1:9200
Node2            127.0.0.1:9301    127.0.0.1:9201
Node3            127.0.0.1:9302    127.0.0.1:9202

 


Node1 Rest Access


 

 

http://localhost:9200/

 

 

Access the URL above; it returns JSON data containing the node details.

 



Node2 Rest Access




http://localhost:9201/


 

Access the URL above; it returns JSON data containing the node details.





Node3 Rest Access




http://localhost:9202/




Access the URL above; it returns JSON data containing the node details.

 




Check Cluster Health


We can use any one of the URLs below to check the Elastic cluster health.


 

http://localhost:9200/_cat/health?v

 

http://localhost:9201/_cat/health?v

 

http://localhost:9202/_cat/health?v

 

 



We have now successfully set up the Elastic cluster. We can see node-specific data and logs directories in the root directory of Elasticsearch.

 



 

Note1


We can use any node’s REST access to perform Elastic operations such as creating documents and searching. Follow the Elasticsearch quick start guide to learn more about these operations.
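As a quick smoke test (the index and field names here are illustrative), a document can be indexed and searched through any node’s REST endpoint:

curl -X POST "http://localhost:9200/test-index/_doc" -H "Content-Type: application/json" -d "{\"title\": \"hello elastic\"}"

curl "http://localhost:9200/test-index/_search?q=title:hello"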

 

References

 

https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html

 



 

Logstash Installation


Logstash is one of the Elastic Stack products; it collects logs from different data sources and sends them to Elasticsearch. It can take data from different sources and process it according to our requirements.

 




 

Software and Tools



 

Windows 10

Java 1.8 or higher

logstash-7.13.3

 

 

 

Download and Extract


Go to the Elastic downloads page and download Logstash.

 

https://www.elastic.co/downloads/logstash

 

Direct link as follows

 

https://artifacts.elastic.co/downloads/logstash/logstash-7.13.3-windows-x86_64.zip

 

Extract downloaded zip in local drive

 



 

 

Start Logstash


Open a command prompt, navigate to the Logstash bin directory, and use the command below to start it.


 

logstash.bat -e "input { stdin { } } output { stdout {} }"

 

 





Normally, we would pass a Logstash configuration file to the start command, but since we have not yet decided on the input/output configuration, we started with a dummy input/output pipeline.
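With this dummy pipeline, anything typed into the console is echoed back as a structured event. Typing hello world, for instance, produces output along these lines (fields abbreviated):

{
      "message" => "hello world",
   "@timestamp" => 2021-07-20T10:15:32.123Z,
         "host" => "localhost"
}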

 

Finally, the Logstash API is accessible on port 9600.

 

 

http://localhost:9600/

 

 




 

 

Reference


https://www.elastic.co/guide/en/logstash/current/introduction.html




 

Kibana Installation


Kibana is an analytics, visualization, and monitoring tool from the Elastic Stack. It connects to Elasticsearch to analyze and visualize data. Many organizations use Kibana for application log monitoring. We can build rich dashboards in Kibana and configure alerts as well.

 

Software and Tools



 

Windows 10

Java 1.8 or higher

kibana-7.13.3

elasticsearch-7.13.3

 

 


Prerequisite



Install Elastic Search and Start Elastic Cluster

 

http://www.liferaysavvy.com/2021/07/install-elastic-search-cluster.html

 

 

Download and Extract


Go to the Elastic downloads page and download Kibana for your system configuration.

 

https://www.elastic.co/downloads/kibana

 


Direct link as follows

 

https://artifacts.elastic.co/downloads/kibana/kibana-7.13.3-windows-x86_64.zip

 


Extract downloaded zip in local drive

 




 

 

Configure Kibana


Navigate to the Kibana config directory to configure the Elasticsearch cluster nodes.



Add/update “kibana.yml” with the Elastic cluster nodes according to our Elasticsearch installation.

 


 

# The URLs of the Elasticsearch instances to use for all your queries.

elasticsearch.hosts: ["http://localhost:9200","http://localhost:9201","http://localhost:9202"]





Start Kibana


Open a command prompt, navigate to the Kibana root directory, and use the command below to start it. Pass the “kibana.yml” file via the --config option of kibana.bat.

 

 

 

bin\kibana.bat --config config\kibana.yml

 

 

 





 

 






Kibana starts on port 5601 and can be accessed with the URL below. The startup information is visible in the console logs.

 

 

http://localhost:5601/

 

 






 

 

 

Sample Data in Kibana


On the Kibana home page, click on the Add Data button and then on “Try Sample Data”.








 

We can add any of the offered sample data sets; here we try the sample web logs.

 

 





 

Once the data is added, we can view it: click on View data and select Dashboard.

 





 

The sample web logs dashboard looks as follows.




 

Reference

 

https://www.elastic.co/guide/en/kibana/current/get-started.html

 





Liferay Portal Logs Monitoring with ELKK


This article demonstrates the Liferay Portal Logs monitoring using ELKK stack.


 

Elastic Search

Logstash

Kibana

Kafka

 

 

In a previous article we implemented a Liferay centralized logging system using Kafka.


Now we will use Logstash to pull the Liferay logs from Kafka and push them to Elasticsearch.


Kibana will be used as the visualization and monitoring tool to analyze the Liferay Portal logs.


The architecture diagram is as follows.

 



Software and Tools


 

Windows 10

Java 1.8 or higher

Apache24

Liferay 7.4

Zookeeper-3.7.0

Kafka-2.8.0

Logstash-7.13.3

Kibana-7.13.3

Elasticsearch-7.13.3

 

 

 

Prerequisite


Implement the Liferay centralized logging system from the article below, which covers the Zookeeper, Kafka, and Liferay installation.

 

http://www.liferaysavvy.com/2021/07/centralized-logging-for-liferay-portal.html

 



It’s time to install and configure the ELKK stack.

 

  • Install Elastic Search Cluster
  • Install Logstash and configure input/output pipeline and Start
  • Validate Index creation in Elastic Search
  • Install Kibana and Start
  • Create Index Pattern in Kibana and Analyze Liferay portal logs.

 


Install Elastic Search Cluster


Follow the article below to install the Elasticsearch cluster.


http://www.liferaysavvy.com/2021/07/install-elastic-search-cluster.html



Install Logstash and configure input/output pipeline and Start


Follow the article below to install Logstash.


http://www.liferaysavvy.com/2021/07/logstash-installation.html


The Logstash installation above used a dummy input/output; now it’s time to define the actual Logstash pipeline. This is a very important step.


Navigate to the Logstash config directory and create a “logstash.conf” file.







 

We have all the logs in Kafka, so we define a Logstash pipeline with Kafka as input and Elasticsearch as output.


Use the following configuration in the “logstash.conf” file.


 

input { 

    kafka {

        bootstrap_servers => "localhost:9092,localhost:9093,localhost:9094"

        topics => ["liferay-kafka-logs"]

    }

}

 

output { 

    elasticsearch {

        hosts => ["localhost:9200","localhost:9201","localhost:9202"]

        index => "liferay-index"

    }

}

 

 



 


The input needs the Kafka bootstrap servers and topics. All logs are sent on the “liferay-kafka-logs” topic, the same topic used in the Liferay Log4j configuration for the Kafka Appender.


The output lists the Elasticsearch cluster instances we already installed. We also provide an index name so that all logs are tagged with that index.
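If per-day indices are preferred, the elasticsearch output’s index option also accepts a date pattern (a variation, not used in this article):

index => "liferay-index-%{+YYYY.MM.dd}"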


Open a command prompt, navigate to the Logstash root directory, and use the following command to start Logstash.


 

bin\logstash.bat -f config\logstash.conf

 

 






Once Logstash has started successfully, all logs are collected from Kafka and pushed to Elasticsearch.



Install Kibana and Start


Follow the article below to install Kibana.

 

http://www.liferaysavvy.com/2021/07/kibana-installation.html


 

Validate Index creation in Elastic Search


Make sure the index given in the Logstash configuration (logstash.conf) is present in the Elasticsearch index list.


We can use any one of the Elastic cluster nodes to confirm the Elasticsearch health and index details. Everything should be green in the output.


Use below URL

 

http://localhost:9200/_cat/indices?v

 

 

 



Make sure the whole stack is started, in the following order. If anything is missing, start/restart it in this order.

 

 

Start Zookeeper Cluster

Start Kafka Cluster

Start Liferay Portal Cluster

Start Elastic Cluster

Start Logstash
Start Kibana

 

 

The example screen shows all services started on the local machine.






ELKK Important Information

 


 

Zookeeper Cluster

 

localhost:2181

localhost:2182

localhost:2183

 

 

Kafka Cluster

 

localhost:9092

localhost:9093

localhost:9094

 

 

Liferay Portal Cluster

 

http://localhost/

 

 

Elastic Cluster

 

http://localhost:9200/

http://localhost:9201/

http://localhost:9202/

 

 

Logstash

 

 

http://localhost:9600/

 

 

Kibana

 

http://localhost:5601/

 

 

Kafka Topic

 

liferay-kafka-logs

 

 

Elastic Search Index

 

liferay-index

 

 


Define Index Pattern in Kibana


To monitor logs in Kibana, we need to create an index pattern.


Go to the Kibana home page, open the left-side toggle panel, click on “Stack Management”, and add a Kibana index pattern.


 



 

 

Click on Kibana --> Index Pattern --> Create Index Pattern

 



 

Provide the index name that we configured in the Logstash file. You can provide the exact index name or use a wildcard pattern (liferay-*).

 



 

 

Select the time field and create the index pattern.




 

Go to Analytics --> Discover. We can see the index in the list.

 



 

 

 


Select the newly created index and all the log data becomes visible on the page. Change the time frame to explore the log data.

 






Liferay Tomcat Access Logs to Kafka


Tomcat access logs keep a record of all requests processed by the applications deployed in the Tomcat server. Every request and its response status are logged, and many reports can be built from access logs.


By default, Tomcat access logging writes all entries to a file when enabled in the server.xml file.



<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" -->

               prefix="localhost_access_log" suffix=".txt"

                pattern="%h %l %u %t &quot;%r&quot; %s %b" />




Suppose we want to maintain all logs in a centralized location, namely Kafka. Application logs can be sent to Kafka using the Log4j Kafka Appender, but access logs are different: we will use the Kafka client API to send Tomcat access logs to Kafka.


We will use the Tomcat Valve and access log API to implement a custom valve containing the logic to send access logs to Kafka.

 

  • Create Kafka Topic
  • Create Custom Tomcat Access Logs Valve
  • Deploy Custom Tomcat Access Logs Valve
  • Configure Custom Access Logs Valve in server.xml
  • Validate Implementation

 




 

 

Prerequisite


Setup Zookeeper Cluster


http://www.liferaysavvy.com/2021/07/setup-zookeeper-cluster.html



Setup Kafka Cluster


http://www.liferaysavvy.com/2021/07/setup-kafka-cluster.html



Install Liferay Cluster


http://www.liferaysavvy.com/2021/07/centralized-logging-for-liferay-portal.html



 

Start Zookeeper Cluster

Start Kafka Cluster


 


 Create Kafka Topic


Open a command prompt and navigate to the bin\windows directory of one of the Kafka brokers. Use the following create-topic command.



 

kafka-topics.bat --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 3 --partitions 3 --topic liferay-tomcat-access-logs

 

 

We should pass all zookeeper cluster nodes in the options.




 

List topics


Make sure topic successfully created.


 

kafka-topics.bat --zookeeper localhost:2181,localhost:2182,localhost:2183 --list

 

 




Create Custom Tomcat Access Logs Valve


Creating a custom access valve is very simple: we just need to override the log() method of “AbstractAccessLogValve”. We use the Kafka client library to send the message to Kafka.


KafkaAccessLogValve.java



package com.liferaysavvy.kafka.accesslog;

 


import com.liferaysavvy.kafka.accesslog.producer.KafkaMessageSender;

import org.apache.catalina.valves.AbstractAccessLogValve;

import org.apache.juli.logging.Log;

import org.apache.juli.logging.LogFactory;

 

import java.io.CharArrayWriter;

 

public class KafkaAccessLogValve extends AbstractAccessLogValve {

    private static final Log log = LogFactory.getLog(KafkaAccessLogValve.class);

    @Override

    public void log(CharArrayWriter message) {

        try {

 

            new Thread(() -> new KafkaMessageSender().sendMessage(message.toString())).start();


        } catch (Exception e) {

            log.error("Access logs are not sending to Kafka",e);

        }

 

    }

}


 


KafkaMessageSender.java



package com.liferaysavvy.kafka.accesslog.producer;

 

import com.liferaysavvy.kafka.accesslog.config.KafkaConfig;

import com.liferaysavvy.kafka.accesslog.constants.KafkaConstants;

import org.apache.kafka.clients.producer.Producer;

import org.apache.kafka.clients.producer.ProducerRecord;

 

public class KafkaMessageSender {

    public void sendMessage(String message) {

        final Producer<String, String> kafkaProducer = KafkaConfig.getProducer();

        ProducerRecord<String, String> record = new ProducerRecord<String, String>(KafkaConstants.TOPIC, message);

        kafkaProducer.send(record);

        kafkaProducer.flush();

        kafkaProducer.close();

    }

}
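Note that this sketch opens, flushes, and closes a producer for every access-log line, which keeps the example simple but is expensive under load. A variation (hypothetical, not part of the linked repository) could hold one producer for the JVM lifetime:

package com.liferaysavvy.kafka.accesslog.producer;

import com.liferaysavvy.kafka.accesslog.config.KafkaConfig;
import com.liferaysavvy.kafka.accesslog.constants.KafkaConstants;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaMessageSender {

    // Hypothetical variation: reuse one producer for the JVM lifetime
    // instead of creating and closing a producer per log line.
    private static final Producer<String, String> PRODUCER = KafkaConfig.getProducer();

    public void sendMessage(String message) {
        // Fire-and-forget send; the producer batches and flushes in the background.
        PRODUCER.send(new ProducerRecord<>(KafkaConstants.TOPIC, message));
    }
}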


 


KafkaConstants.java



package com.liferaysavvy.kafka.accesslog.constants;

public final class KafkaConstants {

    private KafkaConstants(){}

    public static final String TOPIC = "liferay-tomcat-access-logs";

    // Kafka Brokers

    public static final String BOOTSTRAP_SERVERS = "localhost:9092, localhost:9093, localhost:9094";

}


 


 KafkaConfig.java



package com.liferaysavvy.kafka.accesslog.config;

import com.liferaysavvy.kafka.accesslog.constants.KafkaConstants;

import org.apache.kafka.clients.producer.KafkaProducer;

import org.apache.kafka.clients.producer.Producer;

import org.apache.kafka.clients.producer.ProducerConfig;

import org.apache.kafka.common.serialization.StringSerializer;

 

import java.util.Properties;

 

public final class KafkaConfig {

 

    private KafkaConfig() {}

 

    public static Producer<String, String> getProducer() {

        Properties properties = new Properties();

        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.BOOTSTRAP_SERVERS);

        properties.put(ProducerConfig.CLIENT_ID_CONFIG, "TomcatKafkaAccessLog");

        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        return new KafkaProducer<>(properties);

    }

}


 


Deploy Custom Tomcat Access Logs Valve


Get the source code from the link below and build the Maven project. It will generate a JAR artifact; copy the generated JAR file to the tomcat/lib directory.


https://github.com/LiferaySavvy/tomcat-accesslog-kafka-producer


 

mvn clean install

 



Deploy jar file in every tomcat in the cluster.

 

Liferay-Node1




 

Liferay-Node2




 


Configure Custom Access Logs Valve in “server.xml”


Navigate to the Tomcat conf directory and update the server.xml file with the custom valve configuration. Repeat the same for every node in the Liferay cluster.


 

<Valve className="com.liferaysavvy.kafka.accesslog.KafkaAccessLogValve" pattern="%h %l %u %t &quot;%r&quot; %s %b" />


 



 

Validate Implementation


 

Start Liferay Cluster


 

Start a Kafka consumer on “liferay-tomcat-access-logs”.


Open a command prompt and navigate to the Kafka bin\windows directory. Use the following command to start a consumer.


 

kafka-console-consumer.bat --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic liferay-tomcat-access-logs --from-beginning

 

 





We can see Liferay tomcat access logs in Kafka Consumer.





Use Kibana to monitor and analyze the access logs and build dashboards for them. The Kafka topic needs to be configured in the Logstash input so that the logs become available to Kibana.
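In the Logstash pipeline from that article, this only means adding the access-log topic to the kafka input’s topics list, for example:

topics => ["liferay-kafka-logs", "liferay-tomcat-access-logs"]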


Follow the below article to use Kibana for Logs monitoring.

 

http://www.liferaysavvy.com/2021/07/liferay-portal-logs-monitoring-with-elkk.html

 



Kafka Monitoring with Prometheus


Prometheus is an open-source monitoring tool. In a previous article we enabled Zookeeper metrics and monitored them in Prometheus.


http://www.liferaysavvy.com/2021/07/enable-metrics-provider-in-zookeeper.html

 

This article demonstrates Kafka monitoring using Prometheus. Prometheus uses the JMX Exporter agent to get all JVM metrics from Kafka.


We need to run the JMX Exporter Java agent on each server where Kafka is running.


Software and Tools


 

Windows 10

Java 1.8 or higher

Zookeeper 3.7.0

Kafka 2.8

JMX Exporter Java Agent 0.15.0

prometheus-2.28.1

 

 







Prerequisite



Set up Zookeeper Cluster


http://www.liferaysavvy.com/2021/07/setup-zookeeper-cluster.html

 

Set up Kafka Cluster


http://www.liferaysavvy.com/2021/07/setup-kafka-cluster.html

 


  • Download JMX Exporter
  • Configure JMX Exporter for Kafka
  • Start Kafka with JMX Exporter Agent
  • Install Prometheus
  • Configure Prometheus scrape for Kafka
  • Verify Kafka brokers in Prometheus

 


Download and Start JMX Exporter



Download JMX Exporter jar file from following location.


https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/

 

Direct link for latest JMX Exporter JAR


https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.15.0/jmx_prometheus_javaagent-0.15.0.jar

 

OR


https://github.com/LiferaySavvy/kafka-monitoring/raw/master/jmx_prometheus_javaagent-0.15.0.jar

 

 

Download the JAR to a local directory.




 

Configure JMX Exporter for Kafka



We need to configure the exporter for Kafka in such a way that the JMX Exporter exposes the required metrics from Kafka.


Get the JMX Exporter configuration file from the location below and place it on a local drive.


https://github.com/confluentinc/jmx-monitoring-stacks/blob/6.1.0-post/shared-assets/jmx-exporter/kafka_broker.yml


OR


https://github.com/LiferaySavvy/kafka-monitoring/blob/master/kafka_broker.yml

 

We can also use the configuration below.


 

lowercaseOutputName: true

rules:

# Special cases and very specific rules

- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value

  name: kafka_server_$1_$2

  type: GAUGE

  labels:

    clientId: "$3"

    topic: "$4"

    partition: "$5"

- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value

  name: kafka_server_$1_$2

  type: GAUGE

  labels:

    clientId: "$3"

    broker: "$4:$5"

 

- pattern : kafka.server<type=KafkaRequestHandlerPool, name=RequestHandlerAvgIdlePercent><>OneMinuteRate

  name: kafka_server_kafkarequesthandlerpool_requesthandleravgidlepercent_total

  type: GAUGE

 

- pattern : kafka.server<type=socket-server-metrics, clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections

  name: kafka_server_socketservermetrics_connections

  type: GAUGE

  labels:

    client_software_name: "$1"

    client_software_version: "$2"

    listener: "$3"

    network_processor: "$4"

 

- pattern : 'kafka.server<type=socket-server-metrics, listener=(.+), networkProcessor=(.+)><>(.+):'

  name: kafka_server_socketservermetrics_$3

  type: GAUGE

  labels:

    listener: "$1"

    network_processor: "$2"

 

# Count and Value

- pattern: kafka.(.*)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>(Count|Value)

  name: kafka_$1_$2_$3

  labels:

    "$4": "$5"

    "$6": "$7"

- pattern: kafka.(.*)<type=(.+), name=(.+), (.+)=(.+)><>(Count|Value)

  name: kafka_$1_$2_$3

  labels:

    "$4": "$5"

- pattern: kafka.(.*)<type=(.+), name=(.+)><>(Count|Value)

  name: kafka_$1_$2_$3

 

# Percentile

- pattern: kafka.(.*)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile

  name: kafka_$1_$2_$3

  type: GAUGE

  labels:

    "$4": "$5"

    "$6": "$7"

    quantile: "0.$8"

- pattern: kafka.(.*)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile

  name: kafka_$1_$2_$3

  type: GAUGE

  labels:

    "$4": "$5"

    quantile: "0.$6"

- pattern: kafka.(.*)<type=(.+), name=(.+)><>(\d+)thPercentile

  name: kafka_$1_$2_$3

  type: GAUGE

  labels:

    quantile: "0.$4"

 

 



 

Start Kafka with JMX Exporter Agent



The JMX Exporter agent must be started with Kafka. Set the JMX Exporter Java agent in KAFKA_OPTS.


KAFKA_OPTS can be set in different ways; the example below sets it directly in a Windows command prompt before starting Kafka.


JAVA agent syntax


 

-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.15.0.jar=<port>:<exporter-config-file-path>




Set KAFKA_OPTS on Windows as follows:


 

set KAFKA_OPTS=-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.15.0.jar=8181:C:/kafka-workspace/kafka-monitoring/kafka_broker.yml

 

 

Kafka Broker1 Startup


Open a command prompt, navigate to the kafka-broker1 root directory, and use the following commands.


 

cd C:\kafka-workspace\kafka-broker1

 

set KAFKA_OPTS=-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.15.0.jar=8181:C:/kafka-workspace/kafka-monitoring/kafka_broker.yml

 

bin\windows\kafka-server-start.bat .\config\server.properties

 

 




Kafka broker1 is now started with the JMX Exporter agent.

 

Repeat the same for other brokers in the cluster


Kafka Broker2 Startup



 

cd C:\kafka-workspace\kafka-broker2

 

set KAFKA_OPTS=-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.15.0.jar=8282:C:/kafka-workspace/kafka-monitoring/kafka_broker.yml

 

bin\windows\kafka-server-start.bat .\config\server.properties

 

 


Kafka Broker3 Startup



 

cd C:\kafka-workspace\kafka-broker3

 

set KAFKA_OPTS=-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.15.0.jar=8383:C:/kafka-workspace/kafka-monitoring/kafka_broker.yml

 

bin\windows\kafka-server-start.bat .\config\server.properties

 

 


Make sure all JMX Exporter agents started successfully using the following URLs. Since all the Java agents run on the same machine, each one needs a different port.



 

http://localhost:8181/

http://localhost:8282/

http://localhost:8383/

 

 


 



 

Install Prometheus



Go to the Prometheus download page and download the latest version.


https://prometheus.io/download/



The direct link is as follows:


https://github.com/prometheus/prometheus/releases/download/v2.28.1/prometheus-2.28.1.windows-amd64.zip

 

Extract it to a local drive.





Configure Prometheus scrape for Kafka



Navigate to the Prometheus directory and update the “prometheus.yml” file with the following Kafka scrape configuration.


The full file can be found at the following location.


https://github.com/LiferaySavvy/kafka-monitoring/blob/master/prometheus.yml

 



  - job_name: "kafka"

    static_configs:

      - targets: ['localhost:8181','localhost:8282','localhost:8383']

        labels:

          env: "kafka-dev"

 


 

The targets should be the JMX Exporter Java agent host:port pairs.

 



Complete “prometheus.yml” file



# my global config

global:

  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

  # scrape_timeout is set to the global default (10s).

 

# Alertmanager configuration

alerting:

  alertmanagers:

  - static_configs:

    - targets:

      # - alertmanager:9093

 

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.

rule_files:

  # - "first_rules.yml"

  # - "second_rules.yml"

 

# A scrape configuration containing exactly one endpoint to scrape:

# Here it's Prometheus itself.

scrape_configs:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: 'prometheus'

 

    # metrics_path defaults to '/metrics'

    # scheme defaults to 'http'.

 

    static_configs:

    - targets: ['localhost:9090']

   

  - job_name: "kafka"

    static_configs:

      - targets: ['localhost:8181','localhost:8282','localhost:8383']

        labels:

          env: "kafka-dev"

 

 

Start “Prometheus”



Open a command prompt and navigate to the “Prometheus” root directory. Use the following start command, passing the web listen address and config file as options.



 

prometheus.exe --config.file prometheus.yml --web.listen-address ":9090" --storage.tsdb.path "data"

 

 



Verify Kafka brokers in Prometheus



Access the “Prometheus” web interface with the following URL; it runs on port 9090.


 

http://localhost:9090/

 

 




 

Targets Status



Go to the Status menu and click on Targets to see the health of each Kafka broker in the cluster.




 

Kafka brokers’ health in the cluster.

 

 

http://localhost:9090/targets

 

 

 

 



 

Reference


https://github.com/confluentinc/jmx-monitoring-stacks


https://github.com/confluentinc/jmx-monitoring-stacks/tree/6.1.0-post/shared-assets/jmx-exporter


http://www.liferaysavvy.com/2021/07/enable-metrics-provider-in-zookeeper.html

 




 

 


Grafana Installation on Windows


Grafana is an open-source analytics and visualization solution providing charts, graphs, and alerts. It connects to multiple data sources, fetches metrics data, and monitors it in Grafana visualization dashboards.


It can be integrated with the Prometheus monitoring tool to create dashboards and monitor alerts.



Download and Extract


Go to the Grafana download page and download the Grafana binary zip for Windows.



https://grafana.com/grafana/download?platform=windows



 Direct Link is below



https://dl.grafana.com/oss/release/grafana-8.0.6.windows-amd64.zip

 


Extract downloaded zip file in local drive.

 





Start Grafana



Open a command prompt, navigate to the Grafana bin directory, and start Grafana by executing “grafana-server.exe”.

 




 

The logs show the startup details; Grafana starts on port 3000.

 





 

Access Grafana with below URL

 

 

 

http://localhost:3000/

 

 

 


 

Use default credentials to login to Grafana

 

 

User Name:  admin

Password:     admin

 

 







Prometheus installation on Windows


Prometheus is an open-source system monitoring and alerting toolkit.



Download and Extract

 


Go to the Prometheus download page and download the latest version.

 


https://prometheus.io/download/

 


The direct link is as follows:

 


https://github.com/prometheus/prometheus/releases/download/v2.28.1/prometheus-2.28.1.windows-amd64.zip

 


Extract it to a local drive.





Start “Prometheus”



Open a command prompt and navigate to the “Prometheus” root directory. Use the following start command, passing the web listen address and the default Prometheus yml config file as options.

 




 

 


prometheus.exe --config.file prometheus.yml --web.listen-address ":9090" --storage.tsdb.path "data"


 

 




 

Verify Installation



Access the “Prometheus” web interface with the following URL; it runs on port 9090.


 

 

http://localhost:9090/

 

 

 


 



 

Targets Status



Go to the Status menu and click on Targets to see the Prometheus health.

 




 

Prometheus health.

 

 


http://localhost:9090/targets

 


 

 






 

 

Kafka Cluster Monitoring with Prometheus and Grafana


Grafana is a popular open-source solution for monitoring applications. It provides graphical dashboards for building monitoring visualizations.


Any graphical view needs data, and here the metrics data is provided by Prometheus.


Prometheus is another monitoring tool; it pulls data from different applications with the help of the JMX Exporter agent.


Grafana is able to connect to Prometheus, pull the metrics data, and represent it graphically. We can build rich dashboards in the Grafana web UI to show metrics data in different graphical views.

 






 

Prerequisite



Setup a Zookeeper Cluster


http://www.liferaysavvy.com/2021/07/setup-zookeeper-cluster.html

 


Setup a Kafka Cluster


http://www.liferaysavvy.com/2021/07/setup-kafka-cluster.html

 


The following are the steps to demonstrate Kafka cluster monitoring.

 

Setup Prometheus for Kafka Cluster

Install Grafana

Configure Prometheus Data source in Grafana

Create Kafka Cluster Dashboard

 


Setup Prometheus for Kafka Cluster


Follow the article below to set up Prometheus for the Kafka cluster.


http://www.liferaysavvy.com/2021/07/kafka-monitoring-with-prometheus.html


 

Install Grafana



Follow the below article to install Grafana on windows.

 

http://www.liferaysavvy.com/2021/07/grafana-installation-on-windows.html



Configure Prometheus Data source in Grafana



Now it’s time to configure the Prometheus data source in Grafana. Prometheus already has the metrics data pulled from Kafka with the help of the JMX Exporter Java agent; that was covered in the previous step.

 

Access Grafana Web UI with following URL

 

 

 

http://localhost:3000/

 

 

 

On the home page, click on Settings and, under Configuration, click on “Data sources”.

 



 

Click on Add Data source.

 



 

Select Prometheus Data source in the List.

 



 

Provide the Prometheus URL where it is running; the default port is 9090. Once the required information is provided, click on Save & Test. Grafana is then successfully connected to the Prometheus data source.

 


 

http://localhost:9090/

 

 




 

Create Kafka Cluster Dashboard



On the Grafana home page, click on the Dashboards icon and then on Manage.

 





There are freely available Grafana dashboards for Kafka. Creating a Grafana dashboard is very easy; we just need to import a dashboard JSON file.

 


Go to the following URL and save the Grafana Kafka overview dashboard file locally.

 


https://github.com/LiferaySavvy/kafka-monitoring/blob/master/kafka-overview.json

 

OR

 


https://github.com/confluentinc/jmx-monitoring-stacks/blob/6.0.1-post/jmxexporter-prometheus-grafana/assets/prometheus/grafana/provisioning/dashboards/kafka-overview.json


 

Import “kafka-overview.json” file into Grafana dashboard.

 


Click on import button.

 




 

Click on Upload JSON file button and Select “kafka-overview.json” file from local drive.




 


Once the file is selected, click on Import; the dashboard is then imported into Grafana.

 








Very important: the job name in the Prometheus yml file (prometheus.yml) and in the Grafana dashboard JSON file (kafka-overview.json) must be the same.

 





 

Go to Dashboards in the Grafana home page and select Kafka Overview Dashboard.

 


We can see the dashboard with many panels, where all metrics are represented as graphs.

 


Dashboard Screen: 1

 




 

Dashboard Screen: 2

 




 

Dashboard Screen: 3

 




 

Dashboard Screen: 4

 










Liferay Portal Logs Monitoring with PLG


PLG is a Grafana Labs stack similar to the ELK stack. PLG is the combination of Promtail, Loki, and Grafana.

 


P ---> Promtail

L ---> Loki

G ---> Grafana



Promtail


Promtail is an independent agent that runs on every server and sends logs to Loki.

 


Loki


Grafana Loki is a set of components that can be composed into a fully featured logging stack. It is a log aggregation service that collects the logs from Promtail.

 


Grafana


Grafana is an open-source analytics and visualization solution providing charts, graphs, and alerts. It connects to multiple data sources, fetches metrics data, and monitors it in Grafana visualization dashboards. It also has the ability to monitor application logs.

 




 

The following are the steps to implement a log monitoring solution for Liferay Portal.

 


  • Install Promtail and Configure log scraps
  • Install Loki and setup Loki configuration file
  • Install Grafana
  • Configure Loki Data source in Grafana
  • Explore logs in Grafana

 



Prerequisite



Install Liferay Cluster


http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html

 



Install Promtail and Configure log scraps



Go to Grafana Loki release page and download the latest Promtail zip file.


https://github.com/grafana/loki/releases/

 


Direct Link is below


https://github.com/grafana/loki/releases/download/v2.2.1/promtail-windows-amd64.exe.zip

 


Extract the downloaded zip file locally.




Get the default Promtail configuration file “promtail-local-config.yaml” from the URL below and update the scrape configs.

 


https://github.com/LiferaySavvy/liferay-logs-monitoring-plg/blob/master/promtail-local-config.yaml


OR

 

https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/promtail/promtail-local-config.yaml


We need to define jobs and log file locations so that Promtail can pull the logs from the log files and send them to Loki.

 


The configuration yaml file is as follows.


 


server:

  http_listen_port: 7060

  grpc_listen_port: 0

 

positions:

  filename: /tmp/positions.yaml

 

clients:

  - url: http://localhost:3100/loki/api/v1/push

 

scrape_configs:

- job_name: grafana

  static_configs:

  - targets:

      - grafana

    labels:

      job: grafana

      __path__: "C:/kafka-workspace/kafka-monitoring/grafana-8.0.6/data/log/grafana.log"

 

- job_name: liferay-node1

  static_configs:

  - targets:

      - liferay-node1

    labels:

      job: liferay-node1

      __path__: "C:/Liferay/Liferay74/liferay-ce-portal-7.4.1-ga2-node1/logs/*.log"

     

- job_name: liferay-node2

  static_configs:

  - targets:

      - liferay-node2

    labels:

      job: liferay-node2

      __path__: "C:/Liferay/Liferay74/liferay-ce-portal-7.4.1-ga2-node2/logs/*.log"

 


 



 


Since we are running a two-node Liferay cluster, we configured two jobs. We also specify the Loki endpoint, which runs on the default port 3100.


Promtail should run on every server, with the log location and file pattern configured in its yaml configuration file.

 


Start Promtail service


Open a command prompt, navigate to the Promtail root directory, and use the command below.


 

promtail-windows-amd64.exe --config.file=promtail-local-config.yaml


 


The configuration file is provided as an input option. Make sure the service started successfully; the startup logs on the console give more details.

 







Install Loki and setup Loki configuration file

 


Go to the Grafana Loki release page and download the latest Loki zip file.


https://github.com/grafana/loki/releases/

 


Direct Link is below


https://github.com/grafana/loki/releases/download/v2.2.1/loki-windows-amd64.exe.zip

 


Extract the downloaded zip file locally.




 

Get the default Loki configuration file “loki-local-config.yaml” from the URL below.

 


https://github.com/LiferaySavvy/liferay-logs-monitoring-plg/blob/master/loki-local-config.yaml

 


OR

 


https://raw.githubusercontent.com/grafana/loki/v2.2.1/cmd/loki/loki-local-config.yaml


Make sure there are no port conflicts; Loki uses the default port 3100. If you change the Loki port, the Promtail configuration file must be updated accordingly.


Loki configuration yaml file.




auth_enabled: false

 

server:

  http_listen_port: 3100

  grpc_listen_port: 9096

 

ingester:

  wal:

    enabled: true

    dir: /tmp/wal

  lifecycler:

    address: 127.0.0.1

    ring:

      kvstore:

        store: inmemory

      replication_factor: 1

    final_sleep: 0s

  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed

  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h

  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first

  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)

  max_transfer_retries: 0     # Chunk transfers disabled

 

schema_config:

  configs:

    - from: 2020-10-24

      store: boltdb-shipper

      object_store: filesystem

      schema: v11

      index:

        prefix: index_

        period: 24h

 

storage_config:

  boltdb_shipper:

    active_index_directory: /tmp/loki/boltdb-shipper-active

    cache_location: /tmp/loki/boltdb-shipper-cache

    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space

    shared_store: filesystem

  filesystem:

    directory: /tmp/loki/chunks

 

compactor:

  working_directory: /tmp/loki/boltdb-shipper-compactor

  shared_store: filesystem

 

limits_config:

  reject_old_samples: true

  reject_old_samples_max_age: 168h

 

chunk_store_config:

  max_look_back_period: 0s

 

table_manager:

  retention_deletes_enabled: false

  retention_period: 0s

 

ruler:

  storage:

    type: local

    local:

      directory: /tmp/loki/rules

  rule_path: /tmp/loki/rules-temp

  alertmanager_url: http://localhost:9093

  ring:

    kvstore:

      store: inmemory

  enable_api: true

 


 



 

Start Loki


Open a command prompt, navigate to the Loki exe file location on the local drive, and use the following command.


 

loki-windows-amd64.exe --config.file=loki-local-config.yaml


 



 


Once the service is up, it is accessible on port 3100 with the URL below.


 

http://localhost:3100/metrics


 





It confirms that Loki is installed successfully.

 



Install Grafana



Now it’s time to install Grafana and configure the Loki data source in Grafana.


Follow the below URL to install Grafana.


http://www.liferaysavvy.com/2021/07/grafana-installation-on-windows.html

 



Configure Loki Data source in Grafana



To explore logs in Grafana, we need to configure the Loki data source.


Access Grafana with following URL

 

 

http://localhost:3000/


 

Go to setting -->  Configuration --> Data sources




 

Click Add data source




Select Loki Data source


 



 

Provide the Loki URL in the configuration. Click on the Save and Test button; the Loki data source is then configured in Grafana.


 

http://localhost:3100


 



 

Explore logs in Grafana



Go to the Grafana home page, click on Explore, and select Loki in the explore list.

 




 

 

Click on Log Browser -->  Select Jobs --> Click on Show logs.

 

 



 

 

Now we can see the Liferay logs, and we can also stream them live.
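The Explore view also accepts LogQL queries directly, so logs can be narrowed by label and content; for example (queries illustrative, using the job labels configured in Promtail):

{job="liferay-node1"}

{job="liferay-node2"} |= "ERROR"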

 

Logs Screen: 1




 

Logs Screen: 2







Liferay Portal Monitoring with Prometheus


Prometheus is a popular open-source monitoring tool. It can monitor applications with the help of the JMX Exporter Java agent.


This article demonstrates Liferay Portal server monitoring with Prometheus.



Prerequisite


Install Liferay Cluster

 


http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html

 



  • Download JMX Exporter
  • Configure JMX Exporter for Liferay Portal Servers
  • Start Liferay Portal with JMX Exporter Agent
  • Install Prometheus
  • Configure Prometheus scrape for Liferay Portal Servers
  • Verify Liferay Cluster in Prometheus

 


 

Download JMX Exporter



Download JMX Exporter jar file from following location.


https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/



Direct link for latest JMX Exporter JAR


https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar

 

OR


https://github.com/LiferaySavvy/liferay-monitoring-prometheus/raw/master/jmx_prometheus_javaagent-0.16.1.jar


 

Download the jar to a local directory.

 




 



Configure JMX Exporter for Liferay Portal Server



We need to configure the exporter for the Liferay Portal server so that the JMX agent exports the required metrics.


Get the JMX Exporter configuration file from the location below and place it on a local drive.



https://github.com/prometheus/jmx_exporter/blob/master/example_configs/tomcat.yml

 

OR


https://github.com/LiferaySavvy/liferay-monitoring-prometheus/blob/master/tomcat.yml

 


We can use the configuration below too.



 

 

lowercaseOutputLabelNames: true

lowercaseOutputName: true

rules:

- pattern: 'Catalina<type=GlobalRequestProcessor, name=\"(\w+-\w+)-(\d+)\"><>(\w+):'

  name: tomcat_$3_total

  labels:

    port: "$2"

    protocol: "$1"

  help: Tomcat global $3

  type: COUNTER

- pattern: 'Catalina<j2eeType=Servlet, WebModule=//([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), name=([-a-zA-Z0-9+/$%~_-|!.]*), J2EEApplication=none, J2EEServer=none><>(requestCount|maxTime|processingTime|errorCount):'

  name: tomcat_servlet_$3_total

  labels:

    module: "$1"

    servlet: "$2"

  help: Tomcat servlet $3 total

  type: COUNTER

- pattern: 'Catalina<type=ThreadPool, name="(\w+-\w+)-(\d+)"><>(currentThreadCount|currentThreadsBusy|keepAliveCount|pollerThreadCount|connectionCount):'

  name: tomcat_threadpool_$3

  labels:

    port: "$2"

    protocol: "$1"

  help: Tomcat threadpool $3

  type: GAUGE

- pattern: 'Catalina<type=Manager, host=([-a-zA-Z0-9+&@#/%?=~_|!:.,;]*[-a-zA-Z0-9+&@#/%=~_|]), context=([-a-zA-Z0-9+/$%~_-|!.]*)><>(processingTime|sessionCounter|rejectedSessions|expiredSessions):'

  name: tomcat_session_$3_total

  labels:

    context: "$2"

    host: "$1"

  help: Tomcat session $3 total

  type: COUNTER

 

 




 



Start Liferay Portal with JMX Exporter Agent



The JMX Exporter agent must start together with Liferay Portal. Set the JMX Exporter Java agent in JAVA_OPTS in the setenv.bat file of Liferay Portal.


Java agent syntax


 

-javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.16.1.jar=<port>:<exporter-config-file-path>

 



Set JAVA_OPTS in the Liferay setenv.bat file: navigate to the Tomcat bin directory and update the setenv.bat file with the following JAVA_OPTS. This needs to be updated on all Liferay Portal servers in the cluster.


 

Liferay Portal Node1


https://github.com/LiferaySavvy/liferay-monitoring-prometheus/blob/master/setenv-node1.bat

 


 

set "JAVA_OPTS=%JAVA_OPTS% -javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.16.1.jar=7171:C:/kafka-workspace/kafka-monitoring/tomcat.yml"

 

 

 



 

Liferay Portal Node2



https://github.com/LiferaySavvy/liferay-monitoring-prometheus/blob/master/setenv-node2.bat

 

 

 

set "JAVA_OPTS=%JAVA_OPTS% -javaagent:C:/kafka-workspace/kafka-monitoring/jmx_prometheus_javaagent-0.16.1.jar=7272:C:/kafka-workspace/kafka-monitoring/tomcat.yml"

 

 






Start Liferay Node1



Open a command prompt, navigate to the Liferay Portal Tomcat bin directory, and use the following commands.



 

cd C:\Liferay\Liferay74\liferay-ce-portal-7.4.1-ga2-node1\tomcat-9.0.43\bin

startup.bat

 

 






Start Liferay Node2


Open a command prompt, navigate to the Liferay Portal Tomcat bin directory, and use the following commands.

 


 

cd C:\Liferay\Liferay74\liferay-ce-portal-7.4.1-ga2-node2\tomcat-9.0.43\bin

startup.bat

 

 

 





Now the Liferay Portal servers are started with the JMX Exporter agent.


 

Make sure all JMX Exporters started successfully; the metrics are accessible with the following URLs. Since all Java agents run on the same machine, each must use a different port.

 


 

 http://localhost:7171/metrics

 

http://localhost:7272/metrics

 

 




 


Install Prometheus

 


Follow the URL below to install Prometheus.


http://www.liferaysavvy.com/2021/07/prometheus-installation-on-windows.html

 



Configure Prometheus scrape for Liferay Portal Servers

 


Navigate to the Prometheus directory and update the “prometheus.yml” file with the following Liferay Portal server scrape configuration.

 

Find the file at the following location.


https://github.com/LiferaySavvy/liferay-monitoring-prometheus/blob/master/prometheus.yml

 

 


  

  - job_name: 'liferay'

    static_configs:

    - targets: ['localhost:7171','localhost:7272']

      labels:

          env: "liferay-dev"

 

Targets should be JMX Exporter java agent host:port.

 



Complete “prometheus.yml” file

 



# my global config

global:

  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.

  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

  # scrape_timeout is set to the global default (10s).

 

# Alertmanager configuration

alerting:

  alertmanagers:

  - static_configs:

    - targets:

      # - alertmanager:9093

 

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.

rule_files:

  # - "first_rules.yml"

  # - "second_rules.yml"

 

# A scrape configuration containing exactly one endpoint to scrape:

# Here it's Prometheus itself.

scrape_configs:

  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.

  - job_name: 'prometheus'

 

    # metrics_path defaults to '/metrics'

    # scheme defaults to 'http'.

 

    static_configs:

    - targets: ['localhost:9090']

   

  - job_name: 'liferay'

    static_configs:

    - targets: ['localhost:7171','localhost:7272']

      labels:

          env: "liferay-dev"

         

                    

         

 

Start “Prometheus”



Open a command prompt and navigate to the “Prometheus” root directory. Use the following start command and pass the web listen address and config file as options.



 

prometheus.exe --config.file prometheus.yml --web.listen-address ":9090" --storage.tsdb.path "data"

 

 

 



Verify Liferay Cluster in Prometheus



Access the “Prometheus” web interface with the following URL; it runs on port 9090.

 


 

http://localhost:9090/

 

 

 



Targets Status



Go to the Status menu and click on Targets to see the health of each Liferay cluster node.





 

We can see the Liferay Portal cluster health in Prometheus.

 


 

http://localhost:9090/targets

 

 

 

 

 





Author

 

 

 

 

Liferay Portal Monitoring with Prometheus and Grafana


Grafana is a popular open-source solution for monitoring applications. It provides graphical dashboards to build monitoring visualizations.

 


Any graphical view requires data or metrics; here the metrics data is provided by Prometheus.

 


Prometheus is a monitoring tool that pulls data from different applications with the help of the JMX Exporter agent.

 


Grafana can connect to Prometheus, pull the metrics data, and represent it graphically. We can build nice dashboards that show the metrics data in different graphical views in the Grafana web UI.







Prerequisite



Setup a Liferay Cluster



http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html

 




The following steps demonstrate Liferay cluster monitoring.

 


  • Setup Prometheus for Liferay Cluster
  • Install Grafana
  • Configure Prometheus Data source in Grafana
  • Create Liferay Cluster Dashboard

 



Setup Prometheus for Liferay Cluster



Follow the article below to set up Prometheus for the Liferay cluster.



http://www.liferaysavvy.com/2021/07/liferay-portal-monitoring-with.html

 


After configuring the JMX Exporter for Liferay Portal, start all servers in the cluster.

 




Install Grafana

 


Follow the article below to install Grafana on Windows.

 

http://www.liferaysavvy.com/2021/07/grafana-installation-on-windows.html

 




Configure Prometheus Data source in Grafana



Now it’s time to configure the Prometheus data source in Grafana. Prometheus already has the metrics data pulled from the Liferay Portal servers with the help of the JMX Exporter Java agent; that was covered in the previous step.

 


Access the Grafana web UI with the following URL.

 


 

 

http://localhost:3000/

 

 

 


On the home page, click on Settings and, under Configuration, click on “Data sources”.

 




 

Click on Add Data source.

 





 

Select Prometheus Data source in the List.

 





 

Provide the Prometheus URL where it is running; the default port is 9090. Once the required information is provided, click on the Save & Test button. Grafana is now connected to the Prometheus data source.

 


 

http://localhost:9090/

 

 




 





Create Liferay Cluster Dashboard



On the Grafana home page, click on the Dashboards icon and click on Manage.

 




 

There are freely available Grafana dashboards for Tomcat. Creating a Grafana dashboard is very easy; we just need to import a dashboard JSON file.

 


Go to the following URL and download the Grafana Tomcat dashboard file locally.

 

https://github.com/LiferaySavvy/liferay-monitoring-prometheus/blob/master/tomcat-dashboard_rev10.json

 

 

OR

 

https://grafana.com/api/dashboards/8704/revisions/10/download

 


Import “tomcat-dashboard_rev10.json” file into Grafana dashboard.

 


Click on import button.

 




 


Click on Upload JSON file button and Select “tomcat-dashboard_rev10.json” file from local drive.





 

Once the file is selected, click on Import, and the dashboard will be imported into Grafana.

 




 

 



 

 


Go to Dashboards in the Grafana home page and select Tomcat Dashboard.

 




 

We can see the dashboard with many panels; all metrics are represented as graphs.

 


Dashboard Screen: 1

 




 

 


We can edit the dashboard based on the metrics provided by Prometheus or the JMX Exporter.

 


We can see all metrics with the following JMX URLs; these metric attribute keys match the ones in the Grafana dashboard JSON file. This is how Grafana builds dashboards based on JMX metrics.

 


 

 

http://localhost:7171/metrics

 

http://localhost:7272/metrics

 

 

 




Author

 

Configure Remote Elasticsearch Cluster in Liferay Cluster


Liferay 7.x uses Elasticsearch, and it is embedded in the bundle. When we start Liferay, the embedded Elasticsearch instance also starts.



For development environments this embedded search is OK, but production environments need an external Elasticsearch cluster.

 


If we are building a Liferay cluster, all instances should connect to the same Elasticsearch cluster, so we must have an external/remote Elasticsearch.

 


This article demonstrates configuring remote Elasticsearch in Liferay.

 


From Liferay DXP 7.3 and CE 7.3 GA4+, the Elasticsearch connector is included in the bundle; previous versions require the Elasticsearch connector available in the Liferay Marketplace.

 


This demonstration uses Liferay 7.4 GA2, so the Elasticsearch connector is already included in the bundle.

 


Software Stack



 

Elasticsearch-7.13.3

Liferay-ce-portal-7.4.1-ga2

 

 

 


 

The following are the steps to configure external/remote Elasticsearch for Liferay.

 


  • Install Elastic Cluster
  • Install Liferay Cluster.
  • Configure Elasticsearch in Liferay

 


 

Install Elastic Cluster

 


Follow the below article to install Elasticsearch Cluster

 


http://www.liferaysavvy.com/2021/07/install-elastic-search-cluster.html

 



Install Liferay Cluster

 


Follow the below article to install Liferay Cluster

 


http://www.liferaysavvy.com/2021/07/liferay-portal-apache-webserver.html



 

Configure Elasticsearch in Liferay

 


There are two ways to configure Elasticsearch in Liferay:

 


  • From Control Panel
  • Using OSGi config file

 


From Control Panel



Log in to Liferay Portal as an administrator. Access the Liferay cluster or any one node in the cluster.


 

Access a Liferay node directly using its host and port.


Example:


 

http://localhost:8090/

 

 


Global Menu --> Control Panel --> Configuration --> System Settings

 




 



 

 

 

 

System Setting --> Click on Search


 



 

 

Provide ElasticSearch7 Configuration as follows.



Step:1

 


Click on Elasticsearch Connections and provide the connection ID and cluster host names. Click the + button to add the multiple Elasticsearch hosts in the cluster.

 


 

http://localhost:9200

 

http://localhost:9201

 

http://localhost:9202

 

 

 


 







 

Save the configuration and the Elasticsearch connection ID will be created. There are many configuration properties available; provide the ones your requirements call for.

 

 



 


 

Step: 2



Click on Elasticsearch7 Settings, enable production mode, and select the Elasticsearch connection ID created in the previous step. Save the configuration.

 

 




 


The default search index prefix is liferay-*, so we can see in Elasticsearch that the indexes are created.


Use the URL below to confirm that the indexes are created.



 

http://localhost:9200/_cat/indices?v

 

 






The Elasticsearch connector configuration is stored in the database, so these settings are available to all Liferay nodes in the cluster.

 


Just restart all Liferay nodes in the cluster so that every node connects to the remote Elasticsearch cluster and the indexes are created there. We can also use the Reindex feature in the control panel to index all search values in the Elasticsearch cluster.

 


 

Using OSGi config file

 



We can also use the OSGi configuration file option to connect the Liferay cluster with remote Elasticsearch.


Create the following configuration files and place them in the “osgi/config” directory of each Liferay instance.

 

 

Create a “com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConnectionConfiguration.config” file and add the following properties.

 


 

 

active="true"

authenticationEnabled="false"

connectionId="RemoteElasticSearchCluster"

httpSSLEnabled="false"

networkHostAddresses=[ \

  "http://localhost:9200", \

  "http://localhost:9201", \

  "http://localhost:9202", \

  ]

proxyHost=""

proxyPort="0"

proxyUserName=""

truststorePath="/path/to/localhost.p12"

truststoreType="pkcs12"

username="elastic"

 

 

 

 






 


Create a “com.liferay.portal.search.elasticsearch7.configuration.ElasticsearchConfiguration.config” file and add the following properties.


 

additionalConfigurations=""

additionalIndexConfigurations=""

additionalTypeMappings=""

authenticationEnabled="false"

bootstrapMlockAll="false"

clusterName="LiferayElasticsearchCluster"

discoveryZenPingUnicastHostsPort="9300-9400"

embeddedHttpPort="9201"

httpCORSAllowOrigin="/https?:\\/\\/localhost(:[0-9]+)?/"

httpCORSConfigurations=""

httpCORSEnabled="true"

httpSSLEnabled="false"

indexNamePrefix="liferay-"

indexNumberOfReplicas=""

indexNumberOfShards=""

logExceptionsOnly="true"

networkBindHost=""

networkHost=""

networkHostAddresses=[ \

  "", \

  ]

networkPublishHost=""

nodeName=""

operationMode="REMOTE"

overrideTypeMappings=""

productionModeEnabled="true"

proxyHost=""

proxyPort="0"

proxyUserName=""

remoteClusterConnectionId="RemoteElasticSearchCluster"

restClientLoggerLevel="ERROR"

sidecarDebug="false"

sidecarDebugSettings="-agentlib:jdwp\=transport\=dt_socket,address\=8001,server\=y,suspend\=y,quiet\=y"

sidecarHeartbeatInterval="10000"

sidecarHome="elasticsearch7"

sidecarHttpPort=""

sidecarJVMOptions=[ \

  "-Xms1g", \

  "-Xmx1g", \

  "-XX:+AlwaysPreTouch", \

  ]

sidecarShutdownTimeout="10000"

trackTotalHits="true"

transportTcpPort=""

truststorePath="/path/to/localhost.p12"

truststoreType="pkcs12"

username="elastic"

 

 

 


 



 

 


Liferay Node1


 



 

Liferay Node2

 




 

 

 

Once the configuration files are placed in the osgi/config location of all Liferay nodes, restart all Liferay instances.

 

 

 

 

Author

 

 


Working with Liferay Scheduler


The scheduler is one of the important components in any application: an application needs a scheduler to perform jobs periodically. Liferay Portal provides a scheduler, and we can schedule jobs in portal applications.


Liferay implemented its scheduler on top of the Quartz scheduler engine. Quartz is a popular open-source Java implementation for scheduling jobs.


Liferay built another API on top of native Quartz so that we can easily configure and create schedule jobs in Liferay portal applications.



Software Stack


 

Liferay-ce-portal-7.4.1-ga2

 

 

 






The Liferay Scheduler API uses the Message Bus to delegate jobs to a background process.

 


High-level scheduler implementation in Liferay

 


  • Liferay provides a scheduler API to create a schedule job that is tagged to a specific Message Bus destination.
  • Each destination has one or more message listeners registered with it. A destination may be parallel or serial.
  • The scheduler engine triggers the job at the defined time frame based on a CRON expression.
  • The job is responsible for sending a message (delegating the job) to the Message Bus on the specific destination that was configured as part of scheduling.
  • The listener receives the message posted on the Message Bus destination.
  • Once the listener receives the message, it processes it and executes the further steps.

 


All business logic is implemented in the listener, so it is invoked when the message is posted on the Message Bus. Sending the message to the Message Bus is configured by the schedule job, and the scheduler engine triggers the job based on the cron expression. The Message Bus is a really useful concept for the scheduler to hand a job to a background process; a minimal listener sketch follows.
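
For illustration only, a minimal custom listener might look like the sketch below. The class name and log output are hypothetical, but the MessageListener interface and the receive(Message) method are the same ones implemented by the DummyMessageListener used later in this article.

import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageListener;

// Hypothetical listener: the business logic runs when the scheduler
// engine posts a message on the destination this listener registers with.
public class WeeklyMailMessageListener implements MessageListener {

	@Override
	public void receive(Message message) {
		// Business logic goes here, e.g. sending the periodic mail.
		_log.info("Job payload: " + message.get("data"));
	}

	private static final Log _log = LogFactoryUtil.getLog(
		WeeklyMailMessageListener.class);

}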

 


Example


Sending email to a group of users periodically, e.g. daily/weekly/monthly.
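
As a rough sketch of that example (the job and group names here are made up), Quartz cron expressions for such intervals can be passed to the same TriggerFactoryUtil API used in the script later in this article:

import com.liferay.portal.kernel.scheduler.Trigger;
import com.liferay.portal.kernel.scheduler.TriggerFactoryUtil;

// Quartz cron format: second minute hour day-of-month month day-of-week
String dailyCron = "0 0 6 * * ?";      // every day at 06:00
String weeklyCron = "0 0 6 ? * MON";   // every Monday at 06:00
String monthlyCron = "0 0 6 1 * ?";    // 1st day of every month at 06:00

// Illustrative job/group names for a periodic mail job
Trigger trigger = TriggerFactoryUtil.createTrigger(
	"mailJob", "mailJobGroup", weeklyCron);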



Liferay provides different job storage types when creating a schedule job.



Memory


In a single Liferay instance environment, the memory storage type can be useful: all scheduled job information is stored in portal memory. If the server is restarted, all scheduled job information is lost.



Memory Cluster


Memory Cluster is similar to the memory storage type but is used in a clustered environment. If a job is created in one instance's memory, it is replicated to the other instances' memory. There is still a chance of losing schedule job details if all instances crash or go down.



Persisted


Persisted storage is another available storage type: job data is stored in the Quartz database tables. When the scheduler engine starts, all jobs are scheduled based on the information available in the database, so there is no loss of scheduled job data.



The following are the QUARTZ tables created by the Quartz scheduler during Liferay startup.

 







 

 

When we create a schedule job, we must select the storage type so that job details are stored either in memory or in database tables.

 


In a clustered environment, jobs may fire from multiple instances, so we have to be very careful. Quartz internally uses a locking mechanism so that only one scheduler instance fires the job and the other instances never fire the same job at the same time.


 

The following are the steps to create dynamic schedule jobs:

 


  • Create Destination
  • Register Listener with Destination
  • Add destination to Message Bus
  • Create Scheduler job and tag to destination

 


Following is a Groovy code snippet to create a schedule job and tag it to a Message Bus destination. It uses the existing DummyMessageListener to demonstrate the example; usually, we create our own message listener and implement the business logic in it.

 


The schedule job below runs every minute: each minute the scheduler engine posts a message on the Message Bus for the specific destination, and the dummy message listener receives the message and executes its receive method, which prints the message object in the logs.

 


Find the script in the GitHub repository below.

 


https://github.com/LiferaySavvy/liferay-admin-groovy-examples/blob/master/create-dynamic-quartz-schedule-job.groovy

 


OR

 


 

 


import com.liferay.portal.kernel.messaging.Destination;

import com.liferay.portal.kernel.messaging.DestinationConfiguration;

import com.liferay.portal.kernel.messaging.DestinationFactoryUtil;

import com.liferay.portal.kernel.messaging.DummyMessageListener;

import com.liferay.portal.kernel.messaging.Message;

import com.liferay.portal.kernel.messaging.MessageBusUtil;

import com.liferay.portal.kernel.messaging.MessageListener;

import com.liferay.portal.kernel.portlet.PortletClassLoaderUtil;

import com.liferay.portal.kernel.scheduler.SchedulerEngineHelperUtil;

import com.liferay.portal.kernel.scheduler.StorageType;

import com.liferay.portal.kernel.scheduler.Trigger;

import com.liferay.portal.kernel.scheduler.TriggerFactoryUtil;

import com.liferay.portal.kernel.util.PortalClassLoaderUtil;

 

String destinationName = "liferaysavvy/parallel-destination";

String className = DummyMessageListener.class.getName();

String jobName = "com.liferay.portal.kernel.messaging.DummyMessageListener";

String groupName = "com.liferay.portal.kernel.messaging.DummyMessageListener";

//Every minute

String cronExpression = "0 0/1 * * * ?";

String description = "Every one minute job using Parallel Destination..";

int exceptionsMaxSize = 10;

 

//Create destination and register listener

try {

          DestinationConfiguration destinationConfig =  DestinationConfiguration.createParallelDestinationConfiguration(destinationName);

          Destination parallelDestination = DestinationFactoryUtil.createDestination(destinationConfig);

         

          String portletId = null;

          ClassLoader classLoader = null;

          if(portletId != null){

                    //PortletClassLoaderUtil.setServletContextName(portletId);

                    classLoader = PortletClassLoaderUtil.getClassLoader(portletId);

          } else {

                    classLoader = PortalClassLoaderUtil.getClassLoader();

          }

          MessageListener messageListener = (MessageListener)classLoader.loadClass(className).newInstance();

          out.println("messageListener :: ${messageListener}");

          parallelDestination.register(messageListener);

          MessageBusUtil.addDestination(parallelDestination);

          //Create schedule job with cron expression.

 

          Trigger trigger = TriggerFactoryUtil.createTrigger(jobName,groupName,cronExpression);

          Message message = new Message();

          message.put("data","My Data required for job..");

          SchedulerEngineHelperUtil.schedule(trigger, StorageType.PERSISTED, description,destinationName, message, exceptionsMaxSize)

} catch (Exception e) {

          e.printStackTrace();

}

 


 

 

Global Menu --> Control Panel --> System --> Server Administration

 




 



 

Click on Script, copy-paste the above script into the editor, and execute it.


The schedule job will be created and the job details stored in the database.

 




 

 

 

We can see scheduled job details in the database table.

 




 

The cron trigger table has the details of the trigger.

 




 

 

Verify Dummy Listener logs



We created the schedule job to run every minute, so the scheduler engine sends a message every minute on the given destination and the dummy listener receives the message object.


Enable info logs for “com.liferay.portal.kernel.messaging.DummyMessageListener

 


Global Menu --> Control Panel --> System --> Server Administration

 




 

 

 



 

 

Click on Log Levels and click on + Add Category.

 




 

 

 

 

Provide the logger name and log level as follows and save. Now the dummy listener logs will show in the console.

 




 

 

 

Open the server logs console; we can observe that every minute the DummyMessageListener receives a message from the Message Bus and prints the message object in the logs.

 




 

This is how the Liferay scheduler works, using native Quartz and the Message Bus.

 


Following is a sample Groovy script to get the statistics of a destination.



 Execute from control panel


Global Menu --> Control Panel --> System --> Server Administration --> Script



Find script from below GitHub repository.

 


https://github.com/LiferaySavvy/liferay-admin-groovy-examples/blob/master/message-bus-destination-statistics.groovy

 

 


OR

 


 

 


import com.liferay.portal.kernel.messaging.Destination;

import com.liferay.portal.kernel.messaging.DestinationConfiguration;

import com.liferay.portal.kernel.messaging.DestinationFactoryUtil;

import com.liferay.portal.kernel.messaging.DummyMessageListener;

import com.liferay.portal.kernel.messaging.Message;

import com.liferay.portal.kernel.messaging.MessageBusUtil;

import com.liferay.portal.kernel.messaging.MessageListener;

import com.liferay.portal.kernel.portlet.PortletClassLoaderUtil;

import com.liferay.portal.kernel.scheduler.SchedulerEngineHelperUtil;

import com.liferay.portal.kernel.scheduler.StorageType;

import com.liferay.portal.kernel.scheduler.Trigger;

import com.liferay.portal.kernel.scheduler.TriggerFactoryUtil;

import com.liferay.portal.kernel.util.PortalClassLoaderUtil;

import com.liferay.portal.kernel.messaging.DestinationStatistics;

 

//Create destination and register listener

try {

                    String destinationName = "liferaysavvy/parallel-destination";

                    String className = DummyMessageListener.class.getName();

                    String jobName = "com.liferay.portal.kernel.messaging.DummyMessageListener";

                    String groupName = "com.liferay.portal.kernel.messaging.DummyMessageListener";

                    //Every minute

                    Destination destiNation = MessageBusUtil.getDestination(destinationName);

                    if (destiNation != null) {

                              out.println(destiNation.getMessageListeners());

                              DestinationStatistics destinationStatistics = destiNation.getDestinationStatistics();

                             

                              out.println("Destination is registered with Messagebus::${destiNation.isRegistered()}");

                             

                              out.println("Sent Message Count    :: ${destinationStatistics.getSentMessageCount()}");

                              out.println("Pending Message Count :: ${destinationStatistics.getPendingMessageCount()}");

                              out.println("Active Thread Count   :: ${destinationStatistics.getActiveThreadCount()}");

                              out.println("Current Thread Count  :: ${destinationStatistics.getCurrentThreadCount()}");

                              out.println("Largest Thread Count  :: ${destinationStatistics.getLargestThreadCount()}");

                              out.println("Max Thread Pool Size  :: ${destinationStatistics.getMaxThreadPoolSize()}");

                              out.println("Min Thread Pool Size  :: ${destinationStatistics.getMinThreadPoolSize()}");

                    } else {

                              out.println("************Destination not in the Message Bus************"));

                    }

                   

                   

                   

                   

} catch (Exception e) {

          e.printStackTrace();

}

 


 



 

Important Points

 


For demonstration purposes, we used the dummy listener and a Groovy script to create the schedule job. These Message Bus destinations are lost once the server is restarted.

 


In real-time scenarios, we have to create the destinations on every restart. We can use the bundle activate method to create destinations and register listeners with a listener component class.

 


All scheduled jobs are stored with the GMT time zone, so there is an issue handling daylight saving time changes for the jobs. This is a known issue in Liferay. We may have to use custom logic to change the CRON expression when a daylight saving change happens in a region.

 



Reference Code Points from Liferay

 


https://github.com/liferay/liferay-portal/blob/7.4.1-ga2/modules/apps/portal-scheduler/portal-scheduler-quartz/src/main/java/com/liferay/portal/scheduler/quartz/internal/QuartzSchedulerEngine.java

 


https://github.com/liferay/liferay-portal/blob/7.4.1-ga2/portal-kernel/src/com/liferay/portal/kernel/scheduler/StorageType.java

 


https://github.com/liferay/liferay-portal/blob/7.4.1-ga2/portal-kernel/src/com/liferay/portal/kernel/scheduler/SchedulerEngineHelperUtil.java

 


https://github.com/liferay/liferay-portal/blob/7.4.1-ga2/portal-kernel/src/com/liferay/portal/kernel/scheduler/TriggerFactoryUtil.java

 


https://github.com/liferay/liferay-portal/blob/7.4.1-ga2/modules/apps/portal-scheduler/portal-scheduler-quartz/src/main/java/com/liferay/portal/scheduler/quartz/internal/job/MessageSenderJob.java





Author

Working with Liferay Message Bus


Liferay Message Bus is a lightweight messaging component integrated into Liferay Portal. It works on a publish-and-subscribe model, similar to a Java Message Service (JMS) implementation.


Multiple senders send messages to a destination, and receivers on the other end listen for the messages.


It is useful for batch processing in the background, for multi-threaded message processing, and for sending a message to multiple receivers when an event is generated.

 

Software Stack


 

Liferay-ce-portal-7.4.1-ga2

 



Example


  • Send email to thousands of users in the background.
  • Integrate with a scheduler that runs jobs periodically in the background.
  • Notify multiple receivers when events are generated.
  • If a user request takes a long time to process, use the Message Bus to run the task in the background and update the user response once the job is completed.
  • Liferay internally uses it in many places, such as the deployment event notifier: when any module is deployed, it sends a message to the Message Bus.

 

 




 


The following are the important building blocks of the Message Bus:


  • Destination
  • Senders
  • Listeners

Destination


A destination is a uniquely identified namespace in the Message Bus to which messages are sent. It is like a topic or channel used by senders to send messages. The Message Bus can contain many destinations.




Senders


Senders are responsible for sending messages to the Message Bus on a specific destination. A sender may send messages to multiple destinations.




Listeners


Listeners receive the messages sent by senders. Every listener must register with at least one destination to receive messages, and listeners may subscribe to multiple destinations.

 


Liferay provides three types of destinations; use whichever fits the requirement.

 


  • Parallel Destination
  • Serial Destination
  • Synchronous Destination


Parallel Destination


A parallel destination processes messages in an asynchronous model using multiple worker threads. Each message, for each listener subscribed to the destination, is processed by a separate worker thread asynchronously: one worker thread per message per listener. Messages wait in the queue once the pool reaches its maximum thread count.

 



Serial Destination


A serial destination is similar to a parallel destination, but each message is processed by a single separate thread: one worker thread per message. Messages wait in the queue once the pool reaches its maximum thread count.

 



Synchronous Destination


A synchronous destination sends messages directly to the listeners; no messages are queued.

 


The asynchronous model generally has the following important parameters, which we can set on parallel or serial destinations.




Maximum Queue Size


This parameter decides the number of messages that can be put in the queue.

 


Workers Core Size


The initial number of worker threads used to create the thread pool.

 


Workers Max Size


The maximum number of worker threads in the pool. If the worker thread count reaches this value, messages wait in the queue for the next available worker thread.

 


Rejected Execution Handler


The rejected execution handler is a mechanism to handle messages that cannot be processed when the queue is full.

 

Message Bus Implementation Steps in Applications

 


  • Create Destination Configuration
  • Create Destination
  • Register Destination as OSGi Service
  • Manage the Destination Object
  • Register Listener
  • Send Message to Destination

 



Create Destination Configuration


The Message Bus API provides the DestinationConfiguration class to create a destination configuration. We can create the different types of destinations specified above.

 


Parallel Destination Configuration


 


DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_PARALLEL,LiferayMessageBusPortletKeys.DESTINATION_PARALLEL);

 


 

 

 

Serial Destination Configuration


 

DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_SERIAL,LiferayMessageBusPortletKeys.DESTINATION_SERIAL);

 

 


Synchronous Destination Configuration


 

DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_SYNCHRONOUS,LiferayMessageBusPortletKeys.DESTINATION_SYNCHRONOUS);

 

 

 

 


Create Destination


DestinationFactory creates the destination based on the configuration.

 


 

Destination destination = _destinationFactory.createDestination(

              destinationConfiguration);

 

 

 


Register Destination as OSGi Service


ServiceRegistration<Destination> is used to register the destination as an OSGi service.


 

_destinationServiceRegistration = _bundleContext.registerService(

              Destination.class, destination, destinationProperties);

         _log.info("Destination is registred with Service Regisration ..");

 

 

 

 

Manage the Destination Object



We have to manage the destination object so that it can be deregistered when the bundle is deactivated.



 

Dictionary<String, Object> destinationProperties =

              HashMapDictionaryBuilder.<String, Object>put(

                  "destination.name", destination.getName()).build();

 

 

 

Destroy Destination


 

@Deactivate

     protected void deactivate() {

         if (_destinationServiceRegistration != null) {

              Destination destination = _bundleContext.getService(

                   _destinationServiceRegistration.getReference());

              _destinationServiceRegistration.unregister();

              destination.destroy();

         }

     }

 

 

 

Setting Thread Pool for destination



 

destinationConfiguration.setMaximumQueueSize(_MAXIMUM_QUEUE_SIZE);

destinationConfiguration.setWorkersCoreSize(_CORE_SIZE);

destinationConfiguration.setWorkersMaxSize(_MAX_SIZE);

 

 

 


Rejection Handler to Handle Failed Messages


 

 

RejectedExecutionHandler rejectedExecutionHandler =

              new ThreadPoolExecutor.CallerRunsPolicy() {

                  @Override

                  public void rejectedExecution(

                       Runnable runnable, ThreadPoolExecutor threadPoolExecutor) {

                       if (_log.isWarnEnabled()) {

                           _log.warn("The current thread will handle the request " + "because the rules engine's task queue is at " + "its maximum capacity");

                       }

                       super.rejectedExecution(runnable, threadPoolExecutor);

                  }

              };

destinationConfiguration.setRejectedExecutionHandler(rejectedExecutionHandler);

 

 

 

 

Register Listener



A listener should implement the MessageListener interface. We implement the receive(..) method, and that is where the business logic goes. Listeners are registered with a destination to receive messages from senders.

 

 


There are different ways to register a listener.

 


Automatic Registration


Create a MessageListener component and pass the destination name as a property so that it is registered with the destination automatically when the component is created.



 

@Component(

         immediate = true,

         property = {"destination.name=liferaysavvy/synchronous-destination"},

         service = MessageListener.class

     )

public class AutomaticRegisteredSynchronousMessageListener implements MessageListener {

     @Override

     public void receive(Message message) {

         try {

              _log.info("Message::" + message);

         }

         catch (Exception e) {

              e.printStackTrace();

         }

     }

     private static final Log _log = LogFactoryUtil.getLog(

          AutomaticRegisteredSynchronousMessageListener.class);

}

 

 



Message Bus Registration


Listeners can register using the Message Bus.


 

@Reference

private MessageBus _messageBus;

 

 

_messageListenerParallel = new MessageBusRegisteredParallelMessageListener();

        _messageBus.registerMessageListener(LiferayMessageBusPortletKeys.DESTINATION_PARALLEL, _messageListenerParallel);

 

       

 

 

Destination Registration


Listeners can also register with a destination.


 

 

private MessageListener _messageListenerParallel;

 

@Reference(target = "(destination.name="+LiferayMessageBusPortletKeys.DESTINATION_PARALLEL+")")

    private Destination _destinationParellel;

 

 

_messageListenerParallel = new DestinationRegisteredParallelMessageListener();

_destinationParellel.register(_messageListenerParallel);

 

 

 



Send Message to Destination


There are several ways to send a message to a destination, and all are asynchronous. We can also send messages synchronously.

 


Directly with Message Bus



 

@Reference

private MessageBus _messageBus;

 

Message messageobj = new Message();

messageobj.put("message", message);

_messageBus.sendMessage(destination, messageobj);

 

 


Using Message Bus Util

 


 

 

Message messageobj = new Message();

messageobj.put("message", message);

MessageBusUtil.sendMessage(destinationName, messageobj);

 

 

 


 

Send Messages Synchronously



We can send a message synchronously using MessageBusUtil. The Message Bus blocks until it receives a response or times out.

 


 

Message messageobj = new Message();

messageobj.put("message"message);

try {

   MessageBusUtil.sendSynchronousMessage(destinationName, messageobj);

   //MessageBusUtil.sendSynchronousMessage(destinationName, message, timeout)

} catch (MessageBusException e) {

   // TODO Auto-generated catch block

   e.printStackTrace();

}

 

 


Find the Message Bus source on GitHub.

 


https://github.com/LiferaySavvy/liferay-messagebus-example

 





Author


 

Liferay Message Bus Implementation


Follow the article below to understand more about the Message Bus in Liferay.



http://www.liferaysavvy.com/2021/07/working-with-liferay-message-bus.html

 





 


 

Implementation Steps



  • Create Destination Configuration
  • Create Destination
  • Register Destination as OSGi Service
  • Manage the Destination Object
  • Register Listener
  • Send Message to Destination

 

 


GitHub Project

 


https://github.com/LiferaySavvy/liferay-messagebus-example

 

 



Module Implementation

 


The module demonstrates creating different types of Message Bus destinations and different ways to register listeners with the Message Bus.


The module has simple UI screens to send messages on the Message Bus and to view the statistics of each destination.

 


Software Stack



 

Liferay-ce-portal-7.4.1-ga2

Liferay Developer Studio-3.9.3-ga4

 

 

 


Prerequisite



Have the portal server and a Liferay workspace ready.



Deploy and Run

 


Import the Liferay module into your Liferay workspace. Run the build and deploy Gradle tasks.

 





 

Deploy the module jar file to your OSGi deployment directory by running the Gradle deploy task, or copy it manually to the “osgi\modules” directory.

 



 

 


Access Liferay Portal from Browser


 

http://localhost:8090/

 

 


Log in as a Liferay admin.

 


Create Page in Liferay and Add Widget (LiferayMessageBus) to the page

 




 

 

 



 

Access “Send Message” screen.

 




Send a message with the UI screen below. The message is sent to the Message Bus on the specified destination.

 




 

Once a message is sent on the Message Bus, the listeners receive it.


We can see the listener output in the console logs, which print the message object. In real scenarios, we would implement listeners based on actual requirements.

 


 



 

 

Access Destination Statistics UI screen.

 




Select a destination from the dropdown; it will show the statistics of that destination.

 

 




 




Author

Liferay Dynamic Schedule Jobs Implementation


Liferay provides a scheduler API to create schedule jobs in Liferay portal applications. Liferay internally uses the Quartz scheduler engine and also uses the Message Bus implementation with the scheduler API.

 





 Software Stack


 

Liferay-ce-portal-7.4.1-ga2

 

 

 


The article below provides more details on the Liferay scheduler.

 


http://www.liferaysavvy.com/2021/07/working-with-liferay-scheduler.html

 



The Liferay API uses the Message Bus for every scheduler job; the following articles provide more detail about the Liferay Message Bus.

 


http://www.liferaysavvy.com/2021/07/working-with-liferay-message-bus.html

 


http://www.liferaysavvy.com/2021/07/liferay-message-bus-implementation.html

 



The following are the steps to implement dynamic schedule jobs in Liferay:

 

  • Create Message Bus Destination
  • Register Listener with Destination
  • Create Scheduler job and tag to Destination

 



Create Message Bus Destination

 

Creating a destination involves the following steps in an OSGi module:

 

  • Create Destination Configuration
  • Create Destination
  • Manage the Destination Object

 


Create Destination Configuration


The Message Bus API provides the DestinationConfiguration class to create a destination configuration. We can create the different types of destinations specified below.

 


Parallel Destination Configuration


 

DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_PARALLEL,LiferayMessageBusPortletKeys.DESTINATION_PARALLEL);

 

 

 

 


Serial Destination Configuration


 

DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_SERIAL,LiferayMessageBusPortletKeys.DESTINATION_SERIAL);

 

 


Synchronous Destination Configuration


 

DestinationConfiguration destinationConfiguration =

              new DestinationConfiguration(

                   DestinationConfiguration.DESTINATION_TYPE_SYNCHRONOUS,LiferayMessageBusPortletKeys.DESTINATION_SYNCHRONOUS);

 

 

 

 



Create Destination


DestinationFactory creates the destination based on the configuration.

 


 

Destination destination = _destinationFactory.createDestination(

              destinationConfiguration);

 

 



 

Register Destination as OSGi Service



ServiceRegistration<Destination> is used to register the destination as an OSGi service.


 

_destinationServiceRegistration = _bundleContext.registerService(

              Destination.class, destination, destinationProperties);

         _log.info("Destination is registred with Service Regisration ..");

 

 

 

 


Manage the Destination Object



We have to manage the destination object so that it can be deregistered when the bundle is deactivated.

 


 

Dictionary<String, Object> destinationProperties =

              HashMapDictionaryBuilder.<String, Object>put(

                  "destination.name", destination.getName()).build();

 

 

 


Destroy Destination


 

@Deactivate

     protected void deactivate() {

         if (_destinationServiceRegistration != null) {

              Destination destination = _bundleContext.getService(

                   _destinationServiceRegistration.getReference());

              _destinationServiceRegistration.unregister();

              destination.destroy();

         }

     }

 

 

 


Setting Thread Pool for destination



 

destinationConfiguration.setMaximumQueueSize(_MAXIMUM_QUEUE_SIZE);

destinationConfiguration.setWorkersCoreSize(_CORE_SIZE);

destinationConfiguration.setWorkersMaxSize(_MAX_SIZE);

 

 

 


Rejection Handler to Handle Failed Messages

 


 

RejectedExecutionHandler rejectedExecutionHandler =

              new ThreadPoolExecutor.CallerRunsPolicy() {

                  @Override

                  public void rejectedExecution(

                       Runnable runnable, ThreadPoolExecutor threadPoolExecutor) {

                       if (_log.isWarnEnabled()) {

                           _log.warn("The current thread will handle the request " + "because the rules engine's task queue is at " + "its maximum capacity");

                       }

                       super.rejectedExecution(runnable, threadPoolExecutor);

                  }

              };

destinationConfiguration.setRejectedExecutionHandler(rejectedExecutionHandler);

 

 

 

 


Register Listener with Destination


A listener should implement the MessageListener interface. We implement the receive(..) method, and that is where the business logic goes. Listeners are registered with a destination to receive messages from senders.

 

 


There are different ways to register a listener; below is one of them.

 


Automatic Registration


 

Create a MessageListener component and pass the destination name as a property so that it is registered with the destination automatically when the component is created.



 

@Component(

         immediate = true,

         property = {"destination.name=liferaysavvy/synchronous-destination"},

         service = MessageListener.class

     )

public class AutomaticRegisteredSynchronousMessageListener implements MessageListener {

     @Override

     public void receive(Message message) {

         try {

              _log.info("Message::" + message);

         }

         catch (Exception e) {

              e.printStackTrace();

         }

     }

     private static final Log _log = LogFactoryUtil.getLog(

          AutomaticRegisteredSynchronousMessageListener.class);

}

 

 



Create Scheduler job and tag to Destination

 


SchedulerEngineHelperUtil is a class that provides methods to create schedule jobs. We can also use a “SchedulerEngineHelper” OSGi reference to create the same schedule jobs.

 


The following are the API methods to schedule a job in Liferay:

 


public static void schedule(
          Trigger trigger, StorageType storageType, String description,
          String destinationName, Message message, int exceptionsMaxSize)
     throws SchedulerException

 

public static void schedule(
          Trigger trigger, StorageType storageType, String description,
          String destinationName, Object payload, int exceptionsMaxSize)
     throws SchedulerException

 


Following are the important Parameters

 


Trigger


It's the cron trigger object. The trigger factory is used to create the trigger object. It requires a CRON expression, which must be a valid Quartz cron expression.

 


http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html

 

Storage Type


It's the scheduler storage type, such as Memory or Persisted.

 


Destination Name


It's the Message Bus destination where the scheduler engine sends messages.

 


Message


It's the message object that carries the required data through the Message Bus to the listener; a small sketch follows.
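
For instance, a minimal sketch of building the message; the "data" key mirrors the Groovy example in the earlier scheduler article:

import com.liferay.portal.kernel.messaging.Message;

// Message payload handed from the scheduler, through the Message Bus,
// to the listener; the "data" key and value are illustrative.
Message message = new Message();

message.put("data", "My Data required for job..");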

 


From Liferay 7.x onwards, the scheduler API accepts a time zone while creating the trigger, which resolves the daylight saving time issue for scheduled jobs.

 


 

public Trigger createTrigger(

                    String jobName, String groupName, Date startDate, Date endDate,

                    String cronExpression, TimeZone timeZone);

 

 


Example



//Trigger trigger = TriggerFactoryUtil.createTrigger(jobName,groupName,cronExpression);

         Trigger trigger = _triggerFactory.createTrigger(jobName,groupName,null,null,cron,TimeZone.getDefault());

 

 

@Reference

private TriggerFactory _triggerFactory;

      

 

 


Creating Schedule



 

//SchedulerEngineHelperUtil.schedule(trigger, StorageType.PERSISTED, description,destinationName, message, exceptionsMaxSize)

         _schedulerEngineHelper.schedule(trigger, StorageType.PERSISTED, description,destinationName, message, exceptionsMaxSize);

 

 

@Reference

private SchedulerEngineHelper _schedulerEngineHelper;

 

 



Deploy and Run Example OSGi Module

 


Find the OSGi module in the Git repository below; it demonstrates creating dynamic schedule jobs.

 


https://github.com/LiferaySavvy/liferay-scheduler-example

 

 


Clone the project and import into your Liferay workspace.

 


Build and Deploy OSGi module with Gradle tasks.

 


Add Widget to Liferay Page

 




 

Access the “Create Dynamic Schedule Jobs” UI screen and create a job.

 




Once the job is created, the job details are stored in the Quartz database tables.

 




 


 

In the console logs, every 2 minutes the scheduler engine triggers the job and sends a message to the Message Bus. On the other end, the listener receives the message and prints it in the console logs.

 


 

 


2021-08-01 15:02:00.026 INFO  [liferaysavvy/parallel-destination-10][AutomaticRegisteredParellelMessageListener:34] Message::{destinationName=liferaysavvy/parallel-destination, response=null, responseDestinationName=null, responseId=null, payload=null, values={GROUP_NAME=Dynamic, companyId=20100, data=TwoMinJob:Dynamic, groupId=0, DESTINATION_NAME=liferaysavvy/parallel-destination, EXCEPTIONS_MAX_SIZE=10, JOB_STATE=com.liferay.portal.kernel.scheduler.JobState@31b87667, STORAGE_TYPE=PERSISTED, JOB_NAME=TwoMinJob}}

 

2021-08-01 15:04:00.025 INFO  [liferaysavvy/parallel-destination-11][AutomaticRegisteredParellelMessageListener:34] Message::{destinationName=liferaysavvy/parallel-destination, response=null, responseDestinationName=null, responseId=null, payload=null, values={GROUP_NAME=Dynamic, companyId=20100, data=TwoMinJob:Dynamic, groupId=0, DESTINATION_NAME=liferaysavvy/parallel-destination, EXCEPTIONS_MAX_SIZE=10, JOB_STATE=com.liferay.portal.kernel.scheduler.JobState@2dd02e8e, STORAGE_TYPE=PERSISTED, JOB_NAME=TwoMinJob}}

 


 

 

This is how we can create a dynamic schedule job; every schedule job must be tagged with a destination.


 

Destinations are not persisted, so they must be created at server startup or when the bundle is activated; that is why we implemented destination creation as part of bundle activation. Schedule jobs are persisted, since we selected the Persisted storage type. A minimal activation sketch follows.
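
As a hedged sketch assembled from the snippets shown earlier in this article (the component class name and destination name are illustrative), destination creation at bundle activation could look like this:

import java.util.Dictionary;

import com.liferay.portal.kernel.messaging.Destination;
import com.liferay.portal.kernel.messaging.DestinationConfiguration;
import com.liferay.portal.kernel.messaging.DestinationFactory;
import com.liferay.portal.kernel.util.HashMapDictionaryBuilder;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

// Hypothetical component: recreates the (non-persisted) destination
// every time the bundle is activated.
@Component(immediate = true, service = {})
public class SchedulerDestinationRegistrar {

	@Activate
	protected void activate(BundleContext bundleContext) {
		_bundleContext = bundleContext;

		DestinationConfiguration destinationConfiguration =
			new DestinationConfiguration(
				DestinationConfiguration.DESTINATION_TYPE_PARALLEL,
				"liferaysavvy/parallel-destination");

		Destination destination = _destinationFactory.createDestination(
			destinationConfiguration);

		Dictionary<String, Object> destinationProperties =
			HashMapDictionaryBuilder.<String, Object>put(
				"destination.name", destination.getName()).build();

		// Registering the destination as an OSGi service adds it to the
		// Message Bus, so the schedule job can post messages to it.
		_destinationServiceRegistration = _bundleContext.registerService(
			Destination.class, destination, destinationProperties);
	}

	@Deactivate
	protected void deactivate() {
		if (_destinationServiceRegistration != null) {
			_destinationServiceRegistration.unregister();
		}
	}

	private BundleContext _bundleContext;
	private ServiceRegistration<Destination> _destinationServiceRegistration;

	@Reference
	private DestinationFactory _destinationFactory;

}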




Author
