Layer4

Efficient logging with Spring Boot, Logback and Logstash

Posted in DevOps, Java

Logging is an important part of any enterprise application, and Logback makes an excellent choice: it’s simple, fast, light and very powerful. Spring Boot has great support for Logback and provides a lot of features to configure it. In this article I will present an integration of an enterprise logging stack using Logback, Spring Boot and Logstash. WARNING: Spring Boot recommends using the -spring variants for your logging configuration (for example logback-spring.xml rather than logback.xml). If you use standard configuration locations (logback.xml), Spring cannot completely control log…read more
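As a first taste of the stack, here is a minimal logback-spring.xml sketch shipping JSON log events to a Logstash TCP input; it assumes the logstash-logback-encoder dependency is on the classpath, and the destination host and port are placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Ship log events as JSON to a Logstash TCP input (host/port are placeholders) -->
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash.example.com:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>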

OS monitoring with… Java

Posted in DevOps

Sometimes it may be useful to get system information like the usage of a disk or the available network interfaces. For instance, Elasticsearch uses this kind of tool to display, at startup time, some information about open file descriptors or the size of the direct memory available for the JVM. The aim is not to replace a real system monitoring agent, but to help the user take advantage of the product by configuring it properly. Sigar is an open source project from Hyperic which provides a portable…read more
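To give an idea of the API, here is a minimal Java sketch using Sigar to print memory and mounted file systems; the class and method names come from the Sigar library, and error handling is kept to the bare minimum.

import org.hyperic.sigar.FileSystem;
import org.hyperic.sigar.Mem;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

public class SystemInfo {
    public static void main(String[] args) throws SigarException {
        Sigar sigar = new Sigar();
        // Physical memory, in bytes
        Mem mem = sigar.getMem();
        System.out.println("Memory total=" + mem.getTotal() + " used=" + mem.getUsed());
        // Mounted file systems
        for (FileSystem fs : sigar.getFileSystemList()) {
            System.out.println(fs.getDevName() + " mounted on " + fs.getDirName());
        }
    }
}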

Find and kill slow running queries in MongoDB

Posted in DevOps, NoSQL

In Mongo, or more generally in any data storage engine, queries or updates that take longer than expected to run can have many causes: – Slow network – Wrong schema design (we have all seen the famous all-in-one table…) – Wrong database design (“let’s store 100 TB of data in a standalone mongod!”) – Bad partitioning (an HBase table with 200 regions holding 2 MB of data) – Lack of useful indexes – No statistics – Incorrect hardware sizing (in 99.999% of cases, the memory…) – Missing or wrong configuration (aka the default…read more
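The find-and-kill part itself is short. Here is a sketch for the mongo shell, with a hypothetical 60-second threshold: db.currentOp() lists the in-progress operations, and db.killOp() terminates one by its opid.

// Kill queries that have been running for more than 60 seconds (threshold is arbitrary)
db.currentOp().inprog.forEach(function (op) {
    if (op.op === "query" && op.secs_running > 60) {
        print("Killing opid " + op.opid + " on " + op.ns);
        db.killOp(op.opid);
    }
});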

How to kill Hadoop jobs matching a pattern?

Posted in DevOps

Today, I had to kill a list of jobs (45) running on my Hadoop cluster. Ok, let’s have a look at the docs: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#job But wait a minute… Hadoop knows the “kill” command, but not “pkill”… One solution is:

import java.io.IOException;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
import org.apache.commons.cli.PosixParser;
import org.apache.commons.lang.ArrayUtils;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PKill {
    private final static Logger LOGGER = LoggerFactory.getLogger(PKill.class);

    private static void printUsage(Options options) {
        HelpFormatter usageFormatter…read more
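The heart of such a “pkill” fits in a few lines. Here is a minimal sketch of the matching loop using the old mapred JobClient API; the job name pattern is a placeholder, and the surrounding method must declare IOException.

// Kill every running job whose name matches a pattern (".*myPattern.*" is a placeholder)
JobClient client = new JobClient(new JobConf(new Configuration()));
for (JobStatus status : client.getAllJobs()) {
    RunningJob job = client.getJob(status.getJobID());
    if (job != null && !job.isComplete() && job.getJobName().matches(".*myPattern.*")) {
        LOGGER.info("Killing job {}", job.getJobName());
        job.killJob();
    }
}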

How to generate a changelog from Jira for your deb/rpm/…

Posted in DevOps

A changelog is a log or record of changes made to a project, such as a website or software project, usually including records of bug fixes, new features, etc. Some open source projects include a changelog as one of the top-level files in their distribution. If you are running a RHEL-based distribution (CentOS, Fedora, Red Hat…), you can read it via the rpm command: rpm -q --changelog vim-enhanced.x86_64 | less For Debian-based distributions, you can do it via apt-get: apt-get changelog vim | less If you build…read more
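On the Jira side, the REST API can return the issues fixed in a given version, which you can then format into a deb/rpm changelog entry. A sketch with curl, where host, credentials, project and version are all placeholders:

# Fetch key and summary of the issues fixed in version 1.2.3 of project FOO
curl -s -u user:password \
  "https://jira.example.com/rest/api/2/search?jql=project%3DFOO%20AND%20fixVersion%3D1.2.3&fields=key,summary"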

Installing Tomcat 7 on Debian/Ubuntu

Posted in DevOps

First, a simple apt-get: apt-get install tomcat7 libtcnative-1 tomcat7-user tomcat7-docs tomcat7-admin Wait, “libtcnative-1”? Tomcat can use the Apache Portable Runtime to provide superior scalability, performance, and better integration with native server technologies. The Apache Portable Runtime is a highly portable library that is at the heart of Apache HTTP Server 2.x. APR has many uses, including access to advanced IO functionality (such as sendfile, epoll and OpenSSL), OS-level functionality (random number generation, system status, etc.), and native process handling (shared memory, NT pipes and Unix sockets). These features allow…read more
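APR is enabled through the AprLifecycleListener in conf/server.xml; the listener ships with standard Tomcat, and the snippet below is a minimal sketch of the relevant lines.

<!-- Load the APR/native library at startup -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<!-- HTTP connector backed by the APR/native protocol implementation -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11AprProtocol" connectionTimeout="20000" />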

How to deploy an Elasticsearch cluster easily

Posted in DevOps

Here is a simple shell script allowing you to deploy Elasticsearch on multiple servers with dedicated roles: master, slave or monitor. – Master: can be an Elasticsearch master, acts as a load balancer on the cluster, doesn’t store data and can use the HTTP transport. – Slave: a data node, cannot be an Elasticsearch master and cannot use the HTTP transport. – Monitor: doesn’t store data, cannot be an Elasticsearch master, holds plugins and can use the HTTP transport. By default, it uses the paramedic plugin.

#!/bin/bash
slaves=("ip2" "ip3" "ip5")
monitors=("ip4")
masters=("ip1")
install_path=/home/user/opt/elasticsearch…read more
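For reference, the per-role behaviour maps to three settings in elasticsearch.yml on the Elasticsearch versions of that era; a sketch:

# Master: eligible as master, no data, HTTP enabled
node.master: true
node.data: false
http.enabled: true

# Slave (data node): stores data, never master, no HTTP
node.master: false
node.data: true
http.enabled: false

# Monitor: no data, never master, HTTP enabled (plugins live here)
node.master: false
node.data: false
http.enabled: true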

Use Cloud Foundry in Spring without the dedicated XML namespace

Posted in DevOps

Spring allows you to configure a dead simple connection to a provisioned service, like MySQL, Redis, RabbitMQ and many others. It’s really simple, since you don’t even need to configure explicit credentials and connection strings. Instead, you can retrieve a reference to this service from the cloud itself using the Cloud Foundry “cloud” namespace. The problem is, you have to create an XML file, and we have more and more projects without any XML at all. So, how do you use the Cloud Foundry integration without XML? Like the XML version, you need the Maven dependency:…read more
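For the flavor of it, here is a sketch of the Java-config equivalent using the old cloudfoundry-runtime library; the class names come from that library, and the service name “mysql-db” is a placeholder.

import javax.sql.DataSource;
import org.cloudfoundry.runtime.env.CloudEnvironment;
import org.cloudfoundry.runtime.env.RdbmsServiceInfo;
import org.cloudfoundry.runtime.service.relational.RdbmsServiceCreator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CloudConfig {
    @Bean
    public DataSource dataSource() {
        // Look up the bound relational service by name ("mysql-db" is a placeholder)
        CloudEnvironment environment = new CloudEnvironment();
        RdbmsServiceInfo serviceInfo = environment.getServiceInfo("mysql-db", RdbmsServiceInfo.class);
        return new RdbmsServiceCreator().createService(serviceInfo);
    }
}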
