Amazon Machine Learning

One of the most tedious and repetitive tasks for almost any developer is reviewing logs. In particular, it is critical to detect application failures and malfunctions. Log management tools help to search, classify and generate useful metrics, and they can often show reports with tables and graphs. But they are only assistance tools: in the end, a human being must answer the question, does the application work properly?

Using Machine Learning to detect application failures

Machine Learning can help to automate this task. I explored this approach with a simple Java application. My goal was to find out whether a machine learning algorithm can determine if the application is working properly, based only on log analysis.

Obtaining a training dataset

We need a dataset to train the ML algorithm. A dataset is a set of samples, each one containing an input and the desired output value. In our case, the input will be the generated logs and the desired output will be a binary variable that indicates whether the application is working properly. Typically the dataset is formatted as a CSV.

yes, main INFO org springframework context support ClassPathXmlApplicationContext Refreshing org springframework context ...
no, main INFO org springframework context support ClassPathXmlApplicationContext Refreshing org springframework context ...

The application prints logs using log4j. This is an example of the log4j output:

0 [main] INFO  - Refreshing startup date [Sun Oct 23 14:53:27 CEST 2016]; root of context hierarchy
1191 [main] INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader  - Loading XML bean definitions from class path resource [net/therore/kata/scheduler/spring3/context.xml]
1637 [main] INFO  - Pre-instantiating singletons in defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,net.therore.kata.scheduler.spring3.ApplicationConfiguration#0,task,checkersSchedule,,org.springframework.scheduling.config.ScheduledTaskRegistrar#0,org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0,applicationConfiguration,applicationMain,propertySourcesPlaceholderConfigurer,net.therore.kata.scheduler.spring3.ApplicationConfiguration#1,,org.springframework.scheduling.config.ScheduledTaskRegistrar#1]; root of factory hierarchy
1756 [main] INFO org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler  - Initializing ExecutorService  'checkersSchedule'

Now we need to generate enough dataset records to ensure an effective learning process. The more complete and balanced the dataset is, the better the resulting machine learning model.
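For instance, a quick way to check that balance is to count how many records fall into each class, reading the label from the first CSV column. This is only a sketch of my own; the class name and the sample rows are illustrative:

```java
import java.util.*;

public class BalanceCheck {

    // Count how many rows carry each label ("yes" / "no"),
    // taking the label from the first CSV column.
    // A TreeMap keeps the output order deterministic.
    static Map<String, Integer> countLabels(List<String> rows) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String row : rows) {
            String label = row.substring(0, row.indexOf(',')).trim();
            counts.merge(label, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList(
                "yes, main INFO org springframework",
                "no, main INFO org springframework",
                "yes, main INFO org springframework");
        System.out.println(countLabels(rows)); // {no=1, yes=2}
    }
}
```

A heavily skewed count (for example, far more "yes" than "no" rows) is a hint that the script should force more failures before training.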

I used a script that launches the application n times. Before each execution, the script can mutate several pieces of code to force execution failures. Then the application is launched and its output is checked to verify that it contains the expected text. If the output is right, the column “isOk” is filled with “yes”; otherwise it is filled with “no”.
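The mutation and launching machinery is not shown here, but the labelling step can be sketched in Java as follows. The helper names and the expected text are my own illustration, not taken from the original script:

```java
public class DatasetGenerator {

    // A run is labelled "yes" when the application output contains
    // the expected text, and "no" otherwise.
    static String label(String output, String expectedText) {
        return output.contains(expectedText) ? "yes" : "no";
    }

    // One dataset record: the "isOk" column first, then the log column.
    static String toCsvRow(String isOk, String cleanedLog) {
        return isOk + ", " + cleanedLog;
    }

    public static void main(String[] args) {
        String row = toCsvRow(label("task executed OK", "OK"),
                              "main INFO org springframework context");
        System.out.println(row); // yes, main INFO org springframework context
    }
}
```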

The other column of the dataset contains the log generated by the application. To improve the learning process, each log is first cleaned by removing non-word characters.
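For illustration, a minimal version of this cleaning in Java could collapse every run of non-word characters into a single space. The exact normalisation used by my script may differ; this regex is an assumption:

```java
public class LogCleaner {

    // Replace every run of non-word characters with a single space,
    // so brackets, dots and dashes disappear and only tokens remain.
    static String clean(String rawLog) {
        return rawLog.replaceAll("\\W+", " ").trim();
    }

    public static void main(String[] args) {
        String raw = "1756 [main] INFO org.springframework.scheduling.concurrent."
                + "ThreadPoolTaskScheduler  - Initializing ExecutorService 'checkersSchedule'";
        System.out.println(clean(raw));
    }
}
```

Applied to the log4j output above, this yields a plain sequence of tokens like the "logs" column shown in the dataset example.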

Building a predictive model with Amazon Machine Learning

The next step is to build the predictive model. If you don’t want to write the ML algorithms on your own, you can use a library like Weka, which contains a collection of machine learning algorithms ready to be used. Another option is to take advantage of a platform like Amazon Machine Learning. I preferred to try the latter because of its simplicity and power.

The process of building a predictive model with Amazon is quite simple. Once you have the dataset on S3, you have to create three types of objects:

  • datasource: A datasource associates a schema with the dataset. I used this schema:

  {
      "excludedAttributeNames": [],
      "version": "1.0",
      "dataFormat": "CSV",
      "rowId": null,
      "dataFileContainsHeader": false,
      "attributes": [
          { "attributeName": "isOk", "attributeType": "CATEGORICAL" },
          { "attributeName": "logs", "attributeType": "TEXT" }
      ],
      "targetAttributeName": "isOk"
  }
  • predictive model: The predictive model is the collection of patterns that Amazon ML finds in the dataset during training.
  • model evaluation: An evaluation of your model that expresses its predictive quality.

If the model performance is high enough, it is time to test the predictive model with a real case.

Using the predictive model

The predictive model can be used through the Amazon API. I created a Groovy script that invokes the model to determine whether the application is working properly.


    // ... construct a client object using the required credentials
    // modelId is the id of the predictive model created in Amazon Machine Learning
    // cleanedLog is the application log, cleaned of non-word characters

    record = [
            "logs" : cleanedLog
    ]

    GetMLModelRequest modelRequest = new GetMLModelRequest().withMLModelId(modelId)
    GetMLModelResult model = client.getMLModel(modelRequest)

    PredictRequest predictRequest = new PredictRequest()
            .withMLModelId(modelId)
            .withRecord(record)
            .withPredictEndpoint(model.endpointInfo.endpointUrl)
    PredictResult response = client.predict(predictRequest)

    if (response.prediction.predictedLabel == 'yes')
        println "YES: the application is working properly"
    else
        println "NO: the application is not working properly"

All my tests with the predictive model were successful. It is able to identify whether the application has failures using log analysis alone.


This small test allowed me to discover that applying Machine Learning is easier than many people might think.

Nowadays Machine Learning is more accessible than ever, so we should consider using it in our development projects. There are lots of cases where we can apply ML techniques and take advantage of their power.