
I wrote a small Java program that generates some dummy logs (basically it writes to a txt file). Now I want to feed this data into the ELK stack: Logstash should read the data from the txt file, and I want to visualize the changes in Kibana to get a feel for it.

Basically, what I want to do is vary the speed at which the program writes dummy logs to the txt file, so that I can watch the changes in Kibana.

I have only just started exploring the ELK stack, so this may be completely the wrong approach for this kind of analysis. Please suggest whether there are other, better ways to do it (taking into account that I have no real logs to work with at the moment).

Edit: @Val suggested:

input {
    generator {
        message => '83.149.9.216 - - [17/May/2015:10:05:03 +0000] "GET /presentations/logstash-monitorama-2013/images/kibana-search.png HTTP/1.1" 200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36"'
        count => 10
    }
}

So here is my logstash.conf:

input {
  stdin { }
}


filter {
  grok {
    match => {
      "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response:int} (?:-|%{NUMBER:bytes:int}) %{QS:referrer} %{QS:agent}'
    }
  }

  date {
    match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    locale => "en"
  }

  geoip {
    source => "clientip"
  }

  useragent {
    source => "agent"
    target => "useragent"
  }
}

output {
  stdout {
    codec => plain {
      charset => "ISO-8859-1"
    }
  }
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "apache_elk_example"
    template => "./apache_template.json"
    template_name => "apache_elk_example"
    template_overwrite => true
  }
}

After starting Elasticsearch and Kibana, I do the following:

cat apache_logs | /usr/local/opt/logstash/bin/logstash -f apache_logs

where apache_logs is fed by my Java program:

public static void main(String[] args) {
    try {
        PrintStream out = new PrintStream(new FileOutputStream("/Users/username/Desktop/user/apache_logs"));
        System.setOut(out);
    } catch (FileNotFoundException ex) {
        System.out.print("Exception");
    }
    while (true)
    //for(int i=0;i<5;++i)
    {
        System.out.println(generateRandomIPs() /* + other log stuff */);
        try {
            Thread.sleep(1000);                 // 1000 milliseconds is one second.
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }
}
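One detail worth knowing about the writer above: a plain `PrintStream` buffers its output, so lines may sit in memory instead of reaching the file right away. Opening the stream with `autoFlush` enabled pushes every `println` to disk immediately. A minimal, self-contained sketch (not from the original program; the temp-file path and line format are illustrative):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FlushingLogWriter {
    public static void main(String[] args) throws IOException {
        Path logFile = Files.createTempFile("apache_logs_demo", ".log");
        // autoFlush = true: each println is flushed to the file immediately,
        // so a tailing reader (e.g. the Logstash file input) sees it at once.
        try (PrintStream out = new PrintStream(
                new FileOutputStream(logFile.toFile()), true)) {
            for (int i = 0; i < 3; i++) {
                String ts = LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME);
                out.println(ts + " dummy log line " + i);
            }
            // Every line is already on disk at this point, even before close().
            System.out.println("lines on disk: " + Files.readAllLines(logFile).size());
        }
    }
}
```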

So here is the problem:

Kibana does not show a real-time visualization. That is, when my Java program feeds new data into the apache_logs file, it does not show up. Kibana only shows the data that had already been written to apache_logs by the time I ran:

cat apache_logs | /usr/local/opt/logstash/bin/logstash -f apache_logs

2 Answers


Might be a bit late, but I wrote up a small sample of what I meant.

I modified your Java program to add a timestamp, like this:

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

import com.google.gson.Gson;

public class LogWriter {

    public static Gson gson = new Gson();

    public static void main(String[] args) {

        try {
            PrintStream out = new PrintStream(new FileOutputStream("/var/logstash/input/test2.log"));
            System.setOut(out);
        } catch (FileNotFoundException ex) {
            System.out.print("Exception");
        }

        Map<String, String> timestamper = new HashMap<>();

        while (true) {
            String format = LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME);

            timestamper.put("myTimestamp", format);
            System.out.println(gson.toJson(timestamper));
            try {
                Thread.sleep(1000);                 // 1000 milliseconds is one second.
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

This now writes JSON like:

{"myTimestamp":"2016-06-10T10:42:16.299"}
{"myTimestamp":"2016-06-10T10:42:17.3"}
{"myTimestamp":"2016-06-10T10:42:18.301"}

I then set up Logstash to read that file, parse it, and output to stdout:

input {
  file {
     path => "/var/logstash/input/*.log"
     start_position => "beginning"
     ignore_older => 0
     sincedb_path => "/dev/null"
  }   
}

filter {
   json {
      source => "message"
   }
}

output {
    file {
           path => "/var/logstash/out.log"
    }
    stdout { codec => rubydebug }
}

So it picks up my log (which records when it was created), parses it, and adds a new timestamp representing when Logstash saw the log:

{
        "message" => "{\"myTimestamp\":\"2016-06-10T10:42:17.3\"}",
       "@version" => "1",
     "@timestamp" => "2016-06-10T09:42:17.687Z",
           "path" => "/var/logstash/input/test2.log",
           "host" => "pandaadb",
    "myTimestamp" => "2016-06-10T10:42:17.3"
}
{
        "message" => "{\"myTimestamp\":\"2016-06-10T10:42:18.301\"}",
       "@version" => "1",
     "@timestamp" => "2016-06-10T09:42:18.691Z",
           "path" => "/var/logstash/input/test2.log",
           "host" => "pandaadb",
    "myTimestamp" => "2016-06-10T10:42:18.301"
}

Here you can now see how long it takes for a log to be seen and processed. It is around 300 milliseconds, which I attribute to the fact that your Java writer is an asynchronous (buffered) writer and will not flush right away.
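For the two events above, that gap can also be computed by hand from the fields in the rubydebug output. A small sketch; note that `@timestamp` is UTC while `myTimestamp` is local wall-clock time, and the UTC+1 offset applied below is an assumption inferred from the one-hour difference between the two fields:

```java
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class IngestLag {
    public static void main(String[] args) {
        // Timestamps copied from the second rubydebug event above.
        Instant seenByLogstash = Instant.parse("2016-06-10T09:42:17.687Z");
        // myTimestamp has no zone; assume the writer ran at UTC+1.
        Instant writtenByApp = LocalDateTime.parse("2016-06-10T10:42:17.3")
                .toInstant(ZoneOffset.ofHours(1));
        long lagMillis = Duration.between(writtenByApp, seenByLogstash).toMillis();
        System.out.println("ingest lag: " + lagMillis + " ms");
    }
}
```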

You can even make this a bit "cooler" by using the elapsed plugin, which will calculate the difference between those timestamps for you.

I hope that helps with your testing :) It might not be the most advanced way of doing it, but it's easy to understand, pretty straightforward, and fast.

Artur

Answered 2016-06-10T09:48:55.173