Why does Logstash reload duplicate data from a file on Linux?

I am using Logstash, Elasticsearch, and Kibana. My Logstash config file looks like this:

input {
  file {
    path => "/home/rocky/Logging/logFiles/test1.txt"
    start_position => "end"
    sincedb_path => "test.db"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch { host => localhost }
}


When I run Logstash on Windows it works fine, but the same configuration misbehaves on my virtual Linux machine (Fedora). On Fedora, when I append something to the end of the log file while Logstash is running, it sometimes re-sends the entire file from the beginning, and sometimes about half of it, when it should only pick up the newly added data. The sincedb file itself is being written correctly, yet Logstash still does not read the right data on Fedora. Please help.
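For reference, the sincedb file is plain text, one line per tracked file; as far as I can tell each line records the inode, the major and minor device numbers, and the byte offset read so far, so its first column can be compared against the file's actual inode. A sketch with made-up names and values:

```shell
# Illustrative sketch (made-up names/values): the sincedb file is plain
# text, one line per tracked file. As far as I can tell each line holds
# the inode, major device, minor device, and byte offset read so far.
echo "a log line" > test1.txt                # 11 bytes including the newline
inode=$(stat -c %i test1.txt)                # the file's actual inode
printf '%s 0 2049 11\n' "$inode" > test.db   # a sincedb-style entry
cat test.db                                  # first column should match ls -i
```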


1 answer


I had a similar issue on my Linux Mint machine using the official Docker image. I was using a text editor (Geany) to add new lines to the file. After experimenting a bit more, I realized the problem had to be related to how Geany saved the file after I added new lines.

When I added newlines with a simple echo command, everything was fine:



echo "some new line" >> my_file.log
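In hindsight, the likely mechanism is the file's inode: many editors save by writing a temp file and renaming it over the original, so the watched path suddenly points at a new inode, and the file input then treats it as a brand-new file. A minimal sketch to see the difference (GNU `stat`, illustrative filenames):

```shell
# Create a sample log file, then compare inodes after two kinds of write.
# An in-place shell append keeps the same inode; an editor-style save
# (write a temp copy, rename it over the original) yields a new inode.
echo "first line" > my_file.log
stat -c 'before append: inode=%i' my_file.log

echo "appended via shell" >> my_file.log       # in-place append: same inode
stat -c 'after append:  inode=%i' my_file.log

cp my_file.log my_file.log.tmp                 # simulate an editor save
echo "line added in editor" >> my_file.log.tmp
mv my_file.log.tmp my_file.log                 # rename over original: new inode
stat -c 'after rename:  inode=%i' my_file.log
```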

I know this thread is old, but this was the only thing that came to mind when I searched for the problem, so hopefully it helps someone else.
