How to create custom indices based on Kubernetes metadata using fluentd?


In the previous article, we learned how to set up Fluentd on Kubernetes with the default configuration. In this article, we will learn how to create custom Elasticsearch indices with Fluentd based on Kubernetes metadata, and how to tweak an EFK stack on Kubernetes.

Here, I will use the Kubernetes metadata plugin to enrich each log record. This plugin is already installed in the Docker image (fluent/fluentd-kubernetes-daemonset:v1.1-debian-elasticsearch); alternatively, you can add "gem install fluent-plugin-kubernetes_metadata_filter" to your own Fluentd Dockerfile. Add the following filter to the Fluentd config to attach the metadata to the log:

# we use the kubernetes metadata plugin to add metadata to the log
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

The source section stays the same as in the default Fluentd-on-Kubernetes setup. I will customize the match section of the default config to create a custom index from Kubernetes metadata; here, the index is based on the pod name. The required changes to the match section are below:

# we send the logs to Elasticsearch
<match kubernetes.**>
  @type elasticsearch_dynamic
  @log_level info
  include_tag_key true
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
  reload_connections true
  logstash_format true
  logstash_prefix ${record['kubernetes']['pod_name']}
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever true
    retry_max_interval 30
    chunk_limit_size 2M
    queue_limit_length 32
    overflow_action block
  </buffer>
</match>

The logstash_prefix "${record['kubernetes']['pod_name']}" uses Kubernetes metadata to create one index per pod name. You can also build an index from any other Kubernetes metadata field (such as the namespace or deployment), and you can tweak the output configuration to control what is shipped to Elasticsearch. For example, if you don't want to send unwanted logs to Elasticsearch, such as containers from the fluentd, kube-system, or other infrastructure namespaces, add these lines before the Elasticsearch output:

<match kubernetes.var.log.containers.**kube-logging**.log>
  @type null
</match>
<match kubernetes.var.log.containers.**kube-system**.log>
  @type null
</match>
<match kubernetes.var.log.containers.**monitoring**.log>
  @type null
</match>
<match kubernetes.var.log.containers.**infra**.log>
  @type null
</match>
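Similarly, the dynamic prefix is not limited to the pod name. As a sketch, a namespace-based index would only change one line (namespace_name is the field the kubernetes_metadata filter emits):

```
logstash_prefix ${record['kubernetes']['namespace_name']}
```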

Here is the final ConfigMap manifest. We just need to apply it to the k8s cluster and perform a rolling restart of the existing Fluentd so it picks up the changes.


apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: infra
data:
  fluent.conf: |
    <match fluent.**>
        # this tells fluentd to not output its log on stdout
        @type null
    </match>

    # here we read the logs from Docker's containers and parse them
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    # we use the kubernetes metadata plugin to add metadata to the log
    <filter kubernetes.**>
        @type kubernetes_metadata
    </filter>

    <match kubernetes.var.log.containers.**kube-logging**.log>
      @type null
    </match>

    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>

    <match kubernetes.var.log.containers.**monitoring**.log>
      @type null
    </match>

    <match kubernetes.var.log.containers.**infra**.log>
      @type null
    </match>

    # we send the logs to Elasticsearch
    <match kubernetes.**>
       @type elasticsearch_dynamic
       @log_level info
       include_tag_key true
       host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
       port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
       user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
       password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
       scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
       ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
       reload_connections true
       logstash_format true
       logstash_prefix ${record['kubernetes']['pod_name']}
       <buffer>
           @type file
           path /var/log/fluentd-buffers/kubernetes.system.buffer
           flush_mode interval
           retry_type exponential_backoff
           flush_thread_count 2
           flush_interval 5s
           retry_forever true
           retry_max_interval 30
           chunk_limit_size 2M
           queue_limit_length 32
           overflow_action block
       </buffer>
    </match>

Apply the manifest:

$ kubectl apply -f fluentd-config-map-custome-index.yaml
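Fluentd does not automatically reload a changed ConfigMap, so the running pods must be restarted. A rolling restart of the DaemonSet does this (the name fluentd is an assumption here; the infra namespace matches the ConfigMap above, so adjust both to your deployment):

```
$ kubectl -n infra rollout restart daemonset fluentd
```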

After applying the changes, we now have indices named after pod names, which can be seen in Elasticsearch and Kibana.
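Because logstash_format is enabled, each index name is the dynamic prefix plus a UTC date suffix. A small sketch of the naming (the pod name here is hypothetical):

```shell
# logstash_format appends "-%Y.%m.%d" (UTC) to the logstash_prefix,
# so a pod named "nginx-xyz" logs into an index like "nginx-xyz-2024.01.31".
pod_name="nginx-xyz"   # hypothetical pod name
index_name="${pod_name}-$(date -u +%Y.%m.%d)"
echo "$index_name"
```

You can list the resulting indices with curl -s 'http://<es-host>:9200/_cat/indices?v'.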

I hope this blog was useful to you. Looking forward to your claps and suggestions. For any queries, feel free to comment.


Windows 2012/Windows 2012R2 - RDP laggy mouse

Q: If the mouse lags in the remote desktop window during an RDP session.
A: Disable the pointer shadow for the cursor :)




Windows 10 BSOD with video_tdr_failure igdkmd64.sys

That "igdkmd64.sys" is related to Intel HD Graphics 4600 Kernel Mode Driver. For me it looks like "igdkmd64.sys" file is corrupted. So I have tried below options,

1. Uninstall the Intel HD Graphics Drivers and reboot PC
2. Open Device Manager and navigate to Display Adapters-> Intel Graphics (right click) -> Disable.



After PC reboot you will not get any BSOD with video_tdr_failure.

Error 0x00000709 when setting a printer as the default.

How to fix it:
Change the permissions on the registry entry for the default printer.

How to do it:

Type regedit in the Start menu search box and run it with administrator privileges.

Navigate to:

HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows

Right-click the key above (the one that looks like a folder) and choose Permissions.

Grant Full Control to the current user, the administrator, and probably SYSTEM as well.

Then, in:

HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows

delete the data of the Device value (that is, not the whole value, just the data assigned to it),

and if a driver path is listed below it, delete its data as well.

Save and restart.
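The same edit can also be done from an elevated Command Prompt instead of regedit; note that, unlike clearing just the data in regedit, this removes the Device value itself:

```
reg delete "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows" /v Device /f
```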

Then select the default printer in the Printers panel (this time without the error).

Works for me...

Regards, I hope this helps someone. If it does, please leave a comment.

VMware 5.5 - Windows Server 2012R2 losing network

If you use the e1000 or e1000e network card, you must install the update:

A better option is to change the NIC to VMXNET3 (better and faster).
If you change the NIC to VMXNET3, you must install VMware Tools (it provides the NIC driver).



HP ProCurve - switch update firmware

swx# show version
Image stamp:    /sw/pre/build/nemo(ndx)
                Jan 25 2008 13:54:14
                R.11.07
                104
Boot Image:     Primary
BL-C234-AS01# show flash
Image           Size(Bytes)   Date   Version
-----           ----------  -------- -------
Primary Image   : 3689315   01/25/08 R.11.07
Secondary Image : 3689315   01/25/08 R.11.07
Boot Rom Version: R.10.06
Current Boot    : Primary
We see that we're running firmware version R.11.07. When copying the firmware to my TFTP server, I'll use the same filename convention that HP does, so our filename will be R_11_07.swi. Let's save a copy to the TFTP server:
swx# copy flash tftp 192.168.1.12 R_11_07.swi
I always like to verify that the file actually shows up on the TFTP server, regardless of any error messages (or lack thereof) from the switch:
$ ls -l /tftpboot/R_11_07.swi
-rw-r--r-- 1 nobody nogroup 3689315 2009-02-02 23:28 /tftpboot/R_11_07.swi
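The filename convention mentioned above is just the dotted version string with the dots turned into underscores, plus a .swi extension. As a quick shell sketch:

```shell
# HP's firmware filename convention: "R.11.07" -> "R_11_07.swi"
version="R.11.07"
filename="$(echo "$version" | tr '.' '_').swi"
echo "$filename"
```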
In the meantime, I've downloaded the latest version of the software, R.11.25, from HP's FTP server and saved it as R_11_25.swi on the TFTP server. Let's go ahead and copy it over to the primary flash on the switch:
swx# copy tftp flash 192.168.1.12 R_11_25.swi primary
The Primary OS Image will be deleted, continue [y/n]?  y
Here we tell the switch to copy from a TFTP server to flash memory, that the TFTP server has IP address 192.168.1.12, that the filename on the TFTP server is R_11_25.swi, and that we want that file saved to the primary flash on the switch. You'll see a progress counter as the file is transferred, then:
Validating and Writing System Software to FLASH...
which takes a moment. After it has completed, we can verify success by examining the contents of flash:
swx# show flash
Image           Size(Bytes)   Date   Version
-----           ----------  -------- -------
Primary Image   : 3790986   01/14/09 R.11.25
Secondary Image : 3689315   01/25/08 R.11.07
Boot Rom Version: R.10.06
Current Boot    : Primary
We can now reboot the switch with the new firmware:
swx# reload
Device will be rebooted, do you want to continue [y/n]?  y
After the switch boots back up, we can verify that it is running the latest firmware:
swx# show version
Image stamp:    /sw/pre/build/nemo(ndx)
                Jan 14 2009 15:31:02
                R.11.25
                301
Boot Image:     Primary
HINT: Always read the release notes before upgrading firmware, especially on a production device!

MegaCLI - rebuild

While replacing a bad drive with a drive that used to be part of another RAID array configuration, the RAID refused to automatically rebuild, thinking that I might want to import the configuration from this disk (or that there's data on there that I might need).
Simply inserting the drive doesn't make the controller rebuild the array with that disk. Here's how to manually make the drive get along with the rest of the new array:
(Note: this is a 64-bit server, so the MegaCli client I'm using is called "MegaCli64"; if you are not running x64, simply substitute the path to your MegaCli binary in the commands below.)
server:~# MegaCli64 -PDList -a0
[...]
Enclosure Device ID: 32
Slot Number: 4
[...]
Firmware state: Unconfigured(bad)
[...]
Secured: Unsecured
Locked: Unlocked
Foreign State: Foreign
[...]
Based on the information obtained above, I now know that the disk drive I just replaced is [32:4] ([enclosureid:slotnumber]) and is currently being reported as 'Unconfigured(bad)'.
To bring this drive back online run:
server:~# MegaCli64 -PDMakeGood -PhysDrv[32:4] -a0
Adapter: 0: EnclId-32 SlotId-4 state changed to Unconfigured-Good.
The controller will now recognize the disk as being a "foreign" one. This does not mean it was made in Japan (though, it likely was). It means it has detected some RAID configuration/data on it and thus, considers it as a disk being part of an array that may be imported into current controller configuration. Because of this, it will not automatically rebuild until you force it to.
Now you can ask the controller to scan for foreign configurations and remove them:
server:~# MegaCli64 -CfgForeign -Scan -a0
There are 1 foreign configuration(s) on controller 0.
server:~# MegaCli64 -CfgForeign -Clear -a0
Foreign configuration 0 is cleared on controller 0.
The disk should now be available for rebuilding into your new RAID array. To confirm, run this:
server:~# MegaCli64 -PDList -a0
[...]
Enclosure Device ID: 32
Slot Number: 4
[...]
Firmware state: Unconfigured(good), Spun Up
Foreign State: None
[...]
Excellent. We have a good, recognized (yet still unconfigured) drive now. Now we have all we need to add the disk back into the new array, and rebuild:
Get the disk [32:4] back into array 1, as disk 4:
server:~# MegaCli64 -PdReplaceMissing -PhysDrv[32:4] -array1 -row4 -a0
Adapter: 0: Missing PD at Array 1, Row 4 is replaced
And finally start rebuilding it:
server:~# MegaCli64 -PDRbld -Start -PhysDrv[32:4] -a0
Started rebuild progress on device(Encl-32 Slot-4)
Now, sit back, relax, grab a smoke and wait for it to rebuild itself into your new RAID array. Not so foreign anymore, huh?
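To check on the rebuild later, MegaCLI can report progress for that drive (same [32:4] enclosure:slot addressing as above):

```
server:~# MegaCli64 -PDRbld -ShowProg -PhysDrv[32:4] -a0
```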
