Hi all,
Does anyone have experience setting up Metricbeat with an OVH-hosted Logstash collector?
Metricbeat is set up correctly on my server and talks to Logstash; I can see the stream data in my Sunrise interface. The problem is that Graylog does not show anything in the stream view.
Any ideas on how to debug this? I don't really know where to start, or even if my assumptions about the LDP are correct.
Thanks in advance!
SSL connection error - Metricbeat & Logstash
Hello,
First of all, thank you for your interest in Logs Data Platform, and sorry for not having answered earlier; we usually respond within one business day, so rest assured we won't be this slow in the future.
Using Metricbeat with the Logstash collector is feasible but not practical. You would have to use a mutate filter to add a "message" field in your configuration and select the fields you want to keep, because we reject messages with more than 200 fields. Moreover, you would have to follow the LDP field-naming conventions: https://docs.ovh.com/gb/en/logs-data-platform/field-naming-conventions/
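For reference, a hypothetical Logstash filter along these lines could do it (the field names, the message text, and the whitelist patterns are illustrative, not taken from a working configuration):

```
filter {
  # LDP expects a "message" field; synthesize one from the event.
  mutate {
    add_field => { "message" => "metricbeat event from %{[beat][hostname]}" }
  }
  # Keep only a whitelist of fields to stay well under the 200-field limit.
  prune {
    whitelist_names => ["^message$", "^@timestamp$", "^beat\.", "^system\."]
  }
}
```

The prune filter drops every field whose name does not match one of the whitelist patterns, which is the simplest way to bound the field count.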
However, Metricbeat is fully compatible with Index As A Service: https://docs.ovh.com/gb/en/logs-data-platform/index-as-a-service/ This feature allows you to use most products of the ELK stack transparently. Note that you should use Metricbeat 5.6.9, since the platform currently runs Elasticsearch 5.6 (6.x versions might work, but we have not tested them).
To use it, follow these instructions:
1. Create an index on LDP as described in the documentation: https://docs.ovh.com/gb/en/logs-data-platform/index-as-a-service/
2. Metricbeat ships a template for the mapping, but template creation is not permitted on LDP Index As A Service. You have to import the mapping yourself, using the template in the file metricbeat.template.json provided by Metricbeat: extract the `_default_` part of the JSON file (under the "mappings" object) and use the mapping API to import it into your index. For your convenience, I have uploaded the mapping to import here: https://plik.root.gg/file/AXctYWxEWguhKFjm/oABx8ArTN4jyNqAJ/default_mapping. The following curl command can be used to upload the mapping effortlessly:
`curl -u <user>:<password> -X PUT https://<cluster>.logs.ovh.com:9200/logs-<name>/_default_ -d "@default_mapping"`
Where `<user>:<password>` are your LDP credentials (you can of course use tokens: https://docs.ovh.com/gb/en/logs-data-platform/tokens-logs-data-platform/), `logs-<name>` is the index you created, and `default_mapping` is the file you just downloaded or extracted.
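If you prefer to extract the mapping yourself rather than download it, a jq sketch along these lines should work (it assumes jq is installed; the template written below is a tiny stand-in fabricated for the demo, and with the real metricbeat.template.json shipped by Metricbeat only the jq line is needed):

```shell
# Demo stand-in for Metricbeat's real template file.
cat > metricbeat.template.json <<'EOF'
{"template": "metricbeat-*", "mappings": {"_default_": {"_all": {"norms": false}, "properties": {}}}}
EOF

# Pull out just the _default_ object under "mappings".
jq '.mappings._default_' metricbeat.template.json > default_mapping
cat default_mapping
```

The resulting `default_mapping` file is what the curl command above sends to the index.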
3. Configure Metricbeat with the following settings for the Elasticsearch output under `output.elasticsearch`:
```
  # Array of hosts to connect to.
  hosts: ["<cluster>.logs.ovh.com:9200"]
  template.enabled: false
  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "<user>"
  password: "<password>"
  index: "logs-<name>"
```
Launch Metricbeat, and you will be able to see your metrics with the following command:
`curl -u <user>:<password> -X GET "https://<cluster>.logs.ovh.com:9200/logs-<name>/_search?pretty"`
Note that LDP is compatible with Kibana; you can configure it by following this documentation: https://docs.ovh.com/gb/en/logs-data-platform/using-kibana-with-logs/ If you prefer Grafana, see https://docs.ovh.com/gb/en/logs-data-platform/using-grafana-with-logs/
Last but not least: we have a dedicated Metrics Data Platform (https://www.ovh.com/fr/data-platforms/metrics/), which may be better suited to your use case.
As always, don't hesitate to reach out to us here if you have any questions about this response or anything else.
And sorry again for taking so long to respond.
Happy Logging !
Thanks for the detailed answer, I'll try that!