POST GELF error

I ran into a problem while trying to send data from Oracle via UTL_TCP to a Graylog GELF TCP input.

I am trying it with a single event (a JSON record formatted according to the Graylog requirements). The problem is that, although the single event is ingested correctly, I also get 45 Graylog internal events, apparently related, two of them being:

.....
"2018-12-18T14:13:33.938Z","graylog-server","runit-service","true","6","ERROR [NettyTransport] Error in Input [GELF TCP/5c119b1eb358cb056b417b89] (channel [id: 0x1957cea0, /192.168.1.22:49550 :> /192.168.1.74:41000])"
.....
"2018-12-18T14:13:33.959Z","graylog-server","runit-service","true","6","java.lang.IllegalStateException: GELF message is too short. Not even the type header would fit."
.....

The PL/SQL code I use in Oracle looks like this:

create or replace procedure AAA_GRAYLOG_TCP_05
is

conn                utl_tcp.connection;
v_GL_record         clob;
ret_val             pls_integer;

v_lms_host          varchar2(4000) := '192.168.1.74';
v_lms_port          number := 41000;

begin

v_GL_record := 
'{ "version": "1.1", "host": "dataplus05", "short_message": "A short message", "level": 5, "_some_info": "foo" }' 
;

dbms_output.put_line(v_GL_record);

conn := utl_tcp.open_connection(
remote_host => v_lms_host, 
remote_port => v_lms_port
);
ret_val := UTL_TCP.WRITE_LINE(conn, v_GL_record);
utl_tcp.close_connection(conn);

end;

I tried appending:

a "\0", or a ",", after the last }

to the end of v_GL_record, but to no avail. I still get the 45 extra records, along with the generated ERROR and java.lang.IllegalStateException entries.

I also tried using the same backslash-zero characters instead of the standard CR/LF when opening the connection:

conn := utl_tcp.open_connection(
remote_host => v_lms_host, 
remote_port => v_lms_port,  
newline => ''
);

Still the same problem.
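For reference, the Graylog GELF TCP input by default expects each message to be terminated by a single null byte (its "null frame delimiter" option), while UTL_TCP.WRITE_LINE appends the connection's newline characters (CR/LF by default). The following is only a minimal sketch of a null-terminated write under that assumption, not a confirmed fix; the procedure name AAA_GRAYLOG_TCP_06 and the plain varchar2 payload (instead of the original CLOB) are placeholders for illustration:

create or replace procedure AAA_GRAYLOG_TCP_06
is

conn                utl_tcp.connection;
ret_val             pls_integer;

-- short test payload; a real CLOB would have to be written in chunks
v_GL_record         varchar2(4000);

v_lms_host          varchar2(4000) := '192.168.1.74';
v_lms_port          number := 41000;

begin

v_GL_record := 
'{ "version": "1.1", "host": "dataplus05", "short_message": "A short message", "level": 5, "_some_info": "foo" }'
;

conn := utl_tcp.open_connection(
remote_host => v_lms_host, 
remote_port => v_lms_port
);

-- write_text adds no CR/LF of its own; the trailing chr(0) closes the GELF frame
ret_val := utl_tcp.write_text(conn, v_GL_record || chr(0));

utl_tcp.flush(conn);
utl_tcp.close_connection(conn);

end;

With this framing each call delivers exactly one GELF frame and leaves no stray CR/LF bytes for the input to interpret as an extra, too-short message, which may be where the IllegalStateException comes from.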

Can anyone help?

Best regards,

Altyn

List of Graylog internal messages:

Additional Graylog internal messages

The GELF HTTP input is accepting invalid GELF messages and replies with Status 202 (Accepted) instead of Status 400 (Bad Request) or 422 (Unprocessable Entity).

Valid message

$ curl -v -X POST -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}' http://localhost:12202/gelf
* Hostname was NOT found in DNS cache
*   Trying ::1...
* Connected to localhost (::1) port 12202 (#0)
> POST /gelf HTTP/1.0
> User-Agent: curl/7.37.1
> Host: localhost:12202
> Accept: */*
> Content-Length: 86
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 86 out of 86 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 202 Accepted
< Content-Length: 0
< Connection: close
<
* Closing connection 0

Invalid message

$ curl -v -X POST -p0 -d 'NOT A GELF MESSAGE' http://localhost:12202/gelf
* Hostname was NOT found in DNS cache
*   Trying ::1...
* Connected to localhost (::1) port 12202 (#0)
> POST /gelf HTTP/1.0
> User-Agent: curl/7.37.1
> Host: localhost:12202
> Accept: */*
> Content-Length: 18
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 18 out of 18 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 202 Accepted
< Content-Length: 0
< Connection: close
<
* Closing connection 0


Expected Behavior

This morning my Graylog was working as intended. It’s a docker-compose set-up running behind an Apache TLS reverse proxy. I had to update the FQDN of the server. I added a new virtual host with the new FQDN. I also kept the old virtualhost. Then I updated my services’ log configurations to point to the new FQDN for logging. I expected things to keep working.

Current Behavior

I noticed that suddenly I was receiving very few log messages, but not none. I reverted the changes to the services' configurations and started testing the new FQDN using cURL:

while true
  do curl -i -X POST -H 'Content-Type: application/json' -d '{ "version": "1.1", "host": "redacted.com", "short_message": "A short message", "level": 5, "_some_info": "foo" }' 'https://redacted.example.org/gelf'
  sleep 3
done

I noticed that of the cURL messages, sent every 3 seconds, I received approximately one per minute on average.

I reverted my changes and went back to using the old FQDN. However, the problem persisted.

Currently I can look at the input and see that the number of messages received is 0/minute while the data-received counters are slowly increasing, and when I look in the Apache access logs I see that it responds with an HTTP 202 for tens of log messages per second, none of which show up in Graylog.

Everything I can see at the HTTP layer works perfectly fine. Each message is acknowledged with an HTTP 202. The Graylog GUI running behind the same proxy, including Apache OIDC (mod authz_oidc) + trusted proxy authentication, works fine. Nevertheless, I have received an average of 40 log messages an hour in Graylog over the last 24 hours, even though the actual number sent by the servers is around 6,000 an hour.

If I stop the GELF HTTP input, I immediately see the stream of POSTs start logging errors.

Here’s the full trace7 apache2 output of a proxied request:

[Tue Nov 15 08:41:41.483634 2022] [core:trace5] [pid 2809430:tid 140008686089984] protocol.c(708): [client 52.233.195.70:61163] Request received from client: POST /gelf HTTP/1.1
[Tue Nov 15 08:41:41.483733 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_request.c(436): [client 52.233.195.70:61163] Headers received from client:
[Tue Nov 15 08:41:41.483748 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_request.c(439): [client 52.233.195.70:61163]   Host: REDACTED
[Tue Nov 15 08:41:41.483758 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_request.c(439): [client 52.233.195.70:61163]   Cache-Control: no-cache
[Tue Nov 15 08:41:41.483767 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_request.c(439): [client 52.233.195.70:61163]   Content-Type: application/json; charset=utf-8
[Tue Nov 15 08:41:41.483775 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_request.c(439): [client 52.233.195.70:61163]   Content-Length: 455
[Tue Nov 15 08:41:41.483797 2022] [proxy:trace2] [pid 2809430:tid 140008686089984] mod_proxy.c(687): [client 52.233.195.70:61163] AH03461: attempting to match URI path '/gelf' against prefix '/gelf' for proxying
[Tue Nov 15 08:41:41.483809 2022] [proxy:trace1] [pid 2809430:tid 140008686089984] mod_proxy.c(773): [client 52.233.195.70:61163] AH03464: URI path '/gelf' matches proxy handler 'proxy:http://127.0.0.1:12202/gelf'
[Tue Nov 15 08:41:41.483833 2022] [authz_core:debug] [pid 2809430:tid 140008686089984] mod_authz_core.c(845): [client 52.233.195.70:61163] AH01628: authorization result: granted (no directives)
[Tue Nov 15 08:41:41.483844 2022] [core:trace3] [pid 2809430:tid 140008686089984] request.c(310): [client 52.233.195.70:61163] request authorized without authentication by access_checker_ex hook: /gelf
[Tue Nov 15 08:41:41.483860 2022] [proxy_http:trace1] [pid 2809430:tid 140008686089984] mod_proxy_http.c(62): [client 52.233.195.70:61163] HTTP: canonicalising URL //127.0.0.1:12202/gelf
[Tue Nov 15 08:41:41.483895 2022] [proxy:trace2] [pid 2809430:tid 140008686089984] proxy_util.c(2145): [client 52.233.195.70:61163] http: found worker http://127.0.0.1:12202/gelf for http://127.0.0.1:12202/gelf
[Tue Nov 15 08:41:41.483908 2022] [proxy:debug] [pid 2809430:tid 140008686089984] mod_proxy.c(1254): [client 52.233.195.70:61163] AH01143: Running scheme http handler (attempt 0)
[Tue Nov 15 08:41:41.483944 2022] [proxy_http:trace1] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1985): [client 52.233.195.70:61163] HTTP: serving URL http://127.0.0.1:12202/gelf
[Tue Nov 15 08:41:41.483956 2022] [proxy:debug] [pid 2809430:tid 140008686089984] proxy_util.c(2341): AH00942: HTTP: has acquired connection for (127.0.0.1)
[Tue Nov 15 08:41:41.483982 2022] [proxy:debug] [pid 2809430:tid 140008686089984] proxy_util.c(2395): [client 52.233.195.70:61163] AH00944: connecting http://127.0.0.1:12202/gelf to 127.0.0.1:12202
[Tue Nov 15 08:41:41.483998 2022] [proxy:debug] [pid 2809430:tid 140008686089984] proxy_util.c(2604): [client 52.233.195.70:61163] AH00947: connected /gelf to 127.0.0.1:12202
[Tue Nov 15 08:41:41.484036 2022] [proxy:trace2] [pid 2809430:tid 140008686089984] proxy_util.c(2886): HTTP: reusing backend connection 127.0.0.1:41896<>127.0.0.1:12202
[Tue Nov 15 08:41:41.484051 2022] [core:trace6] [pid 2809430:tid 140008686089984] core_filters.c(519): [remote 127.0.0.1:12202] will flush because of FLUSH bucket
[Tue Nov 15 08:41:41.484776 2022] [proxy_http:trace3] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1361): [client 52.233.195.70:61163] Status from backend: 202
[Tue Nov 15 08:41:41.484801 2022] [proxy_http:trace4] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1016): [client 52.233.195.70:61163] Headers received from backend:
[Tue Nov 15 08:41:41.484813 2022] [proxy_http:trace4] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1039): [client 52.233.195.70:61163] content-length: 0
[Tue Nov 15 08:41:41.484826 2022] [proxy_http:trace4] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1039): [client 52.233.195.70:61163] connection: keep-alive
[Tue Nov 15 08:41:41.484841 2022] [proxy_http:trace3] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1724): [client 52.233.195.70:61163] start body send
[Tue Nov 15 08:41:41.484853 2022] [proxy:debug] [pid 2809430:tid 140008686089984] proxy_util.c(2356): AH00943: http: has released connection for (127.0.0.1)
[Tue Nov 15 08:41:41.484874 2022] [http:trace3] [pid 2809430:tid 140008686089984] http_filters.c(1125): [client 52.233.195.70:61163] Response sent with status 202, headers:
[Tue Nov 15 08:41:41.484885 2022] [http:trace5] [pid 2809430:tid 140008686089984] http_filters.c(1134): [client 52.233.195.70:61163]   Date: Tue, 15 Nov 2022 07:41:41 GMT
[Tue Nov 15 08:41:41.484895 2022] [http:trace5] [pid 2809430:tid 140008686089984] http_filters.c(1137): [client 52.233.195.70:61163]   Server: Apache/2.4.41 (Ubuntu)
[Tue Nov 15 08:41:41.484905 2022] [http:trace4] [pid 2809430:tid 140008686089984] http_filters.c(955): [client 52.233.195.70:61163]   content-length: 0
[Tue Nov 15 08:41:41.484935 2022] [proxy_http:trace2] [pid 2809430:tid 140008686089984] mod_proxy_http.c(1870): [client 52.233.195.70:61163] end body send

Context

Running in docker-compose on Ubuntu VPS

Your Environment

  • Graylog Version: docker graylog/graylog:4.3.9
  • Elasticsearch Version: docker.elastic.co/elasticsearch/elasticsearch:7.17.2 (because of Log4J, I know it’s not officially supported, but it’s been running on this version without issues for a long time already.)
  • MongoDB Version: docker mongo:5.0
  • Operating System: Ubuntu 20.04.4 LTS host

I am getting the following error when my log4j2.xml file is processed:

Error processing element GELF ([Appenders: null]): CLASS_NOT_FOUND

At first I thought it was because I was referencing an invalid appender, but I still have the error after commenting it out.

Here is what I have:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" packages="org.graylog2.log4j2">

<Appenders>
    <GELF   name="gelfAppender" 
            server="org.graylog2.log.GelfAppender" 
            port="12201"
            hostName="some.host" 
            facility="GELF-JAVA"
            extractStacktrace="true"
            addExtendedInformation="true">
        <PatternLayout pattern="${some_pattern}"/>
         <!-- Additional fields -->
        <KeyValuePair key="someKey" value="someVal"/>
    </GELF>
</Appenders>
</Configuration>

asked Sep 21, 2016 at 21:01 by mr nooby noob

In my case, I was missing this dependency in the pom.xml:

    <dependency>
        <groupId>org.graylog2.log4j2</groupId>
        <artifactId>log4j2-gelf</artifactId>
        <version>1.3.1</version>
    </dependency>

This article helped me a lot.

answered Jan 4, 2021 at 20:25 by Azucena H

It turned out I was missing a couple of dependencies; no more error! :D

answered Sep 21, 2016 at 23:16 by mr nooby noob
