Worker process core dumped

Worker process core dumped

zakirenish
version: 1.17.8

debug log:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2020/09/03 14:09:21 [error] 320#320: *873195 upstream timed out (110:
Connection timed out) while connecting to upstream, client: 10.68.23.2,
server: , request: "POST /api/hmf-controller/v1/cometd/handshake HTTP/1.1",
upstream: "http://172.30.8.71:12280/api/hmf-controller/v1/cometd/handshake",
host: "10.226.208.117:28001", referrer:
"https://10.226.208.117:28001/uportal/framework/default.html"
2020/09/03 14:09:21 [debug] 320#320: *873195 finalize http upstream request:
504
2020/09/03 14:09:21 [debug] 320#320: *873195 finalize http proxy request
2020/09/03 14:09:21 [debug] 320#320: *873195 close http upstream connection:
359
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000000E1A110, unused:
48
2020/09/03 14:09:21 [debug] 320#320: *873195 reusable connection: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 http finalize request: 504,
"/api/hmf-controller/v1/cometd/handshake?" a:1, c:1
2020/09/03 14:09:21 [debug] 320#320: *873195 http special response: 504,
"/api/hmf-controller/v1/cometd/handshake?"
2020/09/03 14:09:21 [debug] 320#320: *873195 headers more header filter, uri
"/api/hmf-controller/v1/cometd/handshake"
2020/09/03 14:09:21 [debug] 320#320: *873195 lua header filter for user lua
code, uri "/api/hmf-controller/v1/cometd/handshake"
2020/09/03 14:09:21 [debug] 320#320: *873195 code cache lookup
(key='header_filter_by_lua_nhli_08c8ad024deaf339a3f72ac205896eb4', ref=3)
2020/09/03 14:09:21 [debug] 320#320: *873195 code cache hit
(key='header_filter_by_lua_nhli_08c8ad024deaf339a3f72ac205896eb4', ref=3)
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "msb_admin_controller": header_filter
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "hide-dexmesh-error-stack":
header_filter
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "addheaders": header_filter
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "divide": header_filter
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "redirect-transformer-plugin":
header_filter
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:36:
header_filter():  executing plugin "auth-plugin": header_filter
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: *873195 lua capture header filter, uri
"/api/hmf-controller/v1/cometd/handshake"
2020/09/03 14:09:21 [debug] 320#320: *873195 HTTP/1.1 504 Gateway Time-out
2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:1 f:0
0000000001C340D8, pos 0000000001C340D8, size: 348 file: 0, size: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter: l:0 f:0
s:348
2020/09/03 14:09:21 [debug] 320#320: *873195 http output filter
"/api/hmf-controller/v1/cometd/handshake?"
2020/09/03 14:09:21 [debug] 320#320: *873195 http copy filter:
"/api/hmf-controller/v1/cometd/handshake?"
2020/09/03 14:09:21 [debug] 320#320: *873195 lua capture body filter, uri
"/api/hmf-controller/v1/cometd/handshake"
2020/09/03 14:09:21 [debug] 320#320: *873195 http postpone filter
"/api/hmf-controller/v1/cometd/handshake?" 0000000001C343C0
2020/09/03 14:09:21 [debug] 320#320: *873195 write old buf t:1 f:0
0000000001C340D8, pos 0000000001C340D8, size: 348 file: 0, size: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0
0000000000000000, pos 0000000000A64620, size: 114 file: 0, size: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0
0000000000000000, pos 0000000000A65960, size: 41 file: 0, size: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 write new buf t:0 f:0
0000000000000000, pos 0000000000A657C0, size: 402 file: 0, size: 0
2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter: l:1 f:0
s:905
2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter limit 0
2020/09/03 14:09:21 [debug] 320#320: *873195 malloc: 0000000001AE0030:16384
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 348
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 114
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 41
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL buf copy: 402
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL to write: 905
2020/09/03 14:09:21 [debug] 320#320: *873195 SSL_write: 905
2020/09/03 14:09:21 [debug] 320#320: *873195 http write filter
0000000000000000
2020/09/03 14:09:21 [debug] 320#320: *873195 http copy filter: 0
"/api/hmf-controller/v1/cometd/handshake?"
2020/09/03 14:09:21 [debug] 320#320: *873195 http finalize request: 0,
"/api/hmf-controller/v1/cometd/handshake?" a:1, c:1
2020/09/03 14:09:21 [debug] 320#320: *873195 set http keepalive handler
2020/09/03 14:09:21 [debug] 320#320: *873195 http close request
2020/09/03 14:09:21 [debug] 320#320: *873195 lua request cleanup:
forcible=0
2020/09/03 14:09:21 [debug] 320#320: *873195 lua log handler,
uri:"/api/hmf-controller/v1/cometd/handshake" c:1
2020/09/03 14:09:21 [debug] 320#320: *873195 code cache lookup
(key='nhlf_9c4416184f27253b6f5f86c35c6afc6b', ref=4)
2020/09/03 14:09:21 [debug] 320#320: *873195 code cache hit
(key='nhlf_9c4416184f27253b6f5f86c35c6afc6b', ref=4)
2020/09/03 14:09:21 [info] 320#320: *873195 [lua] logger.lua:27:
5382af7ecb49a6a9ce6f006cf859799b {"matched":"hmf-controller","auth-plugin
add Z-EXTENT":true,"svc_type":"api"} while logging request, client:
10.68.23.2, server: , request: "POST /api/hmf-controller/v1/cometd/handshake
HTTP/1.1", upstream:
"http://172.30.8.71:12280/api/hmf-controller/v1/cometd/handshake", host:
"10.226.208.117:28001", referrer:
"https://10.226.208.117:28001/uportal/framework/default.html"
2020/09/03 14:09:21 [debug] 320#320: fetching key
"ranoss|hmf-controller|v1|172.30.8.71:12280-2-start_time" in shared dict
"metrics"
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "msb_admin_controller": log
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "hide-dexmesh-error-stack": log
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "addheaders": log
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "divide": log
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "redirect-transformer-plugin": log
2020/09/03 14:09:21 [debug] 320#320: *873195 [lua] base_plugin.lua:40:
log(): executing plugin "auth-plugin": log
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: fetching key
"circuitbreaker.ranoss.hmf-controller.v1.2020-09-03T01:06:36Z.172.30.8.71:12280.status"
in shared dict "circuitbreaker"
2020/09/03 14:09:21 [debug] 320#320: shmtx lock
2020/09/03 14:09:21 [debug] 320#320: shmtx unlock
2020/09/03 14:09:21 [debug] 320#320: *873195 http log handler
2020/09/03 14:09:21 [debug] 320#320: *873195 http map started
2020/09/03 14:09:21 [debug] 320#320: *873195 http script var: "504"
2020/09/03 14:09:21 [debug] 320#320: *873195 http map: "504" "1"
2020/09/03 14:09:21 [debug] 320#320: *873195 http script var: "1"
2020/09/03 14:09:21 [debug] 320#320: *873195 posix_memalign:
0000000001343E90:4096 @16
2020/09/03 14:09:21 [debug] 320#320: *873195 run cleanup: 0000000001C32EB8
2020/09/03 14:09:21 [debug] 320#320: lua release ngx.ctx at ref 121
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C315F0, unused:
0
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C32600, unused:
0
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001C33610, unused:
189
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001343E90, unused:
3568
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000000C404B0
2020/09/03 14:09:21 [debug] 320#320: *873195 hc free: 0000000000000000
2020/09/03 14:09:21 [debug] 320#320: *873195 hc busy: 0000000000000000 0
2020/09/03 14:09:21 [debug] 320#320: *873195 free: 0000000001AE0030
2020/09/03 14:09:28 [alert] 46#46: worker process 320 exited on signal 11
(core dumped)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

coredump backtrace:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Program terminated with signal SIGSEGV, Segmentation fault.
#0  ngx_http_set_keepalive (r=0x1c31640) at
src/http/ngx_http_request.c:3178
3178        if (r->headers_out.persist_front_end_connection &&
!c->tcp_keepalive) {
(gdb) bt
#0  ngx_http_set_keepalive (r=0x1c31640) at
src/http/ngx_http_request.c:3178
#1  ngx_http_finalize_connection (r=0x1c31640) at
src/http/ngx_http_request.c:2720
#2  0x00000000004aca34 in ngx_http_upstream_handler (ev=0x15431d0) at
src/http/ngx_http_upstream.c:1290
#3  0x0000000000478776 in ngx_event_expire_timers () at
src/event/ngx_event_timer.c:94
#4  0x0000000000478405 in ngx_process_events_and_timers
(cycle=cycle@entry=0xdf7540) at src/event/ngx_event.c:271
#5  0x000000000048120d in ngx_worker_process_cycle (cycle=0xdf7540,
data=<optimized out>) at src/os/unix/ngx_process_cycle.c:821
#6  0x000000000047f7de in ngx_spawn_process (cycle=cycle@entry=0xdf7540,
proc=0x4811c0 <ngx_worker_process_cycle>, data=0x0, name=0x74ad49 "worker
process",
    respawn=respawn@entry=6) at src/os/unix/ngx_process.c:199
#7  0x0000000000482873 in ngx_reap_children (cycle=0xdf7540) at
src/os/unix/ngx_process_cycle.c:688
#8  ngx_master_process_cycle (cycle=0xdf7540, cycle@entry=0xb041a0) at
src/os/unix/ngx_process_cycle.c:181
#9  0x0000000000455e79 in main (argc=<optimized out>, argv=<optimized out>)
at src/core/nginx.c:385
(gdb) p r
$1 = (ngx_http_request_t *) 0x1c31640
(gdb) p c
$2 = (ngx_connection_t *) 0x7fb482641fa8
(gdb) p *r
Cannot access memory at address 0x1c31640
(gdb) p *c
$3 = {data = 0x13f5f50, read = 0x14587a0, write = 0x1542db0, fd = 168, recv
= 0x489bf0 <ngx_ssl_recv>, send = 0x4890e0 <ngx_ssl_write>,
  recv_chain = 0x48a2f0 <ngx_ssl_recv_chain>, send_chain = 0x489420
<ngx_ssl_send_chain>, listening = 0x11e21a0, sent = 905, log = 0x13f5ef0,
pool = 0x13f5e90, type = 1,
  sockaddr = 0x13f5ee0, socklen = 16, addr_text = {len = 10, data =
0x13f5f40 "10.68.23.2.2"}, proxy_protocol = 0x0, ssl = 0x13f5fa8, udp =
0x0,
  local_sockaddr = 0x13f6060, local_socklen = 16, buffer = 0x13f6000, queue
= {prev = 0x0, next = 0x0}, number = 873195, requests = 15, buffered = 0,
log_error = 2,
  timedout = 0, error = 0, destroyed = 1, idle = 0, reusable = 0, close = 0,
shared = 0, sendfile = 1, sndlowat = 0, tcp_nodelay = 1, tcp_nopush = 0,
need_last_buf = 0,
  tcp_keepalive = 0, logged = 0, sendfile_task = 0x0}
(gdb)
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


It looks like "r" is not accessible, but I seem to remember that the "request"
objects are pre-allocated, not on the heap?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289296#msg-289296

Re: Worker process core dumped

zakirenish
I was wrong; the request object is created on the fly together with its pool
object. Here the pool was destroyed before "r" was referenced, which caused the
core dump.
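
For reference, this is roughly what happens in the nginx sources (a simplified
paraphrase of src/http/ngx_http_request.c, not a verbatim copy; details vary
between versions): ngx_http_create_request() carves the request out of a
dedicated pool, and ngx_http_free_request() ends by destroying that pool, so
the request object itself becomes invalid. Here "cscf" and "c" are the core
server configuration and the connection, as in the original functions.

    /* In ngx_http_create_request(): the request lives in its own pool. */
    ngx_pool_t          *pool;
    ngx_http_request_t  *r;

    pool = ngx_create_pool(cscf->request_pool_size, c->log);
    r = ngx_pcalloc(pool, sizeof(ngx_http_request_t));
    r->pool = pool;

    /* At the end of ngx_http_free_request(): the pool is destroyed, and with
     * it the request object and everything else allocated from that pool. */
    pool = r->pool;
    r->pool = NULL;
    ngx_destroy_pool(pool);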

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289298#msg-289298

Re: Worker process core dumped

zakirenish
To be more certain: can somebody confirm that "r" is no longer accessible after
this call?

                   ngx_http_free_request(r, 0);

Thank you in advance!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289299#msg-289299

Re: Worker process core dumped

Maxim Dounin
Hello!

On Thu, Sep 03, 2020 at 07:19:08AM -0400, allenhe wrote:

> To be more certain: can somebody confirm that "r" is no longer accessible
> after this call?
>                    ngx_http_free_request(r, 0);

The ngx_http_free_request() function frees the request, and using
"r" after the call is an error.

--
Maxim Dounin
http://mdounin.ru/

Re: Worker process core dumped

zakirenish
Hi,

I found that most of the time using "r" after ngx_http_free_request() doesn't
cause any problem; the core dump only happens once in a while under high load.
I am wondering whether ngx_pfree() does not return the memory back to the OS
when it is called?
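
For what it's worth, ngx_pfree() only gives "large" allocations (those that
went straight to malloc()) back to the C allocator; small allocations stay in
the pool until ngx_destroy_pool(), and even a real free() usually keeps the
pages inside the process rather than returning them to the OS. A rough
paraphrase of ngx_pfree() from src/core/ngx_palloc.c:

    ngx_int_t
    ngx_pfree(ngx_pool_t *pool, void *p)
    {
        ngx_pool_large_t  *l;

        /* Only allocations tracked on the pool's "large" list are freed. */
        for (l = pool->large; l; l = l->next) {
            if (p == l->alloc) {
                ngx_free(l->alloc);
                l->alloc = NULL;
                return NGX_OK;
            }
        }

        /* Small allocations are not freed here; they go away only when
         * the whole pool is destroyed. */
        return NGX_DECLINED;
    }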

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,289296,289346#msg-289346

Re: Worker process core dumped

Maxim Dounin
Hello!

On Tue, Sep 08, 2020 at 10:49:44PM -0400, allenhe wrote:

> I found that most of the time using "r" after ngx_http_free_request() doesn't
> cause any problem; the core dump only happens once in a while under high load.

That's because use-after-free errors do not always result in
segmentation faults as long as the memory hasn't yet been returned to
the OS by the memory allocator.  To get consistent errors, consider using
AddressSanitizer (https://en.wikipedia.org/wiki/AddressSanitizer).
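
A possible way to get an ASan-instrumented binary (assuming GCC or Clang with
AddressSanitizer support; the flags below are only an example, adjust them to
your own build and modules):

    ./configure --with-debug \
                --with-cc-opt='-fsanitize=address -O1 -g -fno-omit-frame-pointer' \
                --with-ld-opt='-fsanitize=address'
    make

With such a build, the first read of "r" after ngx_http_free_request() should
abort with a heap-use-after-free report instead of crashing only occasionally.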

--
Maxim Dounin
http://mdounin.ru/