High data in recv-q buffer and thread lock on java.io.BufferedInputStream on Linux
Posted by Sagar Patel on Server Fault, 2014-08-24
We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time.
The application hangs and stops reading data from the socket every few hours. In the thread dump, we found the stack trace below.
"Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
- locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
at org.smpp.Receiver.receiveAsync(Receiver.java:351)
at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
at java.lang.Thread.run(Thread.java:722)
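For context, the receive path looks roughly like the sketch below (simplified, not the actual org.smpp code; the host, port, and timeout value are placeholders). With no read timeout set, read() can block indefinitely in socketRead0 while holding the BufferedInputStream monitor seen in the dump; setting SO_TIMEOUT would instead surface a SocketTimeoutException and release the monitor between attempts.

import java.io.BufferedInputStream;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class BlockingReadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; the real application connects to an SMPP server.
        Socket socket = new Socket("smsc.example.com", 2775);

        // Without a read timeout, read() below can block forever in socketRead0
        // while holding the BufferedInputStream monitor (the lock seen in the dump).
        // A timeout turns an idle or stalled connection into a SocketTimeoutException.
        socket.setSoTimeout(30_000); // 30s, illustrative value

        InputStream in = new BufferedInputStream(socket.getInputStream());
        byte[] buf = new byte[4096];
        while (true) {
            try {
                int n = in.read(buf);          // same call path as the stack trace
                if (n == -1) break;            // peer closed the connection
                // ... hand the bytes to the PDU parser ...
            } catch (SocketTimeoutException e) {
                // No data within the timeout: decide whether to retry, send an
                // enquire_link, or close and reconnect instead of blocking forever.
            }
        }
        socket.close();
    }
}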
We are not able to trace the exact reason behind this. Kindly help.
We are using a 16-core machine, and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to check the recv-q. Recently the receive buffer gets stuck at some point: the recv-q size stops decreasing, and as a result we lose a lot of hits from the other side because our application is not accepting any data.
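If it helps diagnosis, the owner of the BufferedInputStream monitor can also be checked from inside the JVM while the receiver is stuck. Below is a minimal sketch using the standard ThreadMXBean API (the LockOwnerDump class name is just illustrative, not part of our application).

import java.lang.management.ManagementFactory;
import java.lang.management.MonitorInfo;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class LockOwnerDump {
    // Prints every thread that currently holds a java.io.BufferedInputStream monitor,
    // together with its top stack frame, so a stuck receiver can be spotted without
    // taking a full jstack dump.
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo ti : mx.dumpAllThreads(true, true)) {
            for (MonitorInfo mi : ti.getLockedMonitors()) {
                if ("java.io.BufferedInputStream".equals(mi.getClassName())) {
                    StackTraceElement[] st = ti.getStackTrace();
                    System.out.printf("%s holds %s at %s%n",
                            ti.getThreadName(), mi,
                            st.length > 0 ? st[0] : "(no frames)");
                }
            }
        }
    }
}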