[APIMANAGER-5154] java.lang.OutOfMemoryError: unable to create new native thread exception is observed when APIM is publishing data to the DAS
Created: 30/Jun/16 Updated: 05/Jul/16 Resolved: 05/Jul/16
Status: Resolved
Project: WSO2 API Manager
Component/s: None
Affects Version/s: 1.9.1
Fix Version/s: 2.0.0-Beta2
Type: Bug
Reporter: viraj senevirathne
Resolution: Fixed
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Priority: High
Assignee: Nuwan Dias
Votes: 0
Severity: Major
Estimated Complexity: Moderate
Test cases added: Yes
Description
When APIM publishes statistics to DAS, the following error can be observed.
TID: [-1] [] [2016-06-27 10:48:44,130] WARN {sun.rmi.transport.tcp.TCPTransport$AcceptLoop} - RMI TCP Accept-10056: accept loop for ServerSocket[addr=0.0.0.0/0.0.0.0,localport=10056] throws {sun.rmi.transport.tcp.TCPTransport$AcceptLoop}
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
    at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:414)
    at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:371)
    at java.lang.Thread.run(Thread.java:745)
In the heap dump taken from the DAS, we can observe about 7,000 threads such as "pool-26-thread-2107355", all in the following state.
java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, int, int)
java.net.SocketInputStream.read(byte[], int, int, int) (line: 152)
java.net.SocketInputStream.read(byte[], int, int) (line: 122)
java.io.BufferedInputStream.fill() (line: 235)
java.io.BufferedInputStream.read1(byte[], int, int) (line: 275)
java.io.BufferedInputStream.read(byte[], int, int) (line: 334)
org.apache.thrift.transport.TIOStreamTransport.read(byte[], int, int) (line: 127)
org.apache.thrift.transport.TTransport.readAll(byte[], int, int) (line: 84)
org.apache.thrift.protocol.TBinaryProtocol.readAll(byte[], int, int) (line: 378)
org.apache.thrift.protocol.TBinaryProtocol.readI32() (line: 297)
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin() (line: 204)
org.apache.thrift.TBaseProcessor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) (line: 22)
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() (line: 176)
java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) (line: 1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run() (line: 615)
java.lang.Thread.run() (line: 745)
The issue resides in the data publisher for DAS. Connections opened by the client are never closed, which causes the connection leak: each leaked connection keeps a TThreadPoolServer worker thread on the DAS side blocked in a socket read, until the server can no longer create new native threads. The fix is to close the client connection appropriately.
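A minimal sketch of the fix pattern, assuming a Thrift TSocket-based publisher like the one the thread dump points at; the class and method names here are hypothetical, not the actual APIM publisher code. The essential change is that the transport opened for publishing is closed on every code path.

import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class DasEventPublisherSketch {

    // Hypothetical publish method; the real publisher class is not shown in
    // this issue, so the names here are illustrative only.
    public void publishEvent(String host, int port) throws TTransportException {
        TTransport transport = new TSocket(host, port);
        try {
            transport.open();
            // ... build the Thrift client on top of this transport and send the event ...
        } finally {
            // This close() is the essence of the fix: without it, every publish
            // call leaves a socket open, and the DAS-side TThreadPoolServer parks
            // one worker thread per open connection in SocketInputStream.read(),
            // which is how the heap dump accumulated ~7,000 blocked threads.
            if (transport.isOpen()) {
                transport.close();
            }
        }
    }
}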
Comments
Comment by Nuwan Dias [ 05/Jul/16 ]
API Manager 2.0.0 uses the new data publisher. We haven't observed this issue in the
load/performance tests we have performed.