Efficiency of while(true) ServerSocket Listen
Posted by Submerged on Stack Overflow · Published 2010-05-17
I am wondering whether a typical while(true) ServerSocket listen loop ties up an entire core just waiting to accept a client connection (even when the listener implements Runnable and is started with Thread.start()).
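For concreteness, the listener is essentially the textbook pattern below (a simplified sketch; the port number and the handle() stub are placeholders, not my actual code):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class Listener implements Runnable {
        private final ServerSocket serverSocket;

        public Listener(int port) throws IOException {
            serverSocket = new ServerSocket(port);
        }

        @Override
        public void run() {
            while (true) {
                try {
                    // accept() blocks here until a client connects
                    Socket client = serverSocket.accept();
                    handle(client);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        private void handle(Socket client) {
            // placeholder: dispatch to the static methods the master invokes
        }

        public static void main(String[] args) throws IOException {
            new Thread(new Listener(9000)).start(); // 9000 is an arbitrary example port
        }
    }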
I am implementing a kind of distributed computing cluster, and each machine needs every core it has for computation. A master node has to communicate with these machines, invoking static methods that change how the algorithm behaves.
The reason I need to use sockets is their cross-platform / cross-language capability. In some cases, PHP will be invoking these Java static methods.
I used a Java profiler (YourKit), and I can see my ServerSocket listen thread: it never sleeps and is always shown as running. Is there a better approach to what I want, or will the performance hit be negligible?
Please feel free to suggest a better way if you can think of one (I've tried RMI, but it isn't supported across languages).
Thanks, everyone.