Search Results

Search found 4578 results on 184 pages for 'connections'.


  • Load balancing and sessions

    - by vtortola
    Hi there, what is the better approach to load balancing web servers? My services run on .NET and Mono, so they could be hosted on IIS or Apache2, and they will have to provide SSL connections. I've read about two main approaches: storing the session state on a common server, and using sticky sessions. Is there any other? I've also read three different descriptions of sticky sessions: 1) the load balancing device knows which server you started the connection with, and all further connections from that host are routed to the same server; 2) the load balancing device reads a cookie named JSESSIONID; 3) the load balancing device reads a cookie named ASPSESSIONID. I'm a little bit confused: what exactly happens? As the connections will be over SSL, the load balancing device has no chance to read the cookies, so what then? As for storing the state on a common server, what solutions do you know of? I've read that memcached is a good one, but is there anything else? Cheers.

    Read the article

  • Self referencing symmetrical Hibernate Map Table using @ManyToMany

    - by sammichy
    I have the following class:

        public class ElementBean {
            private String link;
            private Set<ElementBean> connections;
        }

    I need to create a map table where elements are mapped to each other in a many-to-many symmetrical relationship:

        @ManyToMany(targetEntity=ElementBean.class)
        @JoinTable(
            name="element_elements",
            joinColumns=@JoinColumn(name="FROM_ELEMENT_ID", nullable=false),
            inverseJoinColumns=@JoinColumn(name="TO_ELEMENT_ID", nullable=false)
        )
        public Set<ElementBean> getConnections() {
            return connections;
        }

    I have the following requirement: when element A is added as a connection to element B, element B should become a connection of element A. So A.getConnections() should return B, and B.getConnections() should return A. I do not want to explicitly create two records, one mapping A to B and another mapping B to A. Is this possible?
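
    One common convenience, sketched here as an assumption rather than as the behavior of the mapping above: keep the two in-memory sides consistent with a helper method on the entity. Note that with a plain @ManyToMany this still ends up writing one join-table row per direction once both collections are persisted; answering both directions from a single row means handling the symmetry yourself, for example at query time.

        // Hypothetical helper on ElementBean; not part of the original mapping.
        public void addConnection(ElementBean other) {
            if (other == null || other == this) {
                return;
            }
            // Keep both directions of the association consistent in memory,
            // assuming the connections sets have been initialized.
            this.connections.add(other);
            other.getConnections().add(this);
        }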

    Read the article

  • TransactionScope and Connection Pooling

    - by Graham
    Hi, I'm trying to get a handle on whether we have a problem in our application with database connections using incorrect IsolationLevels. Our application is a .NET 3.5 database app using SQL Server 2005. I've discovered that the IsolationLevel of a connection is not reset when it is returned to the connection pool (see here), and I was also really surprised to read in this blog post that each new TransactionScope created gets its own connection pool assigned to it. Our database updates (via our business objects) take place within a TransactionScope (a new one is created for each business object graph update), but our fetches do not use an explicit transaction. So what I'm wondering is: could we ever get into a situation where our fetch operations (which should be using the default IsolationLevel, Read Committed) reuse a connection from the pool that has been used for an update, and inherit the update's IsolationLevel (RepeatableRead)? Or are our updates guaranteed to use a different connection pool, seeing as they are wrapped in a TransactionScope? Thanks in advance, Graham

    Read the article

  • Poco SocketReactor Scalability

    - by Genesis
    I have written a proxy server for Linux using Poco, but have since been reading up on the various approaches to achieving TCP/IP server scalability. I will need the server to handle persistent connections (not HTTP traffic), with an upper limit of about 250 simultaneous connections. Each connection typically uses about 5-10 Kb/sec, and the best possible latency in handling traffic is crucial. As it stands, I am using the Poco SocketReactor, which uses the Reactor model with a select() call at its heart; however, I have read up on the C10K problem as well as a few other resources, and it seems that using this approach might not be the best idea. I believe there is a test implementation in the Poco libs that uses poll(), so that could be an option to improve things. Does anyone have any experience using a Poco SocketReactor, and do you have any idea how well it might scale for my scenario? If it will not scale well, suggestions on alternatives would be appreciated.

    Read the article

  • UITableView programmatically create delegate object?

    - by fuzzygoat
    I have a question regarding setting up a custom delegate class for use with UITableView. What I have done is as follows:

    1. Set up a new class (in separate .h and .m files).
    2. Conformed that new class to the <UITableViewDelegate, UITableViewDataSource> protocols.
    3. Added the required methods.
    4. Created a pointer to the new object using @property and IBOutlet.
    5. In Interface Builder, created an object template and assigned it to my new class.
    6. Assigned the dataSource and delegate connections.

    This all works fine. My question is: if I don't want to use Interface Builder, how do I set up and instantiate my new delegate class directly in code? More specifically, how would I:

    1. Instantiate the delegate class (would that be created / owned by the controller?)
    2. Set the dataSource and delegate connections?

    What is the best way of doing this? Any help / information is much appreciated. Gary

    Read the article

  • WCF Self-hosted service, client clean-up on service stop

    - by Sentax
    Hi everyone, I'm curious how I would go about setting up my service to stop cleanly on the server it is installed on. For example, when many clients are connected and performing operations every minute and I want to shut down the service for maintenance, how can I handle this in the service's "OnStop" event so that the service host denies any new client connections and lets the current connections finish before it actually shuts down? This would ensure data isn't corrupted on the server as it shuts down. Right now the service is not set up as a singleton, because I need scalability, so I would have to somehow get my service host to do this independently of knowing how many instances of the service class have been created. I hope I've explained myself well enough; if not, let me know and I'll try to explain better. Thanks, Scott

    Read the article

  • Able to ping but cannot browse after several hours running of my python program

    - by Shane
    It's a GUI program I wrote in Python that checks website/server status, running on my XP SP3 machine; multiple threads are used to check different sites/servers. After several hours of running, the program starts to get "urlopen error timed out" all the time, and this always happens right after a POST request to a server (not a particular one; it might be A, B or C). It's also not the first POST request that causes the problem: normally, after several hours of running, it happens to make a POST request at some unknown moment, and from then on all I get is "urlopen error timed out". I'm still able to ping, but cannot browse any site; once the program is closed, everything is fine again. It's definitely the program causing this problem, but I just don't know how to debug or check what the problem is, and I don't know whether it comes from the OS side or from my program wasting too many resources/connections (are you still able to ping when too many connections are in use?). Would anybody please help me out?

    Read the article

  • Memory leak in Qt signals and slots

    - by Ajay
    Hello, I am running valgrind on my Qt code, and even on successful exit of the application I get the following report from valgrind:

        8,832 bytes in 92 blocks are still reachable in loss record 12 of 12
           at 0x4025390: operator new(unsigned int) (vg_replace_malloc.c:214)
        ==3339==  by 0x4B75F05: QMutex::QMutex(QMutex::RecursionMode) (qmutex.cpp:123)
        ==3339==  by 0x4B77602: QMutexPool::get(void const*) (qmutexpool.cpp:137)
        ==3339==  by 0x4CA0EC2: signalSlotLock(QObject const*) (qobject.cpp:112)
        ==3339==  by 0x4CA3939: QMetaObjectPrivate::connect(QObject const*, int, QObject const*, int, int, int*) (qobject.cpp:2900)
        ==3339==  by 0x4CA5C00: QObject::connect(QObject const*, char const*, QObject const*, char const*, Qt::ConnectionType) (qobject.cpp:2599)

    I disconnect all signal connections and also delete the objects. The leak reported above grows if I increase the number of signal and slot connections. Can anybody help with this?

    Read the article

  • PHP pecl/memcached extension slow when setting option for consistent hashing

    - by HarryF
    Using the newer PHP pecl/memcached extension, calls to Memcached::setOption() like:

        $m = new Memcached();
        $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);

    are costing between 150 and 500 ms, just in making the call to setOption(), and as we're not using persistent connections but rather doing this on every request, it hurts. Delving deeper, setting Memcached::OPT_DISTRIBUTION to Memcached::DISTRIBUTION_CONSISTENT ends up calling update_continuum() in libmemcached, which appears to be fairly intensive, although we're only passing a list of 15 memcached servers in, so it's somewhat surprising to see it take between 150 and 500 ms to rebuild the continuum data structure. Could it be that setting this option is only suitable for persistent connections, where it's called only once while making the initial connection? Or is this a bug in libmemcached? Using the newer pecl/memcached extension 1.0.1 with libmemcached 0.38. Thanks.

    Read the article

  • Java multithreaded server - each connection returns data. Processing on main thread?

    - by oliwr
    I am writing a client with an integrated server that should wait indefinitely for new connections and handle each on a Thread. I want to process the received byte array in a system-wide message handler on the main thread. However, currently the processing is obviously done on the client thread. I've looked at Futures and submit() on ExecutorService, but as I create my client connections within the Server, the data would be returned to the Server thread. How can I return it from there to the main thread (in a synchronized packet store maybe?) to process it without blocking the server? My current implementation looks like this:

        public class Server extends Thread {
            private int port;
            private ExecutorService threadPool;

            public Server(int port) {
                this.port = port;
                // 50 simultaneous connections
                threadPool = Executors.newFixedThreadPool(50);
            }

            public void run() {
                try {
                    ServerSocket listener = new ServerSocket(this.port);
                    System.out.println("Listening on Port " + this.port);
                    Socket connection;
                    while (true) {
                        try {
                            connection = listener.accept();
                            System.out.println("Accepted client " + connection.getInetAddress());
                            connection.setSoTimeout(4000);
                            ClientHandler conn_c = new ClientHandler(connection);
                            threadPool.execute(conn_c);
                        } catch (IOException e) {
                            System.out.println("IOException on connection: " + e);
                        }
                    }
                } catch (IOException e) {
                    System.out.println("IOException on socket listen: " + e);
                    e.printStackTrace();
                    threadPool.shutdown();
                }
            }
        }

        class ClientHandler implements Runnable {
            private Socket connection;

            ClientHandler(Socket connection) {
                this.connection = connection;
            }

            @Override
            public void run() {
                try {
                    // Read data from the InputStream, buffered
                    int count;
                    byte[] buffer = new byte[8192];
                    InputStream is = connection.getInputStream();
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    // While there is data in the stream, read it
                    while ((count = is.read(buffer)) > 0) {
                        out.write(buffer, 0, count);
                    }
                    is.close();
                    out.close();
                    System.out.println("Disconnect client " + connection.getInetAddress());
                    connection.close();
                    // handle the received data
                    MessageHandler.handle(out.toByteArray());
                } catch (IOException e) {
                    System.out.println("IOException on socket read: " + e);
                    e.printStackTrace();
                }
                return;
            }
        }
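
    One common way to get the received bytes onto the main thread without blocking the server, sketched under the assumption that the main thread is free to run a dispatch loop: the handler threads only enqueue into a shared BlockingQueue (the "synchronized packet store"), and the main thread drains it and calls MessageHandler.handle. The PacketStore class and its method names below are illustrative, not part of the original code.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public final class PacketStore {
            // Thread-safe queue shared between the handler threads and the main thread.
            private static final BlockingQueue<byte[]> PACKETS = new LinkedBlockingQueue<byte[]>();

            // Called from ClientHandler.run() in place of MessageHandler.handle(...).
            public static void offer(byte[] data) {
                PACKETS.offer(data);
            }

            // Called on the main thread; blocks until data arrives.
            public static byte[] take() throws InterruptedException {
                return PACKETS.take();
            }
        }

    The main thread would then start the Server thread and loop on MessageHandler.handle(PacketStore.take()), so the heavy processing happens off the server and handler threads.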

    Read the article

  • VirtualTreeView add roots with Threads

    - by Benjamin Weiss
    I would like to add roots to a VirtualTreeView (http://www.delphi-gems.com/index.php/controls/virtual-treeview) from a thread, like this:

        function AddRoot(p: TForm1): Integer; stdcall;
        begin
          p.VirtualStringTree1.AddChild(NIL);
        end;

        var
          Dummy: DWORD;
          i: Integer;
        begin
          for i := 0 to 2000 do
          begin
            CloseHandle(CreateThread(NIL, 0, @ADDROOT, Self, 0, Dummy));
          end;
        end;

    The reason for this is that I want to add all connections from my Indy server to the TreeView. Indy's OnExecute/OnConnect gets called from a thread, so if 3+ connections come in at the same time the app crashes because of the TreeView. The same happens if a client disconnects and I want to delete the node. I am using Delphi 7 and Indy 9. Any idea how to fix that?

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued. Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that. thanks in advance, David Jones

    Read the article

  • Is there a Java equivalent to libevent?

    - by JoelPM
    I've written a high-throughput server that handles each request in its own thread. For requests coming in it is occasionally necessary to do RPCs to one or more back-ends. These back-end RPCs are handled by a separate queue and thread-pool, which provides some bounding on the number of threads created and the maximum number of connections to the back-end (it does some caching to reuse clients and save the overhead of constantly creating connections). Having done all this, though, I'm beginning to think an event-based architecture would be more efficient. In searching around I haven't found any equivalents to libevent for Java, but maybe I'm not looking in the right place? Mina-statemachine from Apache was the closest thing I found, but it looks more verbose than I need and there's no real release available. Any suggestions?
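
    For reference, the closest thing in the standard library to libevent's readiness-callback model is java.nio with a Selector, which is also what event-driven Java frameworks (Netty, Apache MINA, Grizzly) build on. A bare-bones, single-threaded event loop might look like the sketch below; the port and the echo handling are placeholders, not a recommendation for this particular server.

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        public class NioEventLoop {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.configureBlocking(false);
                server.socket().bind(new InetSocketAddress(8080)); // placeholder port
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buf = ByteBuffer.allocate(8192);
                while (true) {
                    selector.select(); // one thread blocks here for readiness events
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buf.clear();
                            int n = client.read(buf);
                            if (n < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf); // echo; replace with real dispatch/RPC handling
                        }
                    }
                }
            }
        }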

    Read the article

  • BindException / "Too many open files" while using HttpClient under load

    - by Langali
    I have got 1000 dedicated Java threads where each thread polls a corresponding URL every second.

        public class Poller {
            public static Node poll(Node node) {
                GetMethod method = null;
                try {
                    HttpClient client = new HttpClient(new SimpleHttpConnectionManager(true));
                    ......
                } catch (IOException ex) {
                    ex.printStackTrace();
                } finally {
                    method.releaseConnection();
                }
            }
        }

    The threads are run every second:

        for (int i = 0; i < 1000; i++) {
            MyThread thread = threads.get(i); // threads is a static field
            if (thread.isAlive()) {
                // If the previous thread is still running, let it run.
            } else {
                thread.start();
            }
        }

    The problem is that if I run the job every second, I get random exceptions like these:

        java.net.BindException: Address already in use
        INFO httpclient.HttpMethodDirector: I/O exception (java.net.BindException) caught when processing request: Address already in use
        INFO httpclient.HttpMethodDirector: Retrying request

    But if I run the job every 2 seconds or more, everything runs fine. I even tried shutting down the instance of SimpleHttpConnectionManager using shutDown(), with no effect. If I do netstat, I see thousands of TCP connections in the TIME_WAIT state, which means they have been closed and are clearing up. So, to limit the number of connections, I tried using a single instance of HttpClient like this:

        public class MyHttpClientFactory {
            private static MyHttpClientFactory instance = new HttpClientFactory();
            private MultiThreadedHttpConnectionManager connectionManager;
            private HttpClient client;

            private HttpClientFactory() {
                init();
            }

            public static HttpClientFactory getInstance() {
                return instance;
            }

            public void init() {
                connectionManager = new MultiThreadedHttpConnectionManager();
                HttpConnectionManagerParams managerParams = new HttpConnectionManagerParams();
                managerParams.setMaxTotalConnections(1000);
                connectionManager.setParams(managerParams);
                client = new HttpClient(connectionManager);
            }

            public HttpClient getHttpClient() {
                if (client != null) {
                    return client;
                } else {
                    init();
                    return client;
                }
            }
        }

    However, after running for exactly 2 hours, it starts throwing "too many open files" and eventually cannot do anything at all.

        ERROR java.net.SocketException: Too many open files
        INFO httpclient.HttpMethodDirector: I/O exception (java.net.SocketException) caught when processing request: Too many open files
        INFO httpclient.HttpMethodDirector: Retrying request

    I should be able to increase the number of connections allowed and make it work, but I would just be prolonging the evil. Any idea what the best practice is for using HttpClient in a situation like the above? Btw, I am still on HttpClient 3.1.
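
    For reference, a minimal sketch of the usual HttpClient 3.1 pattern, assuming the single shared client/factory from the question (getInstance()/getHttpClient()): every poll borrows a connection from the one MultiThreadedHttpConnectionManager, reads the response body fully, and releases the connection in a finally block, so sockets get reused instead of piling up in TIME_WAIT. Names other than the HttpClient API itself are illustrative.

        import java.io.IOException;
        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.GetMethod;

        public class SharedClientPoller {
            public static void poll(String url) {
                // One client (and one connection manager) for the whole JVM.
                HttpClient client = MyHttpClientFactory.getInstance().getHttpClient();
                GetMethod method = new GetMethod(url);
                try {
                    int status = client.executeMethod(method);
                    // Read the body to the end so the pooled connection can be reused.
                    byte[] body = method.getResponseBody();
                    // ... process status/body ...
                } catch (IOException ex) {
                    ex.printStackTrace();
                } finally {
                    // Return the connection to the manager rather than leaking it.
                    method.releaseConnection();
                }
            }
        }

    Raising the per-host limit on the manager (HttpConnectionManagerParams.setDefaultMaxConnectionsPerHost) may also be worth considering, since the default is quite low relative to 1000 polling threads.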

    Read the article

  • Async Socket Listener on separate thread - VB.net

    - by TheHockeyGeek
    I am trying to use the code from Microsoft for an async socket connection. It appears the listener runs in the main thread, locking the GUI. I am new at both socket connections and multi-threading, all at the same time, and am having a hard time getting my mind wrapped around this all at once. The code used is at http://msdn.microsoft.com/en-us/library/fx6588te.aspx. Using this example, how can I move the listener to its own thread?

        Public Shared Sub Main()
            ' Data buffer for incoming data.
            Dim bytes() As Byte = New [Byte](1023) {}

            ' Establish the local endpoint for the socket.
            Dim ipHostInfo As IPHostEntry = Dns.GetHostEntry(Dns.GetHostName())
            Dim ipAddress As IPAddress = ipHostInfo.AddressList(1)
            Dim localEndPoint As New IPEndPoint(ipAddress, 11000)

            ' Create a TCP/IP socket.
            Dim listener As New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)

            ' Bind the socket to the local endpoint and listen for incoming connections.
            listener.Bind(localEndPoint)
            listener.Listen(100)

    Read the article

  • Multiple complete HTTP requests stuck in TCP CLOSE_WAIT state

    - by Sean Owen
    I have a Java and Tomcat-based server application which initiates many outbound HTTP requests to other web sites. We use Jakarta's HTTP Core/Client libraries, very latest versions. The server locks up at some point since all its worker threads are stuck trying to close completed HTTP connections. Using 'lsof' reveals a bunch of sockets stuck in TCP CLOSE_WAIT state. This doesn't happen for all, or even most connections. In fact, I saw it before and resolved it by making sure to set the Connection: Close response header. So that makes me think it may be bad behavior of remote servers. It may have come up again since I moved the app to a totally new service provider -- different OS, network situation. But, I am still at a loss as to what I could do, if anything, to work around this. Some poking around on the internet didn't turn up anything I'm not already doing. Just thought I'd ask if anyone has seen and solved this?
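
    As a point of comparison, a hedged sketch of the usual HttpClient 4.x idiom for outbound requests: fully consuming the response entity (or aborting the request on failure) is what lets the connection manager release or close the underlying socket, and lingering CLOSE_WAIT sockets are often a sign that a response body was never read to the end. The class and method names here are generic placeholders, not the application's actual code.

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.util.EntityUtils;

        public class OutboundFetch {
            // httpClient is assumed to be the application's existing, shared client.
            public static String fetch(HttpClient httpClient, String url) throws Exception {
                HttpGet get = new HttpGet(url);
                HttpResponse response = httpClient.execute(get);
                HttpEntity entity = response.getEntity();
                if (entity == null) {
                    return null;
                }
                try {
                    // Reading the entity to the end lets the manager reuse or close the socket.
                    return EntityUtils.toString(entity);
                } catch (Exception e) {
                    // Abort so a half-read connection is discarded rather than left dangling.
                    get.abort();
                    throw e;
                }
            }
        }

    Periodically calling closeIdleConnections on the client's connection manager is another common way to reap sockets that remote servers have already closed.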

    Read the article

  • DriverManager always returns my custom driver regardless of the connection URL

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to have applications interact with my driver instead of requiring each application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL driver, the DriverManager returns an instance of my driver instead of the MySQL driver. I'd appreciate any tips on what could be causing this and what could be done to fix it! Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see. First, from MyDriver.java:

        public MyDriver() throws SQLException {
            DriverManager.registerDriver(this);
        }

        public Connection connect(String url, Properties info) throws SQLException {
            try {
                return new MyConnection(info);
            } catch (Exception e) {
                return null;
            }
        }

        public boolean acceptsURL(String url) throws SQLException {
            if (url.contains("jdbc:jgb://")) {
                return true;
            }
            return false;
        }

    It is my understanding that this acceptsURL function dictates whether or not the DriverManager deems my driver a suitable fit for a given URL; hence it should only be handing out connections from my driver if the URL contains "jdbc:jgb://", right? Here's code from MyConnection.java:

        Connection c1 = null;
        Connection c2 = null;

        /**
         * Constructors
         */
        public DDBSConnection(Properties info) throws SQLException, Exception {
            info.list(System.out); // included for testing
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url1 = "jdbc:mysql://server1.com/jgb";
            String url2 = "jdbc:mysql://server2.com/jgb";
            this.c1 = DriverManager.getConnection(
                url1, info.getProperty("username"), info.getProperty("password"));
            this.c2 = DriverManager.getConnection(
                url2, info.getProperty("username"), info.getProperty("password"));
        }

    And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the mysql URLs instead of the desired MySQL driver/connection. FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
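
    A hedged reading of what may be happening: DriverManager.getConnection does not pre-filter drivers through acceptsURL; it asks each registered driver to connect in turn and relies on a driver returning null for URLs it does not understand. Because the wrapper's connect() builds a MyConnection for any URL, the nested jdbc:mysql:// lookups come back to the wrapper and recurse. Guarding connect() with acceptsURL(), roughly as sketched below, should let the MySQL driver claim those URLs; the exception wording is illustrative.

        public Connection connect(String url, Properties info) throws SQLException {
            // Per the java.sql.Driver contract, return null for URLs this driver
            // does not handle so DriverManager moves on to the next driver.
            if (!acceptsURL(url)) {
                return null;
            }
            try {
                return new MyConnection(info);
            } catch (Exception e) {
                // Surfacing the failure is friendlier than silently returning null.
                throw new SQLException("Could not open wrapper connection: " + e.getMessage());
            }
        }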

    Read the article

  • ODBC Connection Pooling

    - by beansy
    I have inherited a suite of .NET C# applications from a developer which talk to an Informix database on a Unix server. Instead of using the usual practices for managing database connections (the disposable pattern / "open late, close early"), the code seems to open one ODBC connection when each app loads and never closes it. Connection pooling is turned on via the ODBC Administrator. Is there any way of seeing how many ODBC connections are open? What is the effect of turning off connection pooling? And am I right in thinking the .NET Framework will use connection pooling anyway?

    Read the article

  • Matlab: Adding symbols to figure

    - by niko
    Hi, below is the user interface I have created to simulate LDPC coding and decoding. The code sequence is decoded iteratively by passing values between the left and right nodes through the connections. The first improvement to the visualization would be to add arrows to the connections in the direction the values are passed; the alternative is to draw a bigger arrow at the top of the connection showing the direction. Another thing I would like to do is display the current mathematical operation below the connection (in this example, c * H'). What I don't know how to do is display special characters, mathematical symbols, and other kinds of text such as subscripts and superscripts in the figure (for example, a summation sign, or a superscript "T" instead of the "'" sign to indicate a transposed matrix). I would be very thankful if anyone could point me to any useful resources for the questions above or show the solution. Thank you.

    Read the article

  • Ruby Rack: startup and teardown operations (Tokyo Cabinet connection)

    - by clint.tseng
    I have built a pretty simple REST service in Sinatra, on Rack. It's backed by 3 Tokyo Cabinet/Table datastores, which have connections that need to be opened and closed. I have two model classes written in straight Ruby that currently simply connect, get or put what they need, and then disconnect. Obviously, this isn't going to work long-term. I also have some Rack middleware, like Warden, that relies on these model classes. What's the best way to manage opening and closing the connections? Rack doesn't provide startup/shutdown hooks, as far as I'm aware. I thought about inserting a piece of middleware that provides a reference to the TC/TT object in env, but then I'd have to pipe that through Sinatra to the models, which doesn't seem efficient either; and that would only get me a per-request connection to TC. I'd imagine that a per-server-instance lifecycle would be a more appropriate lifespan. Thanks!

    Read the article

  • Handle mysql restart in SQLAlchemy

    - by wRAR
    My Pylons app uses a local MySQL server via SQLAlchemy and python-MySQLdb. When the server is restarted, open pooled connections are apparently closed, but the application doesn't know about this, and apparently when it tries to use such a connection it receives "MySQL server has gone away":

        File '/usr/lib/pymodules/python2.6/sqlalchemy/engine/default.py', line 277 in do_execute
          cursor.execute(statement, parameters)
        File '/usr/lib/pymodules/python2.6/MySQLdb/cursors.py', line 166 in execute
          self.errorhandler(self, exc, value)
        File '/usr/lib/pymodules/python2.6/MySQLdb/connections.py', line 35 in defaulterrorhandler
          raise errorclass, errorvalue
        OperationalError: (OperationalError) (2006, 'MySQL server has gone away')

    This exception is not caught anywhere, so it bubbles up to the user. If I should handle this exception somewhere in my code, please show the place for such code in a Pylons WSGI app. Or maybe there is a solution in SA itself?

    Read the article

  • How should I move my code from dev to production?

    - by Teddy
    I have created a PHP web application, and I have 3 environments: DEV, TEST, PROD. What's a good tool / business practice for moving my PHP web application code from DEV to TEST to the PROD environment? Bear in mind that my TEST environment connects only to my TEST database, whereas my PROD environment needs to connect to my PROD database. So the code is mostly the same, except that once it moves into PROD I need to change the configuration so it connects to the PROD database and not the TEST database. I've heard of people taking down Apache in such a way that it stops allowing new connections, and once all the existing connections are idle it simply brings down the web server. Then people manually copy the code and manually update the config files of the PHP application to point to the PROD instance. That seems terribly dangerous. Does a best practice exist?

    Read the article

  • PHP: How should I move my code from dev to production?

    - by Teddy
    I have created a PHP web application, and I have 3 environments: DEV, TEST, PROD. What's a good tool / business practice for moving my PHP web application code from DEV to TEST to the PROD environment? Bear in mind that my TEST environment connects only to my TEST database, whereas my PROD environment needs to connect to my PROD database. So the code is mostly the same, except that once it moves into PROD I need to change the configuration so it connects to the PROD database and not the TEST database. I've heard of people taking down Apache in such a way that it stops allowing new connections, and once all the existing connections are idle it simply brings down the web server. Then people manually copy the code and manually update the config files of the PHP application to point to the PROD instance. That seems terribly dangerous. Does a best practice exist?

    Read the article
