Search Results

Search found 18132 results on 726 pages for 'connection timeout'.

Page 61/726 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Java RMI timeout in callback

    - by sakra
    We are using Java RMI for communication. An RMI client passes a processing request and an object with a callback method to an RMI server. The server invokes the callback when it is done with processing. The setup is similar to the one described in RMI Callbacks. Occasionally we are getting a "read time out" exception in the server upon invoking the callback method. The callback thread stalls for about a minute before the exception is raised:
        java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
            java.net.SocketTimeoutException: Read timed out
            at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
            at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
            at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
            at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178)
            at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132)
            at $Proxy2.finished(Unknown Source)
            at com.unrisk.db.grid.GridTask.invokeCallback(com.unrisk.db.grid.GridTask:1292)
            at com.unrisk.db.grid.GridTask.invokeCallbacks(com.unrisk.db.grid.GridTask:1304)
            at com.unrisk.db.service.tasks.EquityMDTask.afterRun(com.unrisk.db.service.tasks.EquityMDTask:276)
            at com.unrisk.db.grid.GridTask.run(com.unrisk.db.grid.GridTask:720)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.net.SocketTimeoutException: Read timed out
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:129)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
            at java.io.DataInputStream.readByte(DataInputStream.java:248)
            at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
            ... 12 more
    We are using Sun Java JDK 1.6.0_18 under Windows Server 2003 32-bit. Is it possible to work around the connection problems by tuning RMI related system properties?
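
    For what it's worth, the JDK does expose JRMP timeout knobs as system properties; whether they help in this case is an assumption on my part, and the values below are illustrative only. A sketch of setting them at JVM startup (they can equally be passed as -D options):

        // Sketch: tighten the JRMP connection/handshake/response timeouts so a
        // stalled callback connection fails fast instead of hanging for ~1 minute.
        public final class RmiTimeoutTuning {
            public static void main(String[] args) {
                System.setProperty("sun.rmi.transport.tcp.handshakeTimeout", "10000"); // ms, JRMP handshake
                System.setProperty("sun.rmi.transport.proxy.connectTimeout", "10000"); // ms, connection setup
                System.setProperty("sun.rmi.transport.tcp.responseTimeout", "15000");  // ms, wait for a reply
                // ... export remote objects and invoke callbacks as usual ...
            }
        }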

    Read the article

  • C# NotifyIcon ShowBalloonTip timeout

    - by Dubila
    Hi, in my C# (.NET 2.0) application I'm using the NotifyIcon control and I want to show a balloon tip from it. However, ShowBalloonTip is limited by a timeout, and I want the balloon to stay up forever. I've tried using a timer that shows the balloon again and again, but Vista applies a fading effect to balloons, and unless that effect is disabled the balloon fades in again every 25-30 seconds. Any ideas? Thanks.

    Read the article

  • Timeout event in netty 4

    - by user1819425
    Hi, I would like to receive an event when messageReceived does not get called within an expected time. I tried ReadTimeoutHandler, which raises an exception that I can handle in exceptionCaught(), where I do some work and return without closing the context. But right after that I get a bunch of exceptions:
        Nov 18, 2012 8:56:34 AM io.netty.channel.ChannelInitializer
        WARNING: Failed to initialize a channel. Closing: [id: 0xa81de260, /127.0.0.1:59763 => /127.0.0.1:59724]
        io.netty.channel.ChannelHandlerLifeCycleException: io.netty.handler.timeout.ReadTimeoutHandler is not a @Sharable handler, so can't be added or removed multiple times.
            at io.netty.channel.DefaultChannelPipeline.callBeforeAdd(DefaultChannelPipeline.java:629)
            at io.netty.channel.DefaultChannelPipeline.addLast0(DefaultChannelPipeline.java:173)
    Am I doing this correctly? Thanks
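
    The lifecycle exception above points at the same ReadTimeoutHandler instance being added to more than one pipeline. A minimal sketch of the usual fix, with an assumed 30-second timeout and a placeholder business handler: construct a fresh, non-shared instance inside the ChannelInitializer so every channel gets its own.

        // Sketch: a new ReadTimeoutHandler per channel (it is not @Sharable).
        import io.netty.channel.ChannelInitializer;
        import io.netty.channel.socket.SocketChannel;
        import io.netty.handler.timeout.ReadTimeoutHandler;

        public class MyInitializer extends ChannelInitializer<SocketChannel> {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(new ReadTimeoutHandler(30));   // seconds; fresh instance each time
                // ch.pipeline().addLast(new MyServerHandler());     // add your own handler here (placeholder name)
            }
        }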

    Read the article

  • ASP.NET Session expires in no time?

    - by Galilyou
    Weird problem! My ASP.NET session expires instantly. In my web.config I have these session settings: <sessionState mode="InProc" timeout="10000" /> AFAIK the timeout attribute's value is in minutes and can't be greater than 525,600 minutes (1 year). I don't understand what I am doing wrong here. Why is the session expiring? Is it a server memory issue? I don't think so; the server is pretty decent and it hosts only one site, which isn't doing much after all. Ideas? EDIT: After setting the cookieless attribute to true and watching the session id on the URL, I can see that the session id is CHANGING, which I assume means the session is expiring. The IIS settings are correct AFAIK (the "enable session state" checkbox is checked, and the timeout value is 20). A picture is worth 100 words:

    Read the article

  • Clearquest Database Timeout

    - by onaclov2000
    I have a tool that is set up to query our ClearQuest database and return information to the user automatically every 9000 milliseconds. I came in today and the connection had timed out over the weekend. I found a "check heartbeat" function on the oSession object, but I'm not sure that is what I want to use to determine whether I need to re-login. I also saw a db.timeoutinterval, but I can't seem to find any good reference on how to call it, since the oSession object doesn't actually provide it, and any references in the API guide mention it only with regard to creating the db using the AdminSession object. What object do I need to create to access the timeout interval, and how? Thank you for the help! Or is it better to use the check-heartbeat function, and will it return true or false depending on the current state of the login?

    Read the article

  • Very Slow WebResponse triggering TimeOut

    - by David Fdez
    Hello: I have a C# function that checks the status of the Internet connection by retrieving a 64-byte XML file from the router page:
        public bool isOn()
        {
            HttpWebRequest hwebRequest = (HttpWebRequest)WebRequest.Create("http://" + this.routerIp + "/top_conn.xml");
            hwebRequest.Timeout = 500;
            HttpWebResponse hWebResponse = (HttpWebResponse)hwebRequest.GetResponse();
            XmlTextReader oXmlReader = new XmlTextReader(hWebResponse.GetResponseStream());
            string value;
            while (oXmlReader.Read())
            {
                value = oXmlReader.Value;
                if (value.Trim() != "")
                {
                    return !value.Substring(value.IndexOf("=") + 1, 1).Equals("0");
                }
            }
            return false;
        }
    Using Mozilla Firefox 3.5 with the Firebug add-on, I estimated that retrieving the page normally takes about 30 ms, yet the request still often hits the already generous 500 ms limit. How can I dramatically improve the performance? Thanks in advance
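
    Not part of the question, but a frequent culprit when HttpWebRequest is much slower than the browser is automatic proxy detection. A sketch of ruling that out, assuming no proxy is needed to reach the router:

        // Sketch: bypass proxy auto-detection and reuse the connection, which often
        // brings HttpWebRequest latency close to what the browser reports.
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://" + this.routerIp + "/top_conn.xml");
        req.Proxy = null;            // skip automatic proxy discovery (can cost seconds)
        req.KeepAlive = true;        // reuse the TCP connection on repeated polls
        req.Timeout = 500;
        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        {
            // ... parse the response stream as before ...
        }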

    Read the article

  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup: a 2-node Cassandra 1.2.6 cluster with replicas=2; a very large CQL3 table with no secondary index; the row key is a UUID.randomUUID().toString(); read consistency set to ONE; using the DataStax Java driver 1.0. The request: attempting to do a table scan with "SELECT some-col FROM schema.table LIMIT nnn;". The fail: once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this:
        com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
            at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
            at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
            at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
        Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
            at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could explain how this kind of error can be debugged. For example, when this happens there is no indication that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicates any timeout or failure). Also, I enabled tracing on the driver, which gives you some nice autotrace (a la Oracle) info as long as the query succeeds; but in this case the driver throws a NoHostAvailableException and no ExecutionInfo is available, so tracing has provided no benefit here. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to). I have read several posts from folks stating that querying for result sets of 10000+ rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where the exception is happening. FWIW, I also tried bumping the timeout properties in cassandra.yaml, but this made no difference whatsoever. I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
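
    Answers aren't included in this listing, but one way such scans are usually restructured (an assumption on my part, not from the post) is token-range paging, so no single response has to carry an enormous LIMIT. A sketch against the DataStax Java driver 1.0 API, with placeholder keyspace, table and column names and assuming the default Murmur3 partitioner:

        // Sketch: page through the table by token instead of one giant LIMIT.
        // "schema.table", "id" and "some_col" are placeholders, not from the post.
        import com.datastax.driver.core.Cluster;
        import com.datastax.driver.core.Row;
        import com.datastax.driver.core.Session;

        public class TokenScan {
            public static void main(String[] args) {
                Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
                Session session = cluster.connect();
                long token = Long.MIN_VALUE;          // start of the Murmur3 token range
                while (true) {
                    boolean any = false;
                    for (Row row : session.execute(
                            "SELECT token(id), some_col FROM schema.table " +
                            "WHERE token(id) > " + token + " LIMIT 1000")) {
                        token = row.getLong(0);       // remember where this page ended
                        any = true;
                        // ... process row.getString("some_col") ...
                    }
                    if (!any) break;                  // no rows past the last token: done
                }
                cluster.shutdown();
            }
        }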

    Read the article

  • RSA encryption results in server execution timeout

    - by Nilambari
    Hi, I am using PHP Crypt_RSA (http://pear.php.net/package/Crypt_RSA) for encrypting and decrypting content. The content is about 1 KB in size. These are the results: with keylength = 1024 the encryption function takes 225 secs; with keylength = 2048 it takes 115 secs. I need to reduce this execution time, as most live Apache servers have a 120 sec limit on execution time. How do I reduce it? The Crypt_RSA docs say that only 1024-2048 bit keys can be generated. I actually tried to generate a larger key, but it always results in an execution timeout. How do I work on reducing the encryption and decryption execution time? Thanks, Nila
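
    The usual way around slow pure-PHP RSA on payloads this size (not something stated in the post) is hybrid encryption: encrypt the content with a fast symmetric cipher and only wrap the random key with RSA. A sketch using PHP's OpenSSL extension, assuming a reasonably recent PHP build with that extension enabled:

        <?php
        // Sketch: hybrid RSA + AES so only 32 bytes go through RSA, not the whole 1 KB.
        // $publicKeyPem is an assumed PEM-encoded RSA public key.
        function hybrid_encrypt($plaintext, $publicKeyPem) {
            $aesKey = openssl_random_pseudo_bytes(32);                   // random AES-256 key
            $iv     = openssl_random_pseudo_bytes(16);
            $cipher = openssl_encrypt($plaintext, 'aes-256-cbc', $aesKey, OPENSSL_RAW_DATA, $iv);
            openssl_public_encrypt($aesKey, $wrappedKey, $publicKeyPem); // RSA wraps only the key
            return array('key' => $wrappedKey, 'iv' => $iv, 'data' => $cipher);
        }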

    Read the article

  • Any way to get read timeouts with Java NIO/selectors?

    - by mmebane
    I'm converting a Java server application which used blocking IO and a thread per client to NIO and a single IO thread (probably a thread pool after I get the basic implementation done). The one thing I am having an issue with is disconnecting clients after they have been idle for a period. I had previously been using SO_TIMEOUT and blocking reads. However, with selector-based IO, reads don't block... I was hoping that I'd be able to set a timeout and select on a read timeout, with something like SelectionKey.isReadTimeout(), but nothing like that seems to exist. The current best solution I have come up with is a Timer with a TimerTask that keeps track of the keys which are waiting on a read, cancelling and re-scheduling the task on each read. Is there a better solution?
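
    A sketch of that idea in a slightly different form (my own suggestion, not an answer from the listing): keep a last-activity timestamp in each key's attachment and sweep for idle connections on every pass of the select loop, using select with a timeout so the sweep runs even when nothing is readable. The 30-second limit is an assumed value.

        // Sketch: per-key idle tracking inside a single selector loop.
        import java.io.IOException;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;

        class IdleSweeper {
            private static final long IDLE_LIMIT_MS = 30000L;   // assumed idle limit

            void serveLoop(Selector selector) throws IOException {
                while (true) {
                    selector.select(1000);                       // wake at least once a second
                    long now = System.currentTimeMillis();
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isValid() && key.isReadable()) {
                            key.attach(now);                     // remember the last activity
                            // ... do the actual non-blocking read here ...
                        }
                    }
                    selector.selectedKeys().clear();
                    for (SelectionKey key : selector.keys()) {   // idle sweep
                        Object last = key.attachment();
                        if (last instanceof Long && now - (Long) last > IDLE_LIMIT_MS) {
                            key.channel().close();               // closing also cancels the key
                        }
                    }
                }
            }
        }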

    Read the article

  • PHP set timeout for script, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request for each member of an array using foreach. The wget request can sometimes take a long time, so I want to be able to set a timeout that kills the script if it runs past, say, 15 seconds. I have PHP safe mode disabled and tried set_time_limit(15) early in the script, but it continues indefinitely. I've given up troubleshooting set_time_limit() and am trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure if it's possible to check how long the script has been running while it's in the middle of a wget request (a do-while loop did not work). Thanks for any tips!
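
    Worth noting as background (general PHP behaviour, not something stated in the post): on Linux, time spent in external programs run via exec() or backticks does not count toward set_time_limit(), which would explain why the limit never fires. A sketch that pushes the timeout down onto wget itself instead; the 15-second figure and the $urls variable are placeholders:

        <?php
        // Sketch: bound each download with wget's own options rather than PHP's timer.
        foreach ($urls as $url) {
            // --timeout caps each network operation; --tries avoids endless retries.
            $cmd = 'wget --timeout=15 --tries=1 -q ' . escapeshellarg($url);
            exec($cmd, $output, $exitCode);
            if ($exitCode !== 0) {
                echo "wget failed or timed out for $url\n";
            }
        }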

    Read the article

  • redis timeout with predis

    - by Patrick
    Hello, I'm using Redis with PHP (Predis, http://github.com/nrk/predis/) and am experiencing frequent timeouts. The stack trace shows:
        [04-Apr-2010 03:39:50] PHP Fatal error: Uncaught exception 'Predis_ClientException' with message 'Connection timed out' in redis.php:697
        Stack trace:
        #0 redis.php(757): Predis_Connection->connect()
        #1 redis.php(729): Predis_Connection->getSocket()
        #2 redis.php(825): Predis_Connection->writeCommand(Object(Predis_Commands_ListRange))
        #3 redis.php(165): Predis_ConnectionCluster->writeCommand(Object(Predis_Commands_ListRange))
        #4 redis.php(173): Predis_Client->executeCommandInternal(Object(Predis_ConnectionCluster), Object(Predis_Commands_ListRange))
        #5 redis.php(157): Predis_Client->executeCommand(Object(Predis_Commands_ListRange))
        #6 [internal function]: Predis_Client->__call('lrange', Array)
    This happens consistently and I have no idea why. Does anyone have any idea?
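
    Purely as a sketch, and with the caveat that the exact parameter names should be checked against the Predis version in use (they are my assumption): the client accepts per-connection timeout settings that can be raised, which at least helps distinguish a slow connect from a slow reply.

        <?php
        // Sketch only: raise the connect and read/write timeouts on the Predis client.
        // Verify these parameter names against the docs bundled with your Predis release.
        $redis = new Predis_Client(array(
            'host'               => '127.0.0.1',
            'port'               => 6379,
            'connection_timeout' => 5,    // seconds allowed for opening the socket
            'read_write_timeout' => 10,   // seconds allowed for a reply (e.g. a big LRANGE)
        ));
        $items = $redis->lrange('mylist', 0, 99);   // 'mylist' is a placeholder key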

    Read the article

  • Internet Dropping?!

    - by stead1984
    I have a virtual DC running DNS and Routing and Remote Access that routes all workstations' Internet traffic out to the Internet. This works fine, but I've noticed that the Internet drops occasionally. I've checked with our service provider (Managed Communications) and they are adamant that it's not their fault. The Internet drops seem to affect everyone. We also have a server configured to use the same Internet service on a different network over a site-to-site VPN connection, and it also suffers from packet drops. I've spoken to Cisco and have done many tests with them, and they believe the problem is down to the ISP. I'm wondering if it's a DNS issue, as the Internet service uses OpenDNS. Any ideas?

    Read the article

  • FTP transfer timeouts while uploading small files

    - by Hamed Momeni
    I have this problem where, when I need to transfer some files (mostly small files, < 100 KB), the connections time out. Actually, it uploads one file and then fails on the next, until my client reconnects to the server, and the same thing happens over and over again. I googled the problem and some said that switching from passive mode to active mode could solve it, but it didn't work for me. Even continuously pinging the server to keep the connection alive was to no avail. P.S. I have root access to the server. Update: I'm running ProFTPD on a CentOS VPS. I tried a few clients (FireFTP, FileZilla), all having the same problem.
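
    No answer is included here, but since the server is ProFTPD, two things commonly reviewed in this situation (offered as a sketch, not a diagnosis) are the idle/transfer timeout directives and, for passive mode behind NAT, an explicit passive port range that the firewall actually allows:

        # proftpd.conf sketch -- values are illustrative, not recommendations
        TimeoutIdle        600           # drop idle control connections after 10 minutes
        TimeoutNoTransfer  600           # allow longer gaps between transfers
        TimeoutStalled     300           # give stalled data connections more time
        PassivePorts       49152 65534   # fixed range so the firewall can permit it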

    Read the article

  • WCF: what timeout property to use?

    - by Tom234
    I have a piece of code like this:
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
        binding.CloseTimeout = new TimeSpan(0, 0, 1);
        binding.OpenTimeout = new TimeSpan(0, 0, 1);
        binding.SendTimeout = new TimeSpan(0, 0, 1);
        binding.ReceiveTimeout = new TimeSpan(0, 0, 1);
        EndpointAddress endPoint = new EndpointAddress(new Uri(clientPath));
        DuplexChannelFactory<Iservice> channel = new DuplexChannelFactory<Iservice>(new ClientCallBack(clientName), binding, endPoint);
        channel.Ping();
    When the endpoint doesn't exist, it still waits 20 seconds before throwing an EndpointNotFoundException. The weird thing is that when I changed the SendTimeout, the exception message changed from "The connection attempt lasted for a time span of 00:00:20" to "...01", but it still took 20 seconds to throw the exception! How can I change this timeout?
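
    One thing sometimes tried with this pattern (my suggestion, not an answer from the listing, and whether it shortens the underlying TCP connect depends on the transport): create the channel from the factory and open it explicitly with your own TimeSpan, so the connection attempt itself is bounded rather than only the send.

        // Sketch: open the channel explicitly with a bounded timeout.
        // Iservice, ClientCallBack, binding and endPoint are as in the question.
        DuplexChannelFactory<Iservice> factory =
            new DuplexChannelFactory<Iservice>(new ClientCallBack(clientName), binding, endPoint);
        Iservice proxy = factory.CreateChannel();
        try
        {
            ((IClientChannel)proxy).Open(TimeSpan.FromSeconds(1)); // fail fast if unreachable
            proxy.Ping();
        }
        catch (EndpointNotFoundException)
        {
            // endpoint is down; handle it here
        }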

    Read the article

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool I get an error on connect: AutoReconnect: could not find master/primary. But when I change the _Pool class directly in the pymongo.connection module, it works fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with the same code, it still gives the same exception. I need to remove threading.local from the _Pool class, because I use gevent and I need one pool for all mongo connections (for all threads). I use this code:
        import os
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """

            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            # sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                self.sock = None

        pymongo.connection._Pool = GPool
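
    Answers aren't shown in this listing; one detail that sometimes separates the working in-module edit from a failing monkey patch (stated here as an assumption to verify, not a confirmed cause) is when the patch is applied relative to creating the Connection, and whether another module already holds its own reference to _Pool from a from-import. A minimal ordering sketch, reusing the GPool class above:

        # Sketch: apply the patch before any Connection object (and hence any pool)
        # exists, and patch the module attribute that pymongo.connection itself uses.
        import pymongo.connection

        pymongo.connection._Pool = GPool          # swap the pool class first (GPool as defined above)

        from pymongo import Connection            # only now build connections
        conn = Connection("localhost", 27017)     # host/port are placeholders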

    Read the article

  • Kill a Perl system call after a timeout

    - by Fergal
    I've got a Perl script I'm using to run a file-processing tool, which is started using backticks. The problem is that occasionally the tool hangs, and it needs to be killed in order for the rest of the files to be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using:
        foreach $file (@FILES) {
            $runResult = `mytool $file >> $file.log`;
        }
    But when mytool hangs after n seconds, I'd like to be able to kill it and continue to the next file.
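
    A sketch of the usual alarm()-based guard (assuming mytool can simply be killed and skipped when it hangs; note this variant does not capture the tool's output into $runResult, since it is already redirected to the log):

        # Sketch: run mytool in a child process, kill it if it exceeds 15 seconds.
        foreach my $file (@FILES) {
            my $pid = open(my $out, '-|', "mytool $file >> $file.log")
                or die "could not start mytool: $!";
            eval {
                local $SIG{ALRM} = sub { die "timeout\n" };
                alarm(15);             # give mytool 15 seconds per file (assumed value)
                waitpid($pid, 0);      # wait for a normal finish
                alarm(0);
            };
            if ($@ eq "timeout\n") {
                kill 'KILL', $pid;     # it hung: kill it and move on to the next file
                waitpid($pid, 0);      # reap the killed child
            }
        }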

    Read the article

  • How to get fopen to timeout properly

    - by beagleguy
    Hey all, I have the following snippet of PHP code:
        if ($fp = fopen($url, 'r')) {
            stream_set_timeout($fp, 1);
            stream_set_blocking($fp, 0);
        }
        $info = stream_get_meta_data($fp);
    I'd like the request to time out after 1 second. If I put a sleep(20) in the $url that I'm reading, it just waits the whole 20 seconds and never times out. Is there a better way to do timeouts with fopen? If I use ini_set('default_socket_timeout', 2); above that code, it times out properly, but $info then becomes null, so ideally I'd like to use the stream functions. Thanks
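
    One approach not mentioned in the excerpt (offered as a sketch): the timeout for the HTTP request that fopen() makes can be supplied through a stream context, which avoids changing default_socket_timeout globally while keeping stream_get_meta_data() usable on the resulting stream.

        <?php
        // Sketch: per-call timeout via an HTTP stream context instead of ini_set().
        $ctx = stream_context_create(array(
            'http' => array('timeout' => 1.0),   // seconds for the whole request
        ));
        if ($fp = fopen($url, 'r', false, $ctx)) {
            $info = stream_get_meta_data($fp);
            if (!empty($info['timed_out'])) {
                // a read hit the stream timeout; handle it here
            }
        }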

    Read the article

  • php import to mysql hosted on godaddy

    - by julio
    Yeah, I know! It's not my choice. I am doing a large data import into a MySQL DB hosted on GoDaddy, using a PHP script. It seems their MySQL connection gets killed every few hours regardless of what work it's doing. Their tech support is useless, and I've exhausted myself writing attempted workarounds. Right now, I'm trying to do a mysql_ping every few minutes, and if the ping returns false, I attempt to open up a new DB connection. My script (which takes many hours to complete) keeps failing with the very unhelpful message "MySQL server has gone away". I understand MySQL trying to close a connection that's been open too long, but the connection is not idle - it's busy basically the whole time, and with the pings I've written in, it should never be idle for longer than 5 minutes. (These same scripts work with no errors on Amazon AWS servers, my local servers, etc.) Any help most appreciated! I'm about to give up.
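
    For what it's worth, a sketch of the ping-and-reconnect pattern described above (the helper name and credentials are placeholders, not from the post); the key point is that mysql_ping() cannot revive a link the server has already dropped, so the reconnect has to build a genuinely new one:

        <?php
        // Sketch: re-establish the link when the server has gone away.
        function ensure_connection(&$link, $host, $user, $pass, $db) {
            if ($link && @mysql_ping($link)) {
                return $link;                                  // still alive
            }
            if ($link) {
                @mysql_close($link);                           // discard the dead handle
            }
            $link = mysql_connect($host, $user, $pass, true);  // true = force a new link
            mysql_select_db($db, $link);
            return $link;
        }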

    Read the article

  • IIS 6 session timing out a lot quicker than expected

    - by Echiban
    I am working with a web application whose sessions time out a lot quicker than expected. We expected a timeout of 15 minutes, but it's timing out at 3-4 minutes. Info about the environment: IIS 6, classic ASP / COM+ app; the timeout is OK on current PROD, but much quicker in the dev / QA environments. We already disabled app pool recycling and even put IIS in isolation mode - no effect. The HTTP error log doesn't show any lines when a session times out. We've done a close comparison of the PROD and DEV / QA environments, and given that we use virtual machines for all of them, settings should be preserved. I tried to find IIS blog notes from David Wang, but many of them now return HTTP 404 errors, and I don't know what else to do. Please help! At the very least, is there a way to get IIS to log every time a session expires? Some means of logging / debugging IIS would be useful. Thanks in advance.

    Read the article

  • How to set timeout for exclusive lock in PostgreSQL

    - by Low Kian Seong
    I have an import script that was failing because of the 'Exclusive nowait' option I set in my script. This caused the script to error out the first time it could not get the exclusive lock on the table. My script did it this way: "LOCK TABLE %s IN EXCLUSIVE MODE NOWAIT". Now my script works; it's just that I want to be able to set the timeout in PostgreSQL instead of having it wait for the maximum time, which is 15 mins. I would prefer to set it in postgresql.conf. Is there a way to do this?
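
    One option worth checking (my suggestion, not part of the listing): on PostgreSQL releases of that era the wait for a lock counts toward statement_timeout, so dropping NOWAIT and bounding the statement gives a configurable wait. It can be set per session or in postgresql.conf, as sketched below with placeholder values:

        -- Sketch: let LOCK TABLE wait, but only up to statement_timeout.
        -- Per session:
        SET statement_timeout = '30s';
        BEGIN;
        LOCK TABLE my_table IN EXCLUSIVE MODE;   -- errors out after ~30 s instead of 15 min
        -- ... import work ...
        COMMIT;

        -- Or globally in postgresql.conf:
        -- statement_timeout = 30000   # milliseconds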

    Read the article

  • setting a timeout for an InputStreamReader variable

    - by Noona
    I have a server running that accepts connections made through client sockets, and I need to read the input from the client socket. Now suppose the client opens a connection to my server without sending anything through the socket's output stream: while my server tries to read the input through the client socket's input stream, an exception will be thrown, but before the exception is thrown I would like a timeout of, say, 5 seconds. How can I do this? Currently my code on the server side looks like this:
        try {
            InputStreamReader clientInputStream = new InputStreamReader(clientSocket.getInputStream());
            int c;
            StringBuffer requestBuffer = new StringBuffer();
            while ((c = clientInputStream.read()) != -1) {
                requestBuffer.append((char) c);
                if (requestBuffer.toString().endsWith(("\r\n\r\n")))
                    break;
            }
            request = new Request(requestBuffer.toString(), clientSocket);
        } catch (Exception e) { // catch any possible exception in order to keep the thread running
            try {
                if (clientSocket != null)
                    clientSocket.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
            System.err.println(e);
            //e.printStackTrace();
        }
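
    A sketch of the usual approach (not quoted from an answer): set SO_TIMEOUT on the accepted socket before wrapping its input stream, so a blocked read() throws SocketTimeoutException after the chosen interval, which can then be handled separately from other failures.

        // Sketch: a 5-second read timeout on the accepted client socket.
        import java.io.InputStreamReader;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        class TimedReader {
            void handle(Socket clientSocket) {
                try {
                    clientSocket.setSoTimeout(5000);   // any read() now waits at most 5 seconds
                    InputStreamReader in = new InputStreamReader(clientSocket.getInputStream());
                    int c = in.read();                 // throws SocketTimeoutException when idle
                    // ... accumulate the request exactly as in the original loop ...
                } catch (SocketTimeoutException e) {
                    // the client connected but sent nothing within 5 seconds
                } catch (Exception e) {
                    // other I/O problems, handled as before
                }
            }
        }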

    Read the article

  • GetMessage with a timeout

    - by qdii
    I have an application whose second thread calls GetMessage() in a loop. At some point the first thread realizes that the user wants to quit the application and notifies the second thread that it should terminate. As the thread is stuck on GetMessage(), the program never quits. Is there a way to wait for messages with a timeout? I'm open to other ideas too. EDIT (additional explanation): the second thread runs this snippet of code:
        while ( !m_quit && GetMessage( &msg, NULL, 0, 0 ) )
        {
            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }
    The first thread sets m_quit to true.
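
    Not an answer from the listing, but two patterns commonly used here: have the first thread post WM_QUIT to the message thread with PostThreadMessage (which makes GetMessage return 0), or replace the blocking wait with MsgWaitForMultipleObjects so the loop also wakes on a quit event. A sketch of the latter, where hQuitEvent is an assumed event handle signalled by the first thread:

        // Sketch: wake on either a posted message or the quit event.
        MSG msg;
        bool quit = false;
        while (!quit)
        {
            DWORD r = MsgWaitForMultipleObjects(1, &hQuitEvent, FALSE, INFINITE, QS_ALLINPUT);
            if (r == WAIT_OBJECT_0)            // quit event signalled by the first thread
                break;
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT) { quit = true; break; }
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }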

    Read the article

  • sqlobject: No connection has been defined for this thread or process

    - by Claudiu
    I'm using SQLObject in Python. I connect to the database with:
        conn = connectionForURI(connStr)
        conn.makeConnection()
    This succeeds, and I can run queries on the connection:
        g_conn = conn.getConnection()
        cur = g_conn.cursor()
        cur.execute(query)
        res = cur.fetchall()
    This works as intended. However, I also defined some classes, e.g.:
        class User(SQLObject):
            class sqlmeta:
                table = "gui_user"
            username = StringCol(length=16, alternateID=True)
            password = StringCol(length=16)
            balance = FloatCol(default=0)
    When I try to do a query using the class:
        User.selectBy(username="foo")
    I get an exception:
        ...
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\main.py", line 1371, in selectBy
            conn = connection or cls._connection
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 837, in __get__
            return self.getConnection()
        File "c:\python25\lib\site-packages\SQLObject-0.12.4-py2.5.egg\sqlobject\dbconnection.py", line 850, in getConnection
            "No connection has been defined for this thread "
        AttributeError: No connection has been defined for this thread or process
    How do I define a connection for a thread? I just realized I can pass conn in through a connection keyword argument to make it work, but how do I get it to work without doing that?
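
    For reference, a sketch of registering a process-wide default connection in SQLObject so the model classes pick it up without a per-call connection keyword (connStr as in the question):

        # Sketch: make the connection the default for all SQLObject classes.
        from sqlobject import connectionForURI, sqlhub

        conn = connectionForURI(connStr)
        sqlhub.processConnection = conn      # User.selectBy(...) now uses this connection

        # Alternatively, bind it to a single class only:
        # User._connection = conn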

    Read the article

  • Timeout not working in SQL Connection

    - by carlos
    I have this simple code to test whether a DB is ready:
        Function testlocalcon() As Boolean
            Dim constr As String = _clconstr
            Try
                Using t As New SqlConnection()
                    constr = constr & " ; Connect Timeout=1"
                    If Not t.State = Data.ConnectionState.Open Then
                        t.ConnectionString = constr
                        t.Open()
                        If t.State = Data.ConnectionState.Open Then
                            Return True
                        Else
                            Return False
                        End If
                    Else
                        Return True
                    End If
                End Using
            Catch ex As Exception
                Return False
            End Try
        End Function
    I do not want to execute a query, just to check the connection, but no matter what, the timeout parameter is ignored. I searched here (Stack Overflow) and on the internet and found nothing on how to fix this. Has anyone else had this problem? Or are there any other ideas on how to let the application know that the DB is ready?

    Read the article

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OS X and Linux connect fine. There are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database heavy) run mostly OK, only occasionally getting timeout messages. I'm constantly getting "Maximum execution time of 30 seconds exceeded" when functions such as mysql_query() are run in this particular project. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is Windows 7 Professional 64-bit with XAMPP 1.7.7 (PHP 5.3.8). The output of uname -a on the Linux server is: Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux. I've tried the following, with no success: disabling the Windows firewall; switching between a persistent and a normal connection; adding skip-name-resolve to my.cnf; increasing wait_timeout; enabling bind-address. I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?

    Read the article
