Search Results

Search found 1802 results on 73 pages for 'a bright worker'.

Page 20/73 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:
    - First, usage of one CPU jumps to 100% for a few minutes
    - Disk operations drop to ~0 during this time
    - Database operations drop to 0 (blocks and tuples per sec)
    - Logs show during this time:
          WARNING: worker took too long to start; cancelled
          WARNING: worker took too long to start; cancelled
    - No queries in logs (only those over 200ms are logged)
    - No unusually long-running queries logged before or during
    - Then the second CPU jumps to 100%
    - The number of postgres processes jumps from the usual 8-10 to ~20
    - Matched by a spike in Postgres blocks per second (about twice normal)
    - Logs show: LOG: could not accept SSL connection: EOF detected
    - Queries are running but slow
    - Restarting Postgres returns everything to normal

    Setup:
    - Server: Amazon EC2 Large
    - Ubuntu 10.04.2 LTS
    - Postgres 9.0.3
    - Dedicated DB server

    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?
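
    A quick way to see what the backends are doing the next time this happens is to poll pg_stat_activity while the CPU is pegged. Below is a minimal sketch, assuming the psycopg2 driver is available and using the 9.0/9.1 column names (procpid, current_query, waiting); the connection string is a placeholder.

        import time
        import psycopg2  # assumed to be installed; any Postgres driver would do

        DSN = "dbname=mydb user=postgres host=127.0.0.1"  # placeholder connection string

        def snapshot():
            conn = psycopg2.connect(DSN)
            try:
                cur = conn.cursor()
                # 9.0/9.1 catalogs use procpid/current_query (renamed to pid/query in 9.2+)
                cur.execute("""
                    SELECT procpid, usename, waiting,
                           now() - query_start AS runtime, current_query
                    FROM pg_stat_activity
                    ORDER BY query_start
                """)
                for row in cur.fetchall():
                    print(row)
            finally:
                conn.close()

        if __name__ == "__main__":
            while True:            # sample every few seconds during an incident
                snapshot()
                time.sleep(5)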

    Read the article

  • Explanation of WCF application life cycle in IIS 6 hosting environment.

    - by David Christiansen
    Hi all, and thanks for reading. I have a delay issue where my application takes a long time to start up when first called after an indeterminate period since the last call. The web application is a WCF service, and we are talking about a delay of ~18 seconds before the actual processing starts. Now, I believe I know how to reduce this delay, so that is not my question (it's more a Stack Overflow deal anyway). My question is: can anyone explain to me why, despite me disabling worker process shutdown and worker process recycling, the application still 'winds down' after an indeterminate period of inactivity? To understand this I need to know more about the inner workings of WCF services hosted in IIS. I fully expect there to be a straightforward answer to this. Thank you v. much for any help you may offer, DC

    Read the article

  • What's Your Favorite Harmless Computer Practical Joke?

    - by Ben Griswold
    A couple of months ago, I returned from lunch to find that typing any key launched a random application. As I always lock my machine before walking away from it, my first thought was that a co-worker was playing a practical joke on me. As it turned out, the cause was random computer wonkiness. But just in case there's a need :), what's your favorite harmless computer practical joke? For example, would you alter host file entries to direct google.com to a random site or would you put tape over the optical sensor of a co-worker’s mouse?

    Read the article

  • Remote deploying WARs to a Liferay installation

    - by iftrue
    With vanilla Tomcat, you can POST to URLs beneath SOMURL/manager/ with a proper manager user role defined. The Liferay deployment of Tomcat, however, is missing the manager and host-manager applications, and when I copy the directories from a vanilla Tomcat installation, I get the exception below:

        Exception:
        javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

        root cause
        java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

    What's the proper way to remote deploy WARs to a Liferay instance? (Not portlets, in my case.)
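
    For what it's worth, Liferay's usual deployment path isn't the Tomcat manager at all but its own auto-deploy scanner, which watches a deploy directory under the Liferay home and picks up any WAR dropped there. Below is a minimal sketch of pushing a WAR to that directory over SSH; the host and path are assumptions, so check where your instance's auto.deploy.deploy.dir points before relying on it.

        import subprocess
        import sys

        # Assumed target; Liferay's default hot-deploy folder is $LIFERAY_HOME/deploy
        REMOTE = "tomcat@liferay-host"
        DEPLOY_DIR = "/opt/liferay/deploy"

        def remote_deploy(war_path):
            # Copy the WAR next to the scanner; Liferay expands and deploys it itself
            subprocess.run(["scp", war_path, f"{REMOTE}:{DEPLOY_DIR}/"], check=True)

        if __name__ == "__main__":
            remote_deploy(sys.argv[1])   # e.g. python deploy.py mysite.war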

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows: OS: CentOS 6.2 running on an OpenVZ virtual machine. Web server: Nginx listening on port 8080. Reverse proxy: Varnish listening on port 80. The problem is that Varnish redirects my requests to port 8080 and this appears in the address bar like so: http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address?

    Edit: Varnish is set up like so. I have told the Varnish daemon to listen to port 80 by default:

        VARNISH_VCL_CONF=/etc/varnish/default.vcl
        #
        # Default address and port to bind to
        # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
        # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
        VARNISH_LISTEN_ADDRESS=
        VARNISH_LISTEN_PORT=80
        #
        # Telnet admin interface listen address and port
        VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
        VARNISH_ADMIN_LISTEN_PORT=6082
        #
        # Shared secret file for admin interface
        VARNISH_SECRET_FILE=/etc/varnish/secret
        #
        # The minimum number of worker threads to start
        VARNISH_MIN_THREADS=1
        #
        # The Maximum number of worker threads to start
        VARNISH_MAX_THREADS=1000
        #
        # Idle timeout for worker threads
        VARNISH_THREAD_TIMEOUT=120
        #
        # Cache file location
        VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
        #
        # Cache file size: in bytes, optionally using k / M / G / T suffix,
        # or in percentage of available disk space using the % suffix.
        VARNISH_STORAGE_SIZE=1G
        #
        # Backend storage specification
        VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
        #
        # Default TTL used when the backend does not specify one
        VARNISH_TTL=120

    The VCL file that Varnish calls (through an include in default.vcl) consists of:

        backend playwithbits {
            .host = "127.0.0.1";
            .port = "8080";
        }

        acl purge {
            "127.0.0.1";
        }

        sub vcl_recv {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                set req.backend = playwithbits;
                set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
                if (req.request == "PURGE") {
                    if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                    }
                    return(lookup);
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_hit {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    set obj.ttl = 0s;
                    error 200 "Purged.";
                }
            }
        }

        sub vcl_miss {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    error 404 "Not in cache.";
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset req.http.cookie;
                }
                if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
                    unset req.http.cookie;
                    set req.url = regsub(req.url, "\?.$", "");
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_fetch {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.url ~ "^/$") {
                    unset beresp.http.set-cookie;
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset beresp.http.set-cookie;
                }
            }
        }

    Read the article

  • prevent apt dependency from being satisfied (permanently)

    - by Bryan Hunt
    I want to install mailman (just to use its mail archiving feature) but Ubuntu wants to pull down a load of extra dependencies:

        sudo apt-get install mailman
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2 apache2-mpm-worker apache2.2-common
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom spamassassin lynx listadmin

    Is there any way to mark those packages (apache2 apache2-mpm-worker apache2.2-common) as never to be installed? This is not 2002 ;)

    Read the article

  • IIS 7 AppPool logs an error after recycle due to inactivity

    - by ddysart
    We have a Windows 2008 R2 server running IIS, hosting an ASP.NET site. This morning there was a weird sequence. First, a notice that the AppPool was being recycled due to inactivity:

        "A worker process with process id of '6896' serving application pool 'xxxx' was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed."

    This makes sense and jibes with our timeout settings, but 30 seconds later we see:

        "A process serving application pool 'xxxx' terminated unexpectedly. The process id was '6896'. The process exit code was '0xc0000005'."

    I found an older KB article that explains a condition where this might happen on IIS 6 due to permission issues, but am curious what might cause this on IIS 7.5, especially since we are not seeing it regularly.

    Read the article

  • Automatically allowing SSH into a machine behind a UPNP router?

    - by GJ
    Hi. I have a MacBook connecting to the Internet from behind various routers from time to time (home, office, etc). All of the routers support UPnP. I need to allow a co-worker to SSH into the machine, without configuring each router each time to forward port 22 to the MacBook. Is there any way to get the MacBook to use UPnP (or some other method) to automatically configure any supporting router that it is behind to forward port 22 to itself? That would allow the co-worker to SSH into the MacBook by just knowing its external IP, which is easy.
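
    One approach that sometimes works is to request the mapping from the laptop itself via the router's Internet Gateway Device service. Below is a minimal sketch using the miniupnpc Python bindings; the assumption is that the bindings are installed and the router actually honours UPnP port-mapping requests.

        import miniupnpc

        upnp = miniupnpc.UPnP()
        upnp.discoverdelay = 200      # milliseconds to wait for IGD responses
        upnp.discover()               # broadcast discovery on the local network
        upnp.selectigd()              # pick the Internet Gateway Device that answered

        # Map external port 22 on the router to port 22 on this machine
        upnp.addportmapping(22, 'TCP', upnp.lanaddr, 22, 'ssh to MacBook', '')

        print("co-worker can now SSH to", upnp.externalipaddress())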

    Read the article

  • Nginx Static Content Server Maxing Out?

    - by Harry
    I use nginx to serve the static content for a decently busy website of mine. I have logging disabled, and 4 worker processes enabled with 5,000 connections per worker (which should yield a max connection limit of 20,000). The server is only operating at about 10% CPU usage and 50% RAM, but it's very laggy, and sometimes nginx is so slow to respond to requests that it times out. For a small number of connections it's fine, but once any load starts occurring (~2,500 connections), it backs up and bogs down. Are there any other bottlenecks or limits that I might be hitting? This is a FreeBSD server, and all the static files are located locally (not NFS). The NIC is an unmetered gigabit, and it's only using around 75 megabit. Any insight would be appreciated. Thanks.
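
    If it helps to pin down where the backlog starts, one rough approach is to ramp up concurrency from a second machine and watch where the latency curve bends. The sketch below uses only Python's standard library against a placeholder host and path; a proper load-testing tool will do this better, it just illustrates the idea.

        import time
        import concurrent.futures
        import http.client

        HOST = "static.example.com"   # placeholder: the host serving the static files
        PATH = "/img/logo.png"        # placeholder: a typical static asset

        def fetch_once():
            start = time.time()
            conn = http.client.HTTPConnection(HOST, timeout=10)
            conn.request("GET", PATH)
            conn.getresponse().read()
            conn.close()
            return time.time() - start

        def run(concurrency, requests_per_worker=20):
            total = concurrency * requests_per_worker
            with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
                timings = sorted(pool.map(lambda _: fetch_once(), range(total)))
            print(f"{concurrency:5d} workers: median {timings[len(timings) // 2]:.3f}s, "
                  f"p95 {timings[int(len(timings) * 0.95)]:.3f}s")

        if __name__ == "__main__":
            for c in (10, 50, 100, 250, 500):   # ramp until the latency knee appears
                run(c)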

    Read the article

  • tcpsndbuf high fail count

    - by Matthew Crenshaw
    I've got a small setup, one machine that acts as a load balancer and two machines that do all the work. The load balancer runs nginx (static content + php proxying to workers) and mysql, the two workers run php5-fpm and memcached (pooled between workers).

    Here's beancounters for the balancer (columns: held, maxheld, barrier, limit, failcnt):

        tcpsndbuf   2171848   2386280   10000000   20000000   3947733
        tcprcvbuf   1248288   1669504   10000000   20000000         0

    Here's worker 1:

        tcpsndbuf    951976   1262672   20000000   40000000         0
        tcprcvbuf    278528    393496   20000000   40000000         0

    Here's worker 2:

        tcpsndbuf    989888    527472   20000000   40000000         0
        tcprcvbuf    212992    452520   20000000   40000000         0

    The balancer has 1GB ram, the two workers have 2GB ram each. What is eating my send buffer?

    Read the article

  • PHP-FPM runs PHP scripts as root

    - by fwalch
    I have a web server setup using nginx and PHP-FPM listening on a Unix socket. In my php-fpm.conf, I have specified user = www group = www When I run ps aux, I can see that the php-fpm worker processes run as www; the php-fpm master process runs as root. However, I noticed that PHP scripts are executed as root; at least that's the output of echo get_current_user(); What can I do to run scripts as the www user? How can this even happen if the worker processes run as www?
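
    If you want to double-check what the pool workers are really running as, independently of what PHP reports, one option is to read the UIDs straight out of /proc. Below is a small Linux-only sketch using nothing outside the standard library.

        import os
        import pwd

        # Report the real and effective UIDs of every *fpm* process found in /proc
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    if "fpm" not in f.read():
                        continue
                with open(f"/proc/{pid}/status") as f:
                    for line in f:
                        if line.startswith("Uid:"):
                            real_uid, effective_uid = map(int, line.split()[1:3])
                            print(pid,
                                  pwd.getpwuid(real_uid).pw_name,
                                  pwd.getpwuid(effective_uid).pw_name)
            except (FileNotFoundError, KeyError):
                pass  # process exited, or UID has no passwd entry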

    Read the article

  • Getting NoClassDefFoundError on HMAC_SHA1 in WebSphere

    - by defjab
    We have WAS 6.0.2.43 ND (I know) running in multiple regions. Our Dev-B region runs fine, but Dev-C throws a Java exception when we make web calls (at least this is what the developer tells me)... Same code in both regions, and I checked the obvious suspects (global security, SSL ciphers, etc.) and they all seem to match. Here's the stack trace from SystemErr:

        [8/1/12 4:02:31:758 EDT] 0000005c ServletWrappe E SRVE0068E: Could not invoke the service() method on servlet action. Exception thrown : java.lang.NoClassDefFoundError
            at javax.crypto.Mac.getInstance(DashoA12275)
            at net.oauth.signature.HMAC_SHA1.computeSignature(HMAC_SHA1.java:73)
            at net.oauth.signature.HMAC_SHA1.getSignature(HMAC_SHA1.java:39)
            at net.oauth.signature.OAuthSignatureMethod.getSignature(OAuthSignatureMethod.java:83)
            at net.oauth.signature.OAuthSignatureMethod.sign(OAuthSignatureMethod.java:54)
            at com.harcourt.hsp.utils.LTIUtil.generateSignature(LTIUtil.java:62)
            at com.harcourt.hsp.web.struts.lti.action.BaseLTIAction.generateSignature(BaseLTIAction.java:238)
            at com.harcourt.hsp.web.struts.lti.action.BaseLTIAction.execute(BaseLTIAction.java:96)
            at org.springframework.web.struts.DelegatingActionProxy.execute(DelegatingActionProxy.java:106)
            at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:419)
            at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:224)
            at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1194)
            at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:414)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1796)
            at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:887)
            at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:90)
            at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1937)
            at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:130)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:434)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:373)
            at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.ready(HttpInboundLink.java:253)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.sendToDiscriminaters(NewConnectionInitialReadCallback.java:207)
            at com.ibm.ws.tcp.channel.impl.NewConnectionInitialReadCallback.complete(NewConnectionInitialReadCallback.java:109)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.requestComplete(WorkQueueManager.java:566)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.attemptIO(WorkQueueManager.java:619)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager.workerRun(WorkQueueManager.java:952)
            at com.ibm.ws.tcp.channel.impl.WorkQueueManager$Worker.run(WorkQueueManager.java:1039)
            at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1498)

    (The same trace then repeats a second time in the log.) Thanks for your help. I'm sure it's a config that I'm missing.

    Read the article

  • Deploying an EAR to JBoss times out (org.rhq.core.pc.inventory.TimeoutException)

    - by rangalo
    Hi, I am trying to deploy an EAR file to JBoss AS (default server). The application is the Mavenised version of the examples from the Seam in Action book. When I copy the file to $JBOSS_HOME/server/default/deploy, I don't get any exception but the application doesn't respond; after some time, trying to access the application from the browser gives the following in the log... While deploying with the admin-console (http://localhost:8080/admin-console) I get the following error message. PS: After this JBoss gets into an unusable state. I cannot even access the admin-console. I just have to kill it.

    Error message in admin-console:

        Failed to create Resource Open18.ear - cause: org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ApplicationServerComponent.createResource()] with args [[CreateResourceReport: ResourceType=[ResourceType[id=0, category=Service, name=Enterprise Application (EAR), plugin=JBossAS5]], ResourceKey=[null]]] timed out. Invocation thread will be interrupted
            at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invokeInNewThreadWithLock(ResourceContainer.java:437)
            at org.rhq.core.pc.inventory.ResourceContainer$ResourceComponentInvocationHandler.invoke(ResourceContainer.java:406)
            at $Proxy266.createResource(Unknown Source)
            at org.rhq.core.pc.inventory.CreateResourceRunner.call(CreateResourceRunner.java:113)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    Error logs:

        14:08:58,555 INFO  [TableMetadata] foreign keys: [fkaf42e01ba13c3380, fk_course_ref_facility]
        14:08:58,555 INFO  [TableMetadata] indexes: [course_pkey]
        14:08:58,645 INFO  [TableMetadata] table found: public.facility
        14:08:58,645 INFO  [TableMetadata] columns: [zip, phone, state, type, uri, city, country, id, price_range, address, county, description, name]
        14:08:58,645 INFO  [TableMetadata] foreign keys: []
        14:08:58,645 INFO  [TableMetadata] indexes: [facility_pkey]
        14:08:58,705 INFO  [TableMetadata] table found: public.hole
        14:08:58,705 INFO  [TableMetadata] columns: [id, m_par, l_handicap, name, l_par, number, course_id, m_handicap]
        14:08:58,705 INFO  [TableMetadata] foreign keys: [fk_hole_ref_course, fk30f4c09c3f1200]
        14:08:58,705 INFO  [TableMetadata] indexes: [hole_pkey, uniq_hole_number]
        14:08:58,764 INFO  [TableMetadata] table found: public.tee
        14:08:58,764 INFO  [TableMetadata] columns: [hole_id, distance, tee_set_id]
        14:08:58,764 INFO  [TableMetadata] foreign keys: [fk1c014f8de7677, fk_tee_ref_hole, fk1c014c69de560, fk_tee_ref_tee_set]
        14:08:58,764 INFO  [TableMetadata] indexes: [tee_pkey]
        14:08:58,826 INFO  [TableMetadata] table found: public.tee_set
        14:08:58,826 INFO  [TableMetadata] columns: [id, color, m_slope_rating, l_slope_rating, name, course_id, m_course_rating, l_course_rating, pos]
        14:08:58,826 INFO  [TableMetadata] foreign keys: [fk_tee_set_ref_course, fkaa6881b79c3f1200]
        14:08:58,826 INFO  [TableMetadata] indexes: [tee_set_pkey, uniq_tee_set_pos, uniq_tee_set_color]
        14:08:58,827 INFO  [SchemaUpdate] schema update complete
        14:08:58,829 INFO  [NamingHelper] JNDI InitialContext properties: {java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces}
        14:08:58,850 INFO  [TomcatDeployment] deploy, ctxPath=/Open18
        14:15:53,969 WARN  [DiscoveryComponentProxyFactory] The discovery component for resource type [ResourceType[id=0, category=Service, name=Connector, plugin=JBossAS5]] has been blacklisted
        14:15:53,970 WARN  [InventoryManager] Failure during discovery for [Connector] Resources - failed after 300002 ms.
        org.rhq.core.pc.inventory.TimeoutException: Call to [org.rhq.plugins.jbossas5.ConnectorDiscoveryComponent.discoverResources()] with args [[org.rhq.core.pluginapi.inventory.ResourceDiscoveryContext@96db1]] timed out. Invocation thread will be interrupted
            at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invokeInNewThread(DiscoveryComponentProxyFactory.java:208)
            at org.rhq.core.pc.util.DiscoveryComponentProxyFactory$ResourceDiscoveryComponentInvocationHandler.invoke(DiscoveryComponentProxyFactory.java:181)
            at $Proxy249.discoverResources(Unknown Source)
            at org.rhq.core.pc.inventory.InventoryManager.invokeDiscoveryComponent(InventoryManager.java:272)
            at org.rhq.core.pc.inventory.InventoryManager.executeComponentDiscovery(InventoryManager.java:1697)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:218)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.discoverForResource(RuntimeDiscoveryExecutor.java:234)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.runtimeDiscover(RuntimeDiscoveryExecutor.java:134)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:94)
            at org.rhq.core.pc.inventory.RuntimeDiscoveryExecutor.call(RuntimeDiscoveryExecutor.java:51)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)
        14:15:53,981 WARN  [NavigationContent] Unable to find node for deleted resource [Resource[id=-5, type=Connector, key=ajp://127.0.0.1:8009, name=ajp://127.0.0.1:8009, parent=JBoss Web]].

    Read the article

  • Can't run my servlet from tomcat server even though the classes are in package

    - by Mido
    Hi there, I am trying to get my servlet to run; I have been searching for 2 days and trying every possible solution with no luck. The servlet class is in the appropriate folder (i.e. under the package name). I also added the jar files needed by my servlet into the lib folder. The web.xml file maps the URL and defines the servlet. So I did everything in the documentation and what people said on here, and I am still getting this error:

        type Exception report

        message

        description The server encountered an internal error () that prevented it from fulfilling this request.

        exception
        javax.servlet.ServletException: Error instantiating servlet class assign1a.RPCServlet
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379)
            org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282)
            org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357)
            org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687)
            java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            java.lang.Thread.run(Thread.java:619)

        root cause
        java.lang.NoClassDefFoundError: assign1a/RPCServlet (wrong name: server/RPCServlet)
            java.lang.ClassLoader.defineClass1(Native Method)
            java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
            java.lang.ClassLoader.defineClass(ClassLoader.java:616)
            java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2820)
            org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1143)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1638)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1516)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379)
            org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282)
            org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357)
            org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687)
            java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            java.lang.Thread.run(Thread.java:619)

        note The full stack trace of the root cause is available in the Apache Tomcat/7.0.5 logs.

    Also, here is my servlet code:

        package assign1a;

        import java.io.IOException;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import lib.jsonrpc.RPCService;

        public class RPCServlet extends HttpServlet {

            private static final long serialVersionUID = -5274024331393844879L;
            private static final Logger log = Logger.getLogger(RPCServlet.class.getName());

            protected RPCService service = new ServiceImpl();

            public void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                response.setContentType("text/html");
                response.getWriter().write("rpc service " + service.getServiceName() + " is running...");
            }

            public void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                try {
                    service.dispatch(request, response);
                } catch (Throwable t) {
                    log.log(Level.WARNING, t.getMessage(), t);
                }
            }
        }

    Please help me :) Thanks.

    EDIT: here are the contents of my web.xml file:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 version="3.0" metadata-complete="true">

            <servlet>
                <servlet-name>jsonrpc</servlet-name>
                <servlet-class>assign1a.RPCServlet</servlet-class>
            </servlet>

            <servlet-mapping>
                <servlet-name>jsonrpc</servlet-name>
                <url-pattern>/rpc</url-pattern>
            </servlet-mapping>

        </web-app>

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50,000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then by using the SelectNodes method I create an XmlNodeList from which I read the data I want.

    The process is like this: our worker takes the person's ID card (simple, with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file, and stores the type of the document into a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, it works perfectly for now, and this is how it looks.

    On the textBox1_TextChanged event (worker read the person's ID):

        foreach (XmlNode node in NodeList)
        {
            if (String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(), textBox1.Text) == 0)
            {
                ControlString = node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString();
                break;
            }
        }
        textBox2.Focus();

    And on the textBox2_TextChanged event (worker read the document's barcode):

        if (String.Compare(textBox2.Text, ControlString) == 0)
        {
            // Create a record and insert it into a SQL database
        }

    My question is: how will my application perform with larger XML files (I was told that the XML file might be up to 500,000 records large)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it into a string:

        private void WriteXml(XmlNode record)
        {
            tempXML = record.InnerXml;
            temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine;
            temp += tempXML + Environment.NewLine;
            temp += "</" + record.Name + ">";
            SmallerXMLDocument += temp + Environment.NewLine;
            temp = "";
            i++;
        }

    tempXML, temp and SmallerXMLDocument are all string variables. And then in the button_Click method I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to create one big string value that would hold all records, like this:

        foreach (XmlNode node in nodes)
        {
            if (String.Compare(node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(), doctype1) == 0)
            {
                WriteXML(node);
            }
        }

    My idea was to create a string value (in this case called SmallerXmlDocument), and when I pass through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2,000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keep in mind that there could be up to half a million records in an XML file)? Thanks
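
    Not an answer to the C# specifics, but the streaming idea behind splitting can be sketched briefly: instead of loading the whole document, read it record by record and write each record out to whichever file it belongs in. A small illustration in Python follows; the element and attribute names are made up to mirror the description, and the same pattern exists in C# with XmlReader.

        import xml.etree.ElementTree as ET

        def split_by_doctype(path):
            # Stream the file so only one record's subtree is in memory at a time
            writers = {}
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag != "record":                           # hypothetical record element
                    continue
                doctype = elem.find(".//document").get("doctype")  # hypothetical structure
                out = writers.setdefault(
                    doctype, open(f"records_{doctype}.xml", "w", encoding="utf-8"))
                out.write(ET.tostring(elem, encoding="unicode"))
                elem.clear()                                       # free the processed subtree
            for f in writers.values():
                f.close()

        # split_by_doctype("records.xml")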

    Read the article

  • COM port read - Thread remains alive after timeout occurs

    - by Sna
    Hello to all. I have a DLL, written in C/C++, which includes a function called ReadPort that reads data from a serial COM port. This function is called on an extra thread from another WINAPI function using _beginthreadex. When the COM port has data to be read, the worker thread returns the data and ends normally, the calling thread closes the worker's thread handle, and the DLL works fine. However, if ReadPort is called without data pending on the COM port, when the timeout occurs WaitForSingleObject returns WAIT_TIMEOUT but the worker thread never ends. As a result, virtual memory grows by about 1 MB every time, physical memory grows by some KBs, and the application that calls the DLL becomes unstable. I also tried to use TerminateThread() but I got the same results. I have to admit that although I have enough development experience, I am not familiar with C/C++. I did a lot of research before posting but unfortunately I didn't manage to solve my problem. Does anyone have a clue on how I could solve this problem? However, I really want to stick to this kind of solution. Also, I want to mention that I think I can't use any global variables for some kind of extra events, because each of the DLL's functions may be called many times for every COM port. I post some parts of my code below.

    The worker thread:

        unsigned int __stdcall ReadPort(void* readstr)
        {
            DWORD dwError;
            int rres;
            DWORD dwCommModemStatus, dwBytesTransferred;
            int ret;
            char szBuff[64] = "";
            ReadParams* params = (ReadParams*)readstr;

            ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD | EV_RING);
            if (ret == 0)
            {
                _endthreadex(0);
                return -1;
            }

            ret = WaitCommEvent(params->param2, &dwCommModemStatus, 0);
            if (ret == 0)
            {
                _endthreadex(0);
                return -2;
            }

            ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD | EV_RING);
            if (ret == 0)
            {
                _endthreadex(0);
                return -3;
            }

            if (dwCommModemStatus & EV_RXCHAR || dwCommModemStatus & EV_RLSD)
            {
                rres = ReadFile(params->param2, szBuff, 64, &dwBytesTransferred, NULL);
                if (rres == 0)
                {
                    switch (dwError = GetLastError())
                    {
                    case ERROR_HANDLE_EOF:
                        _endthreadex(0);
                        return -4;
                    }
                    _endthreadex(0);
                    return -5;
                }
                else
                {
                    strcpy(params->param1, szBuff);
                    _endthreadex(0);
                    return 0;
                }
            }
            else
            {
                _endthreadex(0);
                return 0;
            }

            _endthreadex(0);
            return 0;
        }

    The calling thread:

        int WINAPI StartReadThread(HANDLE porthandle, HWND windowhandle)
        {
            HANDLE hThread;
            unsigned threadID;
            ReadParams readstr;
            DWORD ret, ret2;

            readstr.param2 = porthandle;

            hThread = (HANDLE)_beginthreadex(NULL, 0, ReadPort, &readstr, 0, &threadID);

            ret = WaitForSingleObject(hThread, 500);
            if (ret == WAIT_OBJECT_0)
            {
                CloseHandle(hThread);
                if (readstr.param1 != NULL) // Send message to GUI
                    return 0;
            }
            else if (ret == WAIT_TIMEOUT)
            {
                ret2 = CloseHandle(hThread);
                return -1;
            }
            else
            {
                ret2 = CloseHandle(hThread);
                if (ret2 == 0)
                    return -2;
            }
        }

    Thank you in advance, Sna.
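
    The underlying pattern the question is wrestling with, a blocking worker that must be told to give up rather than be abandoned after a timeout, can be shown in a few lines. This is Python rather than Win32 C, purely to illustrate the shape of cooperative cancellation; the serial read itself is simulated.

        import threading
        import queue
        import time

        def read_port(stop, results):
            # Simulated blocking read: poll in short slices so the stop event is honoured
            while not stop.is_set():
                data = None              # imagine a short, non-blocking read attempt here
                if data is not None:
                    results.put(data)
                    return
                time.sleep(0.05)         # nothing pending yet; check for cancellation again

        def start_read(timeout=0.5):
            stop = threading.Event()
            results = queue.Queue()
            worker = threading.Thread(target=read_port, args=(stop, results), daemon=True)
            worker.start()
            worker.join(timeout)
            if worker.is_alive():
                stop.set()               # ask the worker to finish instead of leaking it
                worker.join()
            return None if results.empty() else results.get()

        print(start_read())              # prints None when no data arrived in time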

    Read the article

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud" ... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. I'll describe the current architecture.

    Current system (all hosted on a single server):
    - GitHub - configured with a webhook pointing at ...
    - ASP.NET MVC application - to accept the webhook from git. It pushes a message onto ...
    - Azure Service Bus queue - which is drained by ...
    - Windows Service - pulls the message from the queue and ...
    - fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk and finally ...
    - operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually ... but I would like to get out of the business of managing the VM, and move to using some combination of Azure web and worker roles. But because the system relies so heavily on the git repository on the local filesystem, I'm finding it difficult to figure out how to architect it in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk ... but the performance/responsiveness of the system sort of depends on the repository being available and only having to fetch diffs, which is relatively quick. As opposed to periodically having to fetch the entire (somewhat large) git repository if the web or worker role was recycled, or something. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective) ... please feel free to ask any clarifying questions if there's something I omitted. Thanks!
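
    One way to frame the local-disk question: treat the clone as a cache that a freshly recycled role can rebuild cheaply with a shallow clone and keep warm with incremental fetches. A rough sketch of that startup logic using plain git commands follows; the repo URL, branch and paths are placeholders.

        import os
        import subprocess

        REPO_URL = "https://github.com/example/content.git"                  # placeholder
        BRANCH = "master"                                                    # placeholder
        WORK_DIR = os.environ.get("CONTENT_CACHE", r"C:\resources\content")  # e.g. an Azure local resource path

        def git(*args):
            subprocess.run(["git", *args], cwd=WORK_DIR, check=True)

        def ensure_repo():
            if not os.path.isdir(os.path.join(WORK_DIR, ".git")):
                # Cold start (role recycled, empty disk): a shallow clone keeps it quick
                os.makedirs(WORK_DIR, exist_ok=True)
                subprocess.run(["git", "clone", "--depth", "1", "--branch", BRANCH,
                                REPO_URL, WORK_DIR], check=True)
            else:
                # Warm start: only the new objects come over the wire
                git("fetch", "--depth", "1", "origin", BRANCH)
                git("reset", "--hard", f"origin/{BRANCH}")

        ensure_repo()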

    Read the article

  • Rails/Node.js interaction

    - by lpvn
    My co-worker and I are developing a web application with Rails and Node.js, and we can't reach a consensus regarding a particular architectural decision. Our setup is basically a Rails server working with Node.js and Redis: when a client makes an HTTP request to our Rails API, in some cases our Rails application posts the response to a Redis database and then Node.js transmits the response via websocket. Our disagreement occurs on the following point: my co-worker thinks that using Node.js to send data to clients is somewhat business logic and should be inside the model, so in the first code he wrote he used broadcast commands in callbacks and other places in the model; he's convinced that the models are the best place for the interaction between Rails and Node. I, on the other hand, think that using Node.js belongs to the runtime realm; my take is that the broadcast commands and other Node.js interactions should be in the controller, and should only be used in a model if passed through a well-defined interface, just like the situation when a model needs to access the current user of a session. At this point we're tired of arguing over this same thing, and our discussion consists of us repeating the same opinions over and over. Could anyone, preferably with experience in the same setup, give us an unambiguous answer saying which approach is more appropriate and why?

    Read the article

  • How to restore a corrupted installation?

    - by nightweels
    I have a laptop with a dead battery; when there is no mains power connected it goes critical very quickly, and in the power management settings in Ubuntu, when the battery is critical, there are two options: shutdown, and hibernate, which is greyed out (unclickable). So I have no choice but to choose immediate shutdown; there is no standby, even though it is an option in the screen behaviour settings. An immediate shutdown (and by immediate I mean the one that we use when we have finished using the computer) happened while I was installing a program called quickly, so after the power was restored I tried to reinstall the program and then I get this translated message:

        An untreatable error occurred: It seems there is a software error in aptdaemon, the program that lets you install and remove software and any other task related to package management.

        details:
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
            trans.unauthenticated = self._simulate_helper(trans)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
            return depends, self._cache.required_download, \
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
            pm.get_archives(fetcher, self._list, self._records)
        SystemError: E:I wasn't able to locate a file for the libpng12-dev package. This might mean you need to manually fix this package.

    Read the article

  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx

    2013 has been a hectic year for conference presentations so far. NDC in Oslo has been the 6th conference I have attended, and my session there was my 11th conference presentation this year. I have been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I have heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening. The session I was delivering was my “Grid Computing with 256 Windows Azure Worker Roles & Kinect” demo, which I have delivered at many events over the past 12 months. The demo went fine. I’m always a little nervous when I try to scale out the application to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly. A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability on cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    Title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to get processed, it's that there are usually a lot of items. And what I've read about so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details.

    So having said all this, what I have in mind is the following:
    - A user request with a list of items enters the application layer.
    - The application layer gets some information from the domain needed to process the items.
    - The application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?).
    - This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it.
    - This worker thread will process each item in its own UoW. This means the domain information from earlier needs to be stored in some DTO.
    - Since nothing is really persisted, the service should be a singleton and thread safe.
    - Whenever a user requests a progress report or wants to pause/stop the operation, the application layer will ask the service.

    Would this be a correct solution? Or am I at least on the right track with this? Especially the singleton and thread-safe part makes the whole thing feel icky.
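
    For what the "worker thread with callbacks" part could look like mechanically, here is a small, framework-free sketch (Python, since the question is language-agnostic; processing an item is a stand-in for one unit of work):

        import threading

        class BatchProcessor:
            """Processes items on a background thread with progress, pause and stop."""

            def __init__(self, items, process_item, on_progress):
                self._items = list(items)
                self._process_item = process_item      # one item == one unit of work
                self._on_progress = on_progress        # callback: (done, total)
                self._pause = threading.Event()
                self._pause.set()                      # set == running, cleared == paused
                self._stop = threading.Event()
                self._thread = threading.Thread(target=self._run, daemon=True)

            def start(self):
                self._thread.start()

            def pause(self):
                self._pause.clear()

            def resume(self):
                self._pause.set()

            def stop(self):                            # already-processed items stay processed
                self._stop.set()
                self._pause.set()                      # unblock the worker if it is paused

            def _run(self):
                total = len(self._items)
                for done, item in enumerate(self._items, start=1):
                    self._pause.wait()                 # block here while paused
                    if self._stop.is_set():
                        break
                    self._process_item(item)
                    self._on_progress(done, total)

        # Example usage:
        # worker = BatchProcessor(range(1000), process_item=print,
        #                         on_progress=lambda d, t: print(f"{d}/{t}"))
        # worker.start()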

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx

    I’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running: “Surely, this can’t be” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • Corporate Efficiency

    - by AndyScott
    Thoughts on streamlining the process of getting someone up to speed when they join a project as a new hire, or, as is common in some companies, switch from one project to another:
    - Has anyone heard of a strategy (including an emphasis on consistent, ongoing documentation) that would bring a user up to speed quickly?
    - Has there been any thought given to focused documentation, specific to a role within a project?
    - Or formalized mentoring within a project, that goes beyond a “system walkthrough”?

    It's often overlooked how much time is wasted when a senior-level worker is brought on board. It's assumed that they will know the right questions to ask. They are the type of people that normally learn quickly, and in their own ways, so let them get by with what's out there. Having a user without a computer will cost you measurable worker hours, making it an easy target to shoot at (and rightly so). Not getting them up to speed as quickly as possible is an efficiency issue too, yet it seems to have become an accepted loss, almost an industry standard. Given the complexity of the projects within most companies, and the frequency with which users are shifted from one project to another based on need, I think this is an area that bears consideration.

    Read the article

  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color: I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding "XTerm.VT100.boldMode:false" to my ~/.Xresources would disable this "feature", but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue. How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text? Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed that the changes have taken effect with xrdb, though: $ xrdb -query | grep boldMode XTerm*boldMode: false And if i run xprop and click an xterm, I get "WM_CLASS(STRING) = "xterm", "XTerm"" .. so i'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running: echo -e '#\e[1m#' ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?

    Read the article

  • I think my laptop just died

    - by Joel Coehoorn
    I have a Dell 1330M that as of about 15 minutes ago will no longer POST. What happened was I was working, stepped away for a moment, and when I came back it was turned off. I thought that was odd, but turned it on and things seemed fine. About 1/2 hour later it crashed and restarted, but came up fine again. It did this once more. At this point I was starting to get worried, but I hadn't had any problems with the laptop before, and every crash was after doing some work in a virtual machine that I don't often use, so I put the blame there. It didn't feel like it was overheating anywhere and there's no ozone smell of overheated electronics. Then it crashed a final time, and now when I turn it on all I see is a bright screen with a bunch of vertical lines (noise). I've tried removing the memory sticks one at a time, but I get the same result with either memory stick in either slot. With no memory at all it stops earlier in the POST process and the screen is completely blank (black, no backlight). As I type this, I hear a double beep from the system about once every 10 minutes. I'm pretty sure the hard drive is fine because it fails during POST, before anything off the drive is needed. The power supply seems good because the screen is nice and bright. It's not the RAM because swapping that around made no difference. That leaves the motherboard (which I doubt, and can replace) and the CPU (which just might be changeable). Any ideas? Is there any hope for this laptop? I'm rather fond of it and I'd have a hard time replacing it with anything near as nice.

    Read the article
