Search Results

Search found 3205 results on 129 pages for 'unexpected shutdown'.

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log: upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch" Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project): fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True nginx config: server { listen 80; server_name arch; access_log /var/www/test/log/access.log; error_log /var/www/test/log/error.log; location / { root /var/www/test/public; index index.html index.htm default.aspx Default.aspx; fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; fastcgi_param PATH_INFO ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Using xsp4 works fine, I can browse the site. I've enabled FastCGI logging, this is the output: [2012-04-15 23:51:18Z] Debug Accepting an incoming connection. [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection. [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = ) [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B) [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.
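
    One way to narrow this down - a diagnostic sketch, assuming the cgi-fcgi utility from the FastCGI devkit is installed (package names vary) - is to take nginx out of the loop and speak FastCGI to the mono server directly:

      # replay the request nginx would send, bypassing nginx entirely:
      SCRIPT_FILENAME=/var/www/test/public/Default.aspx \
      SCRIPT_NAME=/Default.aspx \
      REQUEST_METHOD=GET \
      cgi-fcgi -bind -connect 127.0.0.1:9000
      # If this succeeds while extensionless URLs like /Home fail, the nginx config
      # is not passing a SCRIPT_NAME the mono server can map; a hypothetical fix:
      #   fastcgi_param SCRIPT_NAME $fastcgi_script_name;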

  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and finally return data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrongdoing by Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and memory usage are never out of bounds. I also have something to kill high-memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're few and far between, down from the 60-100 a second I was seeing before this trouble arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at other times it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
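
    A first debugging step - a sketch, assuming the PID file path from the God config above - is to look at what the workers are actually doing while a request is stalled:

      MASTER=$(cat tmp/unicorn.pid)
      # one line per worker: state, CPU, memory, and the kernel function it is blocked in
      ps -o pid,stat,pcpu,pmem,wchan:20,args --ppid "$MASTER"
      # is the listen queue on port 8001 backing up? (watch Recv-Q)
      ss -lnt | grep 8001
      # attach to one stalled worker and time its syscalls (the PID is hypothetical)
      sudo strace -f -T -e trace=network,poll,select -p 12345

    If the workers sit in database reads, the problem is behind PostgreSQL rather than in Unicorn itself.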

  • Debian Apache2 SSL Issues - Error code: ssl_error_rx_record_too_long

    - by Tone
    I'm setting up apache on Debian lenny and having issues with SSL. I've been through numerous tutorials and I had this working on Ubuntu server, but for the life of me can't get anywhere with Debian. Port 80 (http) works fine, but port 443 (https) gives me the following error (in firefox) - homeserver is my hostname and my dhcp assigned ip is 192.168.1.109. I have a feeling it's something with my config and not with the cert/key generation. An error occurred during a connection to homeserver. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) Anyone see any issues with the following config files? /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost *:443> ServerAdmin webmaster@localhost ServerName homeserver DocumentRoot /var/www/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> SSLEngine on SSLCertificateFile /etc/ssl/certs/server.crt SSLCertificateKeyFile /etc/ssl/private/server.key SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 </VirtualHost> </IfModule> /etc/apache2/ports.conf NameVirtualHost *:80 Listen 80 Listen 443 #<IfModule mod_ssl.c> # SSL name based virtual hosts are not yet supported, therefore no # NameVirtualHost statement here #Listen 443 #</IfModule> /etc/hosts 127.0.0.1 localhost 127.0.0.1 homeserver #192.168.1.109 homeserver #tried this but it didn't work # The following lines are desirable for IPv6 capable hosts ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts
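
    For what it's worth, ssl_error_rx_record_too_long almost always means the server answered plain HTTP on port 443, i.e. the SSL vhost never actually loaded. A quick check-and-fix sketch, assuming Debian's stock layout:

      apache2ctl -S                      # is a *:443 VirtualHost present in the parsed config?
      a2enmod ssl && a2ensite default-ssl && apache2ctl configtest
      /etc/init.d/apache2 restart
      # a successful handshake should now print the certificate chain:
      openssl s_client -connect homeserver:443 -showcerts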

  • Old operational master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them: AD01 (Win 2008 GC, Operations master) AD02 (Win 2008 GC) AD03 (Win 2003 GC) A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and infrastructure master roles were moved to AD02. All machines were on while this was happening. AD01 (Win 2008 GC) AD02 (Win 2008 GC, Operations master) AD03 (Win 2003 GC) AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card) I now have a weird problem. AD01 still thinks it is the operations master in AD on the local box; AD02 & AD03 both think AD02 is the operations master in AD on both boxes. When running DCDIAG on AD01 I get a number of issues (listed below). When running "dcdiag /test:advertising" on AD01: Doing primary tests Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Running partition tests on : ForestDnsZones Running partition tests on : DomainDnsZones Running partition tests on : Schema Running partition tests on : Configuration Running partition tests on : domain Running enterprise tests on : domain.local When running "dcdiag" on AD01 I get the following errors (excerpt of the final output): Testing server: Default-First-Site-Name\AD01 Starting test: Advertising Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... AD01 failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local Starting test: Replications [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL" [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL" So the problem appears to be that when I moved the operations master, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, infrastructure master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated
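
    A hedged repair sketch using the standard Microsoft tools (run from AD01; verify each step in your environment first):

      netdom query fsmo
      :: re-enable replication on AD01 (the dcdiag output above suggests exactly these):
      repadmin /options AD01 -DISABLE_INBOUND_REPL
      repadmin /options AD01 -DISABLE_OUTBOUND_REPL
      :: pull the domain partition (and with it the current role ownership) from AD02:
      repadmin /replicate AD01 AD02 "DC=domain,DC=local"

    Once inbound replication succeeds, AD01 should learn that AD02 holds the roles; if it still claims them afterwards, the roles can be transferred or seized again with ntdsutil.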

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What I have: 1 master broker and 1 slave broker, both ActiveMQ 5.4.0. What I use: waitForSlave on the master side and a failover URI on the slave side (in the master connector URI). What I want to do: I want to wait some delay (like 5 seconds) in case of tiny network failures between master and slave before starting the slave transport connectors. So I put this in the slave config: <broker xmlns="http://activemq.apache.org/schema/core" brokerName="slave" dataDirectory="${activemq.base}/data" useJmx="true" persistent="true" populateJMSXUserID="true" masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000" shutdownOnMasterFailure="false" advisorySupport="false"> It seems to work, but after a network hang between master and slave, the slave reconnects successfully and then the master logs a lot of: 2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) On the slave side everything is fine. So after that, I tried to stop the master to see if the slave is capable of turning master after these "network hangs". The master took a long time to shut down (10 seconds) and then some error messages appear in the slave logs: 2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5 java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93) at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412) at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600) at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74) at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309) at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185) at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116) at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69) at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218) at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98) at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36) Are there any ways to keep my kaha stores (they are individual stores) synchronised? The main problem is that the slave never turns master after a master failure; it stays blocked on this message: 2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616 I'm totally stuck with these sync problems, any help welcome! Regards
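
    For the store question: with pure master/slave in this version there is no automatic re-synchronisation - once the stores diverge, the documented recovery is to stop the slave, replace its store with a copy of the master's, and only then reconnect. A sketch (the data paths are assumptions; adjust to your layout):

      # on the slave, with its broker stopped:
      rm -rf /opt/activemq/data/kahadb
      rsync -a master:/opt/activemq/data/kahadb/ /opt/activemq/data/kahadb/
      # then start the slave so it shadows the master from a clean, identical copy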

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server I can create VM's (Virtual Machine) and network interfaces fine but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces it gets stuck on startup at the starting SeaBIOS page with this message: Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer) So far I've verified this with two VM's, a Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VM's doesn't seem to matter. I'm trying to have one bridged interface and three private networks using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces? Edit: Additionally the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging: /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait -mon chardev=monitor,mode=readline -rtc base=utc -boot c -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 -net tap,fd=97,vlan=0,name=hostnet0 -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 -net tap,fd=98,vlan=1,name=hostnet1 -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 -net tap,fd=99,vlan=2,name=hostnet2 -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 -net tap,fd=101,vlan=3,name=hostnet3 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode): 14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed I've also tried disabling app armor but that doesn't seem to make a difference.
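
    One thing worth trying - a hedged guess, not a confirmed fix: every emulated rtl8139 NIC loads a PXE boot option ROM into SeaBIOS, and the available ROM space can be exhausted after a few NICs, which would stall a guest exactly at the SeaBIOS banner. Switching the NIC model to virtio (and, if your libvirt supports it, turning the ROM BAR off) sidesteps that:

      virsh edit vyatta
      # for each <interface> element, change the emulated model, e.g.:
      #   <model type='virtio'/>
      #   <rom bar='off'/>   (only on newer libvirt; drop it if the XML is rejected)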

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error, it just fails to acknowledge a shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console. These two servers don't serve a large number of users (one serves no more than 6 users, and the other serves around 20 users); they are in the same domain, but on different physical hardware (and at different sites). They don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections; this is working fine, and we have been using DFSR to do this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs however). Today, when one of the servers crashed, I noticed an entry in the event log, which looked similar to the following: Log Name: Application Source: ESENT Date: 27/11/2012 10:25:55 Event ID: 533 Task Category: General Level: Warning Keywords: Classic User: N/A Computer: HAL-FS-01.example.com Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\ dfsr.db: A request to write to the file "\\.\E:\System Volume Information\ DFSR\database_C8CC_101_CC00_EC0E\fsr.log" at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s). This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem. When the server started up again, I went to find the event log entry to investigate further and found that the event log entry was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log. Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises. I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?
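
    Since the interesting ESENT event was lost in the power cycle, it may help to pull those events from the saved Application log as soon as a locked-up server is back - a sketch using the built-in wevtutil, plus the disk counter that would confirm the "stalled write" theory:

      wevtutil qe Application /q:"*[System[Provider[@Name='ESENT']]]" /c:10 /rd:true /f:text
      :: in Performance Monitor, watch \PhysicalDisk(*)\Avg. Disk sec/Write on the
      :: Hyper-V hosts while DFSR is busy; sustained multi-second writes would match
      :: the 36-second stall the event describes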

  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 Oracle instances on it, and the file system is NFS with NetApp. After shutting down the databases, one process for each database doesn't quit for a long time, and kill -9 doesn't work. I tried to truss and pfiles it, but the commands threw errors. And iostat shows there is lots of IO to the NetApp server. So someone said the process was busy writing data to the remote NetApp server, and until the writes complete, it won't quit. So all that needed to be done was to wait until all the IO was done. After waiting a while longer (about 1.5 hours), the processes exited. So my question is: how can a process ignore the kill signal? As far as I know, if we kill -9, it should stop immediately. Have you encountered a situation where kill -9 doesn't kill a process right away? TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1469 25053 0 22:36:53 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 1051 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1493 25053 0 22:37:07 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 471 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 oracle 1495 25053 0 22:37:22 pts/1 0:00 grep dbw0 TEST7-stdby-phxdbnfs11$ ps -ef|grep smon oracle 1524 25053 0 22:38:02 pts/1 0:00 grep smon TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1526 25053 0 22:38:06 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795 TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0 oracle 1528 25053 0 22:38:19 pts/1 0:00 grep dbw0 oracle 26795 1 0 21:55:23 ? 0:00 ora_dbw0_TEST7 oracle 1051 1 0 Apr 08 ? 3958:51 ora_dbw0_TEST2 oracle 471 1 0 Apr 08 ? 6391:43 ora_dbw0_TEST1 TEST7-stdby-phxdbnfs11$ truss -p 26795 truss: unanticipated system error: 26795 TEST7-stdby-phxdbnfs11$ pfiles 26795 pfiles: unanticipated system error: 26795
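
    For background (and a way to confirm it next time): a process blocked in uninterruptible kernel I/O - which is exactly where a slow or unreachable NFS server leaves it - cannot take delivery of any signal, even SIGKILL; the kill is queued and only acts once the I/O call returns. A hedged check, since the flags differ per platform:

      # on this box (Solaris, judging by truss/pfiles), see where it sits in the kernel:
      ps -o pid,s,wchan,comm -p 26795
      # on Linux the same condition shows up as STAT "D" (uninterruptible sleep):
      # ps -o pid,stat,wchan:20,comm -p <pid>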

  • Windows Vista: Networking can only connect "local only"

    - by Damien
    I am attempting to debug a problem on a Windows Vista laptop - not mine! Until just recently (last week or so), it was operating normally for about 4 years :) The problem is that I am having issues connecting to the local network (a basic wireless home router; more later) and the internet (via ADSL). This is both for wired [Broadcom chipset] and wireless [Intel chipset]. I will elaborate further later. To detail the network: I have three other clients (HTC phone, Ubuntu 12.04 desktop [wired] and Ubuntu 10.04 laptop [wireless]), all of which are able to connect to the network and internet normally. A Windows 7 virtual machine running on said desktop connects normally. I have tried two different wireless routers - Netgear DG834G and Netgear DGN3500. The same error mode is common to both. Updating the firmware to the latest on both routers does not help. Overall, it seems safe to say it's localised to the laptop in question. I do not have another Vista client to test with. The specific symptoms are as follows: When "connected", it says "Local Only", and says it cannot connect to the internet. This is true for both wired and wireless. It can get an IP address (192.168.0.5), and the router (192.168.0.1) reports that it can see the device. When I try to ping, I get the following results: ping 192.168.0.1 - (router) all packets lost ping 192.168.0.5 - (laptop's address) OK ping 192.168.0.4 - (desktop) all packets lost Pinging from the desktop to the problematic laptop results in "From 192.168.0.4 icmp_seq=1 Destination Host Unreachable" The most promising "fix" from trawling forums is KB928233, which does not work for me. The problem persists across restarts (both full shutdown and hibernate), so it appears not to be sleep related. I am not a regular Vista user, though I can fumble my way about a bit. Are there any other suggestions as to what I should do? Is there any further information I can provide?
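
    "Local only" on an otherwise-healthy network very often comes down to a corrupted Winsock/TCP-IP stack on the Vista box (antivirus removals and VPN clients are common causes). A standard, low-risk reset sequence to try from an elevated command prompt - hedged, but it costs little:

      netsh winsock reset
      netsh int ip reset reset.log
      ipconfig /flushdns
      :: reboot afterwards, then retest ping 192.168.0.1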

  • Linux Mint 14 severe overheating

    - by agvares
    I tried switching to Mint (it was Mint 12) about 6 months ago, but failed to do it due to enormous overheating which made it virtually impossible to run any applications more complex than Gedit. Time has passed, and now I'm back again with Mint 14, but feeling way more determined. What I face are the following issues: 1) Great overheating (and by great I mean that just by doing nothing my CPU temp is floating around 70-75 C, which I find a lot) 2) Running multiple applications (let's say Chrome, Skype and Pidgin) results in critical overheating and immediate shutdown of the system 3) Due to the stuff listed above, my battery drains in about 10-15 minutes, pretty much turning my laptop into a crippled desktop machine I've got an HP dv6 laptop (i7, 6GB RAM, dual graphics) Here's the output of lspci: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 0d:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 13:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader (rev 01) 19:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) What I've tried to do already: 1) I've edited my grub file to add some "splash" arguments there 2) Installed jupiter and powertop 3) Tried to upgrade to newer kernels (up to 3.8); by the way, running anything newer than 3.5 results in both resolution and Wi-Fi detection failing 4) Read lots of forum threads devoted to the topic So, my question is simple: what else might I do to finally be able to use Mint as my default OS without the risk of being burned alive by the CPU heat?
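
    On these dual-GPU (Intel + Radeon) laptops the discrete card usually stays fully powered under Linux even when idle, which by itself can explain both the temperatures and the 10-15 minute battery life. A sketch of how to check and power it down via vgaswitcheroo (assumes the open radeon driver and a mounted debugfs, which is the default):

      cat /sys/kernel/debug/vgaswitcheroo/switch      # shows which GPU is active (+) and powered
      echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # power off the unused card
      sensors                                         # re-check temperatures afterwards

    If that drops the idle temperature, make it permanent (e.g. from /etc/rc.local).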

  • Weird random application hang problem

    - by haridsv
    I am trying to understand an application hang problem that started up lately on my Windows XP system. The system runs fine for days (sometimes) without ever shutting down or putting it to sleep, but the problem first shows up as one of the apps hanging. The application's UI stops responding, or one or more background threads hang, so even though the GUI is responding, it is not doing anything (e.g., in VirtualDub's case, the UI responds fine, but the job doesn't progress and I won't even be able to abort it). The weird part is that if I try to kill such an app, the program that is used to kill it goes into the same mode (i.e., it hangs instead of the original). E.g., if I use Process Explorer to kill it, the original program exits, but procexp now hangs. If I use another instance of procexp to kill the one that is hanging, this repeats, so there is always at least one program hanging in that state. This is not specific to procexp; I tried the native task manager and even the "End Process" dialog from Windows Explorer that shows up when you try to close a non-responsive GUI (in this case, Explorer itself hangs). The only program that didn't hang after the kill is the command line taskkill. However, in this case, Explorer hangs instead of taskkill. Also, once this problem starts manifesting, it soon ends up freezing the whole system to the extent that even a clean shutdown is not possible, so I have learned to reboot as soon as I notice this problem. However, this is very inconvenient, as I often have encoding batch jobs going on which can't continue the job after the restart. The longer I leave the system running after seeing this problem, the more applications get into this state. I have tried to do a repair install but that didn't make any difference. I also uninstalled some of the newer installs, but again no difference. I tried to search online, but got inundating results for generic hang and crash related problems. Though I couldn't notice any pattern, it seems as though the problem is more frequent if I have some video encoding going on at that time. I had the system running for days when I only do browsing and internet audio/video chat, before I decide to start encoding something and the problem starts to show up. I am not too sure if it is the encoding program that first hangs, though I almost always noticed that too hanging (like VirtualDub no longer making progress). I also had to reboot 3 times on one day when I was heavily experimenting with encoding. I would appreciate any help in narrowing down this problem and saving me the trouble of reinstalling. I don't especially want to lose my gotd installs.
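
    One way to get hard data before the next freeze - a sketch using Sysinternals ProcDump (the image name is hypothetical; use whatever hangs first): capture a dump the moment a window stops responding, then look at what the blocked threads are waiting on:

      procdump -h VirtualDub.exe hang.dmp
      :: -h writes the dump when the target's window is detected as hung

    The chain of killer processes hanging one after another suggests they are all blocking on the same kernel object (often a filter driver), which the dump's thread stacks should reveal.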

  • Parsing xml files locally from assets folder using XmlPullParser

    - by Randolphg
    I'm trying to parse a local xml file that I place in my assets folder. I've been trying to do this for almost a week now. Here is my test xml file Test1 Test2 Test3 Test4 Test5 I keep getting the same error: W/System.err(22458): org.xmlpull.v1.XmlPullParserException: unexpected type (position:TEXT Code: public void xmlParser() throws XmlPullParserException, IOException, ParserConfigurationException, SAXException { Log.d("tag", "xmlParsing...."); Arithmetic arthm = new Arithmetic(); XmlPullParserFactory xmlPF = XmlPullParserFactory.newInstance(); xmlPF.setValidating(false); XmlPullParser xml = xmlPF.newPullParser(); InputStream raw = getApplication().getAssets().open("menu.xml"); xml.setInput(raw, null); xml.nextTag(); Log.d("tag", "start parsing...."); String elementText = null; String elemName = null; int nofTags = 0; while (xml.getEventType() != XmlPullParser.END_DOCUMENT) { Log.d("tag", "while(xml.next)..."); switch (xml.getEventType()) { case XmlPullParser.START_DOCUMENT: Log.d("tag", "while (xml.getEventType() != XmlPullParser.END_DOCUMENT)"); break; case XmlPullParser.START_TAG: Log.d("tag", " case XmlPullParser.START_TAG"); elementText = xml.getName(); Log.d("tag", "elementText = " + elementText); if (xml.getEventType() != XmlPullParser.END_TAG) { xml.nextTag(); } break; case XmlPullParser.TEXT: Log.d("tag", "case TEXT"); if (elementText.equals("menu") && xml.isWhitespace()) { Log.d("tag", "<" + elementText + ">"); arthm.menu_name = xml.getText(); Log.d("tag", "value " + xml.getText() + " added"); } else if (elementText.equals("item")) { arthm.description = xml.getText(); Log.d("tag", "value " + xml.getText() + " added"); } else if (elementText.equals("SUBCATEGORY NAME")) { arthm.subcategoryDesc.add(xml.getText()); Log.d("tag", "value " + xml.getText() + " added"); } else if (elementText.equals("SUBCATEGORY DESC")) { arthm.subcategoryName.add(xml.getText()); Log.d("tag", "value " + xml.getText() + " added"); } break; case XmlPullParser.END_TAG: Log.d("tag", "case END_TAG"); nofTags += 1; String tags = Integer.toString(nofTags); Log.d("tags", elementText + " number of tags" + tags); if (xml.nextTag() != XmlPullParser.START_TAG) { xml.next(); } break; case XmlPullParser.END_DOCUMENT: Log.d("tag", "case END_DOCUMENT"); break; default: break; } } Log.d("tag", "Success!"); } Thanks in advance.
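
    The likely culprit is the manual xml.nextTag() calls: nextTag() throws exactly this XmlPullParserException when the next event is non-whitespace TEXT, which this file is full of. A minimal corrected skeleton - hedged, since the real element names were stripped from the post - advances the parser in one place only, with next():

      int eventType = xml.getEventType();
      while (eventType != XmlPullParser.END_DOCUMENT) {
          switch (eventType) {
              case XmlPullParser.START_TAG:
                  elemName = xml.getName();        // remember which element we are inside
                  break;
              case XmlPullParser.TEXT:
                  if (!xml.isWhitespace() && "item".equals(elemName)) {
                      arthm.description = xml.getText();
                  }
                  break;
              case XmlPullParser.END_TAG:
                  elemName = null;
                  break;
          }
          eventType = xml.next();                  // the ONLY place the parser advances
      }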

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
    I simply cannot get this (TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS; it is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem as I don't totally understand all of this yet! I have created an ssl.ldif file as follows; dn:cn=config add: olcTLSCipherSuite olcTLSCipherSuite: TLSV1+RSA:!NULL add: olcTLSCRLCheck olcTLSCRLCheck: none add: olcTLSVerifyClient olcTLSVerifyClient: never add: olcTLSCACertificateFile olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem add: olcTLSCertificateFile olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem add: olcTLSCertificateKeyFile olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem and I import it using the following command line ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif I have edited /etc/default/slapd so that it has the following services line; SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///" And every time I make a change, I restart slapd with /etc/init.d/slapd restart The following command line to test out the non-TLS connection works fine; ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*" But when I switch to ldaps using this command line; ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \ -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*" This is what I get; ldap_url_parse_ext(ldaps://mydomain.com) ldap_create ldap_url_parse_ext(ldaps://mydomain.com:636/??base) ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP mydomain.com:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 127.0.0.1:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 TLS: can't connect: A TLS packet with unexpected length was received.. ldap_err2string ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) Now if I check netstat -al I can see; tcp 0 0 *:www *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 *:https *:* LISTEN tcp 0 0 *:ldaps *:* LISTEN tcp 0 0 *:ldap *:* LISTEN I'm not sure if this is significant as well ... I suspect it is; openssl s_client -connect mydomain.com:636 -showcerts CONNECTED(00000003) 916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: I think I've made all my certificates etc OK and here are the results of some checks; If I do this; certtool -e --infile /etc/ssl/certs/ldap_cacert.pem I get Chain verification output: Verified. certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem gives "certtool: the last certificate is not self signed" but it otherwise seems OK? Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu should be easy and not require a degree in rocket science! Any ideas?
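
    Two Ubuntu-specific things are worth ruling out - a sketch, with the priority string being an assumption to verify: slapd on Ubuntu is built against GnuTLS, so an OpenSSL-style olcTLSCipherSuite value like TLSV1+RSA:!NULL can make TLS initialisation fail, and the key file must be readable by the openldap user:

      sudo tail -n 50 /var/log/syslog | grep slapd        # slapd logs its TLS init errors here
      gnutls-cli --insecure -p 636 mydomain.com           # GnuTLS's own view of the handshake
      sudo chgrp openldap /etc/ssl/private/my.domain.com_slapd_key.pem
      sudo chmod 640 /etc/ssl/private/my.domain.com_slapd_key.pem
      # and try a GnuTLS-style priority string instead, e.g.  olcTLSCipherSuite: NORMAL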

  • CentOS 5.7 keeps rebooting after fresh installation

    - by Wagner Maestrelli
    I have just installed CentOS 5.7 x86_64 on a new computer. The installation went on without any issues. But, after it finished, the machine started to show awkward behaviour: it restarts every time it tries to boot. It happens after all the services have been started. The screen just goes black and it shows an error message from the monitor: Input not supported. And then it reboots. I took a look at the logs, but I couldn't manage to find anything. Any help? Update Before doing the hardware diagnosis, as pointed out, I decided to make some tests. First, I changed the runlevel to 3, adding the 3 parameter at the end of the kernel command. Then, after logging in in text mode, I checked the xorg.conf file out for some problems regarding the screen resolution. There was nothing unexpected set. Well, if there were a problem with it, I wouldn't be able to start the X server from the command line, right? So, I typed startx and Gnome started! So, probably, it's not an issue with the screen resolution, I suppose. Then I selected the Log Out root... Gnome menu option and something odd happened: the screen went black, the Input not supported monitor error message was displayed and the system rebooted. Yes, the same problem I was having while trying to boot! After that, I decided to try yet another test: I removed the rhgb quiet parameters from the kernel command to see if some error would show up. Well, to my surprise, the boot went on without problems! The Gnome login screen showed up, I logged in and the session started. But then I selected the Shut Down... menu option and guess what? Same problem: black screen, same monitor error and the system rebooted. Yes, it rebooted, it did not shut down. I repeated both of the tests and the behaviours were the same. I really don't know what's going on. It seems to be an issue regarding the changing of the screen mode or something like that. Any ideas? Could this be a hardware problem? Or does it seem to be something regarding the system configuration?
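
    A few hedged next steps, since the reset seems tied to the video-mode switch: see whether anything reaches the logs right before the reset, rule out an automatic reboot-on-panic, and then try the usual kernel workarounds one per boot:

      grep -iE 'panic|mce|reset' /var/log/messages | tail
      sysctl kernel.panic      # non-zero means the box silently reboots after a panic
      # in grub.conf, append ONE of these to the kernel line per test boot (guesses):
      #   noapic    acpi=off    nomodeset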

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8GB memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough. Requirements: Each server runs Linux CentOS 6.3 Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ) Some of the servers are capable of serving static files and Python driven REST APIs Some of the servers host a Cassandra database cluster One or more of the servers are Redis database servers One or more of the servers are PostgreSQL servers Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitations? What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!
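
    On the imaging question specifically: CentOS records a reusable answer file for every install, so kickstart over PXE/HTTP is the usual way to stamp out identical nodes rather than USB images - a sketch (the web server name is hypothetical):

      # anaconda writes this file on every install; edit it and publish it:
      cp /root/anaconda-ks.cfg /var/www/html/ks.cfg
      # new nodes then install unattended from the boot prompt with:
      #   linux ks=http://imageserver/ks.cfg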

  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me. In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use, e.g., it traps www.mysite.com/whatever, www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs listed with a redirect (i.e., NOT the canonical URL which it is eventually forwarded to), it can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which I would have expected the above test to be forwarded to anyway, it can't handle nearly as much. It will drop about 4000 of the 15000 requests, and can only handle about 9000 reliably. This is the command line I'm using to test: ab -c15000 -n15000 http://www.mysite.com/somepath/ and ab -c15000 -n15000 http://mysite.com/somepath/ I've tried this several times - it makes no difference which order I do them in. This doesn't make sense to me - I can understand why the requests involving a redirect may not handle quite so many concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL was the one which could handle more traffic. I'll post my Nginx config below. Thanks loads for any help! server { server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk; rewrite ^(.*) http://somesite.com$1 permanent; } server { root /home/django/domains/somesite.com/live/somesite/; server_name somesite.com somesite-live.myserver.somesite.com; access_log /home/django/domains/somesite.com/live/log/nginx.log; location / { uwsgi_pass unix:////tmp/somesite-live.sock; include uwsgi_params; } location /media { try_files $uri $uri/ /index.html; } location /site_media { try_files $uri $uri/ /index.html; } location = /favicon.ico { empty_gif; } }
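
    One explanation worth checking first: ab does not follow redirects, so the runs against the alias hostnames are benchmarking only Nginx's cheap 301 response, while the canonical runs exercise uWSGI and Django end to end - that alone would account for the gap. A quick verification (assumes curl):

      curl -sI http://www.mysite.com/somepath/ | head -n1    # expect: HTTP/1.1 301 ...
      curl -sI http://mysite.com/somepath/     | head -n1    # expect: HTTP/1.1 200 ...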

  • My virtualhost not working for non-www version

    - by johnlai2004
    I have a development web server (ubuntu + apache) that can be accessed via the url glacialsummit.com. For some reason, http://www.glacialsummit.com serves pages from the /srv/www/glacialsummit.com/ directory, but http://glacialsummit.com serves pages from the /var/www/ directory. Here's what some of my virtualhost config files look like filename: /etc/apache2/sites-enabled/glacialsummit.com <VirtualHost 97.107.140.47:80> ServerAdmin [email protected] ServerName glacialsummit.com ServerAlias www.glacialsummit.com DocumentRoot /srv/www/glacialsummit.com/public_html/ ErrorLog /srv/www/glacialsummit.com/logs/error.log CustomLog /srv/www/glacialsummit.com/logs/access.log combined </VirtualHost> <VirtualHost 97.107.140.47:443> ServerAdmin [email protected] ServerName glacialsummit.com ServerAlias www.glacialsummit.com DocumentRoot /srv/www/glacialsummit.com/public_html/ ErrorLog /srv/www/glacialsummit.com/logs/error.log CustomLog /srv/www/glacialsummit.com/logs/access.log combined SSLEngine on SSLCertificateFile /etc/ssl/localcerts/www.glacialsummit.com.crt SSLCertificateKeyFile /etc/ssl/localcerts/www.glacialsummit.com.key <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 </VirtualHost> <VirtualHost 97.107.140.47:80> ServerAdmin [email protected] ServerName project.glacialsummit.com ServerAlias www.project.glacialsummit.com DocumentRoot /srv/www/project.glacialsummit.com/public_html/ ErrorLog /srv/www/project.glacialsummit.com/logs/error.log CustomLog /srv/www/project.glacialsummit.com/logs/access.log combined </VirtualHost> ## i have many other vhosts that work fine in this file filename /etc/apache2/sites-enabled/000-default <VirtualHost 97.107.140.47:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> filename: /etc/apache2/ports.conf NameVirtualHost 97.107.140.47:80 Listen 80 <IfModule mod_ssl.c> # SSL name based virtual hosts are not yet supported, therefore no # NameVirtualHost statement here Listen 443 </IfModule> How do I make http://glacialsummit.com serve web pages from /srv/www/glacialsummit.com/public_html just like http://www.glacialsummit.com?
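
    Two quick checks - a sketch: confirm both names really resolve to the address the vhosts are bound to, and ask Apache which vhost it has mapped to each name:

      dig +short glacialsummit.com www.glacialsummit.com     # both should print 97.107.140.47
      apache2ctl -S                                          # dumps the parsed name-to-vhost map

    If glacialsummit.com resolves to a different IP, the request misses the NameVirtualHost 97.107.140.47:80 set entirely and falls through to the default /var/www configuration, which would produce exactly the behaviour described.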

  • DVD Drive Failing on Windows 7

    - by Seth Spearman
    Hello, I have x64 Windows 7 running on an ASUS M50VM. The DVD drive works completely unreliably if not at all. But the story is not that simple so bear with me...here are the gory details. When I first got the machine it came with Windows XP and I upgraded it to Windows Vista x64 and the DVD worked fine. When Windows 7 RC2 came out I tried it on a Virtual Machine and I liked it so much that I upgraded the machine to Win7 RC1. The DVD worked fine. Of course, RC1 was going to start spontaneously rebooting, so when Windows 7 was released I DID A CLEAN INSTALL of Windows 7. Just to clarify...by clean install I mean I did a FORMAT of the HARD DRIVE and INSTALLED it from scratch. EVER since then the DVD mostly doesn't work. I can sometimes read from a disk but that will often hang. (Please see my description below of HANG for details.) CD or DVD writes ALWAYS fail with a HANG (I have done a successful write only one time.) Here is what I mean by HANG...
    * Explorer window is unresponsive.
    * Any software accessing the DVD drive is unresponsive.
    * The DVD tray will not eject.
    * Using a paper clip will eject but the disk is usually spinning real hard.
    * Attempting to shut down Windows will fail. I have waited as long as ten minutes but the whole OS seems to hang. I do a hard shutdown.
    * Sometimes accessing the DVD (when it does not cause a HANG) will still fail and the device will actually seem to disappear from the system until I reboot.
    A couple of other things: it is NOT a hardware failure. It is the Windows OS. I know this because I swapped out my DVD drive with a friend with the same model...his machine is fine (he is still running Vista x64) and my machine still fails. For what it is worth, I swapped out my primary disk with the INTEL 160GB SSD. EDIT Here is what System Information shows about my DVD drive:
    Drive: D:
    Description: CD-ROM Drive
    Media Loaded: No
    Media Type: DVD Writer
    Name: HL-DT-ST DVDRAM GSA-T50N ATA Device
    Manufacturer: (Standard CD-ROM drives)
    Status: OK
    Transfer Rate: -1.00 kbytes/sec
    SCSI Target ID: 0
    PNP Device ID: IDE\CDROMHL-DT-ST_DVDRAM_GSA-T50N________________RR04____\5&2B5B7F1D&0&1.0.0
    Driver: c:\windows\system32\drivers\cdrom.sys (6.1.7600.16385, 144.00 KB (147,456 bytes), 7/13/2009 7:19 PM)
    Any ideas? HELP! Seth B Spearman
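
    Given the clean reinstall in the history, a common software-side culprit on Windows 7 is a stale CD/DVD class filter driver left behind by burning or virtual-drive software. A standard check-and-fix from an elevated prompt (hedged - export the key first so you can restore it):

      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}"
      :: if UpperFilters/LowerFilters list anything, remove them and reboot:
      reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v UpperFilters /f
      reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}" /v LowerFilters /f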

  • Multiple SSL Certificates Running on Mac OS X 10.6

    - by frodosghost.mp
    I have been running into walls with this for a while, so I posted at stackoverflow, and I was pointed over here... I am attempting to set up multiple IP addresses on Snow Leopard so that I can develop with SSL certificates. I am running XAMPP - I don't know if that is the problem, but I guess I would run into the same problems, considering the built-in Apache is turned off. So first up I looked into starting up the IPs on startup. I got up and running with a new StartupItem that runs correctly, because I can ping the IP addresses: ping 127.0.0.2 ping 127.0.0.1 And both of them work. So now I have IP addresses, which as you may know are not standard on OS X. I edited the /etc/hosts file to include the new sites too: 127.0.0.1 site1.local 127.0.0.2 site2.local I had already changed the httpd.conf to use the httpd-vhosts.conf - because I had a few sites running on the one IP address. I have edited the vhosts file so a site looks like this: <VirtualHost 127.0.0.1:80> DocumentRoot "/Users/jim/Documents/Projects/site1/web" ServerName site1.local <Directory "/Users/jim/Documents/Projects/site1"> Order deny,allow Deny from All Allow from 127.0.0.1 AllowOverride All </Directory> </VirtualHost> <VirtualHost 127.0.0.1:443> DocumentRoot "/Users/jim/Documents/Projects/site1/web" ServerName site1.local SSLEngine On SSLCertificateFile "/Applications/XAMPP/etc/ssl-certs/myssl.crt" SSLCertificateKeyFile "/Applications/XAMPP/etc/ssl-certs/myssl.key" SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown <Directory "/Users/jim/Documents/Projects/site1"> Order deny,allow Deny from All Allow from 127.0.0.1 AllowOverride All </Directory> </VirtualHost> In the above code, you can change the 1's to 2's and it is the setup for the second site. They do use the same certificate, which is why they are on different IP addresses. I also included the NameVirtualHost information at the top of the file: NameVirtualHost 127.0.0.1:80 NameVirtualHost 127.0.0.2:80 NameVirtualHost 127.0.0.1:443 NameVirtualHost 127.0.0.2:443 I can ping site1.local and site2.local. I can use telnet ( telnet site2.local 80 ) to get into both sites. But in Safari I can only get to the first site1.local - navigating to site2.local gives me either the localhost main page (which is included in the vhosts) or an Access forbidden! error. I am unsure what to do; any suggestions would be awesome.
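
    Two things to verify - a sketch: whether Apache itself picks the right vhost when you hit 127.0.0.2 with the right Host header (separating an Apache problem from a name-resolution one), and whether Snow Leopard is still serving a cached lookup for site2.local:

      curl -vk --header 'Host: site2.local' https://127.0.0.2/
      /Applications/XAMPP/xamppfiles/bin/apachectl -S     # XAMPP's apachectl; dumps the vhost map
      dscacheutil -flushcache                             # flush the 10.6 DNS/hosts cache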

  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    I'm facing a lot of TCP timeouts on a busy cache server; below are my sysctl.conf configuration as well as the output of "netstat -st". Kernel 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux. Any advice or help would be highly appreciated. #################### Sysctl.conf cat /etc/sysctl.conf net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 fs.file-max = 65536 net.ipv4.tcp_low_latency = 1 net.core.wmem_max = 8388608 net.core.rmem_max = 8388608 net.ipv4.ip_local_port_range = 1024 65000 fs.aio-max-nr = 131072 net.ipv4.tcp_fin_timeout = 10 net.ipv4.tcp_keepalive_time = 60 net.ipv4.tcp_keepalive_intvl = 10 net.ipv4.tcp_keepalive_probes = 3 kernel.threads-max = 131072 kernel.msgmax = 32768 kernel.msgmni = 64 kernel.msgmnb = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_forward = 1 net.ipv4.tcp_timestamps = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_syncookies = 1 net.ipv4.ip_dynaddr = 1 vm.swappiness = 0 vm.drop_caches = 3 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_ecn = 0 net.ipv4.tcp_max_orphans = 131072 net.ipv4.tcp_orphan_retries = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.tcp_max_syn_backlog = 32768 net.core.netdev_max_backlog = 131072 net.ipv4.tcp_mem = 6085248 16227328 67108864 net.ipv4.tcp_wmem = 4096 131072 33554432 net.ipv4.tcp_rmem = 4096 174760 33554432 net.core.rmem_default = 33554432 net.core.rmem_max = 33554432 net.core.wmem_default = 33554432 net.core.wmem_max = 33554432 net.core.somaxconn = 10000 # ################ Netstat results /# netstat -st IcmpMsg: InType0: 2 InType3: 233754 InType8: 56251 InType11: 23192 OutType0: 56251 OutType3: 437 OutType8: 4 Tcp: 20680741 active connections openings 63642431 passive connection openings 1126690 failed connection attempts 2093143 connection resets received 13059 connections established 2649651696 segments received 2195445642 segments send out 183401499 segments retransmited 38299 bad segments received. 14648899 resets sent UdpLite: TcpExt: 507 SYN cookies sent 178 SYN cookies received 1376771 invalid SYN cookies received 1014577 resets received for embryonic SYN_RECV sockets 4530970 packets pruned from receive queue because of socket buffer overrun 7233 packets pruned from receive queue 688 packets dropped from out-of-order queue because of socket buffer overrun 12445 ICMP packets dropped because they were out-of-window 446 ICMP packets dropped because socket was locked 33812202 TCP sockets finished time wait in fast timer 622 TCP sockets finished time wait in slow timer 573656 packets rejects in established connections because of timestamp 133357718 delayed acks sent 23593 delayed acks further delayed because of locked socket Quick ack mode was activated 21288857 times 839 times the listen queue of a socket overflowed 839 SYNs to LISTEN sockets dropped 41 packets directly queued to recvmsg prequeue.
79166 bytes directly in process context from backlog 24 bytes directly received in process context from prequeue 2713742130 packet headers predicted 84 packets header predicted and directly queued to user 1925423249 acknowledgments not containing data payload received 877898013 predicted acknowledgments 16449673 times recovered from packet loss due to fast retransmit 17687820 times recovered from packet loss by selective acknowledgements 5047 bad SACK blocks received Detected reordering 11 times using FACK Detected reordering 1778091 times using SACK Detected reordering 97955 times using reno fast retransmit Detected reordering 280414 times using time stamp 839369 congestion windows fully recovered without slow start 4173098 congestion windows partially recovered using Hoe heuristic 305254 congestion windows recovered without slow start by DSACK 933682 congestion windows recovered without slow start after partial ack 77828 TCP data loss events TCPLostRetransmit: 5066 2618430 timeouts after reno fast retransmit 2927294 timeouts after SACK recovery 3059394 timeouts in loss state 75953830 fast retransmits 11929429 forward retransmits 51963833 retransmits in slow start 19418337 other TCP timeouts 2330398 classic Reno fast retransmits failed 2177787 SACK retransmits failed 742371590 packets collapsed in receive queue due to low socket buffer 13595689 DSACKs sent for old packets 50523 DSACKs sent for out of order packets 4658236 DSACKs received 175441 DSACKs for out of order packets received 880664 connections reset due to unexpected data 346356 connections reset due to early user close 2364841 connections aborted due to timeout TCPSACKDiscard: 1590 TCPDSACKIgnoredOld: 241849 TCPDSACKIgnoredNoUndo: 1636687 TCPSpuriousRTOs: 766073 TCPSackShifted: 74562088 TCPSackMerged: 169015212 TCPSackShiftFallback: 78391303 TCPBacklogDrop: 29 TCPReqQFullDoCookies: 507 TCPChallengeACK: 424921 TCPSYNChallenge: 170388 IpExt: InBcastPkts: 351510 InOctets: -609466797 OutOctets: -1057794685 InBcastOctets: 75631402 #
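    A quick, hedged way to re-apply and sanity-check the tuning above (a sketch, not a tuning recommendation): note that net.ipv4.tcp_tw_recycle only takes effect when TCP timestamps are enabled, and this config sets net.ipv4.tcp_timestamps = 0; vm.drop_caches is also a one-shot trigger rather than a persistent setting, so it does not belong in sysctl.conf.

        # reload the file and confirm the values the kernel actually took
        sysctl -p /etc/sysctl.conf
        sysctl net.ipv4.tcp_tw_recycle net.ipv4.tcp_timestamps
        # watch timeout/retransmit counters move over a ten-second window
        netstat -st | grep -Ei 'timeout|retrans'
        sleep 10
        netstat -st | grep -Ei 'timeout|retrans'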

    Read the article

  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance, we have one Tomcat for development and testing and one for production. project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here's the virtual host's configuration: <VirtualHost *:80> ServerName project.demo.us.com CustomLog logs/project.demo.us.com/access_log combined env=!VLOG ErrorLog logs/project.demo.us.com/error_log DocumentRoot /var/www/vhosts/project.demo.us.com <Directory /var/www/vhosts/project.demo.us.com> Allow from all AllowOverride All Options -Indexes FollowSymLinks </Directory> ########## ########## ########## JkMount /project/* online </VirtualHost> The JkMount line specifies that we use the online worker, and our workers.properties contains this: worker.list=..., online, ... worker.online.port=7703 worker.online.host=localhost worker.online.type=ajp13 worker.online.lbfactor=1 Tomcat's conf/server.xml contains: <Connector port="7703" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" URIEncoding="UTF-8" maxThreads="80" minSpareThreads="10" maxSpareThreads="15"/> I'm not sure what redirectPort is, but I tried to telnet to that port and no one is answering, so it shouldn't matter? Tomcat's webapps directory contains project.war and the server automatically deployed it under the project directory, which contains index.jsp and hello.html. The latter is for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's HTTP Status 404 - The requested resource () is not available. The same thing happens with hello.html, so it's not working with static content either. Apache's access_log contains: 88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2" I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's 503 Service Temporarily Unavailable, so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path the Tomcat is using to look for requested files?
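    One low-risk way to check whether requests for /project/* ever reach the "online" worker is mod_jk's own logging (a sketch; the log path is an assumption, not taken from the question):

        # Apache-side mod_jk directives, placed alongside the JkMount
        JkLogFile /var/log/httpd/mod_jk.log
        JkLogLevel debug
        # log which worker served each request (%w), plus server name and duration
        JkRequestLogFormat "%w %V %T"

    If the worker shows up in the log but Tomcat still answers 404, the next thing to compare is the path Apache forwards against the context name Tomcat actually deployed from project.war.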

    Read the article

  • Blank displays after inactivity, mouse cursor not showing up

    - by Mike Christiansen
    I have a Windows 7 Enterprise x86 machine that is exhibiting some strange problems. After some inactivity (it varies how long it takes), when I come back to my computer, both of my monitors are black. The monitors are on, with a black display. Moving the mouse and pressing keys on the keyboard do not work. This happens at least once a day. When I first encountered the issue, I performed a hard shutdown and restarted the computer. About 10% of the time when I did this, the resolutions on my monitors were messed up. The native resolution on the primary monitor is 1600x900, and the native on my secondary is 1440x900. None of the widescreen resolutions would be present in the display properties. I had 800x600, 1024x768, and 1280x1024. As a workaround to this issue, I've found that if I plug in my secondary monitor as the primary monitor, and leave the secondary unplugged, then Windows will start in 1024x768 on the secondary. Then, I can use Windows 7's "Detect" feature and it finds the 1440x900 resolution, and sets it. Then I have to connect the primary monitor to the secondary port and set it as the primary. Then I can switch the two cables, putting the primary into the primary, the secondary into the secondary. Then use the "Detect" feature again, and it finds the correct resolutions. Some time after performing the above, I discovered that when the screens were blanked, I could simply press "Control+Alt+Delete", type in my password, and everything comes up fine - except my mouse cursor. All applications and features work as intended, except there is no mouse cursor (the mouse actually works though...). I have the "Show location of pointer when I press CTRL key" option selected, so I can press CTRL and use my mouse as normal - but I have no actual cursor. When the computer is in this state, UAC is also not functional. Whenever a UAC prompt appears, the screens go black (the original symptoms) and I have to press Escape to exit UAC. Because of the above symptoms (the UAC black screen, etc.) I believe winlogon.exe may be the culprit. However, I have no idea how to fix it. I am unable to restart winlogon.exe due to the problems I am having with winlogon.exe (the UAC black screen). Looking for any ideas.... More information: Windows 7 x86 Enterprise Domain environment (I have a secondary account with administrator rights) Dell Optiplex 960 Cannot perform "optional" Windows updates due to an activation issue (I am testing a Windows 7 image, and an activation infrastructure has not been created yet. However, this issue was happening before I was unable to perform Windows Update, and Windows was up to date at the time) Updated video card driver (as an attempt to fix the resolution issue) Disabled all power saving options Please let me know what else you need to help me solve this issue! Thanks in advance, Mike

    Read the article

  • CPU Lockup when loading folder of bookmarks in Firefox

    - by Gary M. Mugford
    I am running Firefox 3.6 on WinXP SP3 on a dual-core machine with 4 GB of memory. I am also running Avast! free anti-virus and ZoneAlarm free firewall, both latest versions. Within the last month, my service provider basically forced me to upgrade to a DOCSIS 3.0 compliant modem (offered me a deal I couldn't turn down). At that point, I also upgraded to FF3.6. Basically, I am not unhappy with many aspects of this switchover, EXCEPT that when I now load a folder of bookmarks (anywhere from 10 to 38) I get nothing like the load times I experienced a couple of months back. It's taking minutes rather than seconds. And the first bookmark in one of the folders, GMail, rarely loads before timing out. I have used the old trick of powering off the cable modem before my day's work. This used to fix 'load-lag' in the old days. I have switched from my ISP's DNS server to Google, OpenDNS and back. And nothing seems to work currently. It's not my DNS cache. That's been flushed, and secondary computers also have the same issue when loading folders. I have watched the CPU usage, and loading the folder will send VSMon (ZoneAlarm) usage over 40 percent, AvastSvc (Avast!) over 30, and Firefox will then push the needle to 100. There's a brief burst by SVCHost when the others falter in devouring cycles. Then everything subsides to single digits once the last tab is loaded. The only other nominal nastiness is VSMon ALWAYS hitting 50 percent when ANY program starts downloading content from the internet. If I shut down ZoneAlarm (and VSMon with it), the same slow loading takes place, but this time System is running 50% plus, again driving the usage to 100 percent. I have my doubts FF3.6 vs FF3.5 is an issue, since the other computers are still running 3.5 and suffer the same issue. Those computers are on, but inactive, most of the time, being backups. Obviously, when the CPU hits 100, I can't do much of anything in Firefox OR in other programs. Video playback through WMPC or VLC is extremely choppy, although it doesn't seem to affect the audio. Any ideas what I can try next? Thanks, GM

    Read the article

  • Why apcupsd won't see the UPS connected to the USB port on FreeBSD 8.0 amd64

    - by Max Kosyakov
    Hello, Recently I installed apcupsd on a FreeBSD 8.0 amd64 box via the ports system. It installed perfectly but it won't run. Here is what it says in the log: FATAL ERROR in generic-usb.c at line 636 Cannot find UPS device It appeared that the HID driver had picked up the device at /dev/ugen4.2, which could cause apcupsd to be unable to find it. After I had discovered this, I rebuilt the kernel and removed the HID driver. Now it just shows "ugen4.2: <Tripp Lite> at usbus4" and no uhid0 device appears. Nevertheless the problem persisted. I tried to leave the DEVICE config setting blank --- won't help. Then I specified the particular device in the config, but it did not help either. Below is the output of several commands that can provide some useful information on my case. server# /usr/local/etc/rc.d/apcupsd start Starting apcupsd. server# tail /var/log/messages | grep apcupsd Jun 17 22:30:00 server apcupsd[1520]: apcupsd FATAL ERROR in generic-usb.c at line 636 Cannot find UPS device -- For a link to detailed USB trouble shooting information, please see . Jun 17 22:30:00 server apcupsd[1520]: apcupsd error shutdown completed server# cat /usr/local/etc/apcupsd/apcupsd.conf ## apcupsd.conf v1.1 ## UPSCABLE usb UPSTYPE usb DEVICE /dev/ugen4.2 LOCKFILE /var/lock UPSCLASS standalone UPSMODE disable server# dmesg | grep '^u' uhci0: port 0xa800-0xa81f irq 16 at device 26.0 on pci0 uhci0: [ITHREAD] uhci0: LegSup = 0x0f00 usbus0: on uhci0 uhci1: port 0xa880-0xa89f irq 21 at device 26.1 on pci0 uhci1: [ITHREAD] uhci1: LegSup = 0x0f00 usbus1: on uhci1 uhci2: port 0xac00-0xac1f irq 18 at device 26.2 on pci0 uhci2: [ITHREAD] uhci2: LegSup = 0x0f00 usbus2: on uhci2 usbus3: EHCI version 1.0 usbus3: on ehci0 uhci3: port 0xa080-0xa09f irq 23 at device 29.0 on pci0 uhci3: [ITHREAD] uhci3: LegSup = 0x0f00 usbus4: on uhci3 uhci4: port 0xa400-0xa41f irq 19 at device 29.1 on pci0 uhci4: [ITHREAD] uhci4: LegSup = 0x0f00 usbus5: on uhci4 uhci5: port 0xa480-0xa49f irq 18 at device 29.2 on pci0 uhci5: [ITHREAD] uhci5: LegSup = 0x0f00 usbus6: on uhci5 usbus7: EHCI version 1.0 usbus7: on ehci1 uart0: port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0 uart0: [FILTER] usbus0: 12Mbps Full Speed USB v1.0 usbus1: 12Mbps Full Speed USB v1.0 usbus2: 12Mbps Full Speed USB v1.0 usbus3: 480Mbps High Speed USB v2.0 usbus4: 12Mbps Full Speed USB v1.0 usbus5: 12Mbps Full Speed USB v1.0 usbus6: 12Mbps Full Speed USB v1.0 usbus7: 480Mbps High Speed USB v2.0 ugen0.1: at usbus0 uhub0: on usbus0 ugen1.1: at usbus1 uhub1: on usbus1 ugen2.1: at usbus2 uhub2: on usbus2 ugen3.1: at usbus3 uhub3: on usbus3 ugen4.1: at usbus4 uhub4: on usbus4 ugen5.1: at usbus5 uhub5: on usbus5 ugen6.1: at usbus6 uhub6: on usbus6 ugen7.1: at usbus7 uhub7: on usbus7 uhub0: 2 ports with 2 removable, self powered uhub1: 2 ports with 2 removable, self powered uhub2: 2 ports with 2 removable, self powered uhub4: 2 ports with 2 removable, self powered uhub5: 2 ports with 2 removable, self powered uhub6: 2 ports with 2 removable, self powered uhub3: 6 ports with 6 removable, self powered uhub7: 6 ports with 6 removable, self powered ugen4.2: at usbus4 server#
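    A hedged first check (the unit numbers come from the question; this is a diagnostic sketch, not a confirmed fix): verify that the generic USB device node apcupsd should probe actually exists and is readable, since the error comes from apcupsd's generic-usb driver, which on FreeBSD goes through libusb and the ugen device rather than uhid.

        # dump the descriptor of the device the log complains about
        usbconfig -d ugen4.2 dump_device_desc
        # confirm the device node exists and its permissions allow apcupsd to open it
        ls -l /dev/ugen4.2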

    Read the article
