Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • What can an inexperienced admin expect after a server setup that completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything right so far. This is the very first time he is solely in charge of a LAMP server. He has installed the OS, configured networking, Apache, PHP, MySQL, Proftpd, and MTA & MDA software, set up VirtualHosts properly, done user management, and applied various configuration settings following security recommendations... and everything is fine for now. For now. If you were directing a horror movie about the admin described above, what bogeyman would you invent to come after him? Leaving aside hardware disasters, which one cannot do anything about 'from remote' anyway, what are the most common causes of significant server (or server-related) failure when the server is managed by an inexperienced admin? I have in mind the things newbie admins most often miss that later force someone with experience to step in: an uncontrolled CPU-eating leftover process, a memory-related glitch, a widely used feature that breaks something unexpected, anything like that. For now the newbie admin only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what is likely to happen to his server over time.

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead-simple shell script that does this:

        #!/bin/bash
        #Config Variables. Age is in Days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        #Sanity Checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi

        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        #Get Data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        #Compress old Data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
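    One pattern worth testing (an assumption on my part, not a confirmed fix): some nc variants block waiting on stdin, which under cron is not a terminal, so explicitly detaching stdin, or telling nc not to read it at all, may unstick the capture. A sketch of the two variants:

        # Hypothetical tweaks to the capture line; which one applies depends on the nc build installed
        nc $HOST 2202 < /dev/null > $DATA_ROOT/$FILENAME   # hand nc an explicit (empty) stdin
        nc -d $HOST 2202 > $DATA_ROOT/$FILENAME            # BSD netcat: -d means "do not read from stdin"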

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
    OK, in an attempt to get some response, a TL;DR version. I know that the following command:

        iptables -A PREROUTING -t nat -i eth0 --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080

    will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if someone decides to enable SSH, they can connect directly without worrying about iptables. What is needed to reliably forward all traffic from an external IP to an internal IP, and vice versa?

    Extended version: I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST) with several virtual machines (VM) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VMs run under VirtualBox and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it is lagging badly. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) are running Linux. What I did was the following:

      1. Configure the VM with a static IP on the vboxnet0 adapter's network.
      2. Add an IP alias to the host for the dedicated (outside) IP.
      3. Enable forwarding so traffic can pass through (via sysctl).
      4. Configure iptables to DNAT and SNAT traffic for a given IP address to the internal address.

    The iptables commands:

        sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        sudo iptables -A POSTROUTING -t nat -j MASQUERADE
        iptables -t nat -I PREROUTING -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
        iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

    Now the site works, but is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now. Some background info: before this, the site was working with basic port forwarding, e.g. port 80 mapped to port 8080 using iptables. In VirtualBox (with the network adapter configured as NAT), a port forward in the other direction made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc.). Second, it only allowed one IP address to use port 80. To resolve this, multiple external IP addresses are used for different (sub)domains, and the "VirtualBox" network contains the virtual machines:

        DNS             Ext. IP    Adapter    VM            "VirtualBox" IP
        ----------------------------------------------------------------
        a.example.com   1.1.1.1    eth0:1     vm_guest_1    192.168.56.1
        b.example.com   2.2.2.2    eth0:2     vm_guest_2    192.168.56.2
        c.example.com   3.3.3.3    eth0:3     vm_guest_3    192.168.56.3

    And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (or, put differently, from 1.1.1.1 to 192.168.56.1), and to achieve this with an acceptable speed :).
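    For reference, a minimal one-to-one NAT sketch (my own assumption of what the complete rule set would look like; the interface and address names are placeholders taken from the table above). One thing worth checking against the setup above is whether the blanket MASQUERADE rule competes with the per-IP SNAT, since only the first matching rule in the nat POSTROUTING chain applies:

        # Routing between interfaces must be on for FORWARD to see any traffic
        sysctl -w net.ipv4.ip_forward=1

        OUT_IP=1.1.1.1        # external alias on eth0
        IN_IP=192.168.56.1    # VM on vboxnet0

        # Map every port of the external IP to the VM, and rewrite replies on the way out
        iptables -t nat -A PREROUTING  -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
        iptables -t nat -A POSTROUTING -s $IN_IP  -o eth0 -j SNAT --to-source $OUT_IP

        # Let the forwarded traffic through the filter table in both directions
        iptables -A FORWARD -d $IN_IP -i eth0 -o vboxnet0 -j ACCEPT
        iptables -A FORWARD -s $IN_IP -i vboxnet0 -o eth0 -j ACCEPT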

  • what are these weird IP address connections in resource monitor?

    - by bill
    I decided to check out Resource Monitor (on the 'Performance' tab in Task Manager, Windows 7), and I noticed in the "Network" section that the 'System' image name kept making a bunch (~5 at a time) of connections to random IP addresses, showing anywhere from 1-500 bytes/sec 'sent'. They would stay connected for 1-2 minutes. All web browsers were closed. So, the first thing I did was run a trace from network-tools.com on some of these IP addresses: 8 of 10 were outside the US and did not resolve to any host name. Of the 10 IP addresses I traced, 2 were in the US, 4 showed origins in China, and one each in Algeria, Russia, Pakistan, and Korea. (!) The next thing I did was turn off my wireless card, watch the connections disappear, then turn the card back on, and within 30 seconds more random connections were created by System, with different IP addresses from the first time. Next I opened Task Manager, clicked Show Processes From All Users, and killed just about everything that wasn't (what appeared to be) a Windows process. I turned on wi-fi, and again within 30 seconds random IP addresses connected for about a minute at a time, new ones coming and going. I occasionally use BitTorrent on this machine, but there was definitely no process that seemed related to it running after I went through Task Manager, and it wasn't open to begin with. So, any ideas on what these connections might be for? I have been using Ad-Aware Free and AVG Free on this computer for a while now, always up to date.
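    A sketch of one way to put names to those connections with stock Windows tooling (run from an elevated command prompt; -b needs admin rights, and it cannot name kernel-owned connections, which is exactly what 'System', PID 4, is):

        :: List connections with the owning PID and, where possible, the executable responsible
        netstat -abno

        :: Refresh every 5 seconds to catch the short-lived connections
        netstat -no 5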

  • Ubuntu + lighttpd: server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: receiving data after making a request has increasing intervals of nothing, as you can see from the picture below. http://i49.tinypic.com/2w5lvr9.png I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running a lighttpd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected. Initially, after reinstalling, I copied over the old configuration files from the previous installation, but I have since reinstalled lighttpd and rebuilt the config file from scratch. The only correlation I could find was that I had attempted an installation of ionCube and Zend Optimizer for a script I was testing; however, I would think that could no longer have an impact seeing as I had reinstalled the OS. I have also removed Suhosin just in case, but it had no effect. I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the delay seems to grow exponentially (to a point) worries me. I've tried strace'ing the lighttpd and MySQL processes, but I couldn't see anything obvious, not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say it's perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless since I don't know anything specific. Config files can be posted if requested.
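    One way to put numbers on where each request stalls (a generic sketch; substitute the real URL) is curl's per-phase timing breakdown, run from both the server itself and a remote machine:

        # Per-phase timings for a single request: DNS, TCP connect, first byte, total
        curl -o /dev/null -s -w 'dns: %{time_namelookup}\nconnect: %{time_connect}\nfirst byte: %{time_starttransfer}\ntotal: %{time_total}\n' http://example.com/

    If connect is instant but the first byte takes ages only from remote, that points at the network path rather than the lighttpd/PHP stack.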

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external web services, and we have a very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

        ps -ef | grep httpd | wc -l   # => 300

    I have googled and found numerous suggestions, but nothing seems to work. The following is some of the configuration I have done, taken directly from online resources; I have increased the limits of max connections and max clients in both Apache and Tomcat. Here are the configuration details:

    Apache:

        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     10
            MaxSpareServers     10
            ServerLimit         50000
            MaxClients          50000
            MaxRequestsPerChild 2000
        </IfModule>

    Tomcat:

        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false"
                   maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

    sysctl.conf:

        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; probably I am going wrong with my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second delayed] web service to respond. Please help.
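    One sanity check worth running (a sketch, not a diagnosis): with prefork, every request stuck waiting on the slow web service pins an entire httpd process, so the practical ceiling is RAM divided by the per-child resident size, and a MaxClients of 50000 only matters if children are actually being spawned:

        # Average resident size of the running httpd processes, in KB
        ps -C httpd -o rss= | awk '{s += $1; n++} END {if (n) print s/n " KB average over " n " processes"}'

    If the process count stalls at exactly 300 while memory is still free, note that ServerLimit changes take effect only after a full stop and start of Apache, not a graceful restart, so the running parent may still be enforcing an older, lower limit.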

  • NGINX AEGIR DRUPAL permissions 403 forbidden

    - by nlam
    New to nginx; installed on Mac OS for use with Aegir & Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here: /var/aegir/hostmaster-6.x-1.7/ and the hostmaster settings file is here: /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php. Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other", the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files & sites/aegir.ldev/private); I would have to change the permissions there too, and moreover for every site installed by hostmaster. Anyway. In my nginx.conf I have the following:

        user "myuser" _www;

    Owner and group for settings.php, sites/example.ldev/files, and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem but really confuses me: why does "other" need permission when owner and group apparently don't suffice? I've checked the processes running in Activity Monitor: nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
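    A quick sketch for narrowing this down (assuming a php-fpm/FastCGI backend, which an Aegir-on-nginx stack typically uses; the 403 may be decided by the PHP worker rather than by nginx itself): confirm which user each server-side process really runs under, since the worker's effective user, not the master's, is what the 440 permissions are checked against:

        # Masters usually run as root; the workers' effective user is in column 1
        ps aux | egrep '(nginx|php)' | grep -v grep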

  • System randomly freezes yet mouse still moves

    - by user784446
    This problem has lasted for the past 48 hours. The first time it happened, a program I was running stopped responding, so I tried to end it from Task Manager. The process list displayed fine until I hovered over it. The mouse could still move at first, but after a few persistent clicks it finally stopped moving too, and the screen went blank shortly thereafter. The second time it occurred, items on the screen stopped responding: hovering over the taskbar and the like wouldn't elicit a response, though sound would still play. Eventually the mouse became unresponsive and the system restarted itself. I suspect a problem with my SSD. After looking through some Google search results, I downloaded HD Tune Pro to determine whether there's a problem with the drive. Results reported a reallocated-sector-count problem, and an error scan revealed 48 bad sectors. An attempt to back up the contents of the most important areas of the drive also returned a few Explorer "Error: cannot read source from disk" errors. Should I ditch the drive and use another, or is there anything that can be done to repair it?

        SSD: OCZ Petrol 64GB
        CPU: AMD Athlon II X4 640
        RAM: Generic 3GB DDR2
        Motherboard: Gigabyte MA74GM-S2H
        OS: Windows 7 Ultimate x64

    Thanks!
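    For what it's worth, a sketch of the usual salvage order on Windows (generic commands; with the reallocated-sector count already climbing, copying the data off comes first):

        :: Copy the important data somewhere else FIRST; bad-sector counts tend to grow
        :: Then let chkdsk locate bad sectors and recover what is still readable
        chkdsk C: /r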

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable. In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month it doesn't happen, for reasons outside our control. This heavily impacts our business, since we then run on stale data. In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill ( http://www.github.com/arya/bluepill ), but restarts obviously happen, both automatic and manual, and people forget things or systems mess up. What I would like to track are events that should occur, or things that should exist: the existence of a process, the execution of a program, or the creation/age of a file, with an alert when they don't happen or don't exist. We develop most things in Ruby on Rails, use NewRelic, Bluepill and Munin, and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, stuff like that (see the sketch below). Is there better tooling to track things that should happen, and raise alerts if they don't? P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, and whose connection to us we can track, might be a much better solution.
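    In the spirit of those Munin experiments, a minimal cron-able watchdog sketch (the file path, process name, and age threshold are placeholders):

        #!/bin/sh
        # Exit nonzero, with a reason, if the expected artifacts are missing or stale
        FILE=/data/incoming/latest.dump
        MAX_AGE_MIN=1560     # ~26 hours
        PROC=background_worker

        # pgrep -c counts matching processes without matching itself, unlike ps | grep
        if [ "$(pgrep -c "$PROC")" -eq 0 ]; then
            echo "CRITICAL: $PROC not running"; exit 2
        fi

        # find -mmin +N matches files strictly older than N minutes
        if [ ! -e "$FILE" ] || [ -n "$(find "$FILE" -mmin +$MAX_AGE_MIN)" ]; then
            echo "CRITICAL: $FILE missing or older than $MAX_AGE_MIN minutes"; exit 2
        fi

        echo OK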

  • connections in FIN_WAIT and CLOSE_WAIT state

    - by Raj
    I would like to elaborate on the setup so you can understand the question and answer more accurately. I have HAProxy as load balancer, 4 web servers (Apache 2.2.3), and one database server (MySQL 5). I am monitoring these servers with Nagios. I have disabled keepalive on Apache, as we have only 8GB of memory. Now, whenever I receive alerts for high memory and CPU utilization, I observe that the connections from Apache to the database server hang in ESTABLISHED state (keepalive with a timeout value of 7200), while on the other side the connections between HAProxy and Apache show status FIN_WAIT on the HAProxy server and CLOSE_WAIT on the Apache side. I also see huge memory swapping, with Apache taking most of the memory. I ran strace on an Apache process and got no information: strace attaches to the Apache process but produces no output. The process list on the MySQL server shows those connections' threads in sleep mode. The application on the web servers is Magento, a PHP application. If you need further information, please let me know. Thanks.
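    A sketch for quantifying the pile-up while it happens (standard tooling, nothing HAProxy-specific), run on both the HAProxy box and an affected Apache box:

        # Count TCP connections grouped by state
        netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn

    CLOSE_WAIT on the Apache side generally means the peer has closed the socket and the local application never called close(), so a growing CLOSE_WAIT count alongside MySQL threads idle in sleep would be consistent with Apache/PHP processes wedged mid-request.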

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions on /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options: allow _www access to my Documents folder; put the files I want to share outside of my Documents folder; or hard-link the files outside of my Documents folder and point apache to the hard links. None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. And the files are changing all the time, so hard links would not always reflect the right file structure; besides, as I understand it, you can't hard-link a directory (though if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes under my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
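    One avenue that may square the circle (a sketch based on plain Unix permission semantics, not anything Apache-specific): the execute bit on a directory grants traversal without granting the right to list its contents, so _www could pass through Documents to a readable abc without being able to browse Documents itself:

        # o+x (search only): _www can traverse Documents but cannot list it
        chmod o+x /Users/Me/Documents
        # keep the shared subtree itself readable (capital X adds execute on directories only)
        chmod -R o+rX /Users/Me/Documents/abc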

  • Debian 100% cpu every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running at Digital Ocean that hits 100% CPU usage for about one minute every 30 minutes. A couple of days ago it stayed there for a couple of hours, so the server finally crashed and I had to repair my MySQL databases. The server is a pure web server running apache2 and MySQL. I tried tracing which processes use the CPU, but with no luck. The script I used:

        #!/bin/sh
        while true; do
            ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt;
            echo "\n" >> proclog.txt;
            sleep 2;
        done

    I was monitoring htop as well while this was happening, but the top processes' CPU usage didn't add up to even ~15%, despite htop's CPU meter showing a constant 100%. htop was configured to show all users' processes, plus user and kernel threads. Edit: By stopping apache2 and MySQL prior to the expected 100% usage, I can tell that neither is responsible for it; the 100% usage occurred anyway. This is what the graph looked like over the past hours:
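    On a VPS, CPU time that no local process accounts for is often steal time, i.e. the hypervisor servicing other tenants, which per-process tools will never attribute to anything. A sketch for checking that hypothesis during a spike:

        # The last column, 'st', is steal time; sustained high values implicate the host, not this VM
        vmstat 5

        # top's summary line breaks the same figure out as %st
        top -bn1 | head -5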

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP, and I am trying to run the Arduino IDE 0022. I double-click arduino.exe, it waits about 30 seconds on the load-up title screen, and then it gives me this error:

        Launch 4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer, so something must still be running after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and all of my Java software is up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried: different Arduino IDE versions, updating Java, and opening arduino.exe as Administrator. Nothing has worked. Anyone have any suggestions?
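    A sketch for clearing the stuck processes without a reboot (standard XP Professional tooling; XP Home ships tskill instead of taskkill):

        :: Kill all leftover Java processes holding awt.dll, then retry the delete
        taskkill /F /IM javaw.exe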

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

        sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However, I get an "unknown option -w" error. This slightly modified command:

        sudo modprobe -r usb_storage

    fails with the message "FATAL: Module usb_storage is in use." If I try to kill -9 the processes marked [usb-storage] first, they refuse to die (I think because they are kernel threads rather than normal processes). Anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
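    One approach that can emulate a replug without touching the module (a sketch; the "1-4" port ID is a placeholder to be read off the sysfs listing or dmesg):

        # Bus/port IDs of attached devices appear as entries like "1-4"
        ls /sys/bus/usb/drivers/usb/

        # Unbind and rebind the device: roughly the software equivalent of unplug/replug
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/bind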

  • Mac always boots with incorrect display gamma (for years now including Lion)

    - by Alex Wayne
    I think somewhere, something got installed, but I have no idea what or how to fix it :( Basically, my old MacBook Pro running 10.5 Leopard had a problem where on boot it would show everything on the screen in a sort of crunched color space: everything below 15% white would be pure black, everything above 85% white pure white, and all colors looked a touch more saturated. It's garish. To fix it, I found that I could boot into almost any fullscreen 3D game. When the game launches the colors are still off, but when I then quit the game and return to the desktop, everything is normal again. I've noticed Blizzard games work most reliably for this (World of Warcraft or StarCraft 2). This problem has followed me through the years: when I upgraded to an iMac I migrated everything over to it, and the issue happened on the iMac too. I then got a new MacBook Pro for work and migrated the iMac over to it, and it has the problem as well. I had thought it was an OS bug, but upgrading to 10.6 Snow Leopard didn't fix it, and neither did 10.7 Lion. Furthermore, I can't find any forum or help site where anyone else has this problem. If anyone has any idea what processes, settings, or apps I should look at to figure out why this is happening, I sure would appreciate it! It looks sort of irresponsible when I open my laptop in the office to work and have to boot up StarCraft 2 full screen...

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtually hosted server running CentOS. In the past month, a (Java-based) process that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed: when I start the process, its PID says it is using 470MB of RAM, while the 'used' memory immediately drops by over 1GB. If I run top, the total RES used across all processes falls short of the 'used' listed at the top by almost 700MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in top. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server setup. Would you also suspect a leaking Java process in this case? free before:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:     149264    1947888
        Swap:            0          0          0

    free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:    1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452MB, does that mean the kernel is suddenly using an additional 500MB?
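    A sketch of where to look for the difference (generic Linux accounting, with the caveat that container-style virtualization such as OpenVZ does its own memory bookkeeping, which the all-zero buffers/cached columns above hint at):

        # Kernel-side consumers that never appear in any process's RES
        grep -E 'Slab|PageTables|Committed_AS|VmallocUsed' /proc/meminfo

        # Total resident set size over all processes, in KB, to compare against 'used'
        ps -e -o rss= | awk '{s += $1} END {print s " KB total RSS"}'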

  • How do I know if I managed to completely remove an undetected trojan?

    - by ubuntuisbetter
    I caught a trojan that uses explorer.exe to reproduce itself whenever its autostart entry or its main exe file in Programs/x is deleted. It had already tried to contact a suspicious server through explorer.exe; I blocked that via my firewall. I: removed the autostart entries from the registry; looked through my services for anything suspicious; deleted the trojan from Programs/; and went through System Volume Information to find a 2-month-old explorer.exe to replace the possibly infected one. There are no suspicious processes running anymore (no duplicate explorer.exe), and nothing tries to connect to the trojan owner's server either. I checked my system with several anti-malware programs too. What the trojan did: it started a second explorer.exe; whenever I deleted the main trojan exe file, it was reproduced (by the second explorer.exe); and whenever I deleted the autostart entry, it was likewise reproduced by that explorer.exe. When I terminated the suspicious explorer.exe, which used only half as much memory as the less suspicious one from Windows, a strange thing happened that I know from the computers in my informatics class: a window popped up in the top left of my explorer-less desktop, titled "Personal settings for ... are ...", that apparently copied some files. Then both explorer.exes started again and the trojan was everywhere again. What did the trojan actually do to get explorer to rescue it? Is my PC clean of this newish trojan now? What other locations should I check for the trojan? The trojan doesn't seem very high-level; could it have changed other system files, or is the autostart entry vital to it? Thanks in advance, your trojan-paranoid friend (getting Linux in a week)

  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share \\server\share whose documents folder I want to reach as /foo/Documents in my web application. My application is able to access the documents when I set the file path to //server/share/documents/doc1.doc. I don't want the file server's path to be exposed in my application's links to the file; I want to be able to set the path to /foo/Documents/doc1.doc. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html under 'Setting the Context Root Directory and Request URL of a Webapp', item two says that I can rename the path by putting a context into the server.xml file. So I put:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
          -->

          <!-- Access log processes all example.
               Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />

          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>

        </Host>

    The context at the bottom was added. Then I tried to pull the file using /foo/Documents/doc1.doc, and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues this may cause?
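    For what it's worth, a sketch of the alias route mentioned at the end (my assumption of how it would look on Tomcat 7, whose Context element supports an aliases attribute mapping a URL sub-path to an external directory; the docBase name "foo" is a hypothetical webapp directory, and the share path is the one from the question):

        <!-- Serve the share's documents folder at /foo/Documents while /foo stays a normal webapp -->
        <Context path="/foo" docBase="foo" reloadable="false"
                 aliases="/Documents=//server/share/documents"/>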

  • Imported Hyper-V VM hangs

    - by BParker
    I have a pair of Hyper-V servers (Server 2008 R2 + Hyper-V role). I have duplicated one VM (exported from the first Hyper-V server to the second), and the copy seems to keep hanging. The VM runs XP SP3 and has the Virtual Additions installed. The first copy runs fine: CPU normally around 0-2% when idle, and it responds happily. The second copy runs very poorly and idles at around 8% CPU. I have checked the VM with Process Explorer, and the Idle process is (as expected) taking nearly all the CPU time, but what it's not using doesn't seem to show up in any other process; for example, at one point Idle dipped to 73%, but I could only account for another 4.5% in other processes. After some time (around 1-2 days) the VM becomes so unresponsive that the only way to resurrect it is to power off the VM and reboot (shutdown via the Hyper-V console or via the VM seems to be impossible). Does anyone have any suggestions on how to trace the source of the problem? Thanks

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse SSH tunnel alive with autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except that after a manual power-off I can no longer connect to the machine through the "central server" using the tunnel; I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client, and I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. It sits behind a firewall and uses the following upstart-invoked command to create the tunnel:

        /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver

    Machine B, which we'll call the "central server", sits in the cloud and is the host ("centralserver" in the command above). When Machine A is hard-powered off and back on, I cannot reach it by SSH'ing from my machine (C) to Machine B, then using the following command to get to Machine A:

        ssh -p 20098 user@localhost

    Again, after a normal reboot of the client (A), this works fine; it is only after a hard power-down that the problem occurs. There are autossh processes running on the client machine (A) after powering back up, but they just don't seem to be doing their job.
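    One hypothesis worth testing (a sketch, not a confirmed cause): after a hard power-off, the central server never saw the old connection close, so the half-dead session can keep port 20098 bound until TCP gives up, and the new autossh session comes up without its remote forward. Keepalives on both sides, plus failing fast when the forward can't bind, usually cover this:

        # Client side: die (and let autossh restart the session) if the forward can't bind
        /usr/bin/autossh -fN -i /keyfile \
            -o StrictHostKeyChecking=no \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            -o ExitOnForwardFailure=yes \
            -R 20098:localhost:22 user@centralserver

        # Server side (sshd_config on centralserver): reap dead clients so the port frees up
        #   ClientAliveInterval 30
        #   ClientAliveCountMax 3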

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS and Debian machines at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to, it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" the way it would for a normal file). So my question is essentially: could anybody offer any advice about what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
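    For poking at it on the AIX side, a sketch with stock commands (istat shows the inode's updated/modified/accessed times; fuser lists current opens, which on /dev/null will be noisy but can be compared minute to minute):

        istat /dev/null
        fuser /dev/null

    The Debian test above suggests Linux's null driver does not touch timestamps on write, but other implementations may differ; if AIX's does, then anything that redirects output to /dev/null once a minute (a cron or monitoring job, say) would produce exactly this pattern.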

  • Am I safe on Windows if I continue like this?

    - by max
    Of all the tons of anti-malware software available for Windows all over the internet, I've never used any paid solution (I am a student, I have no money). In the last 10 years, my computers running Windows have never been hacked/compromised or infected so badly that I had to reformat them (of course I did reformat them for other reasons). The only security program I have installed on my computers is Avast Home Edition, which is free. It has never caused any problems: it has always detected malware, updates automatically, has an option to sandbox programs, and covers everything else I need. Even when I did get infected, I just did a boot-time scan with it, downloaded and ran Malwarebytes, scanned Autoruns logs, checked running processes with Process Explorer, did a few other things, and made sure I had cleaned my computer. I am quite experienced and I've always taken basic precautions, like not clicking suspicious executables, not visiting sites that are suspicious according to WOT, and all that blah. But recently I've been doing more and more online transactions, and since it's 2012 now, I'm doubtful whether I need more security or not. Have I just been lucky, or do my computing habits obviate the need for any more (or paid) security software?

  • Oracle Linux screen freezes during installation

    - by Fearless
    I was installing Oracle Linux 6.4 on a server when the screen suddenly froze. Here were the previous steps: I put in the disk, clicked install, checked the disk (no errors), did the pre-install setup (clock, root password, host + domain name, etc.), configured two 40GB hard drives in a RAID1 array (no swap; 3100MB encrypted RAID partitions; a ~100MB ext4 partition mounted at /boot; an encrypted ext4 RAID device mounted at /), selected packages, and hit continue. The system did its short pre-install processes, then went to the main installation screen with the long status bar. The installer proceeded like always, but around package 250 of ~1000 the screen suddenly went black, with a text cursor in the upper left corner and the mouse cursor frozen in its previous place. Neither cursor moved, and the only thing that triggered a response was a ctrl-alt-delete, which rebooted it. I have run this in VMs before without this issue. Memtest hasn't reported anything, and the media check went smoothly. The machine has run Ubuntu Server without issues before. Any ideas? I have tried booting since then, but the GRUB bootloader tries to find fd0 for some reason (I have no idea why it would search for the floppy disk). UPDATE: My server successfully installed, but won't boot. I think that, for some reason, it is still using the old bootloader from the previous installation. Any ideas on how to fix that?
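    If it is indeed the old bootloader, a sketch of the standard EL6-style repair from the installer's rescue mode (the disk names are assumptions; with a RAID1 pair, installing to both members lets either disk boot):

        # Boot the install media into rescue mode, let it mount the installed system, then:
        chroot /mnt/sysimage
        grub-install /dev/sda
        grub-install /dev/sdb
        exit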

  • Windows 7 misses keystrokes from internal keyboard after hibernation (on Acer Aspire 5820)

    - by ron
    I face a very strange symptom on my Acer Aspire laptop (with the factory-default Win7 install and drivers, and Windows Update running). After waking the computer from hibernation, it is a pain to type, since on average 5-10 keypresses per 100 are missed on the laptop's keyboard. Steps to reproduce:

      1. Power off.
      2. Power on, wait for the system to become usable.
      3. Open Notepad; 5 times in a row, hit the same character 10x. This gives a pattern of 50 chars like: xxxxxxxxxxyyyyyyyyyyaaaaaaaaaassssssssssdddddddddd
      4. Optionally repeat. Everything is fine this far.
      5. Hibernate.
      6. Power on and resume.
      7. Repeat steps 3-4. This time approximately 3-5 characters will be missing from each 50.

    What I ruled out: putting the machine to Sleep, or just Locking it, and resuming from there does not cause the problem; battery vs. AC does not matter; the network connection does not matter; the running processes seem to be the same before and after hibernation; and keypress speed doesn't really matter (for the test I use a nominal 3-5 strokes/second beat). Plugging in an external USB keyboard works fine, but the built-in one still misbehaves. What could be the problem? How could I diagnose whether the keypresses arrive but get swallowed at some point (maybe some nasty keyboard handler hook misbehaves)? Update: It seems that pushing the PowerSmart button and toggling to the power-saving state fixes the problem. Also, toggling it again back to the original state keeps it fixed. So this may be a fine workaround, but it is not a proper solution.

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have much experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. The hosts are of several types: database, Tomcats, some offline Java processes, and memcached servers. All hosts run Linux (CentOS). Up until now, when releasing a new version to production, we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so it has become more and more difficult to operate all this and to take code from development through QA and staging to production. A typical release often involves human errors that cost us precious uptime, and sometimes we need to revert to the last known good version, which isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following:

      - Support for the given list of technologies (Java, Tomcat, Linux, etc.)
      - Easy deployment through the different stages, including QA and production
      - Configuration management, e.g. setting server properties (the connection URL of each host, etc.), server.xml or context configuration, and so on
      - Monitoring. If monitoring comes in the same package, that'll be nice; if not, then yet another tool we can use to monitor our servers
      - Preferably open source, with tons of documentation ;)

    Can anyone share their experience? Suggest a few tools? Thanks!
