Search Results

Search found 51790 results on 2072 pages for 'long running'.

  • Set up an automatic server reboot when a particular service fails

    - by user1179459
    I am running a Linux server (CentOS 6.0) with cPanel and WHM. It hosts a critical website with a chat feature that uses Openfire as its backend. Over the last few weeks I have noticed that this service crashes quite often; I have no way of knowing when it happens, so I have to wait until the next day to restart the server. (It can only be fixed by a server reboot, as it appears to be a Java memory problem.) Is there a way I can set up a monitoring service so that the server reboots itself when this service goes down? Is this possible, or is there a better way to overcome this problem?
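
    A minimal cron-driven watchdog sketch, assuming the Openfire process can be found with pgrep and that an unconditional reboot is acceptable (the script path, process name and cron interval are assumptions):

        #!/bin/sh
        # /usr/local/sbin/openfire-watchdog.sh
        # run from cron, e.g.: */5 * * * * root /usr/local/sbin/openfire-watchdog.sh
        if ! pgrep -f openfire >/dev/null 2>&1; then
            logger "openfire-watchdog: Openfire is down, rebooting"
            /sbin/shutdown -r now
        fi

    A gentler variant would try restarting only the Openfire service first and reboot only if that fails.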

  • Scheduling VMware ESXi 4.1 VM Restart

    - by Robin Day
    We had a virtual machine running on a VMware Server host on Windows Server 2003. The machine is set up with non-persistent disks, and we had a Windows scheduled task that ran a batch file to reset the machine each week so that it returned to its original state. The batch file was:

        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName.vmx" stop hard
        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName1.vmx" start

    We have since migrated this machine to the free version of ESXi 4.1. Can anyone let me know if and how it is possible to schedule such a restart?
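
    A sketch of one way to do this on the host itself, assuming SSH/Tech Support Mode is enabled on the ESXi box (the VM id, script path and schedule are assumptions):

        # look up the VM's id once:
        vim-cmd vmsvc/getallvms

        # weekly reset script, e.g. /vmfs/volumes/datastore1/reset-vm.sh:
        #!/bin/sh
        vim-cmd vmsvc/power.off 42   # 42 = the id reported by getallvms (assumption)
        vim-cmd vmsvc/power.on 42

    The script can then be scheduled in /var/spool/cron/crontabs/root; note that cron edits on ESXi do not survive a host reboot unless reapplied.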

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching: are there any downsides to raising the read-ahead size? On our farm we are currently running at 256, and upon raising it we are seeing significant throughput gains:

        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   7352 MB in 2.00 seconds = 3677.62 MB/sec
         Timing buffered disk reads:  244 MB in 3.10 seconds = 78.68 MB/sec
        [root@server ~]# blockdev --setra 10240 /dev/sda
        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   11452 MB in 2.00 seconds = 5728.52 MB/sec
         Timing buffered disk reads:  422 MB in 3.17 seconds = 133.04 MB/sec

    We are running on a 2.6 kernel. Thanks!
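
    If the larger value works out, it can be applied at boot (a sketch; the device name and value are whatever your testing settles on):

        # e.g. in /etc/rc.local: read-ahead of 10240 sectors (5 MB) on /dev/sda
        /sbin/blockdev --setra 10240 /dev/sda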

  • Windows 8 Metro UI with Multiple Monitors -- Keep it always maximized?

    - by Jersey Dude
    Before installing Windows 8 today, my plan was to keep the Metro UI up on my smaller laptop monitor and have the classic UI running on my two larger monitors. In reality, however, as soon as I click on something in one of the classic UI monitors, the Metro UI minimizes (exposing the classic UI in its place). Is there any way to keep the Metro UI from minimizing when I do something on another monitor? Oddly enough, if there is an app running/suspended in the Metro window, the Metro UI is not minimized; if the Start screen is currently shown, clicking in a classic window/monitor switches the Metro UI from the Start screen to the last-run app. Very peculiar. Thanks

  • Caching static content from Adobe, Microsoft, etc

    - by Tim
    I'm currently running the Apple SUS on a Mac OS X Server in a small office environment. It works well for Apple updates, but I'm still stuck either manually downloading and installing Adobe/Microsoft updates on each computer, or running them through a Squid cache with blind faith that Squid will keep the files I actually want cached. What is the best way to cache updates locally for applications like Adobe Updater or Microsoft AutoUpdate, ideally in such a way that I can tell which files I do or do not have cached? It would also be nice to cache things for other software like Firefox and Sparkle-enabled apps, but those are usually small enough to ignore.
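
    For the Squid route, a sketch of pinning installer downloads in the cache (these are standard Squid 2.x directives, but the size limit, lifetimes and file-extension pattern are assumptions to adapt):

        # squid.conf: let large update payloads into the cache at all
        maximum_object_size 512 MB
        # keep installer files up to 30 days, even past their stated expiry
        refresh_pattern -i \.(exe|msi|msp|dmg|pkg)$ 43200 100% 43200 override-expire ignore-no-cache ignore-private

    Squid's access.log (TCP_HIT versus TCP_MISS entries) then at least tells you which files were served from cache.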

  • Wireless dropouts that only affect a subset of devices

    - by jwaddell
    When watching videos streamed over WiFi from a NAS box (D-Link DNS-323), I am getting wireless dropouts. However, they only appear to occur when I have left my laptop (Dell Inspiron 9300 running Windows XP SP3) running; the laptop is usually suspended if I'm not using it. The dropouts have occurred when streaming to a Netgear EVA8000 streaming device and also to a PS3. I'm using a Netgear DG834G as the wireless modem/router. When a dropout occurs, I go to the laptop and see that its wireless connection has also dropped. The odd thing is that my wife's MacBook and my iPhone still maintain their connections. What could be causing this behaviour, and how do I go about fixing it?

  • How to circumvent an ISP limiting "unknown" traffic - (SSH) proxy, VPN

    - by connery
    I am having issues using a proxy/VPN with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol, or by traffic they "know" versus traffic they "don't know". My findings so far are below.

    The setup:

    - Internet connection in Spain: ~400-420 KByte/sec (speedtest.net)
    - OpenVPN server in Sweden (pfSense): 100/100 Mbit, LZO compression, TCP, tun, AES-128
    - Squid proxy in Sweden (pfSense, same box as the VPN server): 100/100, plain, no encryption, run in stealth mode to hide the use of a proxy

    NOT running OpenVPN or Squid:

    - Downloading a file from my pfSense box in Sweden: maximum speed
    - speedtest.net against any European server (including Swedish): maximum speed
    - Downloading a torrent (non-default port above 10000, encryption turned off): limited to ~100 KByte/sec
    - Downloading over HTTPS: maximum speed

    Running either Squid or the VPN:

    - Downloading a file from my pfSense box in Sweden: ~100 KByte/sec
    - speedtest.net against any European server (including Swedish and Spanish): ~100 KByte/sec
    - Downloading a torrent: the same ~100 KByte/sec limit
    - Downloading over HTTPS: ~100 KByte/sec

    I verify the speeds above with speedtest.net and Firefox's own measurements, with bmon running in a terminal in the background, so I am certain the numbers are correct. If I connect through a different ISP with the VPN or Squid proxy, I get better speeds (400 KByte/sec and up). In short: whenever I tunnel my traffic through Sweden, my Spanish ISP throttles it. I thought tunnelling through Squid would solve the issue, since I would then no longer be hiding my traffic behind encryption, but that does not seem to be the case. wget and fetch give the same results; I did not try nc, but I assume it would too. Does anyone know how to circumvent this? I would very much like full speed from the Swedish IP, as that would let me stream TV at higher quality than today; 100 KByte/sec just does not cut it. Thanks for reading, and I look forward to your help.
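
    Since plain HTTPS gets full speed, one hedged experiment is to make the tunnel look like HTTPS by moving OpenVPN to TCP port 443 (a sketch of the equivalent openvpn.conf lines, assuming port 443 is free on the pfSense box; the hostname is a placeholder):

        # server side
        proto tcp-server
        port 443

        # client side
        proto tcp-client
        remote your.pfsense.example 443

    If the ISP's classifier still recognizes the OpenVPN handshake even on 443, the next step would be wrapping the tunnel in stunnel so the outer layer is genuine TLS.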

  • Trouble with mod_proxy and mongrel_rails

    - by x3ro
    Hey there. I'm trying to set up a mod_proxy + mongrel combination, but somehow Apache/mod_proxy is unable to access mongrel locally. The following is my configuration for mod_proxy:

        ProxyRequests Off
        ProxyPreserveHost On
        <Location />
            ProxyPass http://localhost:3000/
            ProxyPassReverse http://localhost:3000/
            Order deny,allow
            Allow from all
        </Location>

    Mongrel/Rails is running just fine, because I can access it from my browser, and even with lynx on the server. However, I get the following error when trying to use the proxy:

        [error] [client 127.0.0.1] Invalid Content-Length

    I would appreciate any help :D PS: Oh, and the server is running Plesk to configure vhosts, if that's important.

  • Resolving host names to their domain name in an internal BIND domain

    - by Adam Plumb
    I'm setting up a domain on my home network for learning purposes, using BIND on CentOS to act as the name server. I've got the name server up and running as type master for my internal domain (plumbnicoll.family), and can do forward and reverse lookups from other computers in my LAN. For example, host office2.plumbnicoll.family correctly returns office2.plumbnicoll.family has address 192.168.1.3. What I'd like is to be able to resolve just office2 to its address, without needing to put .plumbnicoll.family at the end. Is this possible, or even desirable to do? I'm running a mixed environment at home with both Linux and Windows computers.
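
    Short-name resolution is normally the resolver's job rather than BIND's, via the search list. A sketch for the Linux clients (the nameserver address is a placeholder for wherever your BIND server listens):

        # /etc/resolv.conf on each Linux client
        nameserver 192.168.1.2      # address of the BIND server (assumption)
        search plumbnicoll.family

    On the Windows machines the equivalent is the connection's DNS suffix, which can also be pushed to every client at once via DHCP option 15 (the domain name option). It is a perfectly normal thing to want on a LAN.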

  • SSL Certificate for local web server

    - by Firefly
    Is it at all possible to create a self-signed certificate for use on multiple machines on a local network that would stop the browser complaining the site is not trusted? We have a product that is basically a computer running lighttpd to serve a web interface for configuring the machine (in the way a router has a web interface), and there can be many of these machines running on the same network, with dynamic IPs. What I basically want is to enable SSL for extra security, without people on the local network being shown a browser warning about the certificate not being trusted. Is this at all possible?
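
    A self-signed certificate on its own will always warn; the usual approach is a private CA whose root certificate you install on the client machines, then a certificate per device signed by it. A sketch with assumed file and host names:

        # one-time: create the private CA
        openssl genrsa -out ca.key 2048
        openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Example Device CA"

        # per device: key, CSR, and a CA-signed certificate
        openssl genrsa -out device.key 2048
        openssl req -new -key device.key -out device.csr -subj "/CN=device1.local"
        openssl x509 -req -in device.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 825 -out device.crt

    The catch with dynamic IPs is that certificates are issued to names, so the boxes also need stable hostnames (via local DNS or similar) for the warnings to go away entirely.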

  • 504 Gateway Timeout on server clusters

    - by Sixfoot Studio
    Hi all, here's our scenario: we're running three websites on three web servers, hosted on virtual machines running the following:

    - IIS 6
    - Windows Server 2003 Standard Edition
    - Oracle 10g
    - Sitefinity 3.7

    We do not have constant control over these servers, and something has gone wrong in the interim: we're getting constant 504 Gateway Timeouts. The other thing that's happening is that if you hit one of the sites, that site tries to pull themes from one of the other sites' App_Themes folders in Sitefinity, which of course should not be happening at all. If someone has any ideas on why this would suddenly have started happening, I would appreciate it. Many thanks

  • Why does some software not get load balanced even when there are multiple cores?

    - by Nav
    While VTune Analyzer was running on a blade server with 8 cores, I observed the CPU usage percentages using mpstat -P ALL 1. mpstat showed me that VTune was taking up 100% of a single core while all other cores were idle. Why does that happen? Shouldn't the OS (RHEL Server 5.2) automatically distribute the load across cores? The same happened when I tried running MATLAB (even after enabling multithreading support in the MATLAB settings). P.S.: I'm a developer, not a sysadmin, so I felt it better to ask here rather than at Server Fault.
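
    The scheduler balances threads, not processes: a single-threaded program can only ever occupy one core, no matter how many are idle. A quick hedged check of whether the busy program is actually multithreaded (substitute the real PID):

        # number of threads (LWPs) in the process; 1 means no amount of
        # scheduling can spread it across cores
        ps -o nlwp= -p <PID>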

  • Unable to map to web folder using WebDAV client on Windows Server 2008 R2

    - by user74989
    Hello, I have a client running Windows Server 2008 R2 on several servers. One of the servers is also running SharePoint 3.0, and my client has created a web folder to map to. I can map to the web folder from all the Server 2008 R2 boxes that have the WebDAV client installed (part of the Desktop Experience feature), except for the server the folder resides on. When I attempt to map the web folder on that server, I am repeatedly prompted to enter my credentials, even though I am using the same account that worked on the other servers. I have also tried mapping from the command line and receive 'Access Denied'. What might be causing the problem? I would think that if I can map the drive from one server, I should be able to map it from the rest as long as the WebDAV client is installed, especially on the server where the folder is located. Jesse
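
    This pattern (the same credentials work from every machine except the server itself) is often Windows' loopback check rejecting integrated authentication against the machine's own name. A hedged test is the documented DisableLoopbackCheck workaround from KB896861, if you accept its security trade-off:

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    Reboot (or restart the WebClient service) afterwards, and consider the narrower BackConnectionHostNames approach from the same article for production use.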

  • Transferring DHCP using the Windows Server Migration Tools - why is PowerShell crashing on the import of the .mig file?

    - by Mike
    I am migrating DHCP from a Windows Server 2003 R2 DC to a Windows Server 2008 R2 DC. I've followed this video and its predecessor (Installing Windows Server Migration Tools): http://technet.microsoft.com/en-us/video/migrating-dhcp-using-the-windows-server-2008-r2-migration-tools.aspx

    Everything went smoothly until the last step. I exported a .mig file with my DHCP configuration on the old 2003 R2 server and transferred it to the 2008 R2 server. When running the import command, it appears to work for a minute or two and then I get a generic Windows "PowerShell has stopped working" error and have to close the program. Under the problem details I see the following:

        FileVersionOfSystemManagementAutomation: 6.1.7600.16385
        InnermostExceptionType: System.AccessViolationException
        OutermostExceptionType: System.AccessViolationException
        DeepestPowerShellFrame: unknown
        OS Version: 6.1.7600.2.0.0.272.7
        LocaleID: 1033

    It seems like it could be a permissions issue, but I am running PowerShell as an administrator and am logged in to the server as a domain administrator. Any ideas? Thanks
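
    For reference, the documented sequence is to run the import from the Windows Server Migration Tools PowerShell session with the destination DHCP service stopped first (the store path here is an assumption):

        Stop-Service DHCPServer
        Import-SmigServerSetting -FeatureId DHCP -Path C:\MigrationStore -Verbose
        Start-Service DHCPServer

    If the crash persists even then, the access violation points at the tools themselves rather than your syntax, so reinstalling the Migration Tools on the 2008 R2 box is a reasonable next step.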

  • Central log server with audispd

    - by johan
    I want to set up a central log server. The log server runs Debian 6.0.6 with the audit daemon installed in version 1.7.13-1. The clients run Red Hat 5.5 and connect to the log server via audispd. The connection works fine and I get all messages from each node. My question is: can the auditd daemon on the log server write the messages from each node to a separate file? I have tried transferring the messages via the syslog daemon instead; that works, but then I cannot use tools like ausearch to analyze the log files.
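
    If you do route the copies through syslog, the per-host split itself is easy (an rsyslog sketch, assuming rsyslog on the Debian box; the paths are assumptions) - though, as noted, ausearch will not parse the resulting files:

        # /etc/rsyslog.d/audit-remote.conf: one log file per sending host
        $template PerHostAudit,"/var/log/audit-remote/%HOSTNAME%.log"
        if $programname == 'audispd' then ?PerHostAudit
        & ~

    Keeping ausearch compatibility generally means staying with the native audisp remote plugin, which writes everything the aggregating auditd receives into its single audit.log; ausearch can then filter per machine with its --node option.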

  • IISReset to remote server fails

    - by Rob
    I'm attempting to run iisreset 192.168.100.182 (against a Windows Server 2003 machine) from another machine on the same domain (running Windows 7 Professional) and am receiving the following error message:

        Attempting stop...
        Restart attempt failed.
        Access denied, you must be an administrator of the remote computer to use this
        command. Either have your account added to the administrator local group of
        the remote computer or to the domain administrator global group.

    I'm running the command from an elevated command prompt, with my domain account added to the Administrators group on the target machine. I've attempted this both as a direct member of the Administrators group and by virtue of membership of a domain group that is itself a member of the Administrators group. I've reviewed the event log on the target machine; it shows a series of success audits for my domain credentials immediately after the iisreset attempt, but no failure audits.
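
    As a hedged workaround while the iisreset RPC path is being sorted out, the web publishing service can be bounced remotely with sc.exe, which talks straight to the service control manager (note this restarts W3SVC only, not the whole IIS stack the way iisreset does):

        sc \\192.168.100.182 stop w3svc
        sc \\192.168.100.182 start w3svc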

  • High load on a Nagios server -- how many service checks for a Nagios server is too many?

    - by Josh
    I have a Nagios server running Ubuntu with a 2.0 GHz Intel processor, a RAID 10 array, and 400 MB of RAM. It monitors a total of 42 services across 8 hosts, most of which are checked using the check_http plugin every 5 minutes, some every minute. Recently the load on the Nagios server has been above 4, often as high as 6. The server also runs Cacti, gathering statistics every minute for 6 hosts. I wonder, how many services should hardware like this be able to handle? Is the load so high because I am pushing the limits of the hardware, or should this hardware be able to handle 42 service checks plus Cacti? If the hardware is inadequate, should I look to add more RAM, more cores, or faster cores? What hardware / service checks are others running?

  • OpenVPN client on Amazon EC2

    - by Matt Culbreth
    I have an account with an OpenVPN service, and I'd like to get that running on my EC2 instance running Ubuntu 12.04. I have my config file in /etc/openvpn, and it connects fine when I run sudo openvpn --config matt.ovpn. However, I then lose connectivity to the EC2 machine, and I can't SSH back to it until I reboot. Previously I have done things like sudo ip rule add from IP_ADDRESS table 10 and then sudo ip route add default via GATEWAY_IP table 10, but that's not working on EC2. Any ideas? My private IP address right now is 10.209.29.XXX and my gateway is 10.209.29.1.
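
    A hedged policy-routing sketch that usually keeps SSH alive when the tunnel takes over the default route; the addition over what you already tried is the connected-subnet route in the same table, which is sometimes the missing piece. Run it before starting OpenVPN, substituting the instance's real private address for 10.209.29.XXX (table number 10 is arbitrary):

        # keep traffic sourced from the EC2 private address on the normal path
        sudo ip rule add from 10.209.29.XXX table 10
        sudo ip route add 10.209.29.0/24 dev eth0 table 10
        sudo ip route add default via 10.209.29.1 table 10

    If the rules look right but SSH still dies, check whether the VPN pushes redirect-gateway, and consider pulling routes selectively (route-nopull plus explicit route statements) instead.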

  • Windows 7 [virtualized] resolutions on the MacBook Pro Retina

    - by Trevor Sullivan
    So, I was considering picking up a MacBook Pro Retina, but then I realized that Apple forces you to scale the resolution, so you don't actually see the true benefit of the 2880x1800 display; instead you see upscaled, pixelated icons. I saw this for myself in an Apple store a couple of days ago. That's OK though, because the main reason I'd purchase one is to run Windows 7 on it. However, I understand that the Boot Camp drivers have not been updated to work with the MBP Retina, so the option would instead be to run Windows 7 virtualized, and I haven't found any conclusive evidence to indicate whether the full 2880x1800 resolution behaves the same virtualized (VMware Fusion, VirtualBox, Parallels) as when running Windows 7 natively. My question is: does Windows 7 see the entire 2880x1800 display when virtualized, the same as when running on bare metal (Boot Camp)?

  • VMware + SQL Server - sqlserver.exe not using both CPU cores

    - by fistameeny
    Hi, I am working on a virtual machine that runs SQL Server Express (as part of Sage Line 50 Manufacturing). The details are as follows:

    Physical server (host machine):

    - Intel Xeon quad-core 2.1 GHz
    - 4 GB RAM
    - VMDK image stored on RAID 5 across 500 GB SATA drives (7200 RPM)
    - Running Ubuntu 10.04 Server 64-bit
    - VMware Server 2

    Virtual machine:

    - Windows Small Business Server 2003
    - Allocated 2 vCPUs and 2 GB RAM
    - Using a 100 GB pre-allocated flat VMDK file

    The problem I have is that a CPU-intensive process runs in SQL Server. On the old physical server we migrated from, it would utilise both CPU cores, so the sqlserver.exe process would be running at 100% on each core. On the virtual machine it only seems to use one of the two vCPUs, meaning the process is much slower to run.

    Question: Is there a way to force SQL Server (the sqlserver.exe process) to use both CPU cores and distribute its load between them? Is this a VMware setting that needs changing to allow processes to use both cores?
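
    Two hedged things to check before touching VMware. First, SQL Server Express is limited to a single CPU socket, and virtual CPUs are often presented to Windows as one core per socket, in which case Express will only ever use one of the two vCPUs no matter what you configure. Second, confirm that parallelism has not been capped inside SQL Server (the instance name .\SQLEXPRESS is an assumption):

        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        rem max degree of parallelism: 0 means use all available schedulers
        sqlcmd -S .\SQLEXPRESS -Q "EXEC sp_configure 'max degree of parallelism';"

    If the socket limit is the culprit, the fix lies in how the vCPUs are presented to the guest (two cores on one socket) or in a non-Express edition, not in a MAXDOP setting.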

  • Windows doesn't get access to the internet though Linux easily does

    - by flashnik
    We have a very interesting problem. The network is configured this way:

    - The internet connection goes into a TRENDnet switch (TS)
    - A DHCP server at 192.168.0.1 running on Ubuntu (S) is connected to the switch
    - DNS is also configured on 192.168.0.1, on S
    - D-Link Wi-Fi boosters are connected to the switch
    - The PCs use D-Link PCI-E Wi-Fi cards to get access to the network
    - The PCs dual-boot Ubuntu and Windows 7

    There are about 40 PCs. When a PC is booted into Ubuntu it easily gets access to the internet, but when it's booted into Windows 7 it gets a valid IP address yet no internet access. The address, mask, DNS and gateway are exactly the same as when it's booted under Ubuntu, and S is reachable and pingable. Sometimes, when we are lucky, a PC gets access to the internet, but after rebooting it can lose it again. When a PC under Windows has access, its settings are identical to when it doesn't. What can be done?

    UPDATE: I shared a Dropbox folder with two traffic captures, both in tcpdump format and made with Wireshark on a Windows PC: ping.pcap captures pinging 8.8.8.8, and google-browser.pcap captures opening google.com in a browser. The MAC of the Windows PC ends in b7:63 and its IP is 192.168.0.130.

    UPDATE 2: This is the ifconfig output from the Ubuntu server:

        eth0      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:193.200.211.74  Bcast:193.200.211.78  Mask:255.255.255.0
                  inet6 addr: fe80::21e:67ff:fe13:d58d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:196284 errors:0 dropped:44 overruns:0 frame:0
                  TX packets:190682 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:158032255 (158.0 MB)  TX bytes:156441225 (156.4 MB)
                  Interrupt:19 Memory:c1400000-c1420000

        eth0:2    Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8d
                  inet addr:192.168.0.1  Bcast:192.168.0.254  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:19 Memory:c1400000-c1420000

        eth1      Link encap:Ethernet  HWaddr 00:1e:67:13:d5:8c
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:16 Memory:c1300000-c1320000

    nslookup from Windows results in a DNS request timeout, and nbtstat in 'not found'.

  • Network on linux server is periodically down

    - by Fabian
    I have an old server running Fedora 4 that occasionally just stops responding over the network for about an hour; this happens once or twice a week. When it happens, no connection from the server itself to any other computer on the network is possible either, yet the network settings and routes look fine, there are no unusual log messages, and no unusual processes are running at the time. If I restart the network, or just do ifconfig eth0 down and ifconfig eth0 up, it works fine afterwards. I know the server should be updated to a currently supported OS, but that is not really an easy option right now. Any ideas on how I could diagnose and fix this problem?
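
    While hunting for the root cause, a hedged stop-gap is a cron watchdog that bounces the interface when the gateway stops answering (the gateway address, interface name and schedule are assumptions):

        #!/bin/sh
        # /usr/local/sbin/net-watchdog.sh -- run from cron every few minutes
        GW=192.168.1.1    # substitute your gateway address (assumption)
        if ! ping -c 3 -w 10 "$GW" >/dev/null 2>&1; then
            logger "net-watchdog: gateway unreachable, bouncing eth0"
            /sbin/ifdown eth0 && /sbin/ifup eth0
        fi

    Logging each occurrence this way also gives you timestamps to correlate against dmesg and the driver's behaviour.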

  • Installing a script as a startup service in Ubuntu

    - by Jibin
    I have a script, openerp-server.py, in ~/openerp/stable6/server/bin/, and I want it to run at startup (as a service or not; I don't know the difference). These are the steps I followed:

    1. Created a script 'openerp-server' in /etc/init.d/ with the following lines:

        #!/bin/sh
        cd ~/openerp/stable6/server/bin/
        exec /usr/bin/python ./openerp-server.py $@

    2. Made the script executable:

        sudo chmod +x /etc/init.d/openerp-server

    3. Set it to run at startup:

        sudo update-rc.d openerp-server

    I checked using sysv-rc-conf, and openerp-server was selected for run levels 2, 3, 4 and 5. But after restarting, openerp-server.py is not running. Please help.
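
    Two details in steps 1 and 3 are likely culprits: init scripts run as root without your shell environment, so the ~ in the cd will not point at your home directory, and update-rc.d needs the defaults argument to create the rc?.d links. A hedged corrected sketch (the username/path is an assumption; a full LSB init script would also add status and restart handling):

        #!/bin/sh
        # /etc/init.d/openerp-server -- minimal start/stop sketch
        DIR=/home/youruser/openerp/stable6/server/bin   # absolute path (assumption)
        PIDFILE=/var/run/openerp-server.pid
        case "$1" in
          start)
            start-stop-daemon --start --background \
                --make-pidfile --pidfile $PIDFILE \
                --chdir $DIR --exec /usr/bin/python -- ./openerp-server.py
            ;;
          stop)
            start-stop-daemon --stop --pidfile $PIDFILE
            ;;
        esac

    Then register it with: sudo update-rc.d openerp-server defaults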

  • Looking for software to automate some simple audio processing

    - by Daniel Magliola
    I'm looking for a way to take a 1-hour podcast MP3 file and split it into several 2-minute MP3s, and along the way do a few other things, like amplify the volume. The problem I'm solving is that I have a crappy MP3 player that won't let me seek forward or backward, nor will it remember where I left off when I turn it off; plus, I listen to these in a seriously high-noise situation. Thus, I need to be able to skip forward in large chunks (2-5 minutes) to the point where I left off. Is there any decent way to do this? Audacity doesn't seem to have command-line capabilities. I'm willing to write some code, for example to call something over the command line, get how long the MP3 file is so I know how many pieces there will be, and then say "create an MP3 with 0:00 to 2:00", "create an MP3 with 2:00 to 4:00", and so on. I'm also willing to pay for the right tools if necessary, and I don't care how slow this runs, as long as I can automate it :-) I'm doing this on Windows. Any pointers / ideas? Thanks!
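
    ffmpeg can do the splitting and the gain in one pass from the Windows command line (a hedged sketch: the 6 dB boost and file names are assumptions, and since it re-encodes it assumes a build with MP3/libmp3lame support):

        ffmpeg -i podcast.mp3 -filter:a "volume=6dB" -f segment -segment_time 120 part%03d.mp3

    If you would rather not re-encode, mp3splt -t 2.0 podcast.mp3 cuts on 2-minute boundaries losslessly, and a tool like MP3Gain can adjust the loudness in a separate lossless pass.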
