Search Results

Search found 12310 results on 493 pages for 'andrew day'.

Page 390/493

  • Windows 7 intermittently drops wired internet/LAN connection.

    - by CraigTP
    In a nutshell, my Windows 7 Ultimate PC intermittently drops its internet connection. Why?

    Background: My PC is wired to my ADSL modem/router, which is directly connected to the phone line. I also have wireless enabled on the router for a laptop. Every few hours or so when using my PC, I find I cannot access the internet and pages will not load. Eventually, Windows 7 updates the network icon in the task tray to show an exclamation mark, and opening the Network and Sharing Centre shows a red cross between "Multiple Networks" and "The Internet". (I grabbed a picture of the Network and Sharing Centre when everything was working!)

    As you can see, I'm running Sun's VirtualBox on this machine, which creates a network connection for itself. This doesn't seem to affect the problem: the intermittent drops occur whether the VirtualBox connection is in use or not. When the connection does drop, I cannot access any internet pages, nor the router's web admin page at http://192.168.1.1/, so I assume I've lost all local LAN access too. It's definitely not the router (or the internet connection itself), as my laptop on the wireless connection (running Vista Home Premium) can still reach the internet and the router's admin pages just fine.

    Every time this happens, I can immediately restore all internet and LAN access by opening the Network Adapter page, disabling the "Local Area Connection", and re-enabling it. Give it a few seconds and everything is fine again. I assume that beneath the GUI this is effectively doing an "ipconfig /release" followed by an "ipconfig /renew". Why does it happen in the first place, though? I've googled this and seen quite a few other people (even on MSDN/TechNet forums) with the same or almost the same problem, but with no clear resolution. Turning off IPv6 on the LAN adapter and ensuring no power management setting is "sleeping" the adapter have both been tried and do not cure the problem. There does not seem to be any particular sequence of events that triggers it, either: I've had it drop twice in 20 minutes of casual web browsing with no other traffic, and I've also had it drop once and then not again for 2-3 hours under the same sort of usage. Can anyone tell me why this is happening and how to make it stop?

    EDIT: Additional information based on the answer provided so far. Firstly, I forgot to mention that this is Windows 7 64-bit, if that makes any difference at all. As noted, I don't think the VirtualBox network adapter is causing this in any way; I also have VirtualBox installed on two other machines, one running Vista Home Premium and the other XP, and neither experiences these connectivity issues. The IP assignment on the Windows 7 machine is the same both before and after the "drop": the router runs a DHCP server, but this machine uses a static address. Here's the output from "ipconfig":

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.1.2 (Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.1.1
           DNS Servers . . . . . . . . . . . : 192.168.1.1
           NetBIOS over Tcpip. . . . . . . . : Enabled

    In the system event logs, the only event relating to the drop is a "DNS Client Event", generated after the connection has already dropped, detailing that DNS information can't be found for whatever website I was trying to access:

        Log Name:      System
        Source:        Microsoft-Windows-DNS-Client
        Event ID:      1014
        Task Category: None
        Level:         Warning
        User:          NETWORK SERVICE
        Description:   Name resolution for the name weather.service.msn.com timed
                       out after none of the configured DNS servers responded.

    The network adapter chipset is a Realtek PCIe GBE Family Controller, which I have confirmed is the correct chipset for the motherboard (Asus M4A77TD PRO). In fact, Windows Update installed an updated driver for it on 12/Jan/2010; the update details say it's a Realtek software update from December 2009. I was still having the same intermittent problems prior to this update, so it seems to have made no difference at all.

    EDIT 2 (1 Feb 2010): In my quest to solve this problem, I have discovered some more interesting information. On another forum, someone suggested running Windows in "Safe Mode with Networking" to see if the problem continued. This was a fantastic suggestion and I don't know why I didn't think of it sooner myself. I ran in Safe Mode with Networking for a number of hours and, amazingly, the drops didn't occur once. A positive discovery, although given the intermittent nature of the original problem I wasn't completely convinced the problem was cured. One thing I did note is that the fan on my graphics card was running a lot louder than normal. I have an ASUS ENGTS250 graphics card (http://www.asus.com/product.aspx?P_ID=B6imcoax3MRY42f3), which had a known problem with a noisy fan until a BIOS update fixed the issue (see the "Manufacturer Response" at http://www.newegg.com/Product/Product.aspx?Item=N82E16814121334 for details). In Safe Mode the fan ran (incorrectly) at full speed, as it did before the BIOS update, but with an (apparently) stable network connection. Obviously some driver for the graphics card was not loaded in Safe Mode, which got me thinking about the card (its very noisy fan was quite obvious in Safe Mode). I rebooted into normal mode and found that Nvidia had a very up-to-date new driver for my card (only about a week old), so I downloaded and installed it. After installation and a reboot, I was able to use my PC for an entire day with NO NETWORK DROPS!!! That was Saturday. On the Sunday, however, I again used my PC for pretty much the entire day and experienced 2 network drops. No other changes have been made to my PC in this time. So the story seems to be that updating my graphics card drivers has improved (if not completely fixed) the issue; however, I'm still searching for a proper fix for this problem. Hopefully this information may help anyone who has additional ideas as to why this problem is occurring in the first place. (And why would new graphics drivers have anything to do with the network?) I appreciate everyone's feedback so far. However, I'll ask once more: does anyone have any further ideas of how to fix this particular problem? Thanks in advance.
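
    In the meantime, the manual disable/re-enable workaround can be scripted from an elevated prompt. A minimal sketch only, assuming the adapter is still named "Local Area Connection"; it automates the workaround rather than fixing the underlying fault:

        @echo off
        rem Bounce the wired adapter to restore connectivity (workaround, not a fix).
        netsh interface set interface name="Local Area Connection" admin=disabled
        timeout /t 5 /nobreak
        netsh interface set interface name="Local Area Connection" admin=enabled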

    Read the article

  • Windows Server Backup - Recover only shows the latest backup

    - by Steffen
    We're having quite some trouble at work using Windows Server Backup. We have a Hyper-V server (Windows 2008) running 8 virtual web servers on a variety of OSes: Windows 2003, Windows 2008 and a lone Debian. Each virtual server has a separate partition on the physical Hyper-V server, so e.g. E: is virtual server #1, F: is #2 and so forth.

    For backup we use Windows Server Backup, or more exactly the command-line tool wbadmin.exe. We need to make the backups without stopping the virtual servers, as we cannot afford the downtime (we've got users online both day and night), and Windows Server Backup offers to use the shadow copy provider to achieve this. We run wbadmin like this:

        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet

    We run it once per partition, because we've got a script wrapped around that command for sending us an email about how it went. Each time wbadmin runs, it deletes the "Backup xxxx" folder created by the previous backup and creates a new one. To prevent this from happening, we rename the "Backup xxxx" folder after each backup run, before starting the next one. I realize we must rename it back to its original name prior to recovering, and we obviously do this. The sketch after this paragraph shows the shape of the wrapper.

    Now the issue is as follows: even though we have all the backed-up files, and rename whichever backup we want to use back to its original name, we can only see the latest backup in the Windows Server Backup GUI when we select "Recover". This means we can only recover the last partition we backed up, so e.g. E: can never be recovered. In other words, we're screwed :-(

    My question is: does anyone know how to use Windows Server Backup for a scenario like this? Or is it simply not possible due to the simplicity of Windows Server Backup? If it's not possible, could you recommend some backup software for this purpose? We've already looked at MS' System Center Data Protection Manager, however it's quite expensive and the boss doesn't like that :-/
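
    For reference, a sketch of the per-partition wrapper with the rename step folded in. The folder layout under the backup target is an assumption (wbadmin writes under WindowsImageBackup\<host>), so treat this as illustrative rather than a tested recipe:

        @echo off
        rem Back up one partition, then rename the dated "Backup ..." folder so the
        rem next wbadmin run cannot delete it (rename it back before recovering).
        wbadmin start backup -backuptarget:\\remotebackuplocation\somefolder -include:E: -quiet
        for /d %%B in ("\\remotebackuplocation\somefolder\WindowsImageBackup\%COMPUTERNAME%\Backup *") do (
            ren "%%B" "Backup_E"
        )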

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8, and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its RAID configuration) and the sysadmins are trying to restore it; in the meantime, I'm working to bring the database back up on another host, Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Anyone have a clue? The backups are there, and the media agent reported a successful write when they ran last night. This is what fails:

        RMAN> run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
            ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,
            CVOraRacDBName=BBDB,CVOraRACDBClientName=BBDB)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }

        allocated channel: t1
        channel t1: sid=34 devtype=SBT_TAPE
        channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76)
        Starting restore at 09-MAY-10
        channel t1: looking for autobackup on day: 20100509
        channel t1: autobackup found: c-3941155360-20100509-01
        released channel: t1
        RMAN-00571: ===========================================================
        RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
        RMAN-00571: ===========================================================
        RMAN-03002: failure of restore command at 05/09/2010 18:01:35
        ORA-19870: error reading backup piece c-3941155360-20100509-01
        ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms=""
        ORA-27029: skgfrtrv: sbtrestore returned error
        ORA-19511: Error received from media manager layer, error text:
        sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    I have also tried the same run block with ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001,CVOraSID=BBPROD), with the same result.
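
    For what it's worth, the usual shape of a cross-host RMAN restore is to set the DBID explicitly (it is visible in the autobackup handle, c-3941155360-...) and to point the media agent at the client that originally wrote the pieces. A hedged sketch only - the CvClientName value here (the Host A name as registered in Commvault) is an assumption, and the exact ENV keys vary by agent version:

        RMAN> set dbid 3941155360;
        RMAN> run {
          allocate channel t1 device type sbt_tape
            parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144,
            ENV=(CvClientName=hostA,CvInstanceName=Instance001)';
          restore spfile to pfile '/tmp/bbdb.ora' from autobackup;
        }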

    Read the article

  • i5 540M or i7 720QM for laptop running VMs and software development tools?

    - by Donald Hughes
    I'm a software developer who would primarily be running Windows 7 as the main operating system. On a typical day, I might, at any given moment, be running Visual Studio, Expression Web, SQL Server Developer (and Management Console), IIS, Photoshop, a dozen browser tabs in 2-3 different browsers, Skype video chat, streaming music, and a couple of VMs (WinXP and Ubuntu) for testing/experimentation.

    Obviously RAM is a concern, which is why I plan to use 8 GB so I can devote enough to the VMs to be usable. I'm also tempted to use an ExpressCard SSD for storing the VM disks, to ease disk contention. I know that this is asking a lot from a laptop and that I should just use a desktop, but I need to be able to take my work with me between several locations.

    It seems that at a reasonable price point, it comes down to the i5 540M versus the i7 720QM. I'm leaning toward the i7, since it would allow me to dedicate a whole hyperthreaded core to each VM and still have two cores left for the primary OS. I've heard that the i5 has better battery life, but I'm curious whether there would be a meaningful difference in my scenario. I don't usually work without a plug, but I do occasionally ride the train or fly, and it would be nice to have at least 3 hours of juice for unusual circumstances.

    And finally, for this usage scenario, would a dedicated video option be preferred over the i5's integrated video? It sounds like Visual Studio 2010 (and Windows 7) can take advantage of the video card.

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    Hi - I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins did not employ much of any GPO settings for the clients of the domain. In each site, there is a location on the file server where "Home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and manually enter the path to the home folder.

    We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has existing "Home" folders, I have been fighting to get Folder Redirection to work with a new Windows 7 machine (in a test OU). I am getting all kinds of errors, and I can't get the Windows 7 "Documents" folder to redirect to the users' EXISTING home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server, with the following security permissions:

        Domain Admins  - Full Control
        euser (end user) - Modify (everything but Full)

    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders? I can't seem to make it work, but I will keep trying. Thanks in advance!
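
    One detail worth checking in setups like this (offered as a hedged suggestion, not a confirmed fix): redirection to pre-existing folders commonly fails when "Grant the user exclusive rights to Documents" is enabled in the policy, because the user does not own the manually created folder. The effective permissions can be inspected, or stamped per user, with icacls; share and account names below follow the question:

        rem Inspect, then grant the user Modify on their existing home folder.
        icacls \\server\home\enduser
        icacls \\server\home\enduser /grant enduser:(OI)(CI)M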

    Read the article

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller that also acts as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS, and no secondary DNS server has been configured.

    If we shut down the domain controller or unplug its network cable, DNS lookups become quite slow and the performance of the network suffers. For example, running a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because the machines look for the DNS server and then fall back to some other method of resolving names once the DNS server is gone.

    All the machines have static IP addresses, so we are considering just putting all entries in the HOSTS file of each machine. However, it would be nice to keep a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
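
    If the HOSTS route is taken, the entries are one per line; hostnames and addresses below are purely illustrative:

        # C:\Windows\System32\drivers\etc\hosts
        192.168.10.1   dc01
        192.168.10.21  ws-lab-01
        192.168.10.22  ws-lab-02

    An alternative sketch that keeps DNS centralised: stand up a second DNS server and advertise it to clients as an alternate resolver, e.g.:

        netsh interface ip add dns name="Local Area Connection" 192.168.10.2 index=2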

    Read the article

  • Disable RAID member check upon mount, to mount a damaged Nvidia RAID1 member

    - by Halfgaar
    Hi, a friend of mine destroyed his Nvidia RAID1 array somehow, and in trying to fix it he ended up with a non-working array. Because of the RAID metadata, the actual disk data is stored at an offset from the beginning of the disk. I was able to identify this offset with dd and a hex editor, and then I used losetup to create a loop device with the proper offset so that I could mount the partition. It was then that I ran into problems, namely that mount says:

        mount: unknown filesystem type 'nvidia_raid_member'

    I also hit this when trying to mount a Linux MD component the other day, and because I can remember that doing so worked in the past, I surmised that it may be some kind of protection. I therefore booted an old SystemRescueCd and tried it there, which worked (because of the older version of mount/libc/kernel/whatever).

    I still need to recover more data, and because I don't want to keep using that rescue CD, I'd like to be able to mount the disk on my normal system. So, my question is: can the check for a disk being a RAID member be disabled? I guess I could also zero out the blocks that look like the RAID superblock, but I'd rather not... I made an image of the disk with par2 data, so it's revertible, but still...
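
    In my understanding, the refusal comes from mount's blkid-based autodetection, which only runs when no filesystem type is given, so passing the type explicitly may be enough. A sketch, assuming the data partition is ext3 and that OFFSET holds the byte offset found with dd and the hex editor:

        # attach the disk at the data offset, then mount read-only with an explicit type
        losetup -o "$OFFSET" /dev/loop0 /dev/sdb
        mount -t ext3 -o ro /dev/loop0 /mnt/recovery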

    Read the article

  • Server 2008 R2 DNS Lockup

    - by Richard Maynard
    Hi, we've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution to all client machines on that site for general internet usage. Since using the 2008 R2 DNS services, we have noticed that every couple of days the DNS server starts timing out when requests to certain sites are made (Google is the only example I can provide at this time, although it seems to be larger sites that have problems rather than small ones - a CDN compatibility issue?). When you restart the DNS Server service, resolution returns to normal... but only for a day or so.

    Is anybody aware of any significant changes to the DNS server architecture or out-of-the-box configuration in R2 that may explain this intermittent behaviour? I have already tried the fix listed here, to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx

    The following command prompt session illustrates the issue:

        PS C:\Users\Administrator.UK> nslookup
        Default Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4

        > www.google.com
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4
        Non-authoritative answer:
        Name:    www.l.google.com
        Addresses:  66.102.9.99, 66.102.9.104, 66.102.9.105, 66.102.9.103, 66.102.9.147
        Aliases:  www.google.com

        > www.google.co.uk
        Server:  s8209001.uk.kingdomfaith.com
        Address:  10.1.3.4
        *** s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed

    Thanks in advance. Regards,
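
    As a stopgap while hunting the cause: the server cache can be flushed without bouncing the whole service. Whether the cache flush alone gives the same relief here is untested, so both forms are shown:

        dnscmd /clearcache
        net stop "DNS Server" && net start "DNS Server"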

    Read the article

  • Why am I getting a warning that Windows is logging on with a temporary profile to run a Task Scheduler task?

    - by Dan C
    I am having a strange problem with the Windows Server 2008 Task Scheduler. I have to run a small command-line application every few minutes. This application just executes a quick web service call on the localhost and adds an entry to a log file, so it should not need anything special in terms of permissions.

    First, I created a new user account "my_scheduler" just for the task. This account is a member of the Users group (I'm not sure what other settings I should turn on/off) and its password is set not to expire. I then created a task to run the application every few minutes. I set it to "Run whether user is logged on or not" and turned on "Do not store password. The task will only have access to local resources" (I did this since it's not hitting anything on the network). I did not turn on "Run with highest privileges", since it does not seem to need them. I set the schedule to "After triggered, repeat every 30 minutes for a duration of 1 day" and "Allow task to be run on demand" (no other settings enabled).

    However, I notice a bunch of these warnings in the Event Log whenever the task is run:

        Windows cannot find the local profile and is logging you on with a
        temporary profile. Changes you make to this profile will be lost
        when you log off.

    Even though I get the warning, the task is executing (I see the log entries appearing). Another (possibly related) issue is that multiple copies of the task are started within a few seconds of each other, even though it should only start one. This is also a big problem. Any idea how I can fix this? Thanks in advance, Dan
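
    One comparison worth trying (a guess at the cause, not a confirmed diagnosis): with a stored password the logon is an ordinary one that can load or create the user's profile, which may avoid the temporary-profile warning. A sketch of creating the same task from the command line; task name and program path are illustrative:

        schtasks /create /tn "WebServicePing" /tr "C:\tools\pingws.exe" ^
                 /sc minute /mo 30 /ru my_scheduler /rp *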

    Read the article

  • Arch Linux drops me on my school network.

    - by Kravlin
    I'm running a Lenovo X61, which I carry around my college for getting on the internet at various points in the day. The network has always been finicky, but recently it's gotten worse. I'll connect using iwconfig, get an IP from dhcpcd, and log in to their system using vpnc. Sometimes I'll stay connected for hours, but most of the time my network traffic drops to zero within 30 seconds and I'm unable to do anything. My computer still believes it's connected; to try again I need to bring my wireless interface down, bring it back up, and start over. It's gotten so bad that I keep a window on my computer pinging Yahoo or Google constantly, just so I know whether I'm still able to get online.

    I know other people using Arch Linux who don't have the same problems, as well as people using Ubuntu who haven't had any problems either. It seems like my computer is a special case. Does anyone have any suggestions on how to fix it? dmesg doesn't show anything out of the ordinary, and I don't know where else to look for errors or other things to try.
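
    Until the root cause turns up, the down/up/reconnect dance can at least be automated. A rough sketch under stated assumptions (interface wlan0, and that dhcpcd/vpnc are invoked the same way as by hand):

        #!/bin/bash
        # Watchdog: if the link stops passing traffic, bounce the interface
        # and reconnect. Interface name, test host and vpnc profile are assumptions.
        while sleep 15; do
            ping -c1 -W3 www.google.com >/dev/null 2>&1 && continue
            ip link set wlan0 down
            ip link set wlan0 up
            dhcpcd wlan0
            vpnc college-vpn.conf
        done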

    Read the article

  • HowTo import Certificate (pfx) with private key in WinXP

    - by Gunther
    Hello, I tried the whole day just to import a certificate in WinXP, but I always failed. I did the following:

    Create the certificate with private key (no password):

        makecert -sr LocalMachine -ss My -pe -sky exchange -n "CN=TestCert" -a sha1 -sv TestCert.pvk TestCert.cer

    Then put the certificate and private key together into a pfx file:

        pvk2pfx.exe -pvk TestCert.pvk -spc TestCert.cer -pfx TestCert.pfx

    Import the pfx file with the command-line tool (German system):

        winhttpcertcfg.exe -i TestCert.pfx -a NT-AUTORITÄT\NETZWERKDIENST -c LOCAL_MACHINE\My

    Error: "Unable to import contents of PFX file. Please make sure the filename and path, as well as the password, are correct." (Hint: "NT-AUTORITÄT\NETZWERKDIENST" is the German name for "NT-AUTHORITY\NETWORKSERVICE".)

    The filename is OK, and no password was set. Even if I set a password (e.g. "MyPassword") in step 1 and add "-p MyPassword" at the end of step 3, I get the same error. I then tried to import in the certificate console (MMC with the certificates snap-in). There I got the following error: "Der private Schlüssel, den Sie importieren, erfordert möglicherweise einen Dienstanbieter, der nicht installiert ist." ("The private key you are importing may require a service provider that is not installed.") But the Microsoft crypto service is up and running.

    What else can I do? On Windows Vista and Windows 7 I got this running without these problems. I need this certificate to run a WCF service. Thanks in advance for any hint. Regards, Gunther
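
    One alternative worth trying, as a hedged suggestion (untested here, and certutil's -importpfx verb may not be available on a stock XP box): import the pfx with certutil and then grant the key to the service account with winhttpcertcfg's grant switch, instead of importing through winhttpcertcfg:

        certutil -f -p "" -importpfx TestCert.pfx
        winhttpcertcfg -g -c LOCAL_MACHINE\My -s TestCert -a NETWORKSERVICE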

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have 2 cPanel web servers, little 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests - CPU usage never exceeds 10% - and they have plenty of RAM. The problem is that they both have RAID 1 160 GB SAS hard drives that are 75% full and growing by the day. I didn't think disk usage would be so high, but due to the nature of the sites hosted, this has become an issue.

    The easy fix would be to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12 TB of storage. The /home/ partition on each of the 1RU servers would then be mounted from a NAS or SAN share on this central storage server. My questions are:

    - Has anyone got a cPanel setup where they mount /home/ from a NAS or SAN elsewhere? If so, can you provide details of what you did and how it went? :)
    - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE card?
    - Has anyone benchmarked, or had performance issues with, SAN versus NAS?

    Any help greatly appreciated. Scott
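
    For the NAS flavour, the mount itself is a one-liner in /etc/fstab. A sketch only, with a hypothetical server name and export path, and options that would need tuning for the workload:

        # /etc/fstab on each web server
        storage01:/export/home  /home  nfs  rw,hard,intr,noatime,rsize=32768,wsize=32768  0 0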

    Read the article

  • Recovery of Windows DFS partition with shadow-copy-versioned files when overwritten with older modified files

    - by patjbs
    I've noticed the following "bug" on a DFS volume with shadow copies. Pretend you have the following folder/file under shadow copy versioning, going back two weeks:

        MyDirectory
        + MyFile - Modified Date 8/1/2009

    The current date: 8/30/2009. You have another version of MyFile stored elsewhere, with a modified date of 7/1/2009. Copy your other version of MyFile into MyDirectory, overwriting the newest version. I expected that you could roll back to the version that was there when it was last imaged, say on the prior day, and recover your 8/1 version. Not the case. Now, when you go to look at previous versions for the past two weeks, the versioning of that file is entirely lost, and you're stuck with your older 7/1 version. Suckage.

    Questions: (1) Is this intentional, and if so, what's the rationale? I assume that the previous-versions view keys off the current file, and that's what wipes out prior versions, but it seems like a fairly stupid/naive way of handling versioning to me. (2) Is there a way to backtrack out of this, without resorting to restoration from other backup media? Thanks!
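
    On question (2): the snapshots themselves should still exist even when the Previous Versions tab shows nothing useful, so it may be possible to pull the 8/1 copy straight out of a retained shadow copy. A hedged sketch (needs Vista/2008-era mklink; the shadow copy number is illustrative and should be taken from the vssadmin output, and the trailing backslash on the device path is required):

        vssadmin list shadows
        mklink /d C:\shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12\
        copy C:\shadow\MyDirectory\MyFile D:\recovered\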

    Read the article

  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling a quite big project from source. The compilation always ends with:

        error: invalid conversion from 'const char*' to 'char*' [-fpermissive]

    I already compiled this project one year ago, so I know a solution to this. Actually, I found several solutions:

    1. Adding a typecast to the appropriate line of C++ code (this led to an endless number of changes in each file, so I found the next solution).
    2. Modifying the makefile to compile that file with the -fpermissive option (I had to modify a lot of lines in each makefile, so I found an even better solution).
    3. "g++" or "gcc" was stored in a variable, so I added -fpermissive to these variables. This is the best solution I have: it is sufficient to add the option to each makefile once.

    Unfortunately, this software has a big number of subdirectories, so I would need to modify more than 100 makefiles. It took me a whole day one year ago. Is there a way to do this faster? What about this:

        alias gcc='gcc -fpermissive'

    I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is this one correct:

        alias g++='g++ -fpermissive'

    And do I need to export the alias somehow? Will the make program respect the alias? Should I maybe change the ./configure script? Or configure.in? Or some other file?
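
    A note on the alias idea, plus a sketch of the usual alternatives: shell aliases are not seen by make, because make spawns non-interactive shells that do not expand aliases. The same effect can be had by overriding the compiler variables once, at the top level (CC/CXX are the conventional names - check what the makefiles actually use):

        # if the project uses autoconf, bake the flag in at configure time:
        ./configure CFLAGS="-fpermissive" CXXFLAGS="-fpermissive"

        # or override the compiler variables for the whole recursive build:
        make CC="gcc -fpermissive" CXX="g++ -fpermissive"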

    Read the article

  • NSclient++ NRPE issues

    - by Kyle
    I have had NSClient++ working with Nagios for a while now. Recently I started testing Nagwin, just to see how it would work, out of pure curiosity. I stopped checking a test server with my main Nagios config, set NSClient++ to NRPE mode, and pointed Nagwin at it. It worked great for a few hours, then suddenly I started seeing "UNKNOWN: No Handler for that command." I figured it had to be Nagwin's fault, since it's so new, so I unloaded NRPElistener.dll and returned the server to being monitored by check_nt. However, check_nt no longer works either: my main Nagios server gets timeout errors and is unable to connect at all. The Nagwin server can connect, but the server doesn't know how to handle the check_nrpe commands, even though it did, with no changes, a few hours earlier.

    I have been working on this for a day now and am fairly certain NSClient++ is to blame. My Nagwin box has stayed connected to a similar server throughout the night without any issues, and my main Nagios config has no problems at all. I have been able to switch another server between Nagios and Nagwin monitoring without any problems, simply by loading and unloading the NRPE DLL. I have tried uninstalling NSClient++ and reinstalling with a fresh configuration, but I still get the errors.

    As of now, the firewall on the server is off, NSClient++ is set up to accept connections from any host, there is no password, SSL is turned off, and the NRPE module is loaded. Any ideas would be appreciated. I am not an advanced Nagios user, but I know my way around it and can easily tear it down and set it up again. I should also add that while in test mode, NSClient++ is unable to handle check_NRPE commands there either.
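
    When isolating this kind of thing, it can help to take Nagios out of the loop and poke the agent directly from the plugin directory. A sketch (the IP and paths are illustrative; 12489 is check_nt's default port, and check_nrpe with no -c should just return the agent's version string):

        ./check_nrpe -H 192.0.2.50
        ./check_nt -H 192.0.2.50 -p 12489 -v CLIENTVERSION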

    Read the article

  • Need expert advice on an *X display manager, window manager and compositing manager combination

    - by fakemustache
    Hello! I have fought with myself over whether or not I should ask this question, but I find myself stuck and I need another expert opinion. I can't seem to find the right combination of display manager, window manager and compositing manager. I have tried many different combinations, but most of them don't work for me. I have been working with Linux for a few years now, and currently I'm running Gentoo with GDM, Openbox (standalone, GNOME-aware) and xcompmgr. I have also tried Metacity, Awesome and Fluxbox, with and without Compiz, but always with GDM.

    What I want: a lightweight, HIGHLY configurable environment that doesn't rely on mouse input too much (except for web browsing and image processing). 95% of the time I work with multiple consoles and desktops on multiple screens. What makes me ask is that most lightweight environments seem somewhat "unfinished" and show unexpected behavior quite often. And of course I want an environment that's not TOO ugly to look at, as I use it at an average of 10 hours a day. :)

    Any thoughts? What do you use in a similar situation? Thanks for any advice! (Please excuse my English, as I'm from Germany; btw, greetings from Berlin ;)))

    Read the article

  • Apache error: could not make child process 25105 exit, attempting to continue anyway

    - by Temnovit
    Hello! I have a web server based on Ubuntu Server 9.10 with this software: Apache 2, PHP 5.3, MySQL 5, Python 2.5. A few of my websites are PHP-based; a few use Python/Django through mod_wsgi. For a month or so, every day my Apache server stops responding until I manually restart it. The error logs show:

        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25059 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25061 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 24930 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25084 exit, attempting to continue anyway
        [Fri Mar 05 17:06:47 2010] [error] could not make child process 25105 exit, attempting to continue anyway

    and so on. I tried to google this problem, but it seems I can't find a solution. How can I determine the cause of this error, and how do I fix it? Thank you for your help.

    UPDATE: Updating mod_wsgi to version 3.1 didn't solve the problem; updating PHP to 5.3 also didn't solve it. Here is a list of all installed modules:

        core mod_log_config mod_logio prefork http_core mod_so mod_alias
        mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile
        mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir
        mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif
        mod_status mod_wsgi

    Here's how my virtual host with WSGI looks:

        <VirtualHost *:80>
            ServerName example.net
            DocumentRoot /var/www/example.net

            # WSGI script that serves the whole thing
            WSGIScriptAlias / /var/www/example.net/index.wsgi
            WSGIDaemonProcess example user=wsgideamonuser group=root processes=1 threads=10
            WSGIProcessGroup example

            Alias /static /var/www/example.net/static
            # serving admin files
            Alias /media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media/

            <Location "/static">
                SetHandler None
            </Location>
            <Location "/media">
                SetHandler None
            </Location>

            ErrorLog /var/www/example.net/error.log
        </VirtualHost>

    The error log now contains two types of errors, one following the other:

        [error] child process 9486 still did not exit, sending a SIGKILL
        [error] could not make child process 9106 exit, attempting to continue anyway
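
    One way to chase the cause rather than the symptom: before restarting, look at what a hung child is actually stuck on. A sketch (PIDs come from the error lines; strace/gdb must be installed):

        strace -p 25105                                   # live syscall view of the stuck child
        gdb -p 25105 -batch -ex 'thread apply all bt'     # C-level backtraces, useful for mod_wsgi/mod_php deadlocks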

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with OpenOffice: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without my even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover", it does so and brings the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK", it would apparently do this all day.

    I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e. renamed the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel, it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file, then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)

    How to reproduce:

    1. Start OpenOffice Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill OpenOffice (soffice.exe/soffice.bin - one of each).
    4. Double-click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur - allow it.
    6. When the Calc window comes up, don't touch it - just wait about 10 seconds.
    7. The Calc window closes and OO puts up a message that recovery will occur - from now on the sequence repeats.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as OpenOffice bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • Deleted, then added user w/ same name, now logs on w/ temp profile

    - by labyrinth
    I am a new admin at a high school lab, trying to spearhead the separation of normal IT accounts from IT admin accounts. I made my normal account (e.g. ITuser) and an admin account (e.g. ITuser-adm) on the server (Windows Server 2008 R2). I used both accounts on my main desktop for about a day, but decided I hadn't set up the admin account correctly, so I deleted the admin account and made a new one with the same name.

    The problem is that on my main desktop (Windows 7 Pro), whenever I log in with my admin account, it gives the following errors:

        Windows has backed up this user profile. Windows will automatically try
        to use the backup profile the next time this user logs on. (Error 1515)

        Windows cannot find the local profile and is logging you on with a
        temporary profile. Changes you make to this profile will be lost when
        you log off. (Error 1511)

    This is more of a nuisance than anything; I just thought I could reuse the name of a user account I'd just deleted, since the two accounts would have separate SIDs anyway. If it's less trouble, I could just make a new admin account, or keep using this one as is, since I don't need to save anything locally anyway and the typical folder redirects work fine. I'm just curious and want to understand what's going on. There are no errors listed regarding the registry.
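
    A hedged sketch of the usual cleanup for 1511/1515 after recreating an account: the old profile mapping lingers in the ProfileList key, keyed by the old SID, and Windows renames the clashing entry with a .bak suffix. Deleting the stale entry (and, if desired, the orphaned C:\Users folder) lets the new SID get a fresh profile. The SID below is purely illustrative - take the real one from the query output:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList"
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-XXXXXXXXX-1001.bak"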

    Read the article

  • How do I prevent Pidgin from loading XMPP chat room history on join?

    - by Mr. Jefferson
    In Pidgin, when I join a chat room, it loads the chat room history. iChat on the Mac has a preference in the Accounts section to load a variable amount of history, or to disable loading history entirely. How do I do the same thing in Pidgin? Is there a preference somewhere that I've missed?

    The goal is to have the chat room start fresh each day, so I'd also be fine with disabling chat room history entirely on the server, if that's possible - but I didn't see that option in Server Admin on the server either. I found a list of XMPP room types, and it looks like creating a Temporary Room might be the best way to do this, but I don't want to have to create the room manually every morning. Right now I've got Pidgin set to auto-join the room when I log in; I want it to do that without loading history.

    EDIT: The XMPP multi-user chat spec referenced above also contains a section on managing history. I got this to work by pulling up the XMPP Console plugin in Pidgin, copying the <presence/> stanza it sent when I joined the room, closing the room, pasting the stanza into the console, adding the <history/> element, and sending it. When I opened the room again, I had no history - but it all came back the next time! So: how do I get Pidgin to send the <history/> stanza by default?
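
    For reference, the XEP-0045 join stanza with history suppressed looks like this (the room JID and nick are illustrative); the open question is getting Pidgin to emit it on every auto-join:

        <presence to='room@conference.example.com/mynick'>
          <x xmlns='http://jabber.org/protocol/muc'>
            <history maxstanzas='0'/>
          </x>
        </presence>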

    Read the article

  • Java plugin in the browser doesn't work even though it is enabled

    - by Pratyush Nalam
    I installed the Java Development Kit (64-bit) recently and saw that it includes the 64-bit JRE plugin as well. But since Firefox is 32-bit, I also installed the 32-bit JRE. Both show up in Programs and Features.

    Now, the problem: the other day, I opened a site which required the Java plugin. The frame showed the regular Java loading animation and hung; nothing happened after that. I checked Firefox's plugins section, and it shows Java as enabled, so no issue there. I tried other browsers - IE10 and Chrome - but to no avail; it doesn't work anywhere.

    I saw another question which said that you have to install 64-bit first, then 32-bit. That's what I actually did: first JDK 7 64-bit (which includes the 64-bit JRE 7), then the 32-bit JRE 7. I even tried the Java website's "Do I have Java?" section, and there too it just keeps spinning (I have waited for much more than 10-20 seconds). How do I go about this now? It never happened to me on Windows 7. I am on Windows 8 Pro.

    Read the article

  • Laptop choice for development: MacBook Pro 17 vs Dell Studio XPS 16 vs HP Envy 15

    - by Shalan
    Hey! First things first - let me state that I am not intending to play games on this. I have narrowed it down to these 3 purely based on specs and each brand's reliability in the market. I intend to primarily use:

    - Visual Studio 2008 Pro a lot (develop and deploy on Windows platforms)
    - SQL Server 2005
    - Oracle 10g
    - Adobe Photoshop CS4
    - Microsoft Expression Studio
    - Google SketchUp

    I currently use a desktop PC (Core2Duo 2.66 GHz with 3 GB DDR2 memory) running Vista Business 32-bit, and I have to admit that, especially for Visual Studio, it's sluggish to a point where it affects productivity. Furthermore, I intend to use the notebook only on a table - with a cooled surface, like granite :) - so I would appreciate people's input regarding heat issues. I'm aware that the Dell's primary exhaust gets blocked by the lid when open, but some reviews don't seem to place extraordinary emphasis on heat issues resulting from this.

    My options for the Dell/Alienware:

    - Core i7 720QM
    - 4 GB DDR3 memory
    - ATI Mobility 3670 (512 MB)
    - 128 GB solid state drive
    - 16-inch Full HD RGB-LED LCD display (1080p)
    - 3-year next-business-day support

    My configuration for the Apple MBP:

    - Core2Duo 2.8 GHz (I'm assuming the T9600)
    - 4 GB DDR3 memory
    - 128 GB solid state drive
    - standard 1-year support

    The one advantage I can think of with the MBP is the addition of OS X (though I'm unsure what I would use it for, beyond playing around with a much-boasted-about OS). What are your thoughts on this, especially regarding build quality, heat, performance and battery life? Much thanks! ~shalan

    Read the article

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment?

    I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors, not directly from the manufacturer as with a Dell PowerEdge or HP ProLiant.

    Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed; the ECC thresholds were usually the trigger for a warranty claim. Do you burn in your RAM? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?
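
    For context, the userspace tool typically used for this sort of soak test is memtester (memtest86+ from boot media being the usual offline alternative); the size and pass count below are illustrative:

        # lock 8 GiB of RAM and run 5 full passes of the pattern tests
        memtester 8G 5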

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres support using the following configure:

        ./configure \
          --build=x86_64-redhat-linux-gnu \
          --host=x86_64-redhat-linux-gnu \
          --target=x86_64-redhat-linux-gnu \
          --program-prefix= \
          --prefix=/usr/ \
          --exec-prefix=/usr/ \
          --bindir=/usr/bin/ \
          --sbindir=/usr/sbin/ \
          --sysconfdir=/etc \
          --datadir=/usr/share \
          --includedir=/usr/include/ \
          --libdir=/usr/lib64 \
          --libexecdir=/usr/libexec \
          --localstatedir=/var \
          --sharedstatedir=/usr/com \
          --mandir=/usr/share/man \
          --infodir=/usr/share/info \
          --cache-file=../config.cache \
          --with-libdir=lib64 \
          --with-config-file-path=/etc \
          --with-config-file-scan-dir=/etc/php.d \
          --with-pic \
          --disable-rpath \
          --with-pear \
          --with-pic \
          --with-bz2 \
          --with-exec-dir=/usr/bin \
          --with-freetype-dir=/usr \
          --with-png-dir=/usr \
          --with-xpm-dir=/usr \
          --enable-gd-native-ttf \
          --with-t1lib=/usr \
          --without-gdbm \
          --with-gettext \
          --without-gmp \
          --with-iconv \
          --with-jpeg-dir=/usr \
          --with-openssl \
          --with-zlib \
          --with-layout=GNU \
          --enable-exif \
          --enable-ftp \
          --enable-magic-quotes \
          --enable-sockets \
          --enable-sysvsem \
          --enable-sysvshm \
          --enable-sysvmsg \
          --with-kerberos \
          --enable-ucd-snmp-hack \
          --enable-shmop \
          --enable-calendar \
          --with-libxml-dir=/usr \
          --enable-xml \
          --with-system-tzdata \
          --with-mime-magic=/usr/share/file/magic \
          --with-apxs2=/usr/sbin/apxs \
          --with-mysql=/usr/include/mysql \
          --without-gd \
          --with-dom=/usr/include/libxml2/libxml \
          --disable-dba \
          --without-unixODBC \
          --disable-pdo \
          --enable-xmlreader \
          --enable-xmlwriter \
          --without-sqlite \
          --without-sqlite3 \
          --disable-phar \
          --enable-fileinfo \
          --enable-json \
          --without-pspell \
          --disable-wddx \
          --with-curl=/usr/include/curl \
          --enable-posix \
          --with-mcrypt \
          --enable-mbstring \
          --with-pgsql=/mnt/mv/pgsql

    I'm using Postgres 8.4.0 and Apache 2.2.8, and I have the following line in my Apache conf file:

        LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so

    When I attempt to restart Apache, I get the following error message:

        Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf:
        Cannot load /usr/lib64/httpd/modules/libphp5.so into server:
        /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid

    Now, I know this is a problem between PHP and Postgres, because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects, and if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems?
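
    A debugging sketch for this kind of undefined-symbol error: lo_import_with_oid was added to libpq in 8.4, so the module has most likely bound an older libpq at runtime. The lib path below is inferred from the --with-pgsql prefix, so treat it as an assumption:

        # which libpq does the PHP module actually resolve?
        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq

        # if it points at an older system copy, prefer the 8.4 libraries:
        echo /mnt/mv/pgsql/lib > /etc/ld.so.conf.d/pgsql84.conf
        ldconfig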

    Read the article
