Search Results

Search found 9611 results on 385 pages for 'low power'.


  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Naturally, the technology vendors make them sound very attractive. A concern I read very often in different forums is that there is a theoretical possibility of the chassis going down, which would in consequence take all the blades down, due to the shared infrastructure. My reaction to this risk would be to add redundancy and buy two chassis instead of one (very costly, of course). Some people (including HP vendors) try to convince us that the chassis is very, very unlikely to fail thanks to its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something does go down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: What is your experience? Do they go down as a whole, and which parts of the shared infrastructure are the sensible failure points? The question extends to shared storage: again I would say we need two storage units instead of only one, and again the vendors say these things are so rock solid that no failure is to be expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy, but maybe you can tell me whether you have had successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive. Thanks a lot, best regards, Christian

    Read the article

  • Is Unix not a PC Operating System?

    - by Corelgott
    I am doing my Bachelor's at a university. In a written assignment the professor posed the task: "Name 3 PC operating systems." I went on and included a variety of OSes (Linux, Windows, OS X), including Unix and Solaris. Today I received a mail from my prof saying: Unix is not a PC operating system; many Unix variants are not PC-hardware compatible (like AIX and HP-UX; as for Solaris, there was one PC-compatible version...). I am kind of surprised: even if many Unix variants run on PowerPC or use a different byte order, those machines don't stop being PCs, right? The question was given in a written assignment! It was not a question that came up during a lecture! Since the original task was in German, I'll include it just to make sure nobody suspects an error in the translation. Frage: Nennen Sie 3 PC-Betriebssysteme. Antwort: Unix ist kein PC-Betriebssystem, viele Unix-Varianten sind nicht auf PC-Hardware lauffähig (AIX, HP-UX). Von Solaris gab es mal eine PC-Variante.

    Read the article

  • Hard drive spark, can it be recovered?

    - by user163558
    Alright, so I was going to install Source Filmmaker but I didn't have any space, so I decided to connect an HDD via a USB converter (image below). I shut down the machine, turned the PSU off, and connected the drive via a Molex connector and the USB converter. I turned the PSU back on: no sparks or anything, everything normal. But when I turned on the machine, I heard some sizzling and saw sparks flying and a little flame, yet the PC kept running fine. I pressed the power button instead of pulling out the plug (I panicked), so it continued to short-circuit for about 10 seconds. There's a very small part on the HDD that turned to ash; it's near the Molex connector, and the circuit board is a little blackened there as well. I'm afraid of damaging the HDD further, so I haven't hooked it up again. Do you think it's the PSU (came bundled with the Cooler Master Elite 430, 500W) or the HDD (Samsung SP1203N)? P.S.: I've attached the HDD the same way before (about 3 months ago), and it worked. HDD burn: USB connector: Sorry for the bad image quality; the photos were taken with my phone.

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (a two-machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMware virtualization environment (three Dell R710s with an EMC NS-120 serving up NFS datastores; iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs combined with a virtualization high-availability solution (see links below). I would like to use the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMware HA will only restart the VM if it is hung/down; I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if the problem has something to do with HA/DAG failover, or will they see we have HA and tell us to put it on a physical box even if the problem has nothing to do with HA? If we disable HA for these VMs, will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs. iSCSI, so I would prefer to continue using that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html (take a look at the second image under "Not Supported") http://technet.microsoft.com/en-us/library/aa996719.aspx "Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers."

    Read the article

  • How to choose optimal RAID settings on a PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150GB Cheetah SAS drives in them. They are going to be VM hosts running ESXi, with CentOS and Windows Server 2k8 guests. Some guests will be hosting IIS servers, and others MSSQL servers. I am trying to choose the RAID virtual disk settings and can't decide which is more optimal given this situation. Read policy: the choices are Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead; the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Now I think Adaptive Read-Ahead would be a better compromise, but I don't know much about this option; how does it compare in performance to the others? Write policy: the choices are write-back caching and write-through caching; the default is write-back. Write-through caching is safer than the default write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so I should favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
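
    For what it's worth, Adaptive Read-Ahead only engages read-ahead when the controller detects sequential access, which is why it is usually suggested as the compromise for mixed workloads like this. You also don't have to commit at build time: the PERC in a 2950 can typically be reconfigured live through Dell OpenManage. A minimal sketch, assuming OMSA is installed and that the controller and virtual disk IDs are both 0 (verify yours first):

        # list controllers and virtual disks to confirm the IDs
        omreport storage controller
        omreport storage vdisk controller=0

        # switch the read policy to Adaptive Read-Ahead, keep write-back
        # (readpolicy: ra | nra | ara ; writepolicy: wb | wt)
        omconfig storage vdisk action=changepolicy controller=0 vdisk=0 readpolicy=ara writepolicy=wb

    That makes it practical to benchmark each policy against your actual workload instead of deciding on theory alone.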

    Read the article

  • Dell Studio 1555 not starting up

    - by Abhishek
    This is a 3-year-old laptop. I never had a big problem with it until now. I updated Kubuntu the night before yesterday; Firefox got updated to version 18 and a few other related packages were updated too. Then I shut down the laptop and restarted it, but it failed to start. I could hear the fan, the hard disk and the optical drive initialize, and the power button also lit up, but there was no video: no POST or BIOS screen. I even stripped the laptop down to the point where the motherboard was the only thing attached to the base cover. I took it to a technician this evening. He checked it casually and said that it might be a motherboard problem, which would cost quite a bit to fix, though he was not sure and said he would give me a call after confirming the problem. Has anyone else had the same problem? What was it, and what was the fix?

    Read the article

  • CHKDSK is unable to fix NTFS errors

    - by HackToHell
    After my PC shut down due to a power failure, I noticed several errors in Event Viewer: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume \Device\HarddiskVolume2. and The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:. So I forced a chkdsk check at startup, and it found a stream of errors. Here is the output; it is smaller than the actual log because Event Viewer only seems to keep this much, and the same lines were repeated thousands of times: Some clusters occupied by attribute of type 0x80 and instance tag 0x4 in file 0x198f2 is already in use. Deleting corrupt attribute record (128, "") from file record segment 104690. Attribute record of type 0x80 and instance tag 0x0 is cross linked. Even after running CHKDSK, the same errors were reported again, so I ran CHKDSK another time and it still loops on the same lines above without fixing anything. Can anyone tell me how to fix it?
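
    When chkdsk keeps finding the same corruption on every pass, the underlying disk is often failing, so before fighting the filesystem any further it is worth a bad-sector pass and a SMART check. A hedged sketch, assuming the affected volume is C: and that, for the SMART check, you can boot a Linux live CD where the disk appears as /dev/sda:

        :: Windows, elevated prompt - schedule a repair pass that also scans for bad sectors
        chkdsk C: /f /r

        # Linux live CD - look for reallocated/pending sectors that point at dying hardware
        smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrectable'

    Non-zero reallocated or pending sector counts would suggest copying the data off and replacing the drive rather than repairing the filesystem again.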

    Read the article

  • Trying to delete a directory with "rm -rf", but getting the message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty":

        Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/
        total 16
        drwxr-xr-x  5 benjaminhocking  staff  170 Aug 27 14:46 .
        drwxr-xr-x  3 benjaminhocking  staff  102 Aug 27 15:28 ..
        Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/
        rm: empty_directory/: Directory not empty
        Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/
        rmdir: empty_directory/: Directory not empty

    If I try the same thing using Finder (dragging the folder to the Trash), I get the message The operation can’t be completed because the item “empty_directory” is in use. I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially inside a directory that should have been deleted by a "make clean" command I issued just before Terminal locked up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key. Edit: Another important piece of information I left out was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder on the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side like I normally do, so I'll leave this unanswered for now in case anyone has a good answer for that.
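
    The Finder's "in use" message is the real clue here: some process still holds the directory (or something inside it) open, which can keep it "non-empty" from the filesystem's point of view. A quick hedged way to find the holder, assuming the directory path below:

        # list any process holding the directory, or anything under it, open
        lsof +D empty_directory/

        # or search all open files for the name
        lsof | grep empty_directory

        # once the offending PID is known, quit (or kill) it and retry
        rm -rf empty_directory/

    In this case the holder was presumably the encfs process for the encrypted mount, which is consistent with the delete succeeding on the encrypted side.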

    Read the article

  • No video signal and server shuts down

    - by Ilya
    I have a brand-new server: an Intel S2600CP4 motherboard, two 8-core Intel E5-2600 processors, 8 DIMMs of 8 GB each (KVR1600D3D4R11SK4/32GI, installed in the blue slots), and a 1050W Corsair power supply. Most of the time the server won't start up: the fans spin, but there is no video signal, and it keeps restarting on its own every 3 minutes. Occasionally, after maybe 30 minutes, it will eventually boot and show something on the screen. I was even able to install ESXi 5.0 (vSphere) on it, and it recognizes both CPUs and all 64 GB of RAM. But even then it worked only for around 5 hours and then restarted on its own. What's the problem? This is a very expensive piece of hardware and I can't afford to purchase a new motherboard/CPU. By the way, on the front panel the "System Status" LED is constantly amber (not blinking), even when the server has started successfully. Also, in the BIOS I can see lots of "processor 01 unable to apply microcode update 8160" fatal errors. Please help me with this issue, I will really appreciate it!

    Read the article

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, a document I created myself some months ago), I was greeted with a new dialog box saying: Security Alert - Office File Validation. WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer. Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file. It includes two links to some Microsoft blah-blah webpage. Obviously the document is safe, as I created it myself some months ago. How do I disable this irritating dialog box? (On a side note, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document for it to be considered dangerous, let alone more basic users of Word. Sigh...)
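
    Office File Validation can be switched off per application through Group Policy or the registry. A hedged sketch for Word 2007; the key path and the EnableOnLoad value follow Microsoft's OFV documentation, but treat them as an assumption and export the key before changing it:

        :: disable Office File Validation for Word 2007 (12.0); 2010 uses 14.0
        reg add "HKCU\Software\Policies\Microsoft\Office\12.0\Word\Security\FileValidation" /v EnableOnLoad /t REG_DWORD /d 0 /f

    The same FileValidation subkey exists under Excel and PowerPoint if those nag as well; bear in mind the check exists to catch deliberately malformed files, so disabling it trades that warning away.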

    Read the article

  • Diagnosing PCI issues

    - by dtsazza
    I'm upgrading a PC for a friend and have run into a problem after upgrading the motherboard. I've been assembling custom PCs for the best part of a decade, so I'm happy enough with the basics at the very least. The motherboard, CPU and graphics card were all updated at once; after this was done, the machine POSTs, but the PCI wireless card, as well as the PCI-E graphics card, does not seem to be recognised at all by the system. There is no trace of them anywhere in the BIOS, the POST output, or Windows. I booted into Linux and ran lspci, which also showed no sign of them. What is the best way to go about diagnosing this? Is it likely/feasible that the motherboard's PCI bus is just defective and it needs to be RMAed? Are there any other common gotchas that might cause these symptoms? For reference, the components in question are: CPU: Celeron E1400; Motherboard: Gigabyte GA-G31M-ES2L; Graphics card: TBC (a low-end card from a couple of years ago that worked flawlessly before the mobo change); PCI WNIC: Edimax 7128G. Thanks in advance for any help.
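
    A hedged diagnostic sequence for anyone in the same spot; nothing here is specific to this board, and the grep is just a convenience filter:

        # full dump: a card that is electrically present but unconfigured usually still shows here
        lspci -vv

        # kernel complaints during bus enumeration (resource conflicts, bridge errors)
        dmesg | grep -i pci

        # then the physical checks: one card at a time, a different slot, BIOS reset to
        # defaults; a known-good card that fails in both PCI slots points at the board

    If lspci shows nothing for a known-good card in either slot with defaults loaded, the RMA theory becomes the most economical explanation.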

    Read the article

  • Setting MSN or Yahoo! Messenger status to Invisible or Offline when idle for an hour

    - by Jian Lin
    Where, or how, do I set up MSN Messenger or Yahoo! Messenger to automatically switch my status to either "Invisible" or "Offline" when idle for half an hour, or an hour? I know how to set my status to "Away" or "Busy" after 10 minutes, but I can't seem to find a way to set the offline status options without manual intervention. Back story: as a software developer, I am very used to leaving the computer on for the whole day and not turning it off (for example, to check email for urgent fixes, fix the issue and push to the web server). It's not even turned off when I head to sleep, in case I find it hard to fall asleep and come back to check on the computer, or so that it's ready in the morning for me to check that everything is okay. If I'm seen as being online 24 hours a day, some people see me as weird, and their perception of my value decreases because I'm always there (hard to get = high value; always there = low value). Leaving it on makes everyone in my contacts list think I have nothing better to do all day than sit in front of the computer, even though it's my job and I do admittedly spend more time online than other people. That's why I'd like to find a way to set my status to Invisible or Offline.

    Read the article

  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer and installed Windows 7 on it. However, when I start the machine up each morning, within a few minutes the computer will semi-hang. That is, the mouse is responsive, and most of the time I can open Task Manager or a new tab in Chrome, but occasionally windows will be labelled as 'Not responding'. Then the machine will get over its problem and will be nice and quick until I turn it off. Here are my specs:

        CPU: AMD Phenom II X4 955 Black Edition (quad core, 3.2 GHz)
        RAM: 4 GB of DDR3 1300
        Motherboard: ASUS M4A785T-M (latest BIOS)
        Hard drives: 2x 1TB Western Digital Caviar Blacks in RAID-0
        OS: Windows 7 Ultimate x64
        GPU: ASUS GT240 1GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created it. I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side (compared to the rest of the computer). What I have tried: updating the RAID drivers, malware checks, Windows updates, disabling unnecessary services. CPU and disk activity appear to be low (via Resource Monitor), and there are no strange errors in the error log. Any thoughts?

    Read the article

  • Dell Dimension 8400 Startup error

    - by Michael
    Hello all, thanks first for taking the time to read this and possibly help me. I am a pretty decent computer tech, but not decent enough. I am having an issue with my computer, which is running Windows XP and, as mentioned, is a Dell Dimension 8400. As soon as I power the system up, the fan goes into hyperdrive (spins like crazy and is very loud), then the Dell startup screen comes up and the loading bar gets stuck on "BIOS Revision A00" and never loads beyond that. I have read a lot about this and think the main problem is that it cannot locate the BIOS file (which does have an updated version, I think A09). I cannot enter Safe Mode, the BIOS menu or anything. I do have the file on my other computer, and I was wondering whether I can use a USB flash drive (as I have read on other sites) to create a bootable MS-DOS disk, but I am failing at creating one... Is this possible, or is there anything else I can do? I tried removing the battery from the system for about 10 minutes while it was completely unplugged, then rebooting and going into the BIOS menu, but the same thing keeps happening. Can anyone help me? :-(
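
    For the BIOS flash you need real DOS, not XP. One hedged way to get a bootable DOS USB stick from another machine is to write a FreeDOS boot floppy image to it; whether it actually boots depends on the 8400's BIOS supporting USB boot, and the image filename and device name below are assumptions - double-check the device before writing, since dd will happily overwrite the wrong disk:

        # from a Linux machine; /dev/sdX is the USB stick - verify with dmesg after plugging it in
        dd if=fdboot.img of=/dev/sdX bs=512
        sync
        # then mount the stick and copy the Dell BIOS flash program (the A09 .exe) onto it,
        # assuming it fits in the image's small free space

    On Windows, GUI tools such as the HP USB Disk Storage Format Tool were the usual way to do the same thing in this era; either route ends with booting the stick and running the flasher from the DOS prompt.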

    Read the article

  • How to forever disable driver signature enforcement in Windows 8

    - by IneedHelp
    Is there a way to permanently disable driver signature enforcement in Windows 8? I keep seeing this solution posted on various blogs and forums:

        bcdedit -set loadoptions DISABLE_INTEGRITY_CHECKS
        bcdedit -set TESTSIGNING ON

    but it is not working, at least not like when you manually disable driver signature enforcement. To disable driver signature enforcement, I currently have to do this every time I restart my PC:

    1. From the Metro Start Screen, open Settings (move your mouse to the bottom-right corner of the screen, wait for the pop-out bar to appear, then click the Gear icon).
    2. Click 'More PC Settings'.
    3. Click 'General'.
    4. Scroll down and click 'Restart now' under 'Advanced startup'.
    5. Click 'Troubleshoot'.
    6. Click 'Advanced Options'.
    7. Click 'Windows Startup Settings'.
    8. Click Restart.

    Is there at least a way to create a shortcut or something? F8 and Shift+F8 don't work any more at boot. These MS developer freaks really want everything their way only. What's worse is that when I power up my PC, I first have to start normally and then restart with driver signature enforcement disabled, because there is no way to tell Windows 8 to show the troubleshoot screen when the PC is powered up.
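
    One hedged observation: the commonly-posted commands use the wrong switch character; bcdedit options take a forward slash. The documented form would be the following, with the caveat that Windows 8 refuses test-signing mode while Secure Boot is enabled, which may be the real blocker on UEFI machines:

        :: elevated command prompt; both take effect at the next boot
        bcdedit /set loadoptions DISABLE_INTEGRITY_CHECKS
        bcdedit /set testsigning on

        :: if the second command complains the value is protected by Secure Boot policy,
        :: disable Secure Boot in the firmware setup first, then retry

    This is a sketch of the usual approach, not a guarantee: even with both set, 64-bit Windows still requires drivers to carry at least a test signature.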

    Read the article

  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:

        Host OS: Windows 7 Professional 64-bit
        Guest OS: Ubuntu 10.10 32-bit
        Processor: i7, QM55 chipset
        SSD: Intel 320 Series 160 GB, 30% full
        HDD: Hitachi 320 GB, 50% full (in the side bay using an adapter)
        Laptop: Lenovo T510
        Using: shared folders
        Apache version: 2.2.16
        PHP version: 5.3.3-1
        APC version: 3.1.3p1
        APC memory: 128M
        Using tmpfs for the cache, log and session directories in Magento

    In the VM running on the SSD (VM files and source files are on the same drive), loading a product page in the Admin takes on average 26.2 seconds and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes on average 4.4 seconds and mostly uses around 40-50% of the CPU while rendering the page. I have read this post: Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)? It says to change some settings in the BIOS. I have turned any and all power management features off in the BIOS. I can't for the life of me understand why this would be happening.
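
    A hedged thing to try before blaming the SSD itself: Magento stats thousands of PHP files per request, and when the source lives on a VM shared folder every stat is a slow host round-trip that burns CPU in the guest. Telling APC and PHP to stop re-checking files often helps dramatically; the numbers below are assumptions to tune, not recommendations:

        ; php.ini - only safe when you restart PHP after editing code,
        ; because APC will no longer notice changed files on its own
        apc.stat = 0

        ; cache resolved paths so repeated realpath() lookups skip the shared folder
        realpath_cache_size = 256k
        realpath_cache_ttl = 300

    If the gap between SSD and HDD configurations closes after this, the culprit was the shared-folder stat traffic rather than the drive, and moving the source tree into the guest's native filesystem would be the structural fix.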

    Read the article

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed some weird behavior (not sure if it was happening before). With ufw enabled (default deny all incoming, allow all outgoing, allow all HTTP, allow all on a random port I use for SSH), when I perform certain actions in an SSH session, the SSH console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian. Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it's related (the freeze would happen with ufw disabled too). As requested, my ufw configuration is below. Also, I don't know if it's relevant, but the server has 2 IPs: the SSH domain is configured on one, and the other serves HTTP (via Apache2).

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To            Action      From
        --            ------      ----
        19922/tcp     ALLOW IN    Anywhere
        9418/tcp      ALLOW IN    Anywhere
        80/tcp        ALLOW IN    Anywhere
        443/tcp       ALLOW IN    Anywhere
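
    A hedged guess based on the symptom pattern (interactive keystrokes work, but commands that produce a burst of output hang): this smells like a path-MTU blackhole that only bites when the firewall is up and large packets start flowing. If that is what it is, clamping the MSS on TCP connections is a low-risk experiment; the rule below is only a sketch and can be tried directly with iptables before making it permanent in /etc/ufw/before.rules:

        # clamp TCP MSS to the discovered path MTU (assumption: PMTU discovery is broken somewhere)
        iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

    If the freezes stop with the rule in place, the firewall (or something upstream) was dropping the ICMP fragmentation-needed messages that PMTU discovery depends on.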

    Read the article

  • OC'ing a Sapphire 7950, problems with memory clock

    - by Cliff
    I bought a new rig with a Sapphire Radeon 7950 Flex with Boost. Good scores initially in 3DMark 11, around 7300 stock. The problem is that as soon as I start overclocking, issues arise. The core clock is 860 MHz, but rises to 925 MHz when boost kicks in under load, so all the OC tools show 925 as the base clock for some reason. Overclocking the core clock has been no problem, though: I got up to 1200 pretty stable, but saw only an incremental increase in 3DMark 11, to 7600 from 7300, which is worrying. The real trouble starts when I touch the memory clock. As soon as I raise it even by 1 point, to 1251 MHz, performance goes way down: suddenly I score under 5000 graphics in 3DMark 11, no matter whether it's at 1251 MHz or 1500 MHz. I've tried adjusting every other parameter and using different tools (Sapphire TriXX, Catalyst, Afterburner), all with the same result. I've tried upping the power limit, still the same. Where is the issue here?

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application hosted on AWS, with several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes mounted, which we've combined with mdadm into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, this would render our site unusable for the hours while Amazon is working on it, and we want to prevent that downtime. I was thinking we could avoid it by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works, thus having it mounted on two instances at the same time; but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
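
    One hedged note on the test-on-both idea: a non-clustered filesystem mounted on two machines at once will corrupt, so the usual sequence is stop cleanly, move the volumes, reassemble. The mdadm metadata lives on the EBS volumes themselves, which is what makes the move safe. A sketch with placeholder volume/instance IDs and paths (all assumptions) using the EC2 API tools of the era:

        ## on the old NFS server: quiesce writers, then stop the array cleanly
        umount /export/media
        mdadm --stop /dev/md0

        ## detach each member volume and attach it to the new instance
        ## (repeat per volume; vol-/i- IDs and device names are placeholders)
        ec2-detach-volume vol-11111111
        ec2-attach-volume vol-11111111 -i i-22222222 -d /dev/sdf

        ## on the new instance: the superblocks identify the members, so assembly is automatic
        mdadm --assemble --scan
        mount /dev/md0 /export/media

    Since the array is stopped cleanly before the move, reassembly should not trigger a resync; the downtime window is just the detach/attach cycle rather than Amazon's whole maintenance period.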

    Read the article

  • Troubleshooting: Monitor never turns on, system fans running, DVD-ROM does not open.

    - by Wesley
    Hi all, here are my specs beforehand:

        ECS P4VXASD2+ (V5.0) motherboard, FSB 533 MHz
        Intel Pentium 4 2.40A GHz Prescott, Socket 478
        2x 256 MB PC2100 DDR RAM, 2x 256 MB PC133 SDRAM
        CoolMax 350W PSU
        DVD-ROM - will edit with brand & model
        128 MB ATI Radeon 9800 Pro AGP
        No hard drive

    So, I just put these parts together today and tried to power the machine up, with the monitor connected to the Radeon 9800 in the AGP slot (the mobo does not have a VGA port). After turning it on, the CPU fan, graphics fan and system fan all spin up, but the monitor remains in standby mode despite being plugged in. Also, pushing the button on the DVD-ROM drive does not open it, even though I've used that drive before with absolutely no issues. The graphics card was slightly buggy in another machine, which had been left outside in winter weather for 3 months (still, that computer's integrated graphics worked fine). The CMOS battery was replaced and the jumpers are all set correctly. Now I'm wondering whether the motherboard, CPU, PSU or GPU is the problem. What can I do to test which part is at fault? Just to clarify, I don't have a hard drive, so I usually boot Ubuntu from the disc drive. Anyways, thanks in advance!

    Read the article

  • Have it fixed or buy a new one?

    - by Workshop Alex
    My dual-monitor system has just become a single-monitor system again, now that the older monitor has decided it would be nice to just turn black. It's a Samsung LCD monitor, over three years old. I'm not sure if the warranty is still valid, but I wonder which option would be more efficient: 1) have the monitor fixed for a small amount, or 2) buy a new monitor for a slightly bigger amount. When monitors were still expensive I wouldn't have hesitated and would just have had the monitor repaired. But prices are so low nowadays (and repairs are expensive) that I wonder if it's worth the trouble. Of course, I'm in no hurry, since I still have the other monitor; it's just that I liked the dual-monitor setup. Solved! I just ordered a new monitor, a Samsung SyncMaster T260HD 25.5". That's much more than a repair of the old one would have cost, but I noticed that this one has a built-in TV tuner plus speakers, so it's worth the additional value it provides.

    Read the article

  • Weird PCI bug: lots of missed packets, or data comes in "bursts"

    - by Thomas O
    I have an ABIT KN9 motherboard with one PCI-e x16 slot, three PCI-e x4 slots and two legacy PCI slots. My problem is with the legacy PCI (which I shall just call "PCI"). I currently have an Nvidia GeForce 8600 GT (a low-end card) installed in the x16 slot and a TV card in PCI #1; the x4 slots are unused, as is PCI #2. I plan to upgrade the graphics card soon; the current card was a spare. I sometimes install a USB expander in PCI #2, but it causes a lot of problems - see below. The problem occurs under Linux (Ubuntu 10.10, kernel 2.6.35-22-generic), but probably under all operating systems; I have not yet been able to test Windows, but I suspect it will do the same, as the problems occur on the BIOS/POST side too - e.g. a USB keyboard on the expander will not work at all. The PCI bus shows an enormous delay, and packets arrive in large chunks. For example, when using the USB expander, my USB mouse lags and then jumps in large steps every second or so, while using the motherboard USB does not show this problem. My TV card will only do one or two frames per second, and the program (xawtv) usually times out and crashes. In dmesg I'm getting messages like: bttv0: timeout: drop=74, irq=154/100476, risc=31f6256c, bits: VSYNC HSYNC OFLOW RISCI for my TV card, and similar timeout issues for my USB expander with a mouse. I received the motherboard, processor and RAM second-hand and have only just got around to building the system, so I don't know if this problem has always existed or if it's a result of my setup. If anyone has any hints or solutions it would be appreciated - this is kind of a show-stopper for me.
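
    One hedged experiment before replacing hardware: burst-then-stall behaviour on the legacy bus is occasionally a PCI latency-timer problem, where one device hogs the bus before yielding. The timer can be inspected and raised from Linux; the slot address below is a placeholder that must be taken from your own lspci output, and whether this helps at all on this board is an open question:

        # show current latency timers alongside each device
        lspci -vv | grep -iE '^[0-9a-f]|latency'

        # raise the latency timer for the TV card (slot address from lspci; value is in hex)
        setpci -s 03:06.0 latency_timer=40

    The change does not survive a reboot, which makes it a cheap test: if the bursts smooth out, the fix can be made permanent in a boot script; if nothing changes, a flaky board or a BIOS/chipset issue remains the leading theory.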

    Read the article

  • Can I trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up. Is it possible to set up Carbonite to back up an external hard drive? I just want the external drive to be extra disk space. From their FAQ: The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.
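
    People have reported working around this kind of restriction with an NTFS directory junction, so the backup agent sees the external drive's folders under a local path; on XP the Sysinternals junction tool creates them. Whether Carbonite actually follows junctions (or whether doing so stays within its terms of service) is not guaranteed - treat this purely as a hedged experiment, with placeholder paths:

        :: map a folder on the external drive (E:) to a path on the local C: drive
        junction.exe C:\ExternalData E:\Data

        :: remove the junction later without touching the target's contents
        junction.exe -d C:\ExternalData

    If Carbonite still skips the junctioned folder, the FAQ's suggestion (copy the files onto the local drive) is the only supported route.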

    Read the article

  • Splitting a build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across machines on the network? Use case: we are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID), and needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any time. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: whenever we build 5 projects in a row, the last project is ready only after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new, expensive hardware, and we already spent a lot!" Yeah, right (damn them)! Anyway, what about splitting the build among developer workstations? Let's say that whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
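
    If the projects compile with a gcc-style toolchain, distcc does exactly this kind of scavenging of idle workstation CPU, and developers keep priority on their own machines because the helper daemon runs at low priority. A minimal sketch with placeholder hostnames and subnet (for an MSBuild/.NET shop the equivalent would be a commercial tool along the lines of IncrediBuild):

        # on each donated workstation: run the daemon niced, only accepting the office LAN
        distccd --daemon --nice 19 --allow 192.168.0.0/24

        # on the build server: list the helpers, then fan the compile out
        export DISTCC_HOSTS='localhost ws01 ws02 ws03 ws04'
        make -j16 CC='distcc gcc' CXX='distcc g++'

    Only the compile step distributes; preprocessing, linking, deployment and tests still run on the build server, which matches the observation that the build is "only part of the game".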

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance the load across the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor the average response time, and as soon as the response time from Tomcat rises above some threshold, I would like an error page displayed. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I'm looking for suggestions on how I could achieve this. Thanks!
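
    Apache can't watch a rolling average response time out of the box, but the behaviour described can be approximated with a hard per-request timeout on each balancer member: requests Tomcat can't answer within the threshold fail fast, mod_proxy takes the member out of rotation for a cooldown, and a friendly 503 page goes back instead of a hang. A hedged sketch; hostnames, paths and the numbers are placeholders to tune:

        # exclusions must come before the general mapping
        ProxyPass /overloaded.html !
        ProxyPass / balancer://tomcats/

        <Proxy balancer://tomcats>
            # give up on a member after 10 s; a failed member is retried after 60 s
            BalancerMember ajp://tomcat1:8009 timeout=10 retry=60
            BalancerMember ajp://tomcat2:8009 timeout=10 retry=60
        </Proxy>

        # serve a lightweight static apology page instead of the bare default 503
        ErrorDocument 503 /overloaded.html

    The trade-off in this design is that a slow-but-successful request gets cut off at the timeout, so the threshold should sit comfortably above the normal worst-case render time; keeping the error page static means serving it costs Apache almost nothing even under the tenfold load.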

    Read the article
