Search Results

Search found 472 results on 19 pages for 'xeon'.

Page 14/19 | < Previous Page | 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • ASUS P8B WS - Endless Reboots

    - by tuxGurl
    I am running an Intel Xeon 1245 with 2x4GB Kingston ECC unbuffered DDR3 on an ASUS P8B WS motherboard, BIOS version 0904 x64. This system is a little over a month old and is running Ubuntu 11.10. This evening I found the machine turned off. When I tried to restart it, it would POST and stop at the GRUB screen. When I selected Ubuntu and hit Enter, within 2-3 seconds the machine would shut down and restart. If I stayed at the GRUB screen and did nothing, the system would not cut out. I tried booting off a USB stick, and again 2-3 seconds after selecting 'Try Ubuntu without Installing' the machine cut power and rebooted. Things I have tried so far: resetting the BIOS using the onboard jumper; resetting the BIOS settings to default; disconnecting all external hardware except keyboard and monitor; booting with one stick of RAM (I tried different single sticks); ensuring the onboard EPU and GPU Boost switches are in the off position. I am running Memtest86 right now and it has been going for 38+ minutes. This is not an OS problem or an overheating issue (I have a Cooler Master HAF case with 3 fans besides the CPU fan). I am at a loss as to what to try next. I think the BIOS is misconfigured somehow, but I don't know what to look for.

    Read the article

  • VMware vSphere Hypervisor 5 with Intel S5000PSL in RAID 0 - no boot from DVD?

    - by Richard
    I hope this is the correct Stack Exchange site, since I only use Stack Overflow for web development, but I need some help with my server configuration. I would like to install VMware vSphere Hypervisor 5 on my server here at home and run a few virtual machines on it, such as Windows Server 2008 and Red Hat. I used to have either openSUSE or Windows Server 2008 installed, but I would like to get into VMware Hypervisor. My hardware configuration: Intel S5000PSL with BIOS version S5000.86B.10.60.0091, build date 10/09/2008, as read out of the BIOS; Intel Xeon E5420 CPU @ 2.5GHz (Intel Virtualization Technology is enabled in the BIOS); DH20A4P DVD writer; 8GB ECC RAM. I have configured a RAID 0 on my 2 WD 2TB SATA drives. I have burned Hypervisor 5 to an empty DVD and it is bootable; I tested it on my client PC. The main problem is that I cannot boot the DVD on my server. I have set the boot option to the DVD drive. I have booted from the BIOS straight into the DVD drive and it does not work. I do not see any error messages. The only thing I see are the PXE error messages when it tries booting from the network and other devices, obviously without any result. Does anybody know why I cannot boot the DVD? What could cause the problem? I successfully installed Windows Server 2008 from the original DVD about a year ago, so the DVD drive can read and does work. The DVD drive is available in the BIOS and I have checked all cables; none of them is loose in any way. I even see the light flashing, but it does not want to boot from the DVD. I am looking forward to suggestions and things that I should check. Thank you very much.

    Read the article

  • High Apache CPU usage, but low nginx - Configured correctly?

    - by Buckers
    We've just moved a website of ours over to a brand new high-spec Linux server (1x Intel Xeon E3-1230 v2 @ 3.30GHz, 8GB DDR3 ECC, 2x 128GB SATA SSD in RAID1). The server has been configured to use nginx, but we're not sure if it's working correctly. The site always loads very fast for us (http://www.onedirection.net), but Plesk often sends us reports that the Apache CPU usage percentage reaches high levels, yet when we look at the nginx percentage it's always very low. We come from a Windows background and are very new to Linux, but shouldn't nginx run INSTEAD of Apache? Here's a screenshot from Plesk showing the CPU usage: http://www.pixelkicks.co.uk/_download/plesk.JPG The website gets around 20,000 visitors per day, and we use W3 Total Cache to get it running as fast as possible. MySQL has been optimised well. Memory usage is only running at 2GB of the 8GB. Does this look right? How can we tell that nginx is doing most of the work? Thanks, Chris.
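
    For context on the Apache-vs-nginx split: Plesk's usual arrangement runs nginx in front of Apache as a reverse proxy rather than as a replacement, so nginx serves static files cheaply while PHP requests are passed through to Apache, which is why Apache still shows the CPU time. A minimal sketch of that arrangement (the backend port 7080 and the docroot path are assumptions; Plesk generates its own equivalent config):

        server {
            listen 80;
            server_name www.onedirection.net;

            # static assets served directly by nginx
            location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
                root /var/www/vhosts/onedirection.net/httpdocs;
                expires 30d;
            }

            # dynamic requests (PHP) proxied through to Apache
            location / {
                proxy_pass http://127.0.0.1:7080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    Under that setup, low nginx CPU alongside high Apache CPU is expected whenever most requests hit PHP rather than static files.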

    Read the article

  • Send mail on event log error trigger - safe check frequency

    - by Zeb Rawnsley
    I want to use PowerShell to alert me when an error occurs in the event viewer on my new Windows Server 2012 Standard server. I was thinking I could have the script execute every 10 minutes, but I don't want to put any strain on the server just for event log checking. Here is the PowerShell script I want to use:

        $SystemErrors = Get-EventLog System | Where-Object { $_.EntryType -eq "Error" }
        If ($SystemErrors.Length -gt 0) {
            Send-MailMessage -To "[email protected]" -From ($env:COMPUTERNAME + "@company.co.nz") -Subject ($env:COMPUTERNAME + " System Errors") -SmtpServer "smtp.company.co.nz" -Priority High
        }

    What is a safe frequency I can run this script at without hurting my server? Hardware: Intel Xeon E5410 @ 2.33GHz x2, 32GB RAM, 3x 7200RPM SATA 1TB (2x RAID1). Edit: With the help of Mathias R. Jessen's answer, I ended up attaching an event to the application and system logs with the following script:

        Param(
            [string]$LogName
        )
        $ComputerName = $env:COMPUTERNAME;
        $To = "[email protected]";
        $From = $ComputerName + "@company.co.nz";
        $Subject = $ComputerName + " " + $LogName + " Error";
        $SmtpServer = "smtp.company.co.nz";
        $AppErrorEvent = Get-EventLog $LogName -Newest 1 | Where-Object { $_.EntryType -eq "Error" };
        If ($AppErrorEvent.Length -eq 1) {
            $AppErrorEventString = $AppErrorEvent | Format-List | Out-String;
            Send-MailMessage -To $To -From $From -Subject $Subject -Body $AppErrorEventString -SmtpServer $SmtpServer -Priority High;
        };
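
    A note on the polling cost: as written, the first script reads the entire System log on every run and filters afterwards, so it slows down as the log grows. A hedged sketch of a cheaper variant that only fetches errors from the last interval, plus a scheduled-task registration (the script path and the 10-minute interval are assumptions):

        # fetch only errors newer than the polling window, instead of scanning the whole log
        $Cutoff = (Get-Date).AddMinutes(-10)
        $Errors = Get-EventLog System -EntryType Error -After $Cutoff -ErrorAction SilentlyContinue
        If ($Errors) {
            $Body = $Errors | Format-List | Out-String
            Send-MailMessage -To "[email protected]" -From ($env:COMPUTERNAME + "@company.co.nz") -Subject ($env:COMPUTERNAME + " System Errors") -Body $Body -SmtpServer "smtp.company.co.nz" -Priority High
        }

        # register it to run every 10 minutes (run from an elevated prompt)
        schtasks /Create /TN "EventLogCheck" /TR "powershell.exe -NoProfile -File C:\Scripts\Check-EventLog.ps1" /SC MINUTE /MO 10

    With the query bounded like this, the per-run cost is negligible even at a 10-minute frequency.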

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they run dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. Where are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain. Dell PowerEdge T310: Xeon X3460 2.80 GHz, 32 GB RAM, 1 HD (2 TB). I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices for video RAM issues and some hints about Windows activation issues, but nothing that would point to the sort of sluggishness that I'm experiencing. 1x Windows Server 2008 R2 (64-bit) - 4 GB RAM; 1x Windows 7 (32-bit) - 2 GB RAM; 1x Vista (32-bit) - 1 GB RAM; 3x XP (32-bit) - 1 GB RAM. Over to you! Thanks - Shawn
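
    One area the Resource Allocation tab will not show is storage latency, and six busy VMs sharing a single 2 TB disk is a classic cause of exactly this kind of sluggishness. A hedged sketch of checking it with esxtop from the ESXi console or an SSH session (the thresholds are rules of thumb, not hard limits):

        esxtop
        # press 'd' for the disk adapter view and watch:
        #   DAVG/cmd - average device latency per command, in ms
        #   KAVG/cmd - time spent in the VMkernel, in ms
        # sustained DAVG in the tens of milliseconds suggests the single
        # spindle is saturated and the VMs are queueing on disk I/O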

    Read the article

  • Dell PowerEdge 1950 with PERC 5i keeps losing RAID config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950 with dual quad-core Xeons, 8GB of RAM, and 2x 2TB Seagate SATA drives (supposed to be RAID 1) on a PERC 5i RAID card. They are hot-swappable with a backplane. I can build the RAID fine, and after a little while an install of Server 2008 R2 will blue screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config" and the OS will boot up fine, until it blue screens again after a little while. The issue is OS-independent. I have tried swapping RAID cards, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, but it sees the disks fine when it boots back up. The RAID card uses a SAS cable to connect to the backplane, so I guess the next step is to replace that, but then I might as well replace the backplane with a SAS-to-SATA breakout cable, and then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with PERC RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!

    Read the article

  • Apache2 - 500 internal server error

    - by Lucio Coire Galibone
    I'm running a VPS with CentOS 6, 4 GB of RAM, 10 GB of HD and 2 virtual CPUs (Intel(R) Xeon(R) CPU L5640 @ 2.27GHz). As my host says, each virtual CPU must be at least 0.5 physical CPU. At certain times of the day, those with more traffic, when accessing my PHP script I intermittently receive "500 Internal Server Error". I activated debug-level logging in Apache, and also PHP logging with E_ALL, but I can't find any reference to the 500 error in any logs (I checked the right logs!). I haven't got any .htaccess file in the script path. The strange thing is that the error starts at the first PHP line in the script (the preceding HTML displays correctly, but at the first PHP line the script sends a 500 error). The CPU load is always good (max 0.15 0.08 0.01) and RAM is close to 95%, but it has hit swap only twice in a month, by 2-5 MB. Apache works with prefork with these values:

        <IfModule prefork.c>
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 280
        MaxClients 280
        MaxRequestsPerChild 4000
        </IfModule>

    Everything works correctly and I don't get any errors in quiet times, but I start receiving errors when traffic rises (6,000-9,000 visits per hour). Can I solve the problem by increasing resources? (I can upgrade RAM up to 16 GB.) Could it depend on reaching MaxClients (but Apache would log that, right?)? If I upgrade RAM to 6 or 8 GB, do I calculate the MaxClients value with this? MaxClients = total RAM dedicated to the web server / max child process size. Max child process size is around 20MB. What else could the problem be? Thanks in advance.
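
    For what it's worth, the formula quoted in the question works out as follows for the planned 8 GB upgrade (a sketch; the 2 GB reserved for the OS, MySQL, and filesystem cache is an assumption, and the 20MB child size should be re-measured under load):

        # MaxClients = RAM dedicated to Apache / max child process size
        echo $(( (8192 - 2048) / 20 ))   # => 307, so MaxClients around 300

    With the current 4 GB and the same assumptions, that works out to roughly (4096 - 2048) / 20 = 102, so the configured MaxClients of 280 could overcommit memory at peak, which is consistent with errors appearing only when traffic rises.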

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice. Purpose: I have about 2 dozen VM nodes with different configurations, on 2 physical nodes, each with the following configuration: 16-core Intel Xeon, 1 TB HDD, 48 GB RAM. As I said, each physical server has about 1TB of disk (and I can add more if I want), so for now I have 2TB of disk space available; this space is distributed among the roughly 2 dozen VM nodes I have created. Some of them, being application and management servers, have plenty of free disk space, which I want to utilize for some heavy storage that I cannot build on any single VM node individually. This way, with my storage distributed between dozens of VM nodes and 2 or more physical nodes, I also have some sort of backup. I do not mind if data gets stored redundantly, but to my knowledge it could happen that individual VM nodes will not be able to store all of the data: a complete data size of, for example, 100GB would exceed a VM disk size of 70GB, and the VM will also have system and program files on it. I need some suggestions: is GlusterFS the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to utilize the free space on each VM node, and if data gets stored redundantly in the process, I am okay with that, because it gives me data security.
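
    For reference, the GlusterFS pattern that matches this layout is a replicated (or distributed-replicated) volume built from a brick on each VM that has spare disk, so no single VM needs to hold the whole dataset. A minimal sketch, assuming made-up hostnames (node1, node2), brick path, and volume name:

        # on each storage VM (Ubuntu 12.04): install the server and make a brick dir
        apt-get install glusterfs-server
        mkdir -p /export/brick1

        # from one node: join the peers, then create a 2-way replicated volume
        gluster peer probe node2
        gluster volume create gv0 replica 2 node1:/export/brick1 node2:/export/brick1
        gluster volume start gv0

        # on any client VM: mount the volume
        mount -t glusterfs node1:/gv0 /mnt/shared

    A distributed-replicated layout (more bricks, still replica 2) spreads files across bricks while keeping two copies of each, which matches the "redundant is fine" requirement; Hadoop/HDFS is aimed at batch analytics rather than a general-purpose mountable filesystem.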

    Read the article

  • Does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host the web server and the MySQL database. As the website grows, we'll move the database to a different server and this machine will eventually serve only the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit or something - things like whether the drives I picked will work in the hot-swap bays, etc. What do you guys think? Am I good to go with this configuration? :) Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM ECC DIMM 240-pin, 2x LGA1366 sockets, 600 watt power supply, 4 (free) x hot-swap 3.5" bays). CPU: Intel BX80614E5620 Xeon E5620 processor - 4 cores, 2.40GHz, LGA 1366, 5.86GT/s QPI, 12MB cache, 64-bit, 80W, Hyper-Threading. Memory: Crucial CT51272BB1339 4GB PC10600 DDR3 - 1333MHz, ECC, registered, 1x4096MB (possibly 3 or 4 of them). Hard drives: Western Digital WD2002FAEX Caviar Black - 2TB, 3.5", SATA 6Gbps, 7200 RPM, 64MB cache (possibly 2 or 3). Thank you very much for any professional advice :)

    Read the article

  • SYS-5016T-MTFB will not POST without manual assistance (Motherboard: X8STi-F)

    - by Dan
    I have a Supermicro 5016T-MTFB 1U server which I am in the process of setting up, but it has a really strange problem. When the system is powered on it will not POST until I press the reset button a few times, followed by pressing the Delete key on the keyboard to "wake it up". If I power it on and do nothing, the fans spin up but nothing else happens at all. After pressing the reset button once, the red "overheat" light comes on and blinks, which is supposed to indicate a fan failure - but all the fans are working. Pressing reset again usually stops the blinking, and the system starts the normal POST routine, but it will not actually get to the BIOS screen unless I press Delete. If I don't press Delete, it just continues to hang. After pressing Delete it will take me into the BIOS setup screen; if I exit without saving changes, I can boot the system normally. I was able to successfully install Linux with no trouble... but upon rebooting, the same problem happened again. This board has integrated IPMI, which I thought was the problem, so I disabled it via the jumper on the board. That did not help. Each time this system powers on, it runs for a second, turns off for another second, then turns back on again; I don't know why it does that. Here is what I put in the system: 1x Xeon E5630 (Nehalem) 80W TDP (it's not overheating - CPU temps stay under 40 degrees C); 2x Kingston 2GB x 3 DDR3-1066 ECC, unbuffered, unregistered (KVR1066D3E7SK3/6G); 1x Intel X25-M 160 GB; 2x Western Digital RE3 1TB.

    Read the article

  • Issue running 32-bit executable on 64-bit Windows

    - by David Murdoch
    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server the following errors are displayed (running from cmd.exe):

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()       ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()       ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        ...etc....

    ...and the PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, p's, etc. are rendered accurately... just void of any text at all. Server information: Windows edition: Windows Server Standard, Service Pack 2; Processor: Intel Xeon E5410 @ 2.33GHz; Memory: 8.00 GB; System type: 64-bit operating system. Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags or a better title, comment them or edit the question. :-)

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "do this first" posts, as we're not planning on skimping on things like SSDs, maxed-out RAM, etc. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck even if everything else were made faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this, however every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast (hopefully).
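
    For anyone sizing the hosts: distcc throughput scales with the total remote core count, and the -j value passed to make is what keeps those cores fed. A minimal sketch of the wiring (hostnames, subnet, and core counts are placeholders):

        # on each build host: run the distcc daemon, allowing the build LAN
        distccd --daemon --allow 192.168.1.0/24

        # on the driving machine: list hosts with their core limits,
        # then overshoot -j (a common rule of thumb is ~2x total cores)
        export DISTCC_HOSTS="localhost node1/8 node2/8"
        make -j32 CC=distcc

    Since plain distcc still preprocesses and links on the driving machine, compilation parallelises well until that machine becomes the bottleneck, so raw core count per dollar tends to matter more than i7-vs-Xeon branding at the same clock.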

    Read the article

  • Looking for a short term solution to improve website performance with additional server

    - by Tanim Mirza
    I am working with a small team to run an internal website on PHP 5.3.9 and MySQL 5.0.77. All the files and the database are hosted on a dedicated Linux machine with the following configuration: Intel Xeon E5450, 8 CPU cores @ 3.00GHz (2992.498 MHz, 6148 KB cache), CentOS - Red Hat Enterprise Linux Server release 5.4. We started small, then the database got bigger, and now the website's performance has degraded significantly. We often get server space overruns, MySQL overloaded with too many calls, etc. We don't have much experience dealing with these issues. We recently got another server that we are thinking of using to improve performance. Since it has a better configuration, some of us wanted to move everything to the new machine completely, but I am trying to find out how we can utilize both machines for optimal performance. I found options such as MySQL clustering, load balancing, etc. Any suggestions for this situation - how to utilize two machines in the short term for the best performance - would be great. By short term we mean something we can deploy in a month or so. Thanks in advance for your time.

    Read the article

  • Computer Comparison - which is "better"

    - by David Murdoch
    A company I work with recently replaced their old server and gave it to me. Their old server is a Dell PowerEdge 2600. I've been playing with the machine and even installed Windows Server 2008 on it... and it seems to run it pretty well. Here are the specs for the two machines. Dev machine: AMD Athlon64 3000+ @ 2.38 GHz (overclocked from 1.8GHz [@ 280x8.5] - it is stable-ish); memory (RAM): 1x1GB OCZ PC3200 (dual-channel); 300GB HD; OS: Windows XP Pro (32-bit); SuperPi 1M digit test: 40 seconds. Dell PowerEdge 2600 server: Intel Xeon CPU @ 2.8GHz; memory (RAM): 2x512MB (PC2700, not dual-channel); 68GB HD (RAID 5); OS: Windows Server 2000 (32-bit); SuperPi 1M digit test: 56 seconds [using 1 processor] (themes and Aero Glass UI turned off, of course). I use my computer to regularly run Photoshop CS5, Illustrator CS5, Flash CS5, 5 browsers (Chrome, FF, IE, Safari, Opera), iTunes, Visual Studio 2010, and Kaspersky Internet Security 2010 [sometimes simultaneously :-) ]. The SuperPi test has my dev machine coming in about 30% faster than the server machine, though this could be due to the server running "Vista" with background processes prioritized. Do you think it would be realistic/advantageous for me to move from my dev machine to the Dell PowerEdge 2600? Is it possible to install additional DVD drives/burners on the server? Can I install my internal 300 GB hard drive in the server? Can I add some USB 2.0 ports? Note: I'll probably install Win XP Pro on the dev machine if I do switch. If not, is there any creative and useful way for me to take advantage of this server (with the goal of faster computing)?

    Read the article

  • Curious: What makes some CPUs better than others? [closed]

    - by Zizma
    I have been wondering about this for a long while now and was hoping someone here could answer it pretty easily. If I were looking for the most powerful CPU, what should I really be looking at? There are so many different parameters to a CPU, and I want to know what each one does and what really matters. Basically this: What is the deal with cores? Taking optimized applications out of the mix, would it theoretically be better to get a quad-core 1.0GHz CPU or a single-core 4GHz CPU? Also, what is the difference between, say, a Sandy Bridge CPU and an Ivy Bridge CPU? If they both had the same clock speed and number of cores, would the Ivy Bridge perform better? Does an older Xeon with a clock speed and core count equal to a new i7 really perform worse/slower? Does process size matter - why would I go with a 22nm CPU over a 32nm one when the size difference seems so trivial? What about the cache - when does cache come into play for performance?

    Read the article

  • MySQL NDB Cluster - node restart

    - by Arafat
    Hi guys! I just set up a MySQL cluster on a fairly decent baby (IBM x3650 M3) with 24GB memory, a 6-core Xeon, and 6Gbps SAS HDDs, running Debian Lenny 5, 64-bit. The NDB version is 7.1.9a. Our database size on MyISAM is around 3.2 GB; the ndb_size estimate is 58GB for the NDB engine. A little info about my database: 150 common tables for global purposes, plus 130 tables for each client. So it goes like this: 130 x 115 (clients) = 14,950 tables. Is it normal or usual to have 14,000 tables in one database? The reasons we did this were easy maintenance and per-client customization. Now, the problem: NDB Cluster can only support 20,320 tables, but it can support 5,000,000,000 rows in one table, if I'm not wrong. My real headache is that my cluster data node takes less than two minutes to start up without any data, but as soon as I convert my tables to NDB - and only 2,000 tables at that - the data node takes at least 30 to 40 minutes to start up. Is that normal? If I convert all my tables to NDB, will it take even longer? Or, say I consolidate my 14,000 tables' worth of data into the shared set of 130 tables - will that help? Or is there anything idiotically wrong in what I'm doing? I'll attach my config.ini file soon; here's the simple overview of my config:

        DataMemory = 14G
        IndexMemory = 3GB
        MaxNoOfTables = 14000
        MaxNoOfAttributes = 78000

    I'm just testing these values with 2,000 tables first. Please advise on how to increase the startup speed, and please point out where I'm going wrong. Thanks in advance, guys!

    Read the article

  • EFI vs MBR - Installing Windows Server 2008 R2 or 2012 on 8TB

    - by Riaan de Lange
    I'm having some difficulty installing Windows Server 2008 R2 and Windows Server 2012 on an Intel server platform. The server specs are as follows: Intel Grizzly Pass server system - R2308GZ4GC; 2x Intel Xeon E5-2620 - 2.0 GHz - BX80621E52620; 132 GB of registered DIMM memory - TS1GKR72V6H; 4x Seagate Constellation ES 2TB 3.5" 7200rpm 6Gb/s - ST32000645NS; Intel Big Laurel 4-channel 6G SAS RAID with 512MB - RS2BL040. In the Intel RAID controller setup I have put the HDDs in RAID 0 - for testing purposes (ultimately it will be RAID 5) - so the total usable HDD space is about 7.6 TB. When I install the server OSes, they don't seem to go beyond 2 TB (1.76 TB). I have read up on EFI and UEFI boot, and this seems to work in 2012, but I could not install any drivers for the motherboard... So I also tried EFI for 2008 R2, and this worked while installing the OS; it did not, however, work with the Windows Boot Manager option in the BIOS - it kept freezing once it tried to load the partition. My idea was to allocate the complete 8 TB for the OS and load a few VMs on there. I have now started with a new approach where I'll have a 256 GB OS partition and a secondary 7.5 TB data partition. Oh, and I also did a diskpart convert-to-GPT while installing 2008 R2; the whole 7.6TB disk was then accessible. Can anyone please clarify that EFI/UEFI is meant for larger boot volumes - bigger than 2TB? If I had an ideal situation where my OS ran on a 256GB SSD, could I attach the 8 TB array as a normal data disk to the OS? Am I correct in saying that if I wanted to boot from an 8TB partition, I would need to force the BIOS to boot via EFI? The limit for MBR is 2 TB as far as I know now. FYI: the motherboard is EFI-ready.
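
    For reference: yes, MBR partitions top out at 2 TiB with 512-byte sectors, so a boot volume larger than that requires a GPT disk, and booting Windows from GPT requires the firmware to boot in EFI/UEFI mode. The diskpart step mentioned above looks like this when run from the installer (Shift+F10 opens a command prompt; disk 0 is an assumption, and clean wipes the disk):

        diskpart
        list disk
        select disk 0
        clean
        convert gpt
        exit

    Another common approach is to split the array at the RAID controller into a small boot virtual disk (MBR, legacy boot) and a large GPT data virtual disk, since the 2 TiB limit only bites on the disk you boot from in legacy mode.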

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server: Intel(R) Xeon(TM) MP CPU 3.16GHz.

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x 148GB SCSI U320 15k in RAID 5.

        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        bond0  Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
               inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
               inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
               UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
               RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
               TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:10000
               RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6.

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Here is what happens: once every day or two, the load average goes up - it can reach around LA: 40 - but if I do an nfs-kernel-server restart, everything is OK again. Then the next day, or a little later, the LA goes up again. The servers are connected to a D-Link DGS-1016D with 24 gigabit ports. I have tried everything to find out what the problem is and why it is happening, but I still cannot resolve this issue. Any ideas on what is happening here?
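
    A hedged first check, since periodic load spikes on a box that only serves NFS often mean all nfsd threads are tied up (Debian's default is only 8): the th line in /proc/net/rpc/nfsd records how often every thread was busy at once. Sketch, with 32 threads as an arbitrary value to test:

        # the "th" line: first field is the thread count, second is the number
        # of times all threads were busy simultaneously
        cat /proc/net/rpc/nfsd

        # on Debian, raise the thread count in /etc/default/nfs-kernel-server:
        #   RPCNFSDCOUNT=32
        # then restart the service
        /etc/init.d/nfs-kernel-server restart

    That pattern would also fit the symptom that restarting nfs-kernel-server clears the load for a day or two.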

    Read the article

  • Apache Process question about RAM usage

    - by Andrew Fashion
    So every time I load a new page I notice a new httpd process opens, and each process says it's using anywhere from 2-4.5% of memory. Does that mean every single process is using 2-4% of RAM at the same time, or are the older processes dying off with only the new one active? Because 4% of my 2048MB of RAM is already 82MB for just one process!?!? It's a brand new server and I'm the only one on it at the moment. Let me know, because I am trying to determine what I need to beef my server up with in order to handle high loads of traffic. I expect to get 20,000 uniques per day at launch. I am currently running a dual quad-core Xeon server with only 2GB of RAM; I will upgrade to 8GB or more shortly. Let me know what you suggest! Thank you.

    Read the article

  • High IOWait executing JBoss 3.2.7

    - by user64205
    Server details:

        Kernel: Linux wiq31 2.4.21-9.ELsmp #1 SMP Thu Jan 8 17:08:56 EST 2004 i686 i686 i386 GNU/Linux
        CPU: 4 x Intel(R) Xeon(TM) CPU 3.06GHz
        Memory: 1028520 kB
        JBoss version: 3.2.7

    Every time I try to start JBoss, the iowait values start to rise and the idle values start to fall on all CPUs. Before executing my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520     966400      62120          0     187756     538928
        -/+ buffers/cache:     239716     788804
        Swap:      2044072     790672    1253400

    After starting my JBoss application, the free command returns the following output:

                     total       used       free     shared    buffers     cached
        Mem:       1028520    1007648      20872          0     187116     524084
        -/+ buffers/cache:     296448     732072
        Swap:      2044072     819096    1224976

    After starting my JBoss application, without it answering any requests, the Java process's /proc/PID/status file has the following values:

        State:  S (sleeping)
        SleepAVG:       27%
        Tgid:   24022
        Pid:    24022
        PPid:   21011
        TracerPid:      0
        Uid:    500     500     500     500
        Gid:    500     500     500     500
        FDSize: 256
        Groups: 500
        VmSize:   775200 kB
        VmLck:         0 kB
        VmRSS:    156752 kB
        VmData:   696752 kB
        VmStk:        36 kB
        VmExe:        21 kB
        VmLib:    710375 kB
        StaBrk: 0804f000 kB
        Brk:    095bb000 kB
        StaStk: bffff8c0 kB
        ExecLim:        ffffffff
        Threads:        62
        SigPnd: 0000000000000000
        ShdPnd: 0000000000000000
        SigBlk: 0000000000000000
        SigIgn: 0000000000000000
        SigCgt: 1000000180015ccf
        CapInh: 0000000000000000
        CapPrm: 0000000000000000
        CapEff: 0000000000000000

    Is this behavior being caused by memory swapping, or is the limited memory available on the server simply not enough to run my application?
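
    A hedged way to separate the two possibilities: watch vmstat's si/so columns while JBoss starts - sustained non-zero swap-in/swap-out means the box is thrashing rather than doing useful I/O. Given a VmSize of ~757 MB against ~1 GB of RAM with ~790 MB already in swap, swapping is the likely suspect. Sketch (the 5-second interval is arbitrary; iostat needs the sysstat package):

        # si/so columns = pages swapped in/out per second
        vmstat 5

        # per-device utilisation and wait times
        iostat -x 5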

    Read the article

  • SQL Server 2008 Web VS SQL Server 2008 Enterprise

    - by Jeremy
    I wrote an application a few months ago and was hosting it out of our offices on a workstation with an Intel Core 2 Quad Q8200 @ 2.33GHz, 8 GB RAM, Windows Server 2008 Enterprise and SQL Server 2008 Enterprise. Both the web server and database server ran on the same machine. We had a huge influx of traffic, moved to ClubUptime.com, and got 2 of their top-tier Windows VMs. The database server runs Windows 2008 R2 Standard and SQL Server 2008 R2 Web with 8 GB RAM and an Intel Xeon E5620 @ 2.40GHz. Ever since switching, the database which used to run at around 400MB in RAM now runs at around 4-7GB, and there haven't been any changes to it (other than a couple of columns here and there). Our traffic has quadrupled, and our DB is 6 GB on disk. Why would SQL Server take up 7 GB if the DB is only 6, and why would it be storing the ENTIRE database in memory? Another thing: if traffic only grew 4 times, why did the database's memory footprint grow 12 times? Last question: why does the CPU peg at 100% now when it didn't before? The design is simple: VERY few joins, NO subqueries. I am just at a loss, unless it is the SQL Server edition, or the fact that I moved from real hardware to a VM.
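
    Some context that may reframe the question: SQL Server deliberately grows its buffer pool to cache as much of the database as RAM allows and releases memory only under OS pressure, so a 6 GB database sitting almost entirely in ~7 GB of memory is expected behaviour rather than a leak, on Web and Enterprise editions alike. If capping it is the goal, a sketch of the standard knob (the 4096 MB figure is just an example):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- cap the buffer pool (example value; leave headroom for the OS)
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;

    The CPU pegging is a separate issue; profiling the top queries and checking indexes is the usual next step, and VM vCPUs are not equivalent to dedicated cores.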

    Read the article

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will read from only one drive at a time, and the nature of the data is such that if (when) a drive dies, I can restore the data to a new disk from tape. While it would be nice to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions: Is the cheap drive/enclosure solution the best one for this situation? Given the nature of the app and the way the data is used, does RAID make sense, and if so, which level? For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise? For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives, or is it one controller per drive? If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, with a 12-bay enclosure, is the throughput of the controller divided by a factor of 12? Are there any Windows 7 volume/drive/capacity limits I should be aware of? Thanks

    Read the article

  • Starting my own server - basic recommendations and questions [closed]

    - by Ilia Rostovtsev
    Possible Duplicate: Can you help me with my capacity planning? I'm planning to build my own high-performance server and then use colocation services to keep it up and running. I'm planning to USE it for processing videos and keeping a big video site up (using FFmpeg, MEncoder, etc.)! I just need recommendations on whether the listed hardware is good enough, whether it will work together well and fast enough, and whether I need anything else (have I missed something?). I did remember the CPU coolers, though! ;) I'm planning to use SSD drives, so please tell me: will they work just like regular HDDs (but much faster)? Can they be used in RAID (is this possible with SSDs)? Here is what I would like to get: Intel Server System SR1600URHSR (Urbanna) or Intel Server System SR1695WBAC; 2x Intel Xeon X5650; 4x 16GB DDR3 1333MHz Kingston ECC Reg (KVR13R9D4/16); 3x (or maybe 4x) 480GB Intel 520 Series SSD (SSDSC2CW480A3K5). Which server system would be better? Is the listed hardware current and good enough, and worth buying at the moment, or should I look at something slightly more expensive but more up-to-date and powerful? As software I would like to use CentOS 6 64-bit + WHM/cPanel. Any other suggestions for a cheaper but similarly or more powerful server management system than WHM? What are the most important points to keep in mind when starting and maintaining your own server?

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19  | Next Page >