Search Results

Search found 1864 results on 75 pages for 'raid 0'.


  • G4 server running slow

    - by Abby Kach
    I have HP ProLiant ML350 servers. We have 8 remote locations where users connect and log on to our server through DynDNS to access our company ERPs for day-to-day work. The ERPs are based on Oracle, which runs on a separate server. The problem is that the load on the server is increasing day by day, the speed is getting slower and slower, and users are facing a lot of issues, so I am planning to implement a SonicWall VPN. I ran a SonicWall demo, but it was slower than the current DynDNS setup. The configuration of my servers is as follows:
      Linux: HP ProLiant 370, Intel Xeon 3.20 GHz, 150 GB (72 * 2), 3 GB RAM, SUSE
      Omega: HP ProLiant 370, Intel Xeon 3.20 GHz, 300 GB (72.8 * 4) RAID 5, 4 GB RAM, Windows Server 2003 Enterprise Edition
      Storage box: HP StorageWorks 1400, Intel Xeon 2.00 GHz, 4 TB (1 TB * 4) RAID 5, 2 GB RAM, Windows Server 2008 Enterprise Edition
      Domain & Terminal: HP ProLiant 350, Intel Xeon 3.20 GHz, 250 GB (72.8 * 3) RAID 5, 4 GB RAM, Windows Server 2003 Enterprise Edition
    Can someone help me figure out how to speed up access from the remote locations and reduce these performance problems?

    Read the article

  • Hyper-V and attaching physical disks

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:
      Windows 7 Ultimate
      1TB boot drive (my smallest drive)
      Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB
    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a homegroup, so I must use Windows 7. I know VHDs can only be like 127GB or something, so I obviously need to directly connect disks to my Windows 7 machine. Here is my plan:
      Server Core 2008 R2 (Hyper-V)
      1TB boot drive (storing VHDs for the VMs' boot drives), possibly in a RAID 1 with my other 1TB drive
      5x 2TB drives (1x 2TB drive as hot spare), totalling 10TB, directly attached to a Windows 7 VM so the array can be shared via homegroup
    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is, with hardware RAID, will it really make that much of a difference? Server specs: Intel Core 2 Quad Q9550 2.83GHz, Asus Maximus II Formula (PCI-E x16), 8GB DDR2 RAM PC2-6400. (Yes, I know it's a bit out of date.)

    Read the article

  • Hard Disk based storage library

    - by Ryan M.
    We have a Tandberg T24 tape device to handle all of our long-term backups right now. We decided that we're not backing up nearly everything we would like to and that we still have a lot of vulnerabilities; to get where we want to be, we're going to have to back up a lot more servers than we currently do. All of our internal servers have some sort of directly attached drive (i.e. a LaCie RAID box or a simple portable hard drive) doing backups, but what we want is to get those backups off-site. The current tape drive is directly attached via SCSI to a Windows Server 2008 file server, so to back up anything to tape it has to be funneled through the file server. With the increase we have planned, I don't think funneling everything through the file server is the right course of action, and I'm thinking a second backup device would be more appropriate. I would like your input on a couple of ideas: 1) Using HDD instead of tape. Tape is hard to deal with; we have a regular rotation cycle, so the media don't need years and years of shelf life, and I'm wondering if something HDD-based would be better. 2) Something accessible over the network. Instead of having the device directly attached to one specific machine, have it available to all the servers over the network. Our file server is a 12-disk RAID 6 setup; I was thinking of something like that, but with no RAID involved, all disks standalone so they can be used/installed/removed on an individual basis. Does any such thing exist? Thanks for your ideas. I'm really interested to hear about some of the solutions you guys are using.

    Read the article

  • php-cgi.exe taking out server, multiple instances running

    - by Alex
    I have been using Zend Server CE for over a year and have never had a problem. Recently, about a week or two ago, I found my server acting up and even making RDP unreachable. After some looking around I found 20, 25, 30+ php-cgi.exe processes running. With my IIS7 service set to start with Windows, as soon as the server booted all these php-cgi.exe processes would start running (even though the limit is 10) and I could not even connect to it. After disabling the web server at startup, which stops php-cgi.exe from running, the server runs flawlessly, like it always has. As soon as I start the web server, all these odd issues begin. I have a post over at Zend http://forums.zend.com/viewtopic.php?f=44&t=41043&p=95133 where I was told to update my Zend install. After doing so, the issue has not gone away. Even running 1 php-cgi.exe (somehow 2 start anyway) the server begins to go silly. The first issue I see when running php-cgi.exe is that Windows services, whether stock or managed by FireDaemon, begin to lag, start slowly, crash, etc. If anyone can help me with this I would GREATLY appreciate it. At this point I am forced to look for an alternative to running PHP other than CGI, as it simply takes out the whole box. On another note, I run this same version of Zend on a similar server with no issues. I'm starting to think it's an IIS issue. (UPDATE) I installed the newest version of PHP, separate from Zend; same issue. Server specs: Intel Xeon quad-core with HT (Nehalem based), 24GB DDR3 1333, 2x 1TB RAID mirror (OS), 2x 1TB RAID mirror (other), 4x 2TB RAID 5 (storage), Server 2008 R2.

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 array mounted on my server and for whatever reason it started showing this:

      jason@box2:/mnt/raid1/cra$ ls -alh
      ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
      ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
      total 0
      drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
      drwxr-xr-x 3 root root 16 2009-08-14 17:15 ..
      ?????????? ? ? ? ? ? 257ad35ee0b12a714530c30dccf9210f
      drwxr-xr-x 3 root root 57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
      ?????????? ? ? ? ? ? e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names and not part of the error. The question marks are odd, and any directory with a question mark throws an IO error when you attempt to use/delete/etc. it. I was unable to umount the drive due to "busy". Rebooting the server "fixed" it, but it was throwing some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both are using the following config:

      mkfs.xfs -l size=128m -d agcount=32
      mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
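
    A hedged first step for this kind of corruption, assuming the filesystem can be taken offline and that /dev/md0 stands in for whatever block device actually backs /mnt/raid1, is a read-only xfs_repair pass plus a look at the kernel log:

      fuser -vm /mnt/raid1        # see which processes are keeping the mount "busy"
      umount /mnt/raid1
      xfs_repair -n /dev/md0      # -n = no-modify: report inconsistencies without touching the disk
      dmesg | tail -n 50          # look for underlying block-device or RAID errors

    If xfs_repair reports damage, copying off whatever is still readable before re-running it without -n is the safer order of operations.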

    Read the article

  • P410i mirror failed, couldn't find the same disk

    - by Heishiro Mitsurugi
    I have an HP server with a P410i RAID card installed. I had two SATA drives connected (250GB each), configured as a RAID 1 mirror. A few days ago drive one (1) failed and I had to remove it. I tried to find the same part number here in Venezuela, but I couldn't, so I bought a 500GB SATA drive and connected it to the same bay where the failed 250GB drive was. When the server booted, it asked me if I wanted to rebuild the data. I selected that option, and Windows Server restarted properly. When I got into the ACU (Array Configuration Utility) it told me it was rebuilding the data. Today the warning went away, and according to the ACU everything is fine. My question is: was what I did right? Can I create a mirror from a 250GB disk onto a 500GB disk using the P410i? I have done that before, but only using software RAID in Windows, which just uses the space it needs. As a matter of fact, when I did that in Windows I was able to use the remaining space on the bigger drive, but with the P410i I can't use it. Should I be worried? Thanks a lot in advance for any pointers or info you can give on this. Heishiro
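
    If the host OS has HP's command-line tool installed, the rebuilt mirror can also be verified from the OS; this is a sketch assuming hpacucli is present and the controller sits in slot 0 (adjust the slot to match your system):

      hpacucli ctrl all show config detail       # controller, array and logical drive layout
      hpacucli ctrl slot=0 ld all show status    # logical drive status (should read "OK" after the rebuild)
      hpacucli ctrl slot=0 pd all show status    # physical drive status for both mirror members

    A mirror built from a 250GB member and a 500GB member keeps the smaller size; the extra space on the bigger disk simply goes unused, which matches what the ACU is reporting.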

    Read the article

  • Ubuntu 9.10 won't reboot after replacing a failed drive

    - by user149041
    Hello ServerFault community. I hope someone can shed light on a peculiar problem I am having with an Ubuntu 9.10 server install. I am not a Linux expert, but I have the responsibility of fixing the box if something goes wrong. DOH! I have Ubuntu 9.10 Server installed on a desktop platform: a Compaq Presario SR5027CL. There are two 1TB SATA drives configured in a RAID 1 array; I use the box as an email backup server for a small group of users. Last week one of the drives failed and was replaced with a new drive of the same type. The problem I have been having is getting the box to boot again after a restart or a shutdown halt. The OS lives on the same RAID 1 array. The replacement drive (sda) was added to the box and partitioned to match the existing good drive (sdb); the array is made up of sda1 and sdb1. I found an interesting point while checking the BIOS settings: there is a "HDD Boot Group Priority" section, and the new drive was selected as "1. 3rd master". The server wouldn't boot configured like that, but when I set the old drive to be "1. 4th master", the box will boot. I'm checking some more things, but I would certainly appreciate any useful information. Thanks in advance.
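
    One hedged line of investigation, assuming the array is /dev/md0 and the device names from the post, is to confirm the array is healthy and that the boot loader was actually installed on the replacement disk, a step that is easy to miss after swapping a RAID 1 member:

      cat /proc/mdstat             # both members (sda1, sdb1) should show as active [UU]
      sudo mdadm --detail /dev/md0 # array state, and whether a resync is still running
      sudo grub-install /dev/sda   # put GRUB on the new disk's MBR
      sudo grub-install /dev/sdb   # and confirm the surviving disk has it too

    With GRUB on both disks' MBRs, the BIOS boot-priority ordering should matter much less.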

    Read the article

  • SQL Server insert slow

    - by andrew007
    Hi, I have two servers where I installed SQL Server 2008. Production: RAID 1 on SCSI disks. Test: a single IDE disk. When I execute a script with about 35,000 inserts, the test server needs 30 seconds while the production server takes more than 2 minutes! Does anybody know why there is such a difference? I mean, the DB is configured the same way, and the production server also has a RAID config, a better processor and more memory... THANKS!

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that Hard Disk Drive storage is significantly cheaper per GiByte than Solid State Devices – this is wholly inaccurate within the database space. People need to look at the cost of the complete solution and not just a single component part in isolation from what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012); I've not included the FusionIO ioDrive because there is no public pricing available for it – something I never understand, and personally when companies do this I immediately wonder what they are hiding; luckily in FusionIO's case the product is proven, though it is expensive compared to OCZ's enterprise offerings. On the face of it the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and this is what sales people will use in comparing the two technologies – do not be fooled by this bullshit people!

    What is the requirement?

    The database will have a static size of 400GB, kept static through archiving so growth and trim will balance the database size. The client requires resilience. There will be several hundred call centre staff querying the database; queries will read a small amount of data, but there will be no hot spot in the data, so the randomness will come across the entire 400GB of the database. Estimates predict that the IOps required will be approximately 4,000 IOps at peak times, and because it's a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle – space required, estimated IOps and maximum latency per IO.

    Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before the thread can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity. The hardware solution needs to be defined; there are two possible solutions, hard disk or solid state based, and the real question now is how many hard disks are required to achieve the IO throughput, the latency and the resilience – ditto for the solid state.

    Hard Drive solution

    On a test on an HP DL380 with a P410i controller, using IOMeter against a single 15Krpm 146GB SAS drive, the throughput at a transfer size of 8KiB against a 40GiB file on a freshly formatted disk, where the partition is the only partition on the disk and thus the 40GiB file is on the outer edge of the drive so more sectors can be read before head movement is required, was as follows:

      40GiB test file: 100% sequential IO at a queue depth of 16 with 8 worker threads gave 43,537 IOps at an average latency of 2.93ms (340 MiB/s); 100% random IO at the same queue depth and worker threads gave 3,733 IOps at an average latency of 34.06ms (34 MiB/s).
      130GiB test file (same disk, same test): 100% sequential IO at a queue depth of 16 with 8 worker threads gave 43,537 IOps at an average latency of 2.93ms (340 MiB/s); 100% random IO at the same queue depth and worker threads gave 528 IOps at an average latency of 217.49ms (4 MiB/s).

    From the results it is clear that random performance gets worse as the disk fills up – I'm currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40 GiB of the 146 GB is used gives near the IOps required, but the latency is way out. Luckily I have tested 6 x 146GB SAS 15Krpm drives in a RAID 0 using the same test methodology; for the same test above on a 130 GiB file, the performance boost is near linear: for each drive added, throughput goes up by 5 MiB/sec, IOps by 700, and latency drops by nearly 50% (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130 / 1, 130 / 2, 130 / 3 etc.), so implicit short stroking is occurring: there is less file on each drive, so less head movement is required. The best latency is still 30 ms, but we now have the IOps required – though that's on a 130GiB file and not the 400GiB we need. A reality check here: the drive randomness is more likely to be 50/50 and not a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data the worse the effect.

    For argument's sake let us assume that for the given workload we need 8 disks to do the job; for resilience reasons we will need 16, because we need to RAID 1+0 them in order to get the throughput and the resilience (RAID 5 would degrade performance). Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won't take the drives; a quick spec off DELL for a PowerVault MD1220, which gives the dual pathing, with 16 x 146GB 15Krpm 2.5” disks is priced at £7,438.00, and note it's probably more once we add the two controller cards to sit in the server, racking etc. The minimum cost taking the DELL quote as an example is therefore {Cost of Hardware} / {Storage Required}: £7,438.00 / 400 = £18.595 per GB. £18.59 per GiB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1168GB, but the actual storage requirement is only 400GB and the extra disks have had to be purchased to get the IOps up.

    Solid State Drive solution

    A single card significantly exceeds the IOps and latency required; for resilience, two will be required. (£2,316.54 * 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required: no external disk units, no additional controllers, no redundant controllers etc.

    Conclusion

    I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than Solid State has now been dispelled - £11.58 per GB for SSD compared to £18.59 for hard disk. I've not even touched on the running costs; compare the costs of running 18 hard disks - that's a lot of heat and power compared to two PCIe cards! Just a quick note: I've left a fair amount of information out due to this being a blog! If in doubt, email me :) I'll also deal with the myth that SSDs wear out at a later date as well - that's just way overdone; yes, 5 years ago, but now - no.
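
    As a footnote, the per-usable-GB arithmetic above is easy to reproduce; a quick sketch with bc, using the prices as quoted (22nd March 2012):

      echo "scale=3; 239.60 / 600" | bc          # single 15Krpm disk in isolation: ~0.399 GBP per GB
      echo "scale=3; 7438.00 / 400" | bc         # 16-disk HDD solution over the 400GB actually required: 18.595 GBP per GB
      echo "scale=3; (2316.54 * 2) / 400" | bc   # two PCIe SSD cards over the same 400GB: ~11.583 GBP per GB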

    Read the article

  • iSCSI timeouts under high load

    - by Antonio
    I have two servers connected via Gigabit Ethernet. One is the iSCSI target, the second one is the initiator. When I run mkfs.ext4 on the initiator, after a while disk IO slows down critically. On the target host I can see the following in syslog:

      Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668c 0
      Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668c 6
      Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668d 0
      Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668d 6
      Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119668e 0
      Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119668e 6
      Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 1196696 0
      Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 1196696 6
      Sep 14 09:40:03 sh11 tgtd: abort_task_set(1139) found 119669e 0
      Sep 14 09:40:03 sh11 tgtd: abort_cmd(1115) found 119669e 6
      Sep 14 09:40:04 sh11 tgtd: abort_task_set(1139) found 119669f 0
      Sep 14 09:40:04 sh11 tgtd: abort_cmd(1115) found 119669f 6

    And load average grows to 12 or even more:

      # uptime
      12:37:00 up 23 days, 13:25, 1 user, load average: 12.00, 7.00, 4.00

    Target host: CentOS 6.3, tgtd 1.0.24, Intel Pentium 4 2.4GHz, 1GB RAM, 2TB WD Caviar Green SATA 2.0.

      # lspci
      00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE DRAM Controller/Host-Hub Interface (rev 02)
      00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE Host-to-AGP Bridge (rev 02)
      00:1d.0 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 02)
      00:1d.1 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 02)
      00:1d.2 USB controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 02)
      00:1d.7 USB controller: Intel Corporation 82801DB/DBM (ICH4/ICH4-M) USB2 EHCI Controller (rev 02)
      00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 82)
      00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) LPC Interface Bridge (rev 02)
      00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE Controller (rev 02)
      00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) SMBus Controller (rev 02)
      00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 02)
      01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV200 QW [Radeon 7500]
      02:01.0 Ethernet controller: D-Link System Inc DGE-530T Gigabit Ethernet Adapter (rev 11) (rev 11)
      02:02.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50)
      02:03.0 RAID bus controller: VIA Technologies, Inc. VT6421 IDE/SATA Controller (rev 50)
      02:04.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02)
      02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (CNR) Ethernet Controller (rev 82)

    Is there a way to tune the target host to avoid these timeouts?
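
    On the initiator side, one hedged tuning sketch (open-iscsi on CentOS; the values are illustrative only, and if the keys already exist in /etc/iscsi/iscsid.conf they should be edited in place rather than appended twice):

      cat >> /etc/iscsi/iscsid.conf <<'EOF'
      node.session.timeo.replacement_timeout = 120
      node.conn[0].timeo.noop_out_interval = 10
      node.conn[0].timeo.noop_out_timeout = 30
      node.session.queue_depth = 32
      EOF
      # already-discovered nodes keep their recorded settings; update them with
      #   iscsiadm -m node -o update -n node.session.queue_depth -v 32
      service iscsi restart    # re-login the sessions so the new settings take effect

    Capping the queue depth mainly stops the initiator from burying a single slow SATA disk behind tgtd; the target's own disk is the likely bottleneck here.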

    Read the article

  • Booting From VIA VT6421A Based PCI SATA Card

    - by Priyan R
    I bought a VIA VT6421A based SATA card for an 845 chipset based motherboard. The card is working; I can access the SATA HDD from Windows/Linux. The problem is that I can't boot directly from the SATA card. My motherboard has an Award BIOS 6. I tried setting the first boot device to SCSI, but it didn't work, and no RAID BIOS screen from the card ever appears. While searching I found a suggestion to add the VT6421A BIOS to the system BIOS as a PCI add-on ROM. I did that using cbrom6 and successfully added the VT6421A BIOS to the existing BIOS. But now, on boot, instead of the RAID BIOS the system BIOS shows a warning, something like "cannot load add on rom for vendor id xxxx device id xxx". What's wrong? The card is VT6421A based and I added the VT6421A BIOS downloaded from the VIA website.

    Read the article

  • Power supply triggered to start by another power supply

    - by steampowered
    I am building a RAID array in a separate enclosure. I will be putting an empty tower case next to an existing tower computer, and this second tower case will only hold hard drives. There are many solutions for connecting the drives in the second case to the RAID card in the first case (SFF-8088 and SFF-8087 cables), but I prefer not to run power from the first case to the second case. Can I use the power supply in the first tower case to trigger the power supply in the second case, so that the second one starts whenever the first one powers on? Maybe run a 12 volt cable from the first case to the power supply in the second case, only for the purpose of initiating the second power supply.

    Read the article

  • 12.04 ext4 - cannot create regular file/No space left - with a lot of space and inodes

    - by user1434058
    This seems similar: EXT4 "No space left on device (28)" incorrect, but there is no explanation. I created an ext4 filesystem on a RAID 1 array with:

      mke2fs -t ext4 -T small /dev/md0

    Copying a single directory with many tiny files I get:

      cp: cannot create regular file `/mnt/raid1_new/pics/pic3412.jpg': No space left on device

    Space used: 5%. Inodes used: 1%. I manually tried:

      cp /source/test1.jpg /mnt/raid1_new/pics/test1.jpg   --- error
      cp /source/test1.jpg /mnt/raid1_new/pics/test2.jpg   --- ERROR
      cp /source/test1.jpg /mnt/raid1_new/pics/test3.jpg   --- no error

    Notes: the RAID 1 disks are error free. I tried mv instead of cp and got the same thing. I tried omitting -T small with no effect. Can somebody help me understand this magic?
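
    A hedged way to narrow this down, assuming the array really is /dev/md0: confirm what is actually exhausted and check the filesystem features, since one documented cause of ENOSPC with plenty of free blocks and inodes is a single directory whose hash-indexed tree (dir_index) fills up when a very large number of files land in it:

      df -h /mnt/raid1_new                        # block usage
      df -i /mnt/raid1_new                        # inode usage
      dumpe2fs -h /dev/md0 | grep -i features     # look for dir_index and the geometry that -T small chose
      dmesg | tail -n 30                          # ext4 often logs a warning (e.g. about a full directory index) on these failures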

    Read the article

  • On Server Disk Storage VS SAN Storage

    - by Justin
    Hello, I am looking at buying three servers and trying to figure out which storage solution makes the most sense in terms of performance and cost. The total budget is around $10,000.
      Option 1: Dell servers with RAID 10 (4 drives each, 500GB 7200RPM SAS), for a total capacity of 1TB per server. Each server is approx $3,000, so total storage across all three servers is 3TB.
      Option 2: The same Dell servers with a cheap single drive and no RAID for $2,000, and go with a centralized SAN solution. The biggest problem is that I haven't been able to find a SAN solution at a reasonable price; Dell entry-level storage servers are around $15,000. I am thinking just iSCSI, not Fibre Channel (too expensive).
    What do you guys recommend?

    Read the article

  • Damaged XenServer Storage LVM partition table

    - by Fiolek
    I have a home server running under XenServer with 3x 1TB disks inside: one for XenServer itself and two mirrored (using Intel's fakeRAID and dmraid) for VMs and user data (though now I think the RAID never actually worked). I tried to pass a PCI card through to a VM using PCI passthrough, and I read somewhere that I needed to recompile the kernel with the pciback module, but something went wrong (I made a mistake in /boot/extlinux.conf and the server couldn't boot) and I had to use a GParted LiveCD (I already had it on a USB key) to correct this. But when I brought the server back up, all the VDIs were gone. I have completely no idea what could have gone wrong. I tried to repair the RAID using dmraid -R in the hope that everything would return to normal, but now I think this did more harm than good (and corrupted the rest of the LVM table...). Is there any possibility of recovering this SR, or at least the data from one (~100GB) of the VDIs? I also want to apologise for my English; I'm not from an English-speaking country and I'm only 16, so I haven't had much chance to learn it properly.
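
    Before writing anything else to those disks, a hedged, read-only sketch for seeing what LVM metadata survives (the volume group name below is a placeholder; XenServer SRs normally use names like VG_XenStorage-<uuid>, and LVM keeps metadata backups under /etc/lvm):

      pvscan -v                                 # which devices still carry an LVM label
      vgscan -v                                 # which volume groups can still be assembled
      vgcfgrestore --list VG_XenStorage-xxxx    # list archived metadata versions for the SR's VG (placeholder name)
      ls /etc/lvm/archive /etc/lvm/backup       # raw metadata backups, useful even if vgcfgrestore can't run

    Only after that would restoring metadata with vgcfgrestore (without --list) be worth attempting; running further dmraid commands against the same disks risks overwriting whatever is left.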

    Read the article

  • MS Server 2003 Activation Loop

    - by RPGonzo
    Recently we had a motherboard failure on a terminal server, so we replaced the faulty motherboard, re-created the RAID arrays (same motherboard model, but it still wouldn't recognize the old RAID setup) and restored the system from a previous backup. No problem up to here. After restoring the system you are prompted to reboot and then log in. On login we get a message box stating that Windows needs to be activated and asking whether we want to activate now; we press Yes, but the OS proceeds to log us off and do nothing at all. You can try over and over, to no avail. I found a few articles about a glitch in the activation script and how to reset it, and tried that, with the same results. Hoping someone can share some knowledge if you have seen this before. Thanks!

    Read the article

  • partition alignment on fresh windows 2003 ent server

    - by Datapimp23
    Hi, I have this server whose physical disks are in RAID 5, controlled by a 3Com RAID controller. The size of the stripe unit is unknown at the moment (I can check tomorrow in the office). I need to install Windows Server 2003 Enterprise and create 2 partitions (OS, data). I'd like to create the partitions, properly aligned, before the Windows Server installation. I have the newest version of GParted on a disc, but I have no clue whether this is the right tool. Can someone point me in the right direction? Thanks

    Read the article

  • How would I add a second physical hard drive to proxmox

    - by Cygnus X
    I installed Proxmox on a single 250GB hard drive and I would like to add a second, identical hard drive to put more VMs on. I already tried once and didn't get very far: I added it and formatted it as ext4, but when I went to use the disk it said only 8GB was available, which isn't right. So I did some searching and found that I had to set the partition type ID to 8e for a Linux LVM. After I did this it said I had to restart, so I did... and it wouldn't boot!!! What did I do wrong, and how do I do it right? (I know I could throw in a RAID card and do a RAID 0, but I'd rather not.)
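
    For reference, a minimal sketch of the LVM route, assuming the new disk shows up as /dev/sdb (the volume group name is arbitrary); the important part is to touch only the new disk, never the one Proxmox boots from:

      fdisk /dev/sdb               # create one partition spanning the disk, type 8e (Linux LVM)
      pvcreate /dev/sdb1           # initialise it as an LVM physical volume
      vgcreate vmdata /dev/sdb1    # make a new volume group for guest disks
      vgs                          # confirm the full size shows up

    The new volume group can then be added as LVM storage from the Proxmox web interface (or declared in /etc/pve/storage.cfg), and guest disks created on it become logical volumes rather than files.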

    Read the article

  • Network attached external hard drive from another computer

    - by Paul Knopf
    I have a server that is set up with RAID. It is on the same network as my main computer. I would like some of the storage on my server to act as a network-attached drive on my main computer. Basically, I want it to appear as a new data drive (similar to C:\, but second drives are usually E:). That way, I can reformat my main computer without losing any important data, and the data that is saved (on the server's E:\ drive) is protected by RAID mirroring.

    Read the article

  • Is it possible to rent an IP address to mask the server real IP address?

    - by net-girl
    A customer would like to lease an IP address and point it to a dedicated web server, with the intention of "masking" the server's IP address so it would be difficult to tell where the site is hosted. I found a company that leases IP addresses here: http://www.webhostingtalk.com/showthread.php?t=1191688 Is this even possible? Can they rent an IP address from a 3rd party in order to hide the server's IP address? Update: My client will be hosting a government-leaks site and is trying to become raid-proof, similar to what The Pirate Bay did: http://torrentfreak.com/pirate-bay-moves-to-the-cloud-becomes-raid-proof-121017/ My only worry about using a reverse proxy is the latency it could cause, with the app servers hosted in one data center and the load balancer/reverse proxy in another, as well as having to pay twice for bandwidth.
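
    One way this is sometimes done without announcing the leased IP yourself is a small front-end box that owns the public IP and simply forwards traffic to the hidden backend; a rough sketch with iptables, using documentation addresses as placeholders (note the backend then sees the front host, not the real client, as the source, and every request pays the extra hop in latency and bandwidth, which is exactly the concern raised above):

      # on the front host that owns the leased/public IP
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 198.51.100.10:80
      iptables -t nat -A POSTROUTING -p tcp -d 198.51.100.10 --dport 80 -j MASQUERADE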

    Read the article

  • linux to linux, 10TB transfer?

    - by lostincode
    I've looked at all the previous similar questions, but the answers seemed to be all over the place and no one was moving a lot of data (100GB != 10TB). I've got about 10TB that I need to move from one RAID array to another, over gigabit Ethernet, XFS file systems. My biggest concern is having the transfer die midway and not being able to resume easily. Speed would be nice, but ensuring the transfer completes is much more important. Normally I'd just tar & netcat, but the RAID I'm moving from has been super flaky lately and I need to be able to recover and resume if it drops mid-process. Should I be looking at rsync?
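
    rsync fits this well, since re-running the same command after a failure just picks up where it left off; a hedged sketch (paths and host are placeholders, and running it inside screen or tmux protects the session itself):

      rsync -aH --partial --inplace --progress /mnt/oldraid/ user@desthost:/mnt/newraid/
      # -a preserves permissions/times, -H preserves hard links,
      # --partial --inplace let an interrupted large file resume instead of restarting from zero,
      # add -X and -A if xattrs/ACLs matter on these XFS filesystems

    A final pass with --checksum is slow (it rereads both sides) but gives a verification of the copy before the flaky source array is retired.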

    Read the article

  • 4096 and 8192 block size reads slower than writes using LSI 9361-8i RAID 10?

    - by Min Hong Tan
    Is it possible that read speeds at 1024 and 2048 block sizes are faster than at 4096 and 8192? I'm using an LSI 9361-8i with RAID 10 and 8 x Kingston E50 250GB drives. Results:

      1024 = Write: 2,251 MB/s  Read: 2,625 MB/s
      2048 = Write: 2,141 MB/s  Read: 3,672 MB/s
      4096 = Write: 2,147 MB/s  Read: 231 MB/s
      8192 = Write: 2,147 MB/s  Read: 442 MB/s

    How is that possible? Below are the readings from when I simply wanted to test the RAID 10 function and run a disaster test by pulling out one of the 250GB hard disks; the results are quite different:

      1024 = Write: 825 MB/s  Read: 1,139 MB/s
      2048 = Write: 797 MB/s  Read: 1,312 MB/s
      4096 = Write: 911 MB/s  Read: 1,342 MB/s
      8192 = Write: 786 MB/s  Read: 1,204 MB/s

    Why are the 4096 and 8192 block results so different between the two runs? Can anyone explain whether this is normal, or do I need to do some tuning/configuration? Will it affect my host Linux performance?
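
    To rule out the benchmark tool and caching, a cross-check with fio directly against the volume can help; a sketch assuming the RAID 10 volume is /dev/sdX (random reads at 4 KiB with a real queue depth, bypassing the page cache; reads are non-destructive, but never point a write test at a volume holding data):

      fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
          --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
          --runtime=60 --time_based --group_reporting

    Very low read numbers at 4K/8K alongside healthy write numbers often point at the controller's read-ahead and cache policy (writes are absorbed by the write-back cache while reads have to hit the SSDs), so the virtual drive's read policy in the MegaRAID settings is worth checking too.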

    Read the article

  • Flexible virtualization infrastructure Design with libvirt

    - by Lessfoe
    I'm going to install a CentOS 6 server with virtualization (libvirtd) capabilities on a Dell server with a hardware RAID 5 of around 6TB of disk space (it has 4x 2TB disks on a PERC 700 RAID controller). I'm then going to install some guests that require few resources, except one that needs 500GB of disk space, 8-16GB of RAM and good performance. I was thinking about file images for guest storage, but I'm not sure about the 500GB VM that needs good performance, for which an LVM device could be better. So my question is: what would be the best layout concerning
      RAID setup (RAID 5, or RAID 1 plus 1 disk for the OS only),
      disk partitioning (use the entire disk, or leave free space for future use and extend with LVM),
      guest storage management (LVM devices, file images (considering the performance-demanding 500GB VM), or mixed), and
      where to put guest storage: /var/lib/libvirt/images, or maybe a custom directory separated from the system, such as /home/VMs?
    Thanks in advance for any hint.
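
    For the one I/O-heavy guest, a minimal sketch of the LVM-backed approach (the volume group and guest names are placeholders; the small guests can stay on file images under /var/lib/libvirt/images):

      lvcreate -L 500G -n bigvm-disk vg_guests          # carve a dedicated LV for the demanding guest
      virt-install --name bigvm --ram 8192 --vcpus 4 \
        --disk path=/dev/vg_guests/bigvm-disk,cache=none \
        --cdrom /path/to/CentOS-6-x86_64.iso

    Using cache=none avoids double-caching the guest's data in the host page cache, which is usually the right choice for a raw LV sitting on a battery-backed RAID controller.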

    Read the article

  • Sizing a Virtual Server

    - by vdubs
    I would like to replace four aging physical servers with one virtualization host. What is the best way to ensure the VM server is sized correctly? The requirements of the apps that will run on the four servers are:
      Application servers (qty 3) - these will run the application layer for the web server, the Business Objects business intelligence app, and various other small client-server apps. The three heaviest apps each have the following server requirements (so, if I bought three physical servers, this would be the requirement for each of them): processor: dual 2.83 GHz (or faster); RAM: 4 GB; RAID 5: 50-100GB usable space; NIC: 1 Gb.
      Web server - this will run one ASP.NET e-business app that talks to our dedicated SQL Server and the three app servers above. The e-business software has these requirements for the web server: processor: quad 2.83 GHz (or faster); RAM: 8 GB; RAID 5: 50-100GB usable space; NIC: 1 Gb.
    What is the best tool to determine what I need from a hardware standpoint in a virtual server? I am planning on using VMware.

    Read the article
