
  • Backing up large network (~200 clients) -- Enough Bandwidth?

    - by mtkoan
    My company wants to institute a backup plan for all of the clients on our network, which is about 200. We back up our servers and SQL databases regularly, but it's been our policy not to back up individuals. What is most critical for people is their Documents and PST files in Outlook. PST files can be very large, and most people's are ~1-1.5 GB around here. So with PST files alone that is 200-300 GB of data needing to be transferred daily to a server for backup. We could compress first, then transfer, but many of the machines are VERY old and such a task would grind them to a halt. Isn't this the reason networks use things like VMware -- to reduce network traffic and streamline backups? Or is this only to reduce hardware costs? Would this much network traffic every day drastically slow down our network? Enough to the point we'd have to mandate it to be done at night only? Or could we stagger them throughout the day? Really appreciate any input, thank you.
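
    Whether 200-300 GB fits in a nightly window is mostly arithmetic. A rough sketch, assuming ~70% of wire speed is achievable (that figure is an assumption, not something from the question):

        # back-of-envelope: 300 GB nightly at ~70% of wire speed (assumed)
        echo "1 Gbit/s  : $(( 300 * 1024 / 87 / 60 )) minutes"    # ~58 minutes
        echo "100 Mbit/s: $(( 300 * 1024 / 8 / 3600 )) hours"     # ~10 hours

    So on gigabit the full worst case fits in roughly an hour, while on 100 Mbit/s links it consumes most of a night even before client-side overhead.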

  • OpenVZ vs KVM for Linux VMs

    - by Eliasdx
    Hardware: Intel® Core™ i7-920, 12 GB DDR3 RAM, 2 x 1500 GB SATA-II HDD (no software RAID, because the Proxmox developers don't support it and are sure you'll run into problems)
    Software: Proxmox VE with KVM and OpenVZ support, and Debian everywhere

    I want to run multiple Linux VMs on this server: one for a firewall (I want to try pfSense), one for MySQL, one VM for nginx (my stuff), and ~2 VMs with nginx for other people's web sites. I don't think pfSense will run in an OpenVZ environment, but it should run in KVM. The question is whether I should set up the other VMs using KVM or OpenVZ. In OpenVZ they should have less overhead for the OS itself, but I don't know about the performance. I heard that KVM is more stable but needs more RAM and CPU. I found a diagram showing an OpenVZ setup on the same hardware I'm using; that person runs a separate VM for each and every website on his server, and I can't think of any advantage to using so many VMs.
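
    If OpenVZ wins out for the non-pfSense guests, container creation is lightweight. A minimal sketch using the standard vzctl tooling, where the CTID (101), template name, hostname, and IP are all placeholders:

        # create a container from an OS template (names here are examples)
        vzctl create 101 --ostemplate debian-6.0-standard_6.0-4_i386 --config basic
        vzctl set 101 --hostname web1 --ipadd 192.168.1.101 --save
        vzctl start 101
        vzctl enter 101   # opens a shell inside the container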

  • Is my OCZ SSD aligned correctly? (Linux)

    - by Barney Gumble
    I have an OCZ Agility 2 SSD with 40 GB of space. I use it as a system drive in Debian Linux (Squeeze) and in my opinion it's really fast. But I've read a lot on aligning partitions and file systems... And I'm not sure if I succeeded in aligning the partitions correctly. Maybe the SSD could be even faster?? ;-) I use ext4 and here is the output of fdisk -cul:

        Disk /dev/sda: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders, total 78161328 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: [...]

           Device Boot      Start       End    Blocks  Id  System
        /dev/sda1   *        2048  73242623  36620288  83  Linux
        /dev/sda2        73244670  78159871   2457601   5  Extended
        /dev/sda5        73244672  78159871   2457600  82  Linux swap / Solaris

    My partitions were created just by the Debian Squeeze setup assistant. So I didn't care about the details of partitioning. But now I think maybe the installer didn't align it correctly? Actually, 2048 looks good to me (better than odd values like 63 or something like that) but I've no idea... ;-) Help plz! According to some "SSD Alignment Calculator" I found on the web, the OCZ SSDs have a NAND Erase Block Size of 512kB and their NAND Page Size is 4kB. 2048 is divisible by 4 and 512. So are the partitions aligned correctly?
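
    A quick way to sanity-check this is to verify each partition's byte offset against the erase block size. A small sketch, assuming the 512 kB erase block size quoted above:

        # start sectors from fdisk; 512-byte sectors, 512 KiB erase blocks
        for s in 2048 73244672; do
            echo "$s: $(( s * 512 % (512 * 1024) )) bytes off an erase-block boundary"
        done
        # prints 0 for both, so sda1 and sda5 are aligned; the extended
        # container (sda2) is only a placeholder entry, so its odd start
        # sector should not matter

    Sector 2048 is exactly 1 MiB from the start of the disk, which is divisible by any common erase block or page size, so this layout looks correct.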

  • How to resolve a degraded virtual disk in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

        FriendlyName   CanPool  OperationalStatus  HealthStatus  Usage        Size
        ------------   -------  -----------------  ------------  -----        ----
        PhysicalDisk2  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk3  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk4  False    OK                 Healthy       Auto-Select  2.73 TB
        PhysicalDisk5  False    OK                 Healthy       Auto-Select  2.73 TB

    There is also a separate OS disk. The above disks are part of a single storage pool:

        FriendlyName  OperationalStatus  HealthStatus  IsPrimordial  IsReadOnly
        ------------  -----------------  ------------  ------------  ----------
        Pool          OK                 Healthy       False         False

    Within this storage pool some virtual disks are defined, see below:

        FriendlyName  ResiliencySettingName  OperationalStatus  HealthStatus  IsManualAttach  Size
        ------------  ---------------------  -----------------  ------------  --------------  ----
        Docs          Mirror                 OK                 Healthy       False           500 GB
        Data          Mirror                 Degraded           Warning       False           500 GB
        Work          Mirror                 Degraded           Warning       False           2 TB

    Now the virtual disks are all running normal 2-way mirror, but two of the virtual disks are degraded. This is probably because one of the physical disks was offline for a short period of time. However, now the virtual disk cannot be repaired, even though all physical disks are healthy. There is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve this. Below I have listed the full output from the Get-VirtualDisk CmdLet for the "Work" disk:

        ObjectId                          : {XXXXXXXX}
        PassThroughClass                  :
        PassThroughIds                    :
        PassThroughNamespace              :
        PassThroughServer                 :
        UniqueId                          : XXXXXXXX
        Access                            : Read/Write
        AllocatedSize                     : 412316860416
        DetachedReason                    : None
        FootprintOnPool                   : 824633720832
        FriendlyName                      : Work
        HealthStatus                      : Warning
        Interleave                        : 262144
        IsDeduplicationEnabled            : False
        IsEnclosureAware                  : False
        IsManualAttach                    : False
        IsSnapshot                        : False
        LogicalSectorSize                 : 512
        Name                              :
        NameFormat                        :
        NumberOfAvailableCopies           : 0
        NumberOfColumns                   : 2
        NumberOfDataCopies                : 2
        OperationalStatus                 : Degraded
        OtherOperationalStatusDescription :
        OtherUsageDescription             : Disk for data being worked on (not backed up)
        ParityLayout                      :
        PhysicalDiskRedundancy            : 1
        PhysicalSectorSize                : 4096
        ProvisioningType                  : Thin
        RequestNoSinglePointOfFailure     : True
        ResiliencySettingName             : Mirror
        Size                              : 2199023255552
        UniqueIdFormat                    : Vendor Specific
        UniqueIdFormatDescription         :
        Usage                             : Other
        PSComputerName                    :

  • Can't create new Volume on Unallocated Space

    - by natediggs
    I installed Windows Server 2008 R2 on a Dell server that has one volume that is a 6 TB RAID 5 array. I created a 120 GB install volume and I'm now trying to create a 5 TB data volume. For whatever reason, Windows will not allow me to create a new volume out of all of the unallocated space. Windows will allow me to create a new volume out of one 2 TB block of unallocated space, but not the remaining 3.5 TB block. (I tried to post a screenshot, but I was blocked.) If I right-click on the 1949.85 GB block of space, there is the option to create a new volume. If I click on the 3539.5 GB block of space, that option is grayed out. If I go into diskpart and try to create a new partition, diskpart says that there is only 1949 GB free on the volume. I know this process works because I did the exact same thing on another server we have that is the exact same hardware configuration, on which I used the exact same Server 2008 R2 install image. Any help would be greatly appreciated. Nate

  • How to re-add a RAID-10 failed drive on Ubuntu?

    - by thiesdiggity
    I have a problem that I can't seem to solve. We have an Ubuntu server set up with RAID-10 and two of the drives dropped out of the array. When I try to re-add them using the following command:

        mdadm --manage --re-add /dev/md2 /dev/sdc1

    I get the following error message:

        mdadm: Cannot open /dev/sdc1: Device or resource busy

    When I do a "cat /proc/mdstat" I get the following:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [r$
        md2 : active raid10 sdb1[0] sdd1[3]
              1953519872 blocks 64K chunks 2 near-copies [4/2] [U__U]
        md1 : active raid1 sda2[0] sdc2[1]
              468853696 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdc1[1]
              19530688 blocks [2/2] [UU]
        unused devices: <none>

    When I run "/sbin/mdadm --detail /dev/md2" I get the following:

        /dev/md2:
                Version : 00.90
          Creation Time : Mon Sep  5 23:41:13 2011
             Raid Level : raid10
             Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
          Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Thu Oct 25 09:25:08 2012
                  State : active, degraded
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
                 Layout : near=2, far=1
             Chunk Size : 64K
                   UUID : c6d87d27:aeefcb2e:d4453e2e:0b7266cb
                 Events : 0.6688691

            Number   Major   Minor   RaidDevice State
               0       8       17        0      active sync   /dev/sdb1
               1       0        0        1      removed
               2       0        0        2      removed
               3       8       49        3      active sync   /dev/sdd1

    Output of df -h is:

        Filesystem                     Size  Used Avail Use% Mounted on
        /dev/md1                       441G  2.0G  416G   1% /
        none                            32G  236K   32G   1% /dev
        tmpfs                           32G     0   32G   0% /dev/shm
        none                            32G  112K   32G   1% /var/run
        none                            32G     0   32G   0% /var/lock
        none                            32G     0   32G   0% /lib/init/rw
        tmpfs                           64G  215M   63G   1% /mnt/vmware
        none                           441G  2.0G  416G   1% /var/lib/ureadahead/debugfs
        /dev/mapper/RAID10VG-RAID10LV  1.8T  139G  1.6T   8% /mnt/RAID10

    When I do a "fdisk -l" I can see all the drives needed for the RAID-10. The RAID-10 is part of /dev/mapper; could that be the reason why the device is coming back as busy? Anyone have any suggestions on what I can try to get the drives back into the array? Any help would be greatly appreciated. Thanks!
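
    Note that in the /proc/mdstat output above, /dev/sdc1 is already an active member of md0, which by itself explains the "Device or resource busy" error: a partition cannot belong to two arrays at once. A hedged way to find which partitions actually carry md2's UUID before re-adding (drive letters below are illustrative):

        # list each partition's md superblock; match against md2's UUID
        # (c6d87d27:aeefcb2e:d4453e2e:0b7266cb from the --detail output)
        for d in /dev/sd[a-f]1; do
            echo "== $d"
            mdadm --examine "$d" | grep -E 'UUID|State'
        done
        # then re-add the matching device, e.g.:
        # mdadm --manage /dev/md2 --re-add /dev/sdXN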

  • Loading a big database dump into PostgreSQL using cat

    - by RussH
    I have a pair of very large (~17 GB) database dumps that I want to load into PostgreSQL 9.3. After installing the database packages, learning more or less how to use them, and fiddling around a little on various StackExchange pages (particularly this question), it looks like a proper command for me to use is something like:

        cat mydb.pgdump | psql mydb

    because of the format the dump is in. My machine has 16 GB of RAM. I'm not familiar with the cat command, but I do know that my RAM is 99% exhausted and the database is taking a while to load. My machine isn't unresponsive to the point of hanging; I can run other commands in other terminal windows and have them execute at a reasonable clip. But I am wondering if cat is the best way to pipe in the file, or if something else is more efficient? My concern is that maybe cat could be using up all the RAM so the database doesn't have much to work with, throttling its performance. But I'm new to thinking about RAM issues like this and don't know if I'm worrying about nothing. Now that I think about it, this seems to be more of a question about cat and its memory usage than anything else. If there is a more appropriate forum for this question please let me know. Thanks!
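
    For context: cat only streams the file through a small pipe buffer, so the high memory use is almost certainly the kernel's page cache (which is reclaimable) rather than cat itself. A sketch of equivalent invocations that skip the extra process, assuming a plain-SQL dump as described:

        # same effect as the cat pipeline, without the extra process
        psql mydb < mydb.pgdump

        # or read the file directly, stopping on the first error
        psql -v ON_ERROR_STOP=1 -f mydb.pgdump mydb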

  • What is the correct way to move a file?

    - by Joe McDonald
    We had an issue at my work where I cut and pasted some files. Immediately when I did it, a ton of files were lost. I've been working in IT for 10+ years; I know how to cut and paste a file. Well, when it went up to my managers as to why the files were lost, they attributed all the problems to my cut and paste, and asked why in the world someone as knowledgeable as me would ever cut and paste a file, and didn't I know that was totally the wrong way to move a file? The correct way to move a file, they said, is to drag it. When cutting and pasting, it moves that 1+ GB file (on the server) to the clipboard (on my PC), which, obviously, will cause problems; dragging a file never hits the clipboard. To be honest, I don't believe that for a minute. I believe when I cut and paste text, it goes to the clipboard; I've seen it in old versions of Windows. But when right-clicking on 100+ files that total 1+ GB, I can't believe that all that data is copied immediately out of whatever share I'm on at the server, across my wireless, onto my laptop's local clipboard, just to go back to the server to another share. It seems they would build some logic into the server OS or my local OS (more likely my local OS) that would say: when copying files, don't perform the move action until I click paste, and if the files are staying local to where they were before, just move them. So, who's right?

  • How do I reinitialise a failed RAID 5 drive using terminal on Ubuntu Server

    - by Stephen
    I've currently put together a new system and part of that has been creating a software RAID 5 using mdadm in Ubuntu Server. I successfully got to the point where I created the array using:

        sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    I left it to do its thing overnight, then used the following command to check on it:

        watch cat /proc/mdstat

    To which the following was returned:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
              5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]
        unused devices: <none>

    It appears that one drive has failed (and I'm not too savvy with why another is a spare). So, just to be sure that something else isn't amiss, I wanted to try and re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? And also, how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/ Many thanks! p.s. Here is some extra information that may help:

        sudo mdadm --detail /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Mon Jun 18 21:14:21 2012
             Raid Level : raid5
             Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
          Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent
            Update Time : Mon Jun 18 21:50:26 2012
                  State : clean, FAILED
         Active Devices : 2
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 512K
                   Name : myraidbox:0 (local to host myraidbox)
                   UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
                 Events : 13

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17        1      active sync   /dev/sdb1
               2       8       33        2      active sync   /dev/sdc1
               3       0        0        3      removed

               0       8        1        -      faulty spare   /dev/sda1
               4       8       49        -      spare          /dev/sdd1
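
    A common sequence for pulling the faulty member out and re-adding it, then watching the rebuild, might look like the sketch below. This is hedged: it assumes /dev/sda1 is actually healthy (worth checking with SMART first), since re-adding a genuinely bad disk will just fail again:

        # drop the faulty member, then add it back so the array rebuilds onto it
        sudo mdadm /dev/md0 --remove /dev/sda1
        sudo mdadm /dev/md0 --add /dev/sda1

        # the spare is promoted automatically when a slot opens up;
        # synchronisation is complete when the recovery line disappears:
        watch cat /proc/mdstat   # shows e.g. "recovery = 37.2% ... finish=112min"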

  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find an answer why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found logs like these in dmesg; they started to show up just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check the reason for the kernel panic without having access to KVM? I have told my datacenter admins to check the memory in the server.
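
    Without console access, panic output can be shipped off-box. One hedged option is the netconsole module, which sends kernel messages to a remote syslog host; the IPs, ports, interface, and MAC below are placeholders:

        # send kernel messages (including panic traces) to 10.0.0.2:514 via eth0;
        # all addresses here are example values
        modprobe netconsole netconsole=6666@10.0.0.1/eth0,514@10.0.0.2/00:11:22:33:44:55

        # alternatively, configure kdump to save a crash dump locally
        # (kdump also requires a crashkernel=... boot parameter)
        yum install kexec-tools
        chkconfig kdump on && service kdump start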

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67 GHz, 4 GB RAM, a 640 GB HDD, and a Radeon HD 6370M 1 GB video card. It would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying a new SSD (Patriot Memory Torqx II 128 GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But at that point I had only installed Windows updates; after installing the other software I need for work, it became as slow as my old HDD -- so slow that my old, lower-spec PC really works better than my awesome laptop... I checked that TRIM and AHCI mode are turned on. So why's that? I asked for help from Patriot Memory support; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for their reply, I reinstalled from Windows 8 back to Windows 7 and again it was perfect at first, but the story repeats: it becomes slower and slower after every new piece of software I install. Check out some screenshots (sorry for the screenshot with the Russian Task Manager; I hope you will recognize the parameters from your English or other-language Task Manager). The main issue is that something constantly loads the disk at 100% and the response time jumps around 1000-3000 ms. Why am I asking about Windows? Because I tried installing Linux Mint (x86) and it just flies -- great performance regardless of how many programs I have installed. Only Windows (either 7 or 8) has this problem. So guys, I appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!

  • Linux server: Dropped packets

    - by Lars
    I see dropped packets using ifconfig on my eth0 interface:

        eth0   Link encap:Ethernet  HWaddr 00:15:17:0d:03:ca
               inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
               UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
               RX packets:30268348 errors:0 dropped:70721 overruns:0 frame:0
               TX packets:133076885 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:8699434077 (8.6 GB)  TX bytes:194937313025 (194.9 GB)
               Interrupt:16 Memory:feae0000-feb00000

    When I use ethtool -S I don't see anything wrong:

        NIC statistics:
             rx_packets: 30267138
             tx_packets: 133074510
             rx_bytes: 8699356158
             tx_bytes: 194934147340
             rx_broadcast: 35296
             tx_broadcast: 5435
             rx_multicast: 0
             tx_multicast: 0
             rx_errors: 0
             tx_errors: 0
             tx_dropped: 0
             multicast: 0
             collisions: 0
             rx_length_errors: 0
             rx_over_errors: 0
             rx_crc_errors: 0
             rx_frame_errors: 0
             rx_no_buffer_count: 0
             rx_missed_errors: 0
             tx_aborted_errors: 0
             tx_carrier_errors: 0
             tx_fifo_errors: 0
             tx_heartbeat_errors: 0
             tx_window_errors: 0
             tx_abort_late_coll: 0
             tx_deferred_ok: 0
             tx_single_coll_ok: 0
             tx_multi_coll_ok: 0
             tx_timeout_count: 0
             tx_restart_queue: 0
             rx_long_length_errors: 0
             rx_short_length_errors: 0
             rx_align_errors: 0
             tx_tcp_seg_good: 5757001
             tx_tcp_seg_failed: 0
             rx_flow_control_xon: 8649
             rx_flow_control_xoff: 62072
             tx_flow_control_xon: 0
             tx_flow_control_xoff: 0
             rx_long_byte_count: 8699356158
             rx_csum_offload_good: 30212111
             rx_csum_offload_errors: 0
             rx_header_split: 10857552
             alloc_rx_buff_failed: 0
             tx_smbus: 0
             rx_smbus: 0
             dropped_smbus: 0
             rx_dma_failed: 0
             tx_dma_failed: 0

    I am running Ubuntu 12.04 with kernel 3.2.0-30-generic #48-Ubuntu SMP. I have pinged every device on my internal network for about 24 hours without packet loss. I also checked my router and my interface to the WAN; no errors there either. Does anyone have any clue?
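
    Drops counted by ifconfig but absent from the NIC's own statistics often point at the receive ring filling before the kernel drains it. A hedged thing to check is the ring buffer size (interface name as above, the 4096 is just an example value):

        # show current vs. hardware-maximum ring buffer sizes
        ethtool -g eth0

        # if the current RX value is well below the maximum, try raising it:
        ethtool -G eth0 rx 4096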

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite having to pay for it. I need at least the following batch editing functions: resize using either height or width while maintaining aspect ratio; auto contrast; text overlays; and occasionally, cropping. Oh, and I make extensive use of renaming features as well. A couple of issues with ACDSee: I always need to highlight the Exposure section or auto contrast will not be applied despite being saved in the preset; and I can't define or move around the cropping box, forcing me to manually crop tons of images. I'm not an advanced "power photo-editor"; I only require the basic stuff I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlays based on the image names (images are named image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlay would be Image-1 and Image-2 C1 and Image-2 C2). I tried digiKam, but damn, that thing is huge. It runs very slowly on my Pentium 4 with 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow regardless of whether it runs on Windows or Linux.
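
    ImageMagick covers exactly this kind of batch work from the command line and is free and open. A hedged sketch of the wish-list item above, deriving the overlay text from each file name (the output folder, target width, and point size are placeholders):

        # resize to 800px wide (aspect preserved), stretch contrast, and
        # overlay a label taken from the file name: image-1_2.jpg -> "image-1"
        mkdir -p out
        for f in image-*_*.jpg; do
            label=${f%.jpg}      # strip extension
            label=${label%_*}    # strip the trailing _N counter
            convert "$f" -resize 800x -normalize \
                    -gravity south -pointsize 24 -annotate +0+10 "$label" "out/$f"
        done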

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the last few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining. How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but the answer is always the same: "There are no previous versions available".

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have /dev/md127 RAID5 array that consisted of four drives. I managed to hot remove them from the array and currently /dev/md127 does not have any drives:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdd1[0] sda1[1]
              304052032 blocks super 1.2 [2/2] [UU]
        md1 : active raid0 sda5[1] sdd5[0]
              16770048 blocks super 1.2 512k chunks
        md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]
        unused devices: <none>

    and:

        mdadm --detail /dev/md127
        /dev/md127:
                Version : 1.2
          Creation Time : Thu Sep  6 10:39:57 2012
             Raid Level : raid5
             Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
          Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
           Raid Devices : 4
          Total Devices : 0
            Persistence : Superblock is persistent
            Update Time : Fri Sep  7 17:19:47 2012
                  State : clean, FAILED
         Active Devices : 0
        Working Devices : 0
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 512K

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       0        0        2      removed
               3       0        0        3      removed

    I've tried to do mdadm --stop /dev/md127 but:

        mdadm --stop /dev/md127
        mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

    I made sure that it's unmounted with umount -l /dev/md127 and confirmed that it indeed is unmounted:

        umount /dev/md127
        umount: /dev/md127: not mounted

    I've tried to zero superblock of each drive and I get (for each drive):

        mdadm --zero-superblock /dev/sde1
        mdadm: Unrecognised md component device - /dev/sde1

    Here's output of lsof|grep md127:

        md127_rai 276 root cwd DIR     9,0 4096 2 /
        md127_rai 276 root rtd DIR     9,0 4096 2 /
        md127_rai 276 root txt unknown          /proc/276/exe

    What else can I do? LVM is not even installed so it can't be a factor.
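
    Something still holds a reference to the device even with LVM out of the picture. A hedged way to see what, before retrying the stop:

        # list anything holding md127 open (dm devices, other md arrays, ...)
        ls -l /sys/block/md127/holders/

        # check device-mapper targets in case something was layered on top of it
        dmsetup table

        # if nothing holds it, stopping should succeed:
        mdadm --stop /dev/md127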

  • Windows 8 fails to install with corrupted graphics

    - by Andy
    I am trying to install the Windows 8 Pro Upgrade via the download method on an older PC (2007-08). It is a Dell Dimension E521, but it has been upgraded with a 3.0 GHz dual-core AMD processor, a 120 GB SSD, and 4 GB of RAM. The Windows 8 Upgrade Assistant did not detect any issues or concerns with upgrading, other than that I don't have DVD software installed. The system installs Windows 8, but on the first boot corrupt graphics are present. Eventually the monitor goes into sleep mode and the installer rolls back to Windows 7 Pro x64, which runs fine. I wouldn't be upset over not being able to install Windows 8, but I already paid for the software since I thought there would be no issues upgrading. The graphics card in the system is the GeForce 7300 LE and it has the latest NVIDIA drivers for Windows 7 loaded. I saw this solution, which is similar to my problem: Corrupt graphics during Windows 8 installation. However, I have downloaded Windows 8 and I am not sure how to go about modifying the install that resides somewhere on the hard drive. Thanks in advance for any assistance.

  • why does the partition start on sector 2048 instead of 63

    - by gcb
    I had two drives partitioned identically, running two RAID partitions on each. One died and I replaced it under warranty with the same model. While trying to partition the replacement, I found the first partition can only start on sector 2048, instead of 63 as before. The new drive also reports a different geometry from the previous and remaining drives (fewer heads, more cylinders). Old drive:

        $ sudo fdisk -c -u -l /dev/sdb
        Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000aa189

           Device Boot      Start         End      Blocks  Id  System
        /dev/sdb1   *          63   174080339   87040138+  83  Linux
        /dev/sdb2       174080340   182482334    4200997+  82  Linux swap / Solaris
        /dev/sdb3       182482335  3907024064  1862270865  fd  Linux raid autodetect

    Remanufactured drive received from warranty:

        $ sudo fdisk -c -u -l /dev/sda
        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d0b5d

           Device Boot      Start         End      Blocks  Id  System
        /dev/sda1            2048   ...

    Why is that?
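
    Sector 2048 is the 1 MiB mark, which is the default alignment boundary modern fdisk uses once DOS compatibility mode is off (the -c flag); sector 63 was the old CHS-based convention. A quick check of the arithmetic:

        # 2048 sectors x 512 bytes = 1 MiB, a safe boundary for 4K-sector
        # drives and SSDs regardless of the reported CHS geometry
        echo $(( 2048 * 512 ))   # 1048576 bytes = 1 MiB
        echo $(( 63 * 512 ))     # 32256 bytes, the legacy offset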

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions: (1) Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN-% numbers in iotop, which I find hard to explain because the target is /dev/null. (2) How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
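
    One variable worth eliminating is the page cache: buffered dd reads pass through it, so the measurement reflects memory behaviour as much as the storage path. A hedged variant of the same command using direct I/O (same device path as above):

        # O_DIRECT bypasses the page cache, so the figure reflects the
        # storage path rather than kernel caching
        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000 iflag=direct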

  • "iostat" command different in two equal machines

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge with 8 CPUs, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One of the machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that top shows no hint of the cause of the load: CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st, and about 1.5 GB of memory is free. Any idea what it could be, or where else we can check? iostat output on the healthy machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   8.97    0.03    4.46    0.19    0.14   86.23

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1            1.60         0.69        55.38     587620   47254184
        xvdfp2            2.64         1.10        61.04     934786   52091056
        xvdfp4            0.86         0.19        41.72     163866   35601920
        xvdfp1            4.37        36.59        73.89   31220810   63051504
        xvdfp3            8.03         7.08        94.63    6045402   80749184

    iostat output on the problematic machine:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   9.29    0.04    5.55    0.26    0.11   84.74

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        xvdap1            2.13         3.34        68.85     246244    5077888
        xvdfp1            7.60        74.31       104.88    5480362    7734840
        xvdfp3           13.22        73.67       125.00    5433386    9218600
        xvdfp4            1.11         0.76        65.08      55762    4799248
        xvdfp2            4.16         3.31        99.17     243818    7313264

    Many thanks for the help.
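
    When top looks idle but the load average is high, per-process I/O accounting can show who is actually touching the disks. A hedged starting point, assuming the sysstat package is installed:

        # extended per-device statistics every 5 seconds
        iostat -x 5

        # per-process disk reads/writes every 5 seconds
        pidstat -d 5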

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here. I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD, and larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

        CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
        CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)

    I have a USB stick that I want to somehow get these two ISOs on. Since the first one is 4.4 GB, I can't use ISO2USB because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image--of the same distro, I'm not trying for some fancy multi-boot thing--to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net install version, which I don't want--I want to pre-download everything), otherwise I would've gone for that. So, given that I have these two DVD ISOs, how can I get them on a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows. I am using Windows to prepare the stick.

  • Using rsync to migrate files between windows servers: problems with Excel spreadsheets

    - by HorusKol
    We've been migrating about 220 GB of data from a Windows 2003 server to a Windows 2008 server, and because of the time it would take to copy that data and the necessity of keeping it available for users, I came up with the idea of using rsync on an Ubuntu server to broker the migration. (I might have gone for a proper Windows solution, but the applications I found were a bit pricey for a one-shot like this, and permissions are not a problem.) All well and good; today I'm making the last sync and confirming that the new server is up to date using diff, but I've noticed an odd thing with Excel spreadsheets (.xls). Every instance of an Excel spreadsheet that was already copied in a previous synchronisation is being marked as "already up-to-date" by rsync. However, when I then run a diff, I'm told that the files differ. I'm manually copying them, as there are but a handful, but I was wondering what might be causing this. No other file type in the entire 220 GB tree has had any problem like this; just the Excel/xls files. It'd be great if someone could come up with an explanation.
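
    By default rsync's "quick check" compares only size and modification time, so a file whose contents changed without altering either gets skipped. A hedged way to rule that out is to force full checksum comparison (the paths are placeholders):

        # -c compares checksums instead of size+mtime; slower, but it
        # catches files the quick check wrongly treats as up-to-date
        rsync -avc /mnt/source/ /mnt/dest/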

  • SQL Server 2005 to 2008 DB attach help please!

    - by Brandon
    I have SQL Server 2005 Standard on my personal machine. I created a very big DB, about 21 GB. I made a backup and transferred the .bak file via an FTP program to my dedicated server, which runs SQL Server 2008 Enterprise Edition. I tried to restore the transferred .bak file but got an error. I posted the error on here and was told the database is corrupt. How? I don't know. The connection was not interrupted during the FTP transfer, and the DB works on my own machine. So then I detached the DB on my own machine and transferred the .mdf and .ldf files to my dedicated server through FTP, again with no interruptions. Now I try to attach the DB and get this error:

        The header for file 'DB.mdf' is not a valid database file header. The FILE SIZE property is incorrect. (Microsoft SQL Server, Error: 5172)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.00.1442&EvtSrc=MSSQLServer&EvtID=5172&LinkId=20476

    I already wasted 21 GB transferring the .bak file. Now I've used another 21 GB to transfer the .mdf and .ldf files. Please tell me there's a solution. The DB can detach and attach fine on my machine in SQL Server 2005, but not SQL Server 2008 on my server.
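
    A header that is invalid on arrival but fine at the source is a classic signature of an FTP transfer in ASCII mode, which rewrites byte sequences that look like line endings inside binary files. One hedged thing to rule out with a command-line FTP client (the host name is a placeholder):

        # force binary (image) mode before transferring database files
        ftp myserver.example.com
        ftp> binary
        ftp> put DB.mdf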

  • a disk read error occurred [closed]

    - by kellogs
    Hi, "a disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB.

        [root@localhost linux]# fdisk -lu
        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x48424841

           Device Boot      Start        End     Blocks   Id  System
        /dev/sda1              63  204214271  102107104+   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2       204214272  255606783   25696256   af  HFS / HFS+
        Partition 2 does not end on cylinder boundary.
        /dev/sda3       255606784  276488191   10440704    c  W95 FAT32 (LBA)
        Partition 3 does not end on cylinder boundary.
        /dev/sda4       276490179  312576704   18043263    5  Extended
        /dev/sda5   *   276490240  286709759    5109760   83  Linux
        /dev/sda6       286712118  310488254   11888068+   b  W95 FAT32
        /dev/sda7       310488318  312576704    1044193+  82  Linux swap / Solaris

    sda is a 160 GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I can not recall how exactly I used testdisk, but it once said something like "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)". So far I have tried "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the pull-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via testdisk. What next, please? Thank you!

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy and paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine with MS Windows Server 2008 Datacenter. First I tried to copy and paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It would have required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish. So I restarted it after midnight, when my local transfer speed is over 4 MB/s. My impression is that, independent of the transfer speed, the remote computer becomes sluggish while copying over RDP, while downloading from the internet doesn't make the remote host sluggish. As far as I understand, this is because the clipboard of the remote computer, and so its memory, becomes overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (pasting of a file)? What are the possible ways to control it? Update: After reading that the slow transfer speed is caused by the encryption used for copy and pasting over RDP, and since I am really interested in overall efficiency (both the time to get the file and the ability to work without waiting), I changed the question title from "How to control the usage of remote desktop clipboard usage for pasting a big file?" to "How to better copy & paste big files over RDP?". For example, is it better to copy and paste one huge (zip) archive, or to unzip it and copy a folder of unzipped files? And more exactly I wanted to ask: what are the possible ways to improve the overall experience, meaning both the speed of transfer (i.e. availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before the copy and paste completes)?

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind (correct me if I'm wrong), a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. I see on Wikipedia that it remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly larger SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (web pages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant to what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.
