Search Results

Search found 25442 results on 1018 pages for 'disk size'.

  • Disk Utility Restore causes "Could not validate resource - Invalid Argument"

    - by Yahoo
    I have a problem with Disk Utility on Mac OS X 10.6. I have an image of Windows that I would like to use as a bootable volume on a pen drive or external hard drive. When I try to restore the volume from the image I get an error: "Restore Failure: Could not validate resource - Invalid Argument". I converted the image to .iso (Mac OS Extended/ISO (Joliet) Hybrid Image) format and then got this error instead: "Restore Failure: Could not find any scan information. The source image needs to be imagescanned before it can be restored." But when I try to scan the image for Restore, I get the first message again. I have read a lot about this error on the Internet, but I haven't found a solution. I have tried both ISO and DMG formats; I don't know which is best.
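
    A possible Terminal-based workaround, assuming the source is a .dmg (asr is the command-line counterpart of Disk Utility's Restore; the image path and volume name here are hypothetical):

        # Attach the checksum/scan data that the Restore error is asking for:
        sudo asr imagescan --source ~/Desktop/windows.dmg
        # Then restore the image onto the target volume, erasing it first:
        sudo asr restore --source ~/Desktop/windows.dmg --target /Volumes/PENDRIVE --erase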

    Read the article

  • Backup virtual hard disk

    - by Harshil Sharma
    I have a VM created in VMware Player. Its VHD is currently 17 GB, split among multiple 2 GB files. The host OS is Windows 8, where I use CrashPlan for file backup. The problem is that whenever I use the VM, CrashPlan detects all parts of the VHD as altered and backs up the whole 17 GB. What I want is software that runs on the host OS (Windows 8), treats the VHD as a physical hard disk, and creates incremental backups of it, including all files, programs and the OS.

    Read the article

  • Xen Disk Performance Issues

    - by user98651
    I'm currently using Xen PV on CentOS 5, with my domU's stored as flat files on a hardware RAID controller (write cache enabled) formatted with XFS. On the dom0 I can get about 500MB/s in a 2GB dd write from /dev/zero, but on the domU's I'm lucky if I get 10MB/s (it is usually around half that). I've tried changing the disk scheduler to noop on the domU's, changed some mount parameters, and tweaked the resource allocations of both the dom0 (prioritize CPU) and the domU's (increase RAM and VCPU allocations). None of these steps has produced any noticeable change in performance. My instinct is that this is not a hardware problem, given the solid performance of the dom0. Any ideas on what might be causing it? I'm considering moving to LVM-based domU's.
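
    For what it's worth, a direct-I/O variant of that dd test takes the page cache out of the comparison; a sketch (the test file path is arbitrary):

        # Write 2GB with O_DIRECT so dom0 and domU numbers aren't skewed by caching:
        dd if=/dev/zero of=/mnt/ddtest bs=1M count=2048 oflag=direct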

    Read the article

  • openSuse full disk encryption

    - by djechelon
    I'm a proud SUSE user. I'm about to reinstall 12.2 on my ASUS N76VZ (UEFI x64 laptop). Since I'm very sensitive about laptop security against theft or unwanted inspection, I chose BitLocker with a USB dongle under Windows 7. When I last installed SUSE, I found that only the home partition (separate from root) could be encrypted. Does SUSE offer a full disk encryption solution like BitLocker that I haven't discovered yet, or is encrypting the home partition the only way to protect data? Encrypting only home is workable, since personal data lives there, but I would still like to encrypt the whole thing! Also, unlocking with a hardware token (no TPM available) is preferred to a password, if possible. Thanks!
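
    For reference, the generic Linux building block for this is LUKS via cryptsetup, which also supports keyfiles on removable media; a minimal sketch, where /dev/sda3 and the keyfile path are hypothetical:

        cryptsetup luksFormat /dev/sda3            # set up the encrypted container
        cryptsetup luksOpen /dev/sda3 cryptvol     # unlock it as /dev/mapper/cryptvol
        mkfs.ext4 /dev/mapper/cryptvol             # create a filesystem inside it
        # Enroll a keyfile kept on a USB stick as an additional unlock method:
        cryptsetup luksAddKey /dev/sda3 /media/usbstick/keyfile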

    Read the article

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives in my Linux server, and I use the second as a backup of the first. I use rsync for that purpose, for example:

        rsync -r -v --delete /media/disk1/ /media/disk2/

    This copies every file/directory from /media/disk1/ to /media/disk2/ but also deletes any difference. For example, let's say files A and B, but not file C, are on disk1, while disk2 has C but not A and B. After the command, disk2 would have files A and B, and file C would be deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system keeps working, since the system files are on my system disk, but when rsync tries to back up my data onto disk2 from the broken disk1, it deletes all the files from disk2 because it can't read anything on disk1. Is this a possible scenario, or is there protection against it built into rsync?
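
    A minimal guard along these lines, assuming /media/disk1 is a mount point (mountpoint and rsync's --max-delete are standard tools, but the threshold of 100 is arbitrary):

        #!/bin/sh
        # A dead disk often leaves an empty mount point, which rsync would
        # faithfully mirror by deleting everything on the backup; bail out first.
        mountpoint -q /media/disk1 || exit 1
        # Additionally cap how many files a single run may delete:
        rsync -r -v --delete --max-delete=100 /media/disk1/ /media/disk2/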

    Read the article

  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64-bit), I'm reading through 9000 moderately sized files totalling more than 200 MB of data. Using Java (JDK 1.6.21) I iterate over the files. The first 1400 or so go at full speed, but then the speed drops to about 4 ms per file. It turns out that the main cost is incurred simply by opening the files, which I do with new FileInputStream (and of course I close them in time to avoid file-handle leaks). After some investigating, I see that Windows' disk cache is using only about 100 MB of RAM although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool, but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key, but (after rebooting) CacheSet still indicates I'm using 100 MB of cache (and it doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?
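
    A minimal Java sketch of the measurement described above, to isolate the open() cost per file (the directory argument and output format are arbitrary):

        import java.io.*;

        public class OpenCost {
            public static void main(String[] args) throws IOException {
                File[] files = new File(args[0]).listFiles();
                if (files == null) return; // argument was not a directory
                for (File f : files) {
                    long start = System.nanoTime();
                    FileInputStream in = new FileInputStream(f); // the suspected hot spot
                    long opened = System.nanoTime();
                    in.close();
                    System.out.println(f.getName() + ": open took "
                            + (opened - start) / 1000 + " us");
                }
            }
        }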

    Read the article

  • Skype Disk Full Error...

    - by commradepolski
    I have searched around online for an answer to this one without any luck. Every couple of days, Skype on my PC throws an error that says "Disk Full". I have plenty of hard-disk space, so I know that is not the issue. I can resolve the problem temporarily by killing the Skype process and restarting Skype. Does anyone know of a solution that stops this from happening? I am running Skype 4.2.0.169.

    Read the article

  • Vista winsxs folder eats disk space

    - by Simpzon
    My machine has been running Vista Ultimate 64-bit for about two years now; Service Packs SP1 and SP2 are installed, too. The system partition has a size of 55 GB, which should be quite comfortable under normal circumstances, but about 40 GB (no typo) are used by the Windows folder, especially the subfolder winsxs, which takes about 35 GB. I have already uninstalled as many programs as possible and ran compcln.exe to reduce it, but that only gained 2-3 GB. What can I do to clean up without risking system stability? I'm a software developer and this is my daily work environment, which means I can't risk strange side-effects from blindly deleting stuff. You can maybe deduce some typical usage patterns from this information. Any suggestions?

    Read the article

  • Custom byte size?

    - by thyrgle
    So, you know how the primitive type char has a size of 1 byte? How would I make a primitive with a custom size? So, for example, instead of an int with a size of 4 bytes, I'd make one with a size of, let's say, 16. Is there a way to do this, or a way around it?
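
    There is no arbitrary-size primitive, but C (and C++) offer partial workarounds in both directions; a minimal sketch (the type names are illustrative):

        #include <stdint.h>

        /* Below one byte: a bit-field packs a 4-bit value into a struct. */
        struct Nibble {
            unsigned int value : 4;
        };

        /* Above the native widths: represent a 16-byte integer as raw bytes;
           arithmetic has to be implemented by hand or via a bignum library. */
        typedef struct {
            uint8_t bytes[16];
        } u128;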

    Read the article

  • Using jQuery To Get Size of Viewport

    - by Volomike
    How do I use jQuery to determine the size of the browser viewport, and to re-detect this when the page is resized? I need to size an IFRAME into this space (coming in a little on each margin). For those who don't know: the browser viewport is not the size of the document/page; it is the visible size of your window before scrolling.
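
    A minimal sketch of the usual approach ($(window).width()/.height() return the viewport size; the #myframe selector and the margin value are hypothetical):

        function fitFrame() {
            var margin = 20; // example gap on each side
            $('#myframe')
                .width($(window).width() - 2 * margin)
                .height($(window).height() - 2 * margin);
        }
        $(document).ready(fitFrame);   // size once the DOM is ready
        $(window).resize(fitFrame);    // re-fit whenever the viewport changes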

    Read the article

  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently, but none of the suggested answers solved my issue: I too have some disk space usage that I can't find. df shows:

        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      144183992 136857180      2652 100% /
        udev             2013316         4   2013312   1% /dev
        tmpfs             808848       876    807972   1% /run
        none                5120         0      5120   0% /run/lock
        none             2022116        76   2022040   1% /run/shm
        overflow            1024         0      1024   0% /tmp

    I checked the inodes, I checked lsof for +L1 (deleted but still open files), I rebooted, and I checked for files hidden behind mounts, but none of these was the issue. The usage grows periodically, and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. In ~, du -h --max-depth=1 gives:

        192K    ./.nv
        2.1M    ./.gconf
        12K     ./Pictures
        1.6M    ./.launchpadlib
        12K     ./Public
        24K     ./.TemporaryItems
        8.9M    ./.cache
        12K     ./Network Trash Folder
        28K     ./.vnc
        11M     ./.AppleDB
        48K     ./.subversion
        1.9G    ./.xbmc
        8.0K    ./.AppleDesktop
        12K     ./.dbus
        81M     ./.mozilla
        12K     ./Music
        160K    ./.gnome2
        44K     ./Downloads
        692K    ./.zsh
        236K    ./.AppleDouble
        64K     ./.pulse
        4.0K    ./.gvfs
        1.4M    ./.adobe
        44K     ./.pki
        44K     ./.compiz-1
        168K    ./.config
        1.4M    ./.thumbnails
        12K     ./Templates
        912K    ./.gstreamer-0.10
        8.0K    ./.emacs.d
        92K     ./Desktop
        1.3M    ./.local
        12K     ./Ubuntu One
        12K     ./Documents
        296K    ./.fontconfig
        12K     ./.qt
        12K     ./.gnome2_private
        20K     ./.ssh
        20K     ./.mission-control
        12K     ./Videos
        12K     ./Temporary Items
        640K    ./.macromedia
        124G    .

    I can't find a way to figure out how it got to that 124G in that directory. There are no mount points in home.
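
    One check worth adding: du --max-depth=1 lists subdirectories, but plain files sitting directly in ~ still count toward the "." total without being listed individually; a sketch:

        # List large regular files directly under ~ (not inside any subdirectory):
        find ~ -maxdepth 1 -type f -size +100M -exec ls -lh {} \;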

    Read the article

  • Disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used when I can only find 3.6G of files/directories? I'm running as root on an Amazon Linux AMI (seems like CentOS), with lots of free memory available, no swapping going on, and no apparent file-descriptor issue. The only thing I can think of is a log file that was deleted while applications still append to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min, with very small decreases from time to time). Any explanation? Solution?

        du --max-depth=1 -h /
        1.2G    /usr
        4.0K    /cgroup
        22M     /lib64
        11M     /sbin
        19M     /etc
        52K     /dev
        2.1G    /var
        4.0K    /media
        0       /sys
        4.0K    /selinux
        du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
        du: cannot access `/proc/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/fdinfo/4': No such file or directory
        0       /proc
        18M     /home
        4.0K    /logs
        8.1M    /bin
        16K     /lost+found
        12M     /tmp
        4.0K    /srv
        35M     /boot
        79M     /lib
        56K     /root
        67M     /opt
        4.0K    /local
        4.0K    /mnt
        3.6G    /

        df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  6.5G  1.4G  84% /
        tmpfs           3.7G     0  3.7G   0% /dev/shm

        sysctl fs.file-nr
        fs.file-nr = 864 0 761182
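
    The deleted-log-file theory is easy to test; a sketch (lsof's +L1 option lists open files whose link count is zero, i.e. deleted but still held open):

        # Any file shown here is consuming space that du can no longer see:
        lsof +L1
        # Restarting the daemon that holds such a file releases the space.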

    Read the article

  • How to serve a small image at a larger size

    - by DennyHalim.com
    We all know about hotlinking images and how to ban bad referrers, but I feel the need to take it further than that: I want to serve hotlinkers one huge image that is several megabytes in size. I have found a good image that is less than 100k and have already substituted it for all bad hotlinkers. How can I convert this image into a much larger file?
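
    If the goal is simply a much larger file, ImageMagick can upscale the decoy; a sketch with hypothetical file names:

        # Blow the image up; large pixel dimensions plus maximum quality
        # inflate the file to several megabytes:
        convert decoy.jpg -resize 4000x4000 -quality 100 decoy-large.jpg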

    Read the article

  • Which Qt classes use the disk directly?

    - by Jurily
    I'm trying to write a library to separate all the disk activity out into its own thread, but the documentation doesn't really care about such things. What I want to accomplish is that, aside from startup, all disk activity is asynchronous, and for that I need to wrap every class that accesses the disk. Here's what I found so far:

    QtCore:
        QFile
        QTemporaryFile
        QDir
        QFileInfo
        QFileSystemWatcher
        QDirIterator
        QSettings

    QtGui:
        QFileDialog
        QFileSystemModel
        QDirModel (unsure)
        QFont (unsure)
        QFontDialog

    I'm sure there are more.
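
    For the wrapping itself, Qt already ships a thread-pool helper; a minimal sketch using QtConcurrent::run in Qt 4 (the readFile helper and the path are illustrative):

        #include <QtConcurrentRun>
        #include <QByteArray>
        #include <QFile>
        #include <QFuture>

        // The blocking disk work, kept in a plain function.
        QByteArray readFile(const QString &path) {
            QFile f(path);
            f.open(QIODevice::ReadOnly);   // error handling elided
            return f.readAll();
        }

        void startAsyncRead() {
            // Runs readFile on a pool thread instead of the caller's thread;
            // watch the future (e.g. with QFutureWatcher) for completion.
            QFuture<QByteArray> result =
                QtConcurrent::run(readFile, QString("/tmp/data.bin"));
        }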

    Read the article

  • Changing the size of the Windows 7 taskbar

    - by dertoni
    Is there a way to change the size of the Windows 7 taskbar? Built in or with the help of outside programs, both welcome. Something like the Mac OS X Dock's zooming effect would be OK/nice, too. Edit: I'm essentially looking for a way to shrink it, because my laptop does not have a big screen, so every pixel is valuable.

    Read the article

  • Visual Studio scratch disk behavior

    - by bobobobo
    I don't know if this feature exists, but I'd like a way to control Visual Studio 2010's scratch-disk behavior (other than completely turning off IntelliSense). Right now it creates a massive .sdf file in the project folder (50 MB+), and then it goes and creates an IPCH folder with 60 MB+ of precompiled headers. All that's well and good while VS is running, but after it exits, I really would like the disk space back. Is there a way to configure VS 2010 to (a) use one fixed location (e.g. %AppData%\VSScratch) for scratch files, so it's easier to blow away, and (b) automatically delete the .sdf/IPCH files on exit? I know they don't delete them because it makes startup faster, but if you delete them yourself, startup time isn't increased all that much.

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a drive fail in a 3-drive RAID-5 volume on Windows Server 2008 R2 (now operating in redundancy mode):

        WDC 1 TB
        WDC 1 TB
        WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either:

        Master Boot Record (MBR)
        GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command, except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I then tried the diskpart command-line tool. First we look for our RAID-5 volume, which is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
          ----------  ---  -----------  -----  ----------  -------  ---------  --------
          Volume 0     E   VMs (Raid5)  NTFS   RAID-5      1863 GB  Failed Rd
          Volume 1     D                       DVD-ROM        0 B   No Media
          Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
          Volume 3     C                NTFS   Partition   1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B   *
          Disk 1    Online    931 GB   931 GB   *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B   *
          Disk M0   Missing     0 B      0 B   *

    The disk with 931 GB free is Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application that stores lots of images (each < 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a directory structure like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1,000,000,000 images, as I'll store a maximum of 1,000 images in each leaf directory. I've created it, and from a theoretical point of view it seems OK to me (though I've no experience with this), but I want to find out what will happen when the directories fill up with files. A question about creating the structure: is it better to create it all in one go (takes approx. 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:

        How much time does it take for an image to be saved when there is little or no space used?
        How does this change as the space starts to be used up?
        How much time does it take for an image to be read from a random leaf? Does this change much when there are lots of files?

    Does launching

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is that the only thing I have to do to get a clean start if I want to run my tests again? Do you have any suggestions or corrections?
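
    A sketch of such a measurement script (the file names and count are placeholders; GNU time's -f '%e' prints elapsed seconds to stderr):

        #!/bin/sh
        # Time each write so the cost can be plotted against fill level.
        for i in $(seq 1 1000); do
            dest="000/000/$(printf '%09d' "$i").jpg"
            /usr/bin/time -f '%e' cp sample.jpg "$dest" 2>> write-times.log
        done
        # Between read tests, drop the page cache so reads hit the disk, not RAM:
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches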

    Read the article

  • FB.ui and setting popup size

    - by manuelpedrera
    I am using FB.ui with the display parameter set to popup. When the method is 'stream.publish', it auto-resizes when the content is loaded. However, when using 'fbml.dialog' (in order to display a multi-friend selector), it comes up at a size that I'm not able to change, and the content is displayed cropped. I have tried the following approaches, with no luck:

        FB.ui({
            method: 'fbml.dialog',
            size: {width: 800, height: 500},
            ...

        FB.ui({
            method: 'fbml.dialog',
            width: 800,
            height: 500,
            ...

    I've also been looking at the API source code. It declares the method this way:

        'fbml.dialog': {
            size : { width: 575, height: 300 },
            url : 'render_fbml.php',
            loggedOutIframe : true
        }...

    and the function that executes the methods builds the basic call data like this:

        var call = {
            cb     : cb,
            id     : id,
            size   : method.size || {},
            url    : FB._domain.www + method.url,
            params : params
        };

    Any help would be much appreciated...

    Read the article

  • Big database log size in SQL Server 2008

    - by t.kehl
    I have a database running under Microsoft SQL Server 2008, and I have noticed that its log file (.ldf) has grown very large: the database file (.mdf) is 630 MB, while the log file is 12 GB. I now wonder what the reason for this can be. Is there a tool that lets me look into the log to see what is causing this growth? And what can I do to prevent the log from growing this large?
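
    Some T-SQL for the inspection side ('MyDb' and the logical log file name 'MyDb_log' are placeholders; under the FULL recovery model, a log that is never backed up is the usual cause):

        USE MyDb;
        -- File sizes (size is counted in 8 KB pages):
        SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;
        -- What is currently preventing log truncation:
        SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDb';

        -- If point-in-time recovery is not needed for this database:
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (MyDb_log, 100);  -- shrink the log file to ~100 MB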

    Read the article

  • Monitor-independent form size and control size?

    - by Thomas
    In various cases I have seen that when we run an application on PCs with different monitor sizes, the WinForms behave differently: sometimes the form gets bigger and, as a result, a few controls on the form are no longer visible. Please tell me how to design a WinForms application in such a way that, whatever the monitor size, the form size and control positions behave the same way on every PC. Please guide me. Thanks.

    Read the article

  • How to make a disk image and restore from it later?

    - by Torben Gundtofte-Bruun
    I'm a new Linux user. I've reinstalled my Wubi from scratch at least ten times in the last few weeks, because while getting the system up and running (drivers, resolution, etc.) I've broken something (X, GRUB, unknowns) and couldn't get it back to work. Especially for a newbie like me, it's easier (and much faster) to just reinstall the whole shebang than to troubleshoot several layers of failed "fixing" attempts. Coming from Windows, I expect that there is some "disk image" utility I can run to make a snapshot of my Linux install (and of the boot partition!) before I meddle with stuff. Then, after I've foobar'ed my machine, I would somehow restore it back to that working snapshot. What's the Linux equivalent of Windows disk imagers like Acronis True Image or Norton Ghost? Note: I found a similar question here.
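
    For the record, the low-tech route is dd from a live CD/USB, with the source partitions unmounted; a sketch with hypothetical device and file names:

        # Snapshot the whole disk (MBR, boot partition and all), compressed:
        sudo dd if=/dev/sda bs=4M | gzip > /media/backup/disk.img.gz
        # Restore the snapshot later:
        gunzip -c /media/backup/disk.img.gz | sudo dd of=/dev/sda bs=4M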

    Read the article
