Search Results

Search found 17847 results on 714 pages for 'virtual disk'.

Page 97 of 714

  • Ubuntu resolution in VirtualBox (Mac OS X)

    - by idealflip
    Ubuntu can't detect my monitor type. Which driver should I install, or how else can I increase the resolution past 800x600 when no higher resolution is offered? I'm running Ubuntu 9.10 on VirtualBox 3.1.6 on Mac OS X 10.6.3. Thanks in advance!
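
    In case it helps anyone landing here: the usual fix for a VirtualBox guest stuck at 800x600 is installing the Guest Additions, which include the virtual display driver. A minimal sketch of the steps inside an Ubuntu guest, assuming the Guest Additions CD has been inserted via the VirtualBox Devices menu (the exact .run file name varies by VirtualBox version and architecture):

        # build tools and kernel headers are needed to build the vboxvideo module
        sudo apt-get update
        sudo apt-get install build-essential linux-headers-$(uname -r)

        # mount the Guest Additions CD and run the installer
        sudo mount /dev/cdrom /media/cdrom
        sudo sh /media/cdrom/VBoxLinuxAdditions-x86.run

        # reboot, then pick a higher resolution under System > Preferences > Display
        sudo reboot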

    Read the article

  • openSUSE on VirtualBox: Guest Additions

    - by Arkapravo
    I am running an openSUSE 11.0 guest on an Ubuntu 9.10 host in VirtualBox. When I try to install the Guest Additions, they don't work as expected: the screen is not resized to the correct resolution, and seamless mode between the two OSes doesn't work. Has anyone tried installing the Guest Additions in openSUSE?
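
    For comparison, a sketch of what usually has to be in place on the openSUSE side before the Guest Additions installer can build its kernel modules (package names per openSUSE 11.x; the .run file name depends on the VirtualBox version):

        # compiler and kernel sources matching the running kernel
        sudo zypper install gcc make kernel-source

        # with the Guest Additions CD inserted via Devices > Install Guest Additions...
        sudo mount /dev/cdrom /mnt
        sudo sh /mnt/VBoxLinuxAdditions-x86.run

        # restart X (or reboot) before resizing and seamless mode will work
        sudo reboot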

    Read the article

  • Can someone recommend a Compact Flash card to be used as a boot disk

    - by Hamish Downer
    I have an early Acer Aspire One netbook whose flash drive is really slow at writing. I've taken it apart to add more RAM, but I've pretty much stopped using it. I've read about people replacing the SSD with a Compact Flash card and a CF-to-ZIF adapter, but I've also read about Compact Flash cards where the manufacturer has permanently disabled the boot flag to stop people from doing this kind of mod. (I can't find the link any more, though.) So my most specific question is: can someone recommend a Compact Flash card that does allow the boot flag to be set? Please say whether you've done it yourself or just heard about it from someone else. Beyond that, is this generally a problem?
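
    As a side note, whether a particular card accepts a boot flag can be tested before buying the adapter, with the card in a USB reader. A hedged sketch (/dev/sdX is a placeholder for the card's device node):

        # show the partition table and current flags
        sudo parted /dev/sdX print

        # try to set the boot flag on partition 1, then verify it stuck
        sudo parted /dev/sdX set 1 boot on
        sudo parted /dev/sdX print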

    Read the article

  • Debian minimum hard disk footprint

    - by user41072
    Hi, while searching Google for a Debian minimal install I found this question: http://serverfault.com/questions/29071/red-hat-server-minimal-install. User shylent wrote that he uses a Debian install so basic that the running processes can be counted on the fingers of one hand :D. So I'm searching, and asking, for a starting point to create a basic Linux distro, not from scratch like LFS, but based on Debian, for example. I used debootstrap, but the result is still 150 MB.
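
    One knob that helps here: debootstrap has a --variant=minbase mode that installs only Essential-tagged packages plus apt, which lands well under the default footprint. A sketch, where the target directory and mirror are placeholders:

        # minbase: only Essential: yes packages and apt get installed
        sudo debootstrap --variant=minbase stable /srv/chroot http://deb.debian.org/debian

        # measure what you got
        sudo du -sh /srv/chroot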

    Read the article

  • Restoring Windows XP restore point files from disk

    - by Dragos Toader
    Suppose I copied a Windows XP restore point to a USB memory stick, i.e. I copied

        C:\System Volume Information\MountPointManagerRemoteDatabase
        C:\System Volume Information\tracking.log
        C:\System Volume Information\_restore{45B5E8B9-949A-471E-999D-F381DA56A2D3}
        C:\System Volume Information\catalog.wci

    to F:\System Volume Information\. How can I restore this restore point? Can I fool the system into using those files if I copy them back into the restore point folder, i.e. copy

        F:\System Volume Information\MountPointManagerRemoteDatabase
        F:\System Volume Information\tracking.log
        F:\System Volume Information\_restore{45B5E8B9-949A-471E-999D-F381DA56A2D3}
        F:\System Volume Information\catalog.wci

    back to C:\System Volume Information\?
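
    On the mechanics of the copy itself: Explorer usually cannot touch System Volume Information because of its ACLs, so a backup-mode copy tool is needed. A hedged sketch with robocopy (not in XP by default; it ships with the Windows Server 2003 Resource Kit), leaving aside the separate question of whether System Restore will accept the returned files:

        :: /B uses backup semantics to get past the ACLs
        :: /E copies subdirectories, /COPYALL keeps security info and attributes
        robocopy "C:\System Volume Information" "F:\System Volume Information" /E /B /COPYALL

        :: and the reverse direction before attempting the restore
        robocopy "F:\System Volume Information" "C:\System Volume Information" /E /B /COPYALL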

    Read the article

  • NetApp fragmentation

    - by mdpc
    We all know that once a disk (or a storage system, for that matter) is put into use, performance degrades as files become fragmented. This seems to be why disk defragmenters are in fairly wide use on Windows boxes, and they do increase performance substantially. As an aside, I haven't heard of many defragmenters in the Unix/Linux world. Despite NetApp's claimed WAFL protections, file fragmentation will still occur, especially with all the sparsely created VMs. My question is: does anybody do any sort of defragmentation on such a storage system? Do you notice any measurable degradation or improvement from addressing, or not addressing, this situation? If you do something about it, what? Thanks
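
    For what it's worth, Data ONTAP does ship an on-box tool in this area: the reallocate command can measure and optimize a volume's block layout. A hedged sketch (7-mode syntax; /vol/vol1 is a placeholder):

        # estimate how unoptimized the volume's layout currently is
        reallocate measure /vol/vol1

        # one-time optimization pass over the volume (-f forces a full pass)
        reallocate start -f /vol/vol1

        # watch progress
        reallocate status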

    Read the article

  • Upgraded Ubuntu, all drives in one zpool marked unavailable

    - by Matt Sieker
    I just upgraded to Ubuntu 14.04, and I had two ZFS pools on the server. There was a minor issue with me fighting the ZFS driver and the kernel version, but that's worked out now. One pool came online and mounted fine. The other didn't. The main difference between the two is that one was just a pool of disks (video/music storage), and the other was a raidz set (documents, etc.). I've already attempted exporting and re-importing the pool, to no avail; attempting to import gets me this:

        root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
           pool: storage
             id: 15855792916570596778
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: http://zfsonlinux.org/msg/ZFS-8000-5E
         config:

                storage                                      UNAVAIL  insufficient replicas
                  raidz1-0                                   UNAVAIL  insufficient replicas
                    ata-SAMSUNG_HD103SJ_S246J90B134910       UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523  UNAVAIL
                    ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969  UNAVAIL

    The symlinks for those in /dev/disk/by-id also exist:

        root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
        lrwxrwxrwx 1 root root  9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
        lrwxrwxrwx 1 root root  9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
        lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9

    Inspecting the various /dev/sd* devices listed, they appear to be the correct ones (the three 1 TB drives that were in the raidz array). I've run zdb -l on each drive, dumped it to a file, and run a diff. The only difference among the three is the guid fields (which I assume is expected). All three labels on each one are basically identical, and are as follows:

        version: 5000
        name: 'storage'
        state: 0
        txg: 4
        pool_guid: 15855792916570596778
        hostname: 'kyou'
        top_guid: 1683909657511667860
        guid: 8815283814047599968
        vdev_children: 1
        vdev_tree:
            type: 'raidz'
            id: 0
            guid: 1683909657511667860
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 34
            ashift: 9
            asize: 3000569954304
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8815283814047599968
                path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 18036424618735999728
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 10307555127976192266
                path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
                whole_disk: 1
                create_txg: 4
        features_for_read:

    Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check). So, in summary:

    - I upgraded Ubuntu and lost access to one of my two zpools.
    - The difference between the pools is that the one that came up was JBOD; the other was raidz.
    - All drives in the unmountable zpool are marked UNAVAIL, with no per-device notes about corrupted data.
    - Both pools were created with disks referenced from /dev/disk/by-id/.
    - The symlinks from /dev/disk/by-id to the various /dev/sd devices seem correct.
    - zdb can read the labels from the drives.
    - The pool has already been exported and an import attempted; it can't be imported again.

    Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a reasonable array? Can I run zpool create zraid ... without losing my data? Is my data gone anyhow?
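
    Two non-destructive things sometimes worth trying in this state, before anything like a re-create: scanning plain /dev instead of by-id (a changed udev layout after an upgrade can leave by-id resolving differently than when the pool was created), and a read-only import that is allowed to roll back transaction groups. A hedged sketch (ZFS-on-Linux options; 'storage' is the pool from above):

        # scan raw device nodes instead of the by-id symlinks
        zpool import -d /dev -f storage

        # read-only import, permitting txg rollback, so nothing gets written
        zpool import -o readonly=on -fFX storage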

    Read the article

  • What are "Missing thread record" errors when running fsck -fy?

    - by ohho
    Disk Utility reports some errors when I verify the root volume on my OS X MacBook. So I boot into single-user mode with CMD-S and run /sbin/fsck -fy. The errors look like:

        ** Checking catalog file.
           Missing thread record (id = ...)
           Incorrect number of thread records
        ** Checking catalog hierarchy.
           Invalid volume file count
           (It should be ... instead of ...)
        ** Repairing volume.
           Missing directory record (id = ...)

    I'd like to know what causes these errors, in the hope that I can be more careful in the future and prevent them from happening again. P.S. I am using an SSD, so I assume a mechanical hard disk error is less likely. Thanks!

    Read the article

  • Trying to get Hobbit clients to show CPU, memory, disk, etc.

    - by Bryan Agee
    I have a Hobbit server set up with a handful of hosts using the conn, http, ssh, and sslcert services, but I would like to add the other tests as well. I've installed hobbit-client on a server and added

        # CLIENT:fqdn.example.com

    to its host line in bb-hosts, and added

        HOST=fqdn.example.com

    before the default configuration in hobbit-clients.cfg, but no joy. Does anyone know what else I need to do for those tests to register?
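
    For anyone comparing notes, here is a hedged sketch of how the two pieces often end up looking when the client data does show (IP and thresholds are made up; the hostname the client reports must match the bb-hosts entry, which is what the CLIENT: tag overrides):

        # bb-hosts: one line per host; tags after '#'
        192.0.2.10  fqdn.example.com  # conn ssh http CLIENT:fqdn.example.com

        # hobbit-clients.cfg: per-host thresholds for the client-fed tests
        HOST=fqdn.example.com
            LOAD 5.0 10.0
            DISK / 90 95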

    Read the article

  • Run script when a specific disk/memory card is mounted under OS X

    - by Max Rydahl Andersen
    How do I run a script when a drive is mounted under OS X? My use case is that I would like to automatically copy images from my USB memory card or hard drive when it is inserted into my USB card reader, and when a DVD or CD is inserted I would like to copy it for storage on my media center. I've tried using MarcoPolo, but from what I can see it can only detect the presence of a certain USB device, not the presence of a specific hard drive. (http://superuser.com/questions/65127/is-it-possible-to-run-an-automator-workflow-when-a-usb-device-is-connected)
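
    One built-in route, in case it is useful: launchd jobs can carry a StartOnMount key, which makes launchd run the program every time any filesystem is mounted; the script then checks whether the volume it cares about is the one that appeared. A hedged sketch of ~/Library/LaunchAgents/com.example.onmount.plist (label, script path, and volume name are hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
            "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.example.onmount</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Users/me/bin/on-mount.sh</string>
            </array>
            <!-- fire the job on every mount event -->
            <key>StartOnMount</key>
            <true/>
        </dict>
        </plist>

    The script itself can then do something like [ -d /Volumes/CARD/DCIM ] && rsync -a /Volumes/CARD/DCIM/ ~/Pictures/incoming/ and exit quietly when some other volume triggered it.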

    Read the article

  • How should I decide on virtual dedicated hosting?

    - by babu
    Hi all. What criteria should I look at when choosing a virtual dedicated server? I have an ASP.NET website with 3800 visitors per day that is hosted on shared hosting. Kindly advise me: should I upgrade it to a VPS? What are the advantages and disadvantages of using a VPS? Thanks in advance!

    Read the article

  • Accessing guests on virtual network when connected to host via PPTP

    - by Viktor Elofsson
    I'm setting up a development machine which runs Ubuntu 12.04 and KVM for virtualization. I have a guest running Ubuntu 12.04 which can be accessed from the host via the IP address assigned by libvirt. The guest can also access the internet, so no problem there. However, now I want to set up PPTP so I can connect to the host (from my workstation running Windows 7) and directly access guests without relying on SSH port forwarding. I can connect from my W7 machine to the host (PPTP), but I cannot access any virtual machines (which are accessible from the host directly). Relevant configuration files:

    cat /etc/network/interfaces

        auto lo
        iface lo inet loopback

        # device: eth0
        auto eth0
        iface eth0 inet static
            address x.x.x.x
            broadcast x.x.x.x
            netmask x.x.x.x
            gateway x.x.x.x
            # default route to access subnet
            up route add -net x.x.x.x netmask x.x.x.x gw x.x.x.x eth0

    virsh net-edit default

        <network>
          <name>default</name>
          <uuid>xxxxxxxx-72ce-3c20-af0f-d3a010f1bef0</uuid>
          <forward mode='nat'/>
          <bridge name='virbr0' stp='on' delay='0' />
          <mac address='52:54:00:xx:xx:xx'/>
          <ip address='192.168.122.1' netmask='255.255.255.0'>
            <dhcp>
              <range start='192.168.122.2' end='192.168.122.254' />
              <host mac='52:54:00:yy:yy:yy' name='web1' ip='192.168.122.11' />
            </dhcp>
          </ip>
        </network>

    cat /etc/pptpd.conf (commented lines removed)

        # TAG: option
        #       Specifies the location of the PPP options file.
        #       By default PPP looks in '/etc/ppp/options'
        #
        option /etc/ppp/pptpd-options
        # TAG: logwtmp
        #       Use wtmp(5) to record client connections and disconnections.
        #
        logwtmp
        # (Recommended)
        localip 192.168.122.1
        remoteip 192.168.122.234-238,192.168.122.245

    cat /etc/ppp/chap-secrets

        # Secrets for authentication using CHAP
        # client        server  secret          IP addresses
        xxxxx           *       yyyyyyyyyy      192.168.122.100

    I get the correct IP address when connecting my W7 machine, but when I try to ping the virtual machine at 192.168.122.11 I get "Reply from 192.168.122.1: Destination port unreachable." It's probably something trivial I'm missing, but I can't for the life of me figure out what it is. So I'm turning to you, Server Fault.
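
    Not a sure diagnosis, but two things are commonly missing in exactly this setup: IP forwarding on the host, and proxy ARP for the PPTP clients, since the remoteip range lives inside virbr0's 192.168.122.0/24. A hedged sketch:

        # allow the host to route between ppp0 and virbr0
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # in /etc/ppp/pptpd-options: answer ARP on the LAN side for PPTP clients
        proxyarp

        # if iptables filters FORWARD, let tunnel and bridge traffic through
        iptables -A FORWARD -i ppp+ -o virbr0 -j ACCEPT
        iptables -A FORWARD -i virbr0 -o ppp+ -j ACCEPT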

    Read the article

  • DRBD block device as storage for a KVM virtual machine

    - by facha
    Hi everyone. I've set up DRBD replication between two machines and used the DRBD block device as storage for a KVM machine. Everything is running well. However, I'm in doubt whether this setup is OK to use. From what I've read so far on the internet, people tend to use DRBD -> OCFS2 -> qcow2 file as storage for their virtual machines.
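
    For the record, handing the DRBD device straight to the guest is a legitimate pattern; libvirt treats it as any other block device. A minimal sketch of the disk element in the domain XML, assuming the resource is /dev/drbd0:

        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <!-- the replicated DRBD device, exposed to the guest as a raw disk -->
          <source dev='/dev/drbd0'/>
          <target dev='vda' bus='virtio'/>
        </disk>

    The main caveat is that with classic single-primary DRBD the VM can only run on whichever node currently holds the primary role.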

    Read the article

  • Virtual Server 2005 VSS writer missing

    - by Jon S.
    I have two servers running Virtual Server 2005 R2 SP1. I'm using Symantec Backup Exec 10d for backups. One server runs the backups fine, but the other causes the VMs to crash when it tries to back up. I think the problem is that the "Microsoft Virtual Server 2005 Writer" does not show up when I run "vssadmin list writers". Can I install the writer without reinstalling Virtual Server 2005?

    Read the article

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of file access times to analyse a build process, but it is not working the way I want: the access time is updated the first time I read a file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated

        # waiting a few minutes...

        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(

    I suppose the file is cached by Linux in free memory, and only this cached copy is accessed on subsequent reads. A solution would be to discard the read buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems to only sync the write buffers, not the read ones. Maybe it is due to some custom kernel configuration in my distro (Fedora 9)? Or am I missing something here? Is there a way to achieve this access-time refresh? Note also that I do not want to simulate writes on my entire file tree: because I am using a makefile-based build system, this would cause the entire project to be built again.
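
    One possibility worth ruling out before blaming the page cache: many distributions of that era mount filesystems with relatime (or noatime), under which atime is only updated when it is older than the file's mtime/ctime, which would produce exactly this once-then-never pattern. A hedged check and workaround:

        # check which atime option the filesystem is mounted with
        mount | grep ' on / '

        # restore classic atime semantics (strictatime needs kernel >= 2.6.30;
        # on older kernels, remove relatime/noatime from /etc/fstab and remount)
        sudo mount -o remount,strictatime /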

    Read the article

  • How to restore files deleted on a mapped virtual drive

    - by r9r9r9
    I used psubst drive1: drive2:path /P to create a persistent virtual drive, and I found that's great. But when I delete files on those drives, they don't appear in the Recycle Bin, so how can I restore them? For example, I used (p)subst K: C:\1 to create the K: drive, then deleted files in K:. I think it would be better if they were moved to the Recycle Bin on C: rather than deleted permanently. You can find more detail about psubst here: http://code.google.com/p/psubst/

    Read the article

  • Tomcat with virtual hosts - 404

    - by Thardas
    I have a CentOS 5.2 server set up with Apache 2.2.3 and Tomcat 5.5.27. The server hosts multiple virtual hosts connected to multiple Tomcats. For instance, we have one Tomcat for development and testing and one Tomcat for production: project.demo.us.com points to the dev Tomcat and project.us.com points to the production Tomcat. Here is the virtual host's configuration:

        <VirtualHost *:80>
            ServerName project.demo.us.com

            CustomLog logs/project.demo.us.com/access_log combined env=!VLOG
            ErrorLog logs/project.demo.us.com/error_log

            DocumentRoot /var/www/vhosts/project.demo.us.com
            <Directory /var/www/vhosts/project.demo.us.com>
                Allow from all
                AllowOverride All
                Options -Indexes FollowSymLinks
            </Directory>

            ########## ########## ##########

            JkMount /project/* online
        </VirtualHost>

    The JkMount line says that we use the "online" worker, and our workers.properties contains:

        worker.list=..., online, ...

        worker.online.port=7703
        worker.online.host=localhost
        worker.online.type=ajp13
        worker.online.lbfactor=1

    And Tomcat's conf/server.xml contains:

        <Connector port="7703" enableLookups="false" redirectPort="8443"
                   protocol="AJP/1.3" URIEncoding="UTF-8" maxThreads="80"
                   minSpareThreads="10" maxSpareThreads="15"/>

    I'm not sure what redirectPort is, but I tried to telnet to that port and no one answers, so it shouldn't matter? Tomcat's webapps directory contains project.war, and the server automatically deployed it under a project directory which contains index.jsp and hello.html; the latter is there for static debugging purposes. Now when I try to access http://project.demo.us.com/project/index.jsp, I get Tomcat's "HTTP Status 404 - The requested resource () is not available". The same thing happens with hello.html, so it's not working with static content either. Apache's access_log contains:

        88.112.152.31 - - [10/Aug/2009:12:15:14 +0300] "GET /demo/index.jsp HTTP/1.1" 404 952 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2"

    I couldn't find any mention of the request in Tomcat's logs. If I shut down this specific Tomcat, I no longer get Tomcat's 404 but Apache's "503 Service Temporarily Unavailable", so I should be configuring the correct Tomcat. Is there something obvious that I'm missing? Is there any place where I could find out what path Tomcat is using to look for the requested files?
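
    One mod_jk detail that may or may not apply here: JkMount /project/* does not match the bare /project path, and it can be worth mounting both and turning up mod_jk's own log to see whether the requests ever reach the worker. A hedged sketch (log location is a placeholder):

        JkMount /project   online
        JkMount /project/* online

        # mod_jk's routing log shows which URIs were matched and forwarded
        JkLogFile  logs/mod_jk.log
        JkLogLevel debug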

    Read the article

  • mdadm RAID5: recovering from double disk failure - with a twist (drive order)

    - by Peter Bos
    Let me acknowledge first off that I have made mistakes, and that I have a backup for most but not all of the data on this RAID. I still have hope of recovering the rest of the data. I don't have the kind of money to take the drives to a recovery expert company. Mistake #0: not having a 100% backup. I know. I have an mdadm RAID5 system of 4x3TB drives, /dev/sd[b-e], each with one partition /dev/sd[b-e]1. I'm aware that RAID5 on very large drives is risky, yet I did it anyway.

    Recent events: The RAID became degraded after a two-drive failure. One drive [/dev/sdc] was really gone; the other [/dev/sde] came back up after a power cycle but was not automatically re-added to the RAID. So I was left with a 4-device RAID with only two active drives [/dev/sdb and /dev/sdd]. Mistake #1: not using dd copies of the drives for restoring the RAID. I did not have the drives or the time. Mistake #2: not making a backup of the superblock and mdadm -E of the remaining drives.

    Recovery attempt: I reassembled the RAID in degraded mode with

        mdadm --assemble --force /dev/md0

    using /dev/sd[bde]1. I could then access my data. I replaced /dev/sdc with a spare, empty, identical drive. I removed the old /dev/sdc1 from the RAID:

        mdadm --fail /dev/md0 /dev/sdc1

    Mistake #3: not doing this before replacing the drive. I then partitioned the new /dev/sdc and added it to the RAID:

        mdadm --add /dev/md0 /dev/sdc1

    It then began to restore the RAID, ETA 300 minutes. I followed the process via /proc/mdstat to 2% and then went to do other stuff.

    Checking the result: Several hours (but fewer than 300 minutes) later, I checked the process. It had stopped due to a read error on /dev/sde1. Here is where the trouble really starts. I then removed /dev/sde1 from the RAID and re-added it; I can't remember why I did this, it was late:

        mdadm --manage /dev/md0 --remove /dev/sde1
        mdadm --manage /dev/md0 --add /dev/sde1

    However, /dev/sde1 was now marked as spare. So I decided to recreate the whole array using --assume-clean, using what I thought was the right order and with /dev/sdc1 missing:

        mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

    That worked, but the filesystem was not recognized while trying to mount. (It should have been EXT4.)

    Device order: I then checked a recent backup I had of /proc/mdstat, where I found the drive order:

        md0 : active raid5 sdb1[0] sde1[4] sdd1[2] sdc1[1]
              8790402048 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

    I then remembered this RAID had suffered a drive loss about a year ago and recovered from it by replacing the faulty drive with a spare one. That may have scrambled the device order a bit... so there was no drive [3], only [0], [1], [2], and [4]. I tried to find the drive order with the Permute_array script (https://raid.wiki.kernel.org/index.php/Permute_array.pl), but that did not find the right order.

    Questions: I now have two main questions:

    1. I screwed up all the superblocks on the drives, but only gave mdadm --create --assume-clean commands (so I should not have overwritten the data itself on /dev/sd[bde]1). Am I right that in theory the RAID can be restored [assuming for a moment that /dev/sde1 is OK] if I just find the right device order?

    2. Is it important that /dev/sde1 be given the device number [4] in the RAID? When I create it with

           mdadm --create /dev/md0 --assume-clean -l5 -n4 /dev/sdb1 missing /dev/sdd1 /dev/sde1

       it is assigned the number [3]. I wonder if that is relevant to the calculation of the parity blocks. If it turns out to be important, how can I recreate the array with /dev/sdb1[0] missing[1] /dev/sdd1[2] /dev/sde1[4]?

    If I could get that to work, I could start the array in degraded mode, add the new drive /dev/sdc1, and let it resync again. It's OK if you would like to point out to me that this may not have been the best course of action, but you'll find that I realized this. It would be great if anyone has any suggestions.
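
    On question 1, the brute-force approach people use is to loop over candidate orderings, recreate with --assume-clean each time, and probe the filesystem strictly read-only. Because --create rewrites the superblocks on every pass, this is normally done against dd images or device-mapper snapshots of the drives, never the only copies. A hedged sketch (chunk size and metadata version taken from the mdstat backup above; extend the orders array with further device permutations as needed):

        #!/bin/bash
        # try the missing slot in each position; never write to the filesystem
        orders=(
            "missing /dev/sdb1 /dev/sdd1 /dev/sde1"
            "/dev/sdb1 missing /dev/sdd1 /dev/sde1"
            "/dev/sdb1 /dev/sdd1 missing /dev/sde1"
            "/dev/sdb1 /dev/sdd1 /dev/sde1 missing"
        )
        for order in "${orders[@]}"; do
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --assume-clean --run \
                  --metadata=1.2 --level=5 --chunk=512 --raid-devices=4 $order
            # -n: check only, never repair; a clean-ish result flags a candidate
            if fsck.ext4 -n /dev/md0 >/dev/null 2>&1; then
                echo "possible order: $order"
            fi
        done
        mdadm --stop /dev/md0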

    Read the article

  • Tor in virtual machine - 502 bad gateway

    - by Kon
    I'm trying to run Tor in a virtual machine. It used to work, but now when I try to access sites I get a "502 Bad Gateway" error from Privoxy instead of the requested site. I tried setting the clock to the correct time with the date command, but I still get the 502 error. I'm using VirtualBox with a Linux guest and a Tor + Privoxy setup.
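
    If the clock was not it, the next usual suspect is the forwarding rule in Privoxy's config; Privoxy answers with a bad-gateway page when it cannot reach the SOCKS listener. A hedged sketch of the classic pair of checks (Tor's default SOCKS port assumed):

        # /etc/privoxy/config: hand everything to Tor; the trailing dot matters
        forward-socks4a / 127.0.0.1:9050 .

        # and verify Tor is actually listening on that port
        netstat -tlnp | grep 9050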

    Read the article

  • Ghost image - Windows asks for activation when deployed to VM

    - by Chris Sobolewski
    I have several images created with Ghost Solution Suite (v11, I believe). The images have been in use for a few years now, but I finally have enough time to attempt to virtualize them for easier updates. I am running VMware and attempting to image the virtual machines with my Ghost image files. For my images I run sysprep with minisetup and use reseal. The image deploys successfully; however, when I start the VM for the first time, it demands Windows activation. This doesn't happen when I image a physical computer, even a different model with different hardware. The idea of virtualizing my images becomes rather worthless if I can't deploy them without having to activate every time (especially as Microsoft keeps declaring our volume license key invalid for activations). Does anyone know why it is asking for activation on a virtual machine but not on a physical PC? How can I prevent this?

    Read the article
