Search Results

Search found 2637 results on 106 pages for 'mount smbfs'.


  • Booting the Windows 7 kernel from an initrd/wim image file

    - by Ivo
    I'm wondering if it's possible to have the Win7 kernel and its drivers (especially storage drivers) boot from an initrd-like image file (maybe a .wim?) and only then mount the Windows root partition and complete loading the full OS. I'll try to explain why: I'm running an emulated environment with NO REAL BIOS, and I'm passing through a RAID storage controller. I want Windows to boot from this controller's array, but of course the BCD manager cannot access the disks in the array until the kernel and the relevant controller storage drivers are loaded. To be clear, I get the classic winload.exe missing error. I need a solution similar to what Linux does: load the kernel and its drivers, then mount the root partition and complete the boot. Any ideas or advice?

    Read the article

  • Do I have to chmod 777 my NFS folder when I share?

    - by luckytaxi
    Under Red Hat, if I export a folder as an NFS mount, does the folder have to have RW for users/groups/others? Right now /storage/software is -rwxr-xr-x root/root, i.e. in /etc/exports: /storage/software *(rw,sync) On my client I can mount it, but I can't write. I'm using a regular user and NOT root. I think "no_root_squash" fixes it, but I really don't want that. Then again, I don't want to have to chmod 777 the folder on the server either.
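    One hedged sketch (not from the question): keep the directory at 0755 and let the export map every remote user onto a dedicated local account that owns the tree. The all_squash/anonuid options are standard NFS export options, but the network, UID and account name below are placeholders for illustration:
        # /etc/exports, hypothetical sketch: squash all client users to a local
        # "software" account (UID/GID 1050 are placeholders) that owns the tree
        /storage/software  192.168.1.0/24(rw,sync,all_squash,anonuid=1050,anongid=1050)
        # on the server, after editing the export:
        chown -R 1050:1050 /storage/software && exportfs -ra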

    Read the article

  • GParted tells me my partition has 1.30 GiB used space but I cannot access its contents

    - by reprogrammer
    I have an ext4 partition (/dev/sda7) for my Linux system and another (/dev/sda5) for keeping my data. When installing Ubuntu 10.04 LTS, I set the mount point of /dev/sda7 to "/" and that of /dev/sda5 to "/data". GParted tells me that 1.30 GiB out of 70.12 GiB of /dev/sda5 has been used up, but the mounted directory "/data" is empty. So it looks like my data is still there but I cannot access it. Also, when I set the mount point I didn't check the "format" box, so it shouldn't have been formatted. How can I check whether the partition was formatted? How can I recover my files?
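    A hedged first check (device names follow the question; the steps themselves are not from the original post): confirm what is actually mounted at /data and what filesystem signature /dev/sda5 carries before assuming the data is gone:
        mount | grep -E 'sda5|/data'     # is /dev/sda5 really the thing mounted on /data?
        sudo blkid /dev/sda5             # filesystem type and UUID currently on the partition
        sudo mkdir -p /mnt/check && sudo mount -o ro /dev/sda5 /mnt/check
        ls -la /mnt/check                # inspect the contents read-only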

    Read the article

  • Ubuntu fails to start

    - by miccaman
    I have a laptop with Ubuntu 9.10 which fails to start, and I want to copy the data from it to an external hard disk. I can log in to the recovery-mode command line, but then I cannot mount the external hard drive (and in recovery mode I cannot write to the laptop's hard drive). If I boot from a portable USB stick with Linux Mint, I can mount the external hard drive and copy most of the data from the laptop; however, there is a directory under /home/user/Documents which I have no rights to access, and I get a permission denied error. Are there any other options?
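    A hedged sketch for the live-USB route (mount paths are placeholders, not from the question): in the live session the restriction is only file ownership, so copying as root gets around it:
        # assuming the laptop disk is mounted at /media/laptop and the external
        # drive at /media/external in the Linux Mint live session
        sudo cp -a /media/laptop/home/user/Documents /media/external/
        # or, to preserve ownership/permissions and be restartable:
        sudo rsync -aHv /media/laptop/home/user/ /media/external/user-backup/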

    Read the article

  • How to use UMLFS?

    - by Vi
    I'm trying to mount what is inside a UML session as a FUSE filesystem on the host. There's a "uml_mount" program which looks like it is meant for this purpose, but it fails. What is UMLFS (I haven't found any documentation at all) and how do I mount it? uml_mount mounts a FUSE filesystem and starts uml_mconsole <umid> umlfs <file descriptor>, which tries to send this file descriptor to the UML kernel (to deal with further FUSE handling), but the send fails. Also, I haven't found any sign of FUSE inside the kernel. Do I need some special patch for this?

    Read the article

  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (WD MyBook WE) over NFS to another machine on my local network. The directory on the NAS device looks like this: drwxr-x--- 15 git git 4096 Nov 17 01:05 git/ And the IDs of the user git on the NAS device are: [root@myhost DataVolume]# id git uid=505(git) gid=505(git) I played with many different parameters in the /etc/exports file, and this is what I currently have there: /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check) On the client side I have the user git and group git with the same IDs, to match the ones on the server: user@myclient:~$ id git uid=505(git) gid=505(git) groups=505(git) I mount the directory with: sudo mount myhost:/DataVolume/git -t nfs git/ and the mounted directory looks like: drwxr-x--- 15 git git 4096 Nov 17 01:05 git After these steps I can't cd into that directory with any user, including git and root. I get a Permission denied error. Thanks in advance for any help.
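    A hedged diagnostic sketch (not from the question; it assumes the mount point is ./git and NFSv3): confirm the export and the UID mapping as seen from the client, and test access as the matching user, before reworking the export options:
        showmount -e myhost                      # confirm the export and allowed client
        sudo mount -t nfs -o vers=3,nolock myhost:/DataVolume/git git/
        id git && sudo -u git ls -ld git/        # does uid 505 on the client get in?
        sudo -u git touch git/writetest          # check write access separately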

    Read the article

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?
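    A hedged sketch of ways to see the real on-wire rate (the tool names are suggestions, not from the question): rsync is most likely reporting how fast its writes land in the client's page cache, so watch the network interface instead, or take the cache out of the measurement:
        # watch the actual NFS traffic on the interface carrying the mount
        sudo iftop -i eth0                        # or: nload eth0 / sar -n DEV 1
        # or force the cached writes out before stopping the clock
        rsync -a --progress src/ /mnt/nas/dst/ && time sync
        # remounting the NFS share with -o sync also makes the per-file figures honest, at a cost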

    Read the article

  • Partition/install issues

    - by jalal ahmad
    I am new to Ubuntu and tried to install 10.1 as a dual boot option from a USB stick. At first, in the partitioning dialogue of the installation process, I got an error saying it cannot find the root directory. I searched the Ubuntu forums and followed one of the posts: make sure that the partition file system you wish to install Linux, Ubuntu or Backtrack on is ext4, ext3 or ext2, and not FAT32 or NTFS; then mount / on it: during the installation process press "change" on the partition you wish to use, make sure "do not use this partition" is not chosen in the scroll box, scroll to ext4, ext3 or ext2, write / in the "mount" field and click OK. Next, a message appears saying something like "swap area was not defined, do you wish to continue or choose a swap area?"; click "ok" and continue, or click "go back", choose another partition, click change, choose "swap" in the file system scroll box, then click "ok" and next. All went well, but when I rebooted I could not find Windows Vista as a dual boot option. Also, I could not see wireless networks, and in the process of trying to find out what went wrong, the wireless soft switch somehow got turned off; since I cannot boot into Windows, I have no idea what to do about that. Searching the internet, I found a post saying the dual boot problem can be fixed by installing gparted, but when I tried I got: Reading package lists... Done Building dependency tree Reading state information... Done E: Couldn't find package gparted I then thought I would copy my files off the hard disk and reinstall Windows, but I found that I now have two partitions which are different from what I had before installing Ubuntu: filesystem partition 1, 119 GB ext4; swap partition 5, 1.1 GB; and extended partition 2, 1.1 GB. And I cannot mount the 119 GB partition where all my personal videos and photos are, if they are still there. Now I cannot even boot into Windows. I need help on what to do. The best case would be to copy my files off before I mess up the system further, then get a dual boot system; if that's not possible, how do I install Vista again? I have the Windows CD. Cheers guys, and thanks in advance.
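    Two hedged fragments that may help, assuming the Ubuntu install has a working network connection (package names are standard Ubuntu ones, not from the post): gparted lives in the universe repository, and a missing Windows entry in the boot menu is often just a matter of re-running the GRUB OS prober:
        # enable the "universe" component in Software Sources (or /etc/apt/sources.list), then:
        sudo apt-get update && sudo apt-get install gparted
        # if Vista's bootloader is still on disk, this should add it to the GRUB menu
        sudo os-prober && sudo update-grub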

    Read the article

  • Windows XP, USB-Stick and multiple Partitions

    - by Bobby
    Hello. I've got a USB stick with multiple partitions on it (FAT32 (active), FAT32, ext2 <-- that's another story), and it seems that Windows XP can only mount the first partition of the stick. If I try to mount the second one using the volume manager, it tells me that I need to make it active and reboot... is it really that limited, or am I just missing something here? Partitions: FAT32, System Rescue CD, bootable and active; FAT32, some tools; ext2, some data (I know that I need extra drivers etc., but that's not what's being asked here). Edit (Solution): Thanks to the answer about the RMB (Removable Media Bit), I was able to dig up a solution described at this site (section: "On flash drive only the first partition works"). Basically, there's a Hitachi driver available which filters the RMB at the driver level; it just needs to be modified a little to work with basically any USB stick. All you need to do is add the "Device Instance ID" to the driver and then use that driver.

    Read the article

  • Windows share mounted then symlinked on LAMP server. Serves up html, but not images.

    - by Samuurai
    This has really got me befuddled... I've mounted a share like this: //srv1/UserUploads /mount/UserUploads cifs rw,user,exec,uid=wwwrun,gid=www,username=shareuser,password=sharepw 0 0 I then have a symlink here: WEBSVR:/Web/htdocs/public_html # ls -l useruploads lrwxrwxrwx 1 wwwrun www 18 Dec 7 09:18 useruploads -> /mount/UserUploads Oddly, if I ls inside the mounted area, items appear with a capital S in the permission bits: -rwxrwSrwx 1 wwwrun www 4077 Dec 30 14:54 prop9.jpg -rwxrwSrwx 1 wwwrun www 4 Jan 12 15:57 test.html If I bring up test.html in a browser it works fine, but if I go to prop9.jpg, Chrome gives me this error: This web page is not available. The web page at http://10.1.64.100/useruploads/webteam/help2let/prop6-1.jpg might be temporarily down or it may have moved permanently to a new web address. More information on this error Below is the original error message Error 100 (net::ERR_CONNECTION_CLOSED): Unknown error. Has anyone seen this behaviour where binary files (images) aren't served, but HTML/text is?
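    One hedged possibility worth testing (the diagnosis is an assumption, not something stated in the question): Apache's sendfile and mmap optimisations are known to misbehave when content sits on a CIFS or NFS mount, and tiny text files can slip through while larger binaries end in a closed connection. Disabling them for that directory is cheap to try:
        # in httpd.conf or the vhost config, for the directory backed by the CIFS mount
        <Directory "/mount/UserUploads">
            EnableSendfile Off
            EnableMMAP Off
        </Directory>
        # then reload Apache, e.g.: sudo /etc/init.d/apache2 reload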

    Read the article

  • How to remove large number of files/folders in linux

    - by user1745713
    We are using Hadoop to split a table into smaller files to feed to Mahout, but in the process we created a huge number of _temporary logs. We have an NFS mount for the Hadoop volume, so we can use all the usual Linux commands to delete folders and files, but we just can't get them deleted. Here's what I've tried so far: hadoop fs -rmr /.../_temporary : hangs for hours and does nothing; on the NFS mount: rm -rf /.../_temporary : hangs for hours and does nothing; find . -name '*.*' -type f -delete : same as above. The folders look like this (38 of these folders inside _temporary): drwxr-xr-x 319324 user user 319322 Oct 24 12:12 _attempt_201310221525_0404_r_000000_0 The contents of these are actually folders, not files; each one of those 319322 folders has exactly one file inside. Not sure why the logging is done this way. Any help is appreciated.
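    Two hedged approaches (paths are placeholders; neither comes from the question): deleting on HDFS while skipping the trash avoids an extra copy step, and on the NFS side the rsync-an-empty-directory trick often beats rm for millions of entries:
        # on HDFS, bypass the trash so the delete is a single metadata operation
        hadoop fs -rmr -skipTrash /path/to/job/_temporary
        # on the NFS mount, sync an empty directory over the tree to empty it
        mkdir /tmp/empty
        rsync -a --delete /tmp/empty/ /mnt/hadoop/path/to/job/_temporary/
        rmdir /mnt/hadoop/path/to/job/_temporary /tmp/empty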

    Read the article

  • Install Linux with two hard drives

    - by rdecourt
    I have a machine with two hard drives. The first one has 80 GB and the second has 120 GB. I'm about to format this machine and install Linux, and I want to put all the main partitions (/, /boot, /usr, etc.) on the first hard disk (sda) and mount the /home and /var partitions on the second disk (sdb). Is this possible, and do I have to do something after the installation, or is the second hard disk mounted automatically? How can I do it? I won't do it, but is there any problem with mounting /boot on the second hard disk? I'm using Ubuntu 12.04.
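    A hedged sketch of what the installer ends up writing to /etc/fstab when the mount points are assigned during partitioning (the device names and exact layout are illustrative assumptions; real entries normally use UUIDs). Choosing these mount points in the Ubuntu installer is enough; both disks are then mounted automatically at every boot:
        # <device>   <mount point>  <type>  <options>  <dump>  <pass>
        /dev/sda1    /boot          ext4    defaults   0       2
        /dev/sda2    /              ext4    defaults   0       1
        /dev/sdb1    /home          ext4    defaults   0       2
        /dev/sdb2    /var           ext4    defaults   0       2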

    Read the article

  • I'd like to archive files from Ubuntu to Windows between two computers on a shared home network

    - by Wabbitseason
    I have an old laptop running Ubuntu 9.10 which I use as a LAMP environment for web development, and a comfortable, powerful desktop computer with Windows 7 installed on it. The two are connected to a home router so both can access the internet. I have been able to set up Samba so that my Apache home directory is accessible from Windows and mapped as a network drive. What I'd like to do is access some Windows folders from Linux so I can automatically create backups of my work (with cron scripts) to physically different locations on the Windows box. Perhaps at a later time I'll set up a local Subversion repository, but I'd love to keep backups of that on the Windows drives too. Using Ubuntu's Places/Network menu I can see my desktop, but I'm unable to log in to it despite having created the correct username and password on Windows. All I get is the following error message: "Unable to mount location. Failed to retrieve share list from server." What could be misconfigured?
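    A hedged sketch of mounting a Windows 7 shared folder from the Ubuntu 9.10 box over CIFS, so a cron job can write straight to it (the share name, IP address and credentials are placeholders; on 9.10 the helper package was still called smbfs):
        sudo apt-get install smbfs
        sudo mkdir -p /mnt/winbackups
        sudo mount -t cifs //192.168.1.10/Backups /mnt/winbackups \
             -o username=winuser,password=secret,uid=$(id -u),gid=$(id -g)
        # example crontab entry: nightly rsync of the Apache docroot onto the Windows share
        # 0 2 * * * rsync -a /var/www/ /mnt/winbackups/www/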

    Read the article

  • How do I make an encrypted disk image on Debian?

    - by Blacklight Shining
    I'm basically looking for an equivalent to OS X's encrypted sparsebundles. The solution should have support for file ACLs and should not force me to specify a size in the beginning (the image should only take up as much space as it needs) or require root access to mount and unmount. Ideally, I should be able to set two different passwords (both for the same data), but that's not too important. (I do have root access to the machine and so can install packages and such, but I would rather not have to sudo just to mount an image.)
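    A hedged sketch using EncFS, which appears to match most of the wish list: it is FUSE-based (no root needed to mount or unmount once installed) and the encrypted store only grows as files are added. Whether ACLs survive depends on the underlying filesystem and EncFS version, and the two-password requirement would need something extra, so treat this as a starting point rather than a full answer:
        sudo apt-get install encfs          # one-time install, needs root
        encfs ~/.vault-raw ~/vault          # create (first run) or mount, as a normal user
        fusermount -u ~/vault               # unmount, also as a normal user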

    Read the article

  • Forcing rsync to convert file names to lower case

    - by SvrGuy
    We are using rsync to transfer some (millions of) files from a Windows (NTFS/Cygwin) server to a Linux (RHEL) server. We would like to force all file and directory names on the Linux box to be lower case. Is there a way to make rsync automagically convert all file and directory names to lower case? For example, let's say the source file system had a file named /foo/BAR.gziP; rsync would create /foo/bar.gzip on the destination system. Obviously, with NTFS being a case-insensitive file system, there cannot be any conflicts... Failing the availability of an rsync option, is there an enhanced build or some other way to achieve this effect? Perhaps a mount option on Cygwin? Perhaps a similar mount option on Linux? It's RHEL, in case that matters.
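    As far as I know stock rsync has no case-folding option, so a hedged workaround (paths and hostnames are placeholders) is to lowercase the names on the RHEL side after each transfer, deepest entries first so directory renames don't orphan their contents:
        rsync -av /cygdrive/d/source/ rheluser@rhelbox:/data/dest/
        # then, on the RHEL box:
        find /data/dest -depth -name '*[A-Z]*' | while read -r p; do
            new="$(dirname "$p")/$(basename "$p" | tr '[:upper:]' '[:lower:]')"
            [ "$p" != "$new" ] && mv "$p" "$new"
        done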

    Read the article

  • Mounting share over VPN

    - by user1337
    I have a CentOS 5 web server which currently mounts an NFS export on my Mac OS X 10.7 laptop. It works great, except that over VPN I can't get it to mount at all. I tried SMBUp but haven't been able to get it working even locally, and there doesn't seem to be an easy way to install netatalk on CentOS 5; even then, I'm not sure that's the best way to do it. I tried a GUI SSH client that can "mount an FTP disk", and it would work except that the files require root access, there's no external root login, and the client can't elevate permissions. The basic thing I need is for the server to be able to read files off my laptop, connected via VPN. The files are updated frequently (every 5-20 seconds), so I don't want to copy them manually via SSH. Which protocol works on both platforms and easily handles the latency introduced by the VPN (and potentially mobile broadband)? Thanks
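    A hedged sketch using sshfs from the CentOS side, on the assumption that an SSH login on the laptop is acceptable; it only needs the VPN to pass TCP and reconnects after drops, though the root-only files would still have to be made readable to the SSH account (or exposed via sudo) on the Mac:
        # on the CentOS 5 server (fuse-sshfs is in the EPEL repository)
        yum install fuse-sshfs
        mkdir -p /mnt/laptop
        sshfs macuser@laptop-vpn-ip:/path/to/files /mnt/laptop \
              -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3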

    Read the article

  • How can I remount an NFS volume on Red Hat Linux?

    - by user76177
    I changed the user ID of a user on an NFS client that mounts a volume from another server. My goal is for the two users to have the same ID, so that both servers can read and write to the volume. I changed the ID successfully on the client system, but now when I look at the NFS mount from that system, it reports the files as being owned by the old ID. So it looks like I need to "refresh" that mount. I have found many instructions on how to remount, but each seems slightly different depending on the type of system. Is there a simple command I can run to refresh the mounted volume so that it picks up the new user settings?
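    A hedged sketch (the mount point and UIDs are placeholders): a plain remount re-reads the attributes, but because NFS works on numeric UIDs, any files that still carry the old number on the server will keep showing the old owner until they are chowned there:
        # on the Red Hat client: either of these refreshes the mount
        mount -o remount /mnt/shared
        umount /mnt/shared && mount /mnt/shared
        # on the NFS server: move files from the old numeric UID to the new one
        find /export/shared -user 1001 -exec chown 1005 {} +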

    Read the article

  • Syncing Large Directories/Filesystems using USB Drive [closed]

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (and specifically without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data—say, a quantity larger than could be stored on any single USB drive—but the aggregate size of changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case: Some changes are made in the user directory on the desktop. We mount a USB drive onto the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories. We now mount the USB drive onto the laptop and apply the changes. The desktop and laptop user directories are now synchronized. Any ideas? Alan
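    A hedged sketch using rsync's batch mode, which records only the delta in a file small enough for the USB drive. It assumes the live directory is ~/work and that the desktop keeps a second copy, ~/.work-mirror, reflecting the laptop's state as of the last sync (all names are placeholders, and this covers one direction only):
        # on the desktop: update the mirror and capture the changes on the USB drive
        rsync -a --write-batch=/media/usb/work-batch ~/work/ ~/.work-mirror/
        # on the laptop: replay the recorded changes against its copy
        rsync -a --read-batch=/media/usb/work-batch ~/work/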

    Read the article

  • URGENT: help recovering lost data

    - by Niels Kristian
    I made a directory with sudo mkdir /ssd; the directory was supposed to have a RAID array called md3 mounted on it. This was done by adding /dev/md3 /ssd auto defaults 0 0 to fstab. Then, after I had been using the directory for a while, I realized that I had forgotten to run sudo mount -a. When I did run it, the data was gone. I tried un-commenting the line in fstab and running sudo mount -a, but that didn't bring my data back. What can I do!? CONTENT OF FSTAB: proc /proc proc defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 /dev/md/0 none swap sw 0 0 /dev/md/1 /boot ext3 defaults 0 0 /dev/md/2 / ext4 defaults 0 0 /dev/md3 /ssd auto defaults 0 0
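    A hedged explanation and sketch (this diagnosis is an assumption, not from the post): everything written to /ssd before mount -a landed on the root filesystem, and mounting /dev/md3 on top of /ssd merely hides those files rather than deleting them. They can be reached by unmounting, or by bind-mounting / somewhere else while the array stays mounted:
        sudo umount /ssd && ls /ssd                 # the "lost" files should reappear here
        # or, without unmounting the array:
        sudo mkdir -p /mnt/rootfs
        sudo mount --bind / /mnt/rootfs
        ls /mnt/rootfs/ssd
        sudo rsync -a /mnt/rootfs/ssd/ /ssd/        # copy them onto the RAID array if desired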

    Read the article

  • DosBox Booting From HDD Image, FreeDOS Image created with qemu-img.

    - by TechZilla
    I'm having trouble booting an HDD image with DOSBox; I've only gotten either read errors or boot failures. The HDD image is a verified working FreeDOS installation created with qemu-img. The image has been formatted FAT32, and it works as expected with QEMU. The image is only 1 GB in size and is a flat raw image. I have been able to mount it under Linux for ease of file transfer, and I was even able to boot it with DOSEMU after mounting the image under Linux. I would love to simply boot from the raw image file, but I would have no problem booting from a mount either. I just can't get anything to happen, and I have read the documentation over. I have verified that DOSBox is working as expected with its built-in DOS-like environment. I would appreciate any help, as I just don't have much DOSBox experience.
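    A hedged sketch of the usual DOSBox incantation, typed at the Z:\> prompt (the geometry numbers are placeholders and must match how qemu-img created the image; -fs none hands the raw disk to the guest OS instead of asking DOSBox to parse the FAT32 filesystem itself):
        imgmount 2 /path/to/freedos.img -size 512,63,16,1024 -t hdd -fs none
        boot -l c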

    Read the article

  • Only allow root to change filesystem

    - by Uejji
    The VPS I manage uses a simple hard-link rsync archive for daily backups, saved to a loop file. This is great because each backup only takes up as much space as what changed that day, and all user/group permissions are kept. I would like to give users direct access to their home directories in each backup, but I'm worried about intentional or accidental destruction of backup data, since as it stands users can actually change, destroy or add to backed-up data they originally owned. I've been looking for a way to mount this filesystem with something like an ro mount option that would still allow rw access to root, but I've had absolutely no luck. In other words, I want users to be able to view and copy their backed-up data without being able to change it, with the data keeping its original permissions. I have no real preference as far as the filesystem goes, as long as it's a standard Unix filesystem that preserves permissions, supports hard links and denies write access to users without actually stripping the w permission from everything.
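    A hedged sketch of one way to get exactly that split (paths are placeholders; read-only bind mounts need a reasonably recent kernel, roughly 2.6.26 or later): mount the loop file read-write under a directory only root can enter, then expose the same tree to users through a read-only bind mount:
        mkdir -p -m 0700 /srv/backup-rw             # only root can traverse this
        mount -o loop /backups.img /srv/backup-rw
        mkdir -p /srv/backup
        mount --bind /srv/backup-rw /srv/backup
        mount -o remount,bind,ro /srv/backup        # users browse here, read-only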

    Read the article

  • How to automount a Truecrypt volume before login in Windows 7?

    - by nonoitall
    I have an external hard drive containing all my documents, encrypted with a password via TrueCrypt. I'd like my desktop computer at home to mount the volume automatically before I log in (so that it can be used as my user folder) without asking me for a password. (Yes, the password can be saved in plain text on my desktop's hard drive; that's okay.) For the life of me, I can't figure out a way to do this that actually works. I tried using the Task Scheduler to mount it when the computer starts up, and that works, but the volume is only accessible to my user account after I log in. (I haven't tried every combination of users/options for the scheduled task, so maybe there's something else I need to try there.) I also tried adding a startup script for my user account that runs on login, which is evidently too late to set up the user's profile folder. Has anybody ever successfully achieved this, or something like it?
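    A hedged sketch of the scheduled-task route (the device path, drive letter and password are placeholders, and whether a SYSTEM-mounted letter is visible to the later interactive session can depend on the setup, so treat this as a starting point rather than a confirmed fix): create a task set to run "At startup" as SYSTEM with "Run whether user is logged on or not", and have it call TrueCrypt's documented command-line switches:
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /volume \Device\Harddisk1\Partition1 /letter X /password MyPassword /quit /silent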

    Read the article

  • Can't access to a iSCSI volume

    - by jmiguel.rodriguez
    I have an iSCSI target at a customer site that I'm using from an old Fedora Core 6 server. I configured it and formatted it as ext3 (a mistake, I now know) and I've been working with it for some time. Now I need to access this volume from another machine. As far as I've read, I can't do it safely from two machines at the same time (yep, that's the first thing I tried). So I unmounted it from the original server and tried to mount it on the new server (I tried at first with Ubuntu 10 LTS, but when I was unable to do it I installed another Fedora with the same configuration), with no success. The problem: I can see all the targets on the NAS, but when I run "fdisk -l" to list the devices and figure out which one to mount, all the targets show up as SFS filesystems. From the original server I see them all as SFS (after all, they belong to my customer and I don't know what he has on them) except the one I manage, which I see as 'Linux'. What can I do? Thank you in advance. Regards, jmiguel
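    A hedged sketch with open-iscsi on the new machine (the target name, portal IP and device names are placeholders): log in to the target, then identify the disk by its filesystem signature rather than by the partition-type label that fdisk prints:
        iscsiadm -m discovery -t sendtargets -p 192.168.1.50
        iscsiadm -m node -T iqn.2001-04.com.example:storage.lun1 -p 192.168.1.50 --login
        blkid /dev/sd*                        # look for the ext3 signature
        mkdir -p /mnt/iscsi && mount -t ext3 -o ro /dev/sdb1 /mnt/iscsi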

    Read the article

  • I lost /dev/md2 on my server

    - by sten
    Hi, my two hard drives apparently fried at the same moment. My hosting company rebooted my server in rescue mode and I am trying to recover my data. They told me to mount /dev/sda2 to recover the data I need, but looking at a similar server I have in the pool, the data I'm looking for should instead be in /dev/md2. I can find /dev/md0 but not /dev/md2 (nor /dev/md1). I've looked in several places on the web and could only find messages explaining how to create a new partition. I just need to recover some data, not all of it, and I'd be glad if anyone could help me mount the /dev/md2 device (or suggest any other trick that would let me recover the data that was stored there). Thanks in advance, Sten
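    A hedged sketch for the rescue environment (the member partitions are placeholders): the md devices usually just need to be assembled before they can be mounted, and doing it read-only avoids making things worse:
        mdadm --examine --scan                    # list RAID members found on the disks
        mdadm --assemble --scan                   # try to assemble all arrays, including md2
        # or assemble md2 explicitly from its members (placeholders):
        mdadm --assemble --readonly /dev/md2 /dev/sda3 /dev/sdb3
        mkdir -p /mnt/md2 && mount -o ro /dev/md2 /mnt/md2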

    Read the article
