Search Results

Search found 24073 results on 963 pages for 'mount point'.

  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, when I do that, Gnome's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error. For instance, that's what happens when I put the Ubuntu 10.04 LTS installation CD in. This does not happen if I log in locally (not via NX) with the same user account. When using NX, I can mount the media if I use mount directly:

        tjc@midnight:~$ sudo mkdir /media/dvd
        tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd
        tjc@midnight:~$ ls /media/dvd
        autorun.inf  casper  dists  install  isolinux  md5sum.txt  pics  pool  preseed  README.diskdefines  ubuntu  wubi.exe

    ...which, along with the "Not Authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (I've used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting happens. I think it's handled by the gvfs package and its daemon, but that's about as far as I got (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?
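
    On Ubuntu 10.04 the GNOME auto-mount goes through gvfs/udisks, and the "Not Authorized" typically comes from PolicyKit treating the NX session as inactive or remote. One possible direction, offered only as a sketch: a PolicyKit local-authority file that grants the udisks mount actions to inactive sessions as well. The file name and user below are illustrative:

        # /etc/polkit-1/localauthority/50-local.d/10-nx-mount.pkla  (hypothetical file name)
        [Allow NX (inactive) sessions to mount removable media]
        Identity=unix-user:tjc
        Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.filesystem-mount-system-internal
        ResultAny=no
        ResultInactive=yes
        ResultActive=yes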

  • Mount shared folder (vbox) as another user

    - by jlcd
    I'm trying to mount my VirtualBox shared folder every time my Ubuntu system starts, so I added an entry under /etc/init with this:

        description "mount vboxsf Desktop"
        start on startup
        task
        exec mount -t vboxsf Desktop /var/www/shared

    It seems to work, except that all the files are owned by "root" and I don't have permission to write to the folder (neither chmod nor chown seems to work). So how can I make all the files under this shared folder be owned by the www-data user/group? Thanks.

    ps.: The main reason I want an automatic shared folder is so I can create/edit files from the HOST in the GUEST's www folder. If you have a better idea for that, instead of sharing the folder, feel free to say so.
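
    A possible tweak, offered only as a sketch: the vboxsf mount helper accepts uid= and gid= (and dmode/fmode) options, so the share can be mounted as www-data directly. 33 is the usual www-data uid/gid on a stock Ubuntu; adjust if yours differs:

        # mount the shared folder so its files appear owned by www-data (uid/gid 33 on Ubuntu)
        exec mount -t vboxsf -o uid=33,gid=33,dmode=775,fmode=664 Desktop /var/www/shared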

  • Extending home network using multiple access points originating from another access point

    - by cyberjar09
    I have a home network with the current setup (an RJ45 cable running from the main router to an access point):

        +-------------+       +----------------+
        | 192.168.2.1 |       | 192.168.2.2    |
        |   router    +------>| access point 1 |
        +-----^-------+       +----------------+
              |
        +-----+--------+
        | 192.168.1.1  |
        |    modem     |
        +-----^--------+
              |
           +--+--+
           | ISP |
           +-----+

    However, I would like to extend the network to two more floors in the house via the existing access point (the router is too far away and not reachable with a network cable, hence I need to extend from the current access point). Please see the diagram below:

        +-------------+       +----------------+       +----------------+
        | 192.168.2.1 |       | 192.168.2.2    |       | 192.168.2.3    |
        |   router    +------>| access point 1 +------>| access point 2 |
        +-----^-------+       +--------+-------+       +----------------+
              |                        |
        +-----+--------+               |               +----------------+
        | 192.168.1.1  |               +-------------->| 192.168.2.4    |
        |    modem     |                               | access point 3 |
        +-----^--------+                               +----------------+
              |
           +--+--+
           | ISP |
           +-----+

    Q1: Is this setup possible?
    Q2: If it is possible, will I have to do anything different from what I did to set up access point 1?

    edit 1: I am trying to study the dd-wrt documentation ("Linking Routers") to see which would be the correct mode of operation for me, but I'm confused because I don't see any info on how to use an existing access point to extend the signal of the SSID. If anyone could point me to the correct wiki page for how I should set up AP2 and AP3 based on AP1, it would be very helpful. For AP1, I did the following:

        - Use a static IP and set up the same SSID as the primary wireless router
        - Use the same security as the primary wireless router
        - Make AP1 point to 192.168.2.1 (the primary router) for DHCP

    Thanks in advance.

  • Suppress EXT3-fs warning on mount

    - by STM
    I am familiar with output suppression on Unix machines, i.e.:

        cat /file/that/doesnt/exist > /dev/null 2>&1

    However, I can't seem to suppress the output of mount when an ext3 filesystem is mounted for the nth time and it recommends an fsck. As it happens, fscks are run regularly by another machine, so these warning messages are needlessly interrupting the flow of output of my pretty bash script. These are the messages:

        # mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1
        kjournald starting.  Commit interval 5 seconds
        EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
        EXT3 FS 2.4-0.9.19, 19 August 2002 on sd(8,1), internal journal
        EXT3-fs: mounted filesystem with ordered data mode.

    Can anyone shed some light on this? I'm clearly redirecting both fds, but somehow output is still getting through. This is GNU Bash v2.05a.
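
    A likely explanation, offered as a hedged note: those lines are kernel messages from the ext3 driver, printed to the console by the kernel rather than written to mount's stdout/stderr, so shell redirection never sees them. One way to check and quiet them, assuming the script runs on a console:

        dmesg | tail -n 5        # the same lines should appear in the kernel ring buffer
        dmesg -n 1               # only emergency-level kernel messages reach the console now
        mount -t ext3 /dev/sda1 /mnt > /dev/null 2>&1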

  • fstab line to auto-mount a drive that all users can read/write

    - by evilblender
    I have installed a cable that connects from the motherboard's SATA connection to a removable drive's eSATA connection. I would like to be able to swap drives on the eSATA connection and have all users be able to read and write to these drives. I have created the directory /archive/ where I would like the drive(s) to mount. The drives are all formatted FAT32, but in the future I may use HFS for formatting. When I used the command (as root):

        mount /dev/sdc1 /archive

    the drive was mounted (but read-only). What can I use in my /etc/fstab file that will allow drives to be mounted and unmounted by all users on the system (both reading and writing)? Also, will I be able to mount and unmount these drives without shutting down, or will I need to reboot every time I want to change drives? Thank you. Jeff
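
    One possible /etc/fstab line for the FAT32 case, offered only as a sketch: the users option lets any user mount and unmount the entry, and umask=000 makes the vfat filesystem world read/write (an HFS drive would need a different filesystem type and options):

        # removable FAT32 drive on /archive, mountable and unmountable by any user
        /dev/sdc1  /archive  vfat  users,rw,noauto,umask=000  0  0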

  • Unable to mount CIFS in Red Hat 6

    - by user3734522
    I am relatively new to Linux, and I am trying to mount a CIFS filesystem from an Openfiler instance I have on my network in Red Hat. The Openfiler instance is authenticating using AD. I am able to connect using Samba:

        smbclient '\\10.25.214.26\cluster_storage.cluster.Cluster' -U [DOMAIN]+[USERNAME]
        Enter DOMAIN+USERNAME's password:
        Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.5.6]
        smb: \>

    When I attempt to mount on boot via fstab, I am told that the line is bad during startup.

        mount -t cifs -o username=[DOMAIN]+[USERNAME], password=[my password], domain=[domain.edu] '\\10.25.214.26\cluster_storage.cluster.Cluster' /mnt/scratch

    Any help would be greatly appreciated.
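
    A sketch of how such a mount is commonly expressed in /etc/fstab, not taken from the original post: the option list must not contain spaces, and a root-only credentials file keeps the password out of fstab. The path /etc/cifs-credentials is illustrative:

        # /etc/cifs-credentials  (chmod 600)
        username=[USERNAME]
        password=[my password]
        domain=[DOMAIN]

        # /etc/fstab entry -- note there are no spaces inside the option list
        //10.25.214.26/cluster_storage.cluster.Cluster  /mnt/scratch  cifs  credentials=/etc/cifs-credentials,_netdev  0  0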

  • Trying to mount an NFS directory from a Mac with another user

    - by Yair
    I have a username on an Ubuntu server; let's call it user A. I want to mount a directory from that server on my Mac, on which I have another username; let's call it user B. My problem is that after I mount the directory (using the Disk Utility app) I can view files on the server but can't modify them or create new files. I checked, and if I change the permissions of the server directory so that it is open to everyone (chmod 777), I can write to it. So what I need to know is how I can specify the username and password in the NFS client when setting up the mount. That is, I want to specify that I'm logging in to the server as user A.
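
    Background, offered as a hedged note: classic NFS (v3) has no username/password at mount time; the client simply sends its numeric UID, so write access depends on that UID matching (or being mapped to) the owner on the server. One server-side sketch that maps every client user to user A via /etc/exports; user A's uid/gid of 1001 and the paths are illustrative:

        # /etc/exports on the Ubuntu server: squash all client users to user A (assumed uid/gid 1001)
        /home/usera/shared  192.168.1.0/24(rw,sync,all_squash,anonuid=1001,anongid=1001)

        # re-export after editing
        sudo exportfs -ra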

  • How can I unmount a s3fs mount as a normal user?

    - by coteyr
    I use S3 a ton. I have over 40 or so buckets floating around between clients. I like the fact that I can list them in /etc/fstab and that they just work. For reference, here is one of the buckets:

        coteyrnet /mnt/S3/coteyrnet fuse.s3fs _netdev,use_cache=/tmp,use_rrs=1,allow_other,noauto,users 0 0

    It mounts fine, but I am having one heck of a time unmounting it. The first problem is:

        umount: /mnt/S3/coteyrnet mount disagrees with the fstab

    The relevant part of mtab is:

        s3fs /mnt/S3/coteyrnet fuse.s3fs rw,noexec,nosuid,nodev,allow_other,user=coteyr 0 0

    In addition to that, if I run sudo umount /mnt/S3/coteyrnet I always get:

        umount: /mnt/S3/coteyrnet: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))

    lsof | grep coteyrnet never returns anything of value, and neither does fuser. My goal is to get user unmounting working. The inability to unmount via sudo has been resolved: because of the "use_cache" setting, files were actually still open, just not under the mount point (a caveat of that option). The files under the mount point were closed, but the cached copies had not yet been transferred to S3; by waiting a while and trying again, sudo can unmount.
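
    A sketch, not from the original post: FUSE mounts made by a non-root user are normally released with fusermount -u rather than umount, which may sidestep the fstab disagreement; if the mount is still busy, fuser can show what holds it. Assuming the mount was created by the same user:

        fusermount -u /mnt/S3/coteyrnet   # user-level unmount of a FUSE (s3fs) mount

        # if it is still reported busy, list the processes using the mount point
        fuser -vm /mnt/S3/coteyrnet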

  • Script to setup Ubuntu as a wireless access point on a bridge mode

    - by nixnotwin
    I use the following script to make my netbook a full-fledged wireless access point. It creates a bridge with eth0 and wlan0 and starts hostapd.

        #!/bin/bash
        service network-manager stop
        ifconfig eth0 0.0.0.0    # remove IP from eth0
        ifconfig eth0 up         # ensure the interface is up
        ifconfig wlan0 0.0.0.0   # remove IP from wlan0
        ifconfig wlan0 up        # ensure the interface is up
        brctl addbr br0          # create br0 node
        hostapd -d /etc/hostapd/hostapd.conf > /var/log/hostapd.log &
        sleep 5
        brctl addif br0 eth0     # add eth0 to bridge br0
        brctl addif br0 wlan0    # add wlan0 to bridge br0
        ifconfig br0 192.168.1.15 netmask 255.255.255.0   # IP for the bridge
        ifconfig br0 up          # bring up the interface
        route add default gw 192.168.1.1                  # gateway

    This script works efficiently. But if I want to revert to using Network Manager, I cannot do it: the bridge simply cannot be deleted. How can I modify this script so that if I run bridge_script --stop, the bridge gets deleted, Network Manager starts, and the interfaces behave as if the machine had a fresh reboot?
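
    A hedged sketch of a --stop branch that could be added to the script above: hostapd has to be stopped and the bridge brought down with its ports removed before brctl delbr will succeed, after which Network Manager can be restarted. Interface and service names match the script:

        if [ "$1" = "--stop" ]; then
            killall hostapd                 # stop the AP daemon
            ifconfig br0 down               # the bridge must be down before deletion
            brctl delif br0 wlan0           # detach the ports
            brctl delif br0 eth0
            brctl delbr br0                 # now the bridge can be deleted
            ifconfig eth0 0.0.0.0
            ifconfig wlan0 0.0.0.0
            service network-manager start   # hand the interfaces back to Network Manager
            exit 0
        fi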

  • How to mount a USB disk on ESXi

    - by maruti
    I have a USB drive (NTFS) attached to an ESXi 4 host. fdisk -l shows the device as /dev/mpx...., but when I try to mount it using

        mount /dev/xxx /mnt/usbdisk

    it fails with the message "no such file or dir". Could anyone help with the correct entry in /etc/fstab? All I am trying to do is back up the VMs on the ESXi host to the USB disk. Thanks in advance.

  • Fedora 12 XFCE mount permissions

    - by ibrahimovich
    I installed Fedora 12 with XFCE. When I run Gigolo to mount Windows partitions, I get an "Authentication is required" message. In Fedora 11 XFCE there was a tool that changed the system permissions to allow any user to mount any partition, but I can't find it in Fedora 12. How can I fix this problem and set all the permissions needed?

  • Read only bind-mount?

    - by depesz
    I use mount -o bind to mount directories inside chroots, which works really well. The problem is that I'd like some of these bind-mounted directories to be read-only inside the chroot. Is it possible? If not, is there any other way to achieve it? I was thinking about using NFS for localhost mounts, but that looks like overkill.
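
    The usual two-step approach, shown only as a sketch with illustrative paths: the bind mount is created normally and then remounted read-only:

        mount --bind /srv/data /var/chroot/data
        mount -o remount,ro,bind /var/chroot/data   # the bind is now read-only inside the chroot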

  • Can you mount a sysprep image using DISM

    - by Tester123
    I created a script to mount a sysprepped Windows 7 image to a directory so I can edit a specific file in the image and then unmount it. The script seems to work just fine; however, each time I try, I get some sort of error about the image, such as:

        The image is supposedly damaged or corrupted
        The image mounts but nothing appears in the directory

    So I guess the overall question is: is it possible to mount a sysprepped Windows 7 WIM image into a directory?
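
    For reference, a hedged sketch of the DISM syntax generally used to mount and unmount a Windows 7 WIM; the paths and index are illustrative, and a sysprepped image is mounted the same way as any other:

        Dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
        REM edit the file under C:\mount, then commit the change and unmount
        Dism /Unmount-Wim /MountDir:C:\mount /Commit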

  • Mount Windows share on Linux boot

    - by Delameko
    I'm running VirtualBox in Windows and have Linux (Ubuntu 10.04) installed as a VM. Whenever I log in, I have to run the following command to mount my shared Windows web-dev folder:

        sudo mount.vboxsf web_apps /mnt/web_apps

    Where can I put this line (minus the sudo) so that it runs once when Linux boots up? I'm guessing there must be a root .profile or .login script that runs at some point?
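
    Two possible places, offered only as sketches: /etc/rc.local runs once as root at the end of boot on Ubuntu 10.04, or the share can be listed in /etc/fstab as a vboxsf filesystem (assuming the Guest Additions module is loaded at boot). The share name matches the command above:

        # /etc/rc.local -- add before the final "exit 0"
        mount.vboxsf web_apps /mnt/web_apps

        # or an /etc/fstab entry
        web_apps  /mnt/web_apps  vboxsf  defaults  0  0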

  • Troubleshooting an NFS server hanging after authenticated mount request

    - by Christoph
    I need some advice on troubleshooting an NFS server problem on Scientific Linux (RHEL) 6.1. The log on the server shows that an authenticated mount request was made:

        Jan 13 16:30:02 ??? rpc.mountd[3996]: authenticated mount request from ????:784 for /shared-storage/cm/shared (/shared-storage/cm/shared)

    But after that, it does not continue, and on the client it is also hanging. The interesting thing is that I have two NFS servers which should be identical; one is working perfectly, but the other exhibits the behaviour described above. The problem is also not completely persistent, i.e. sometimes the mount request succeeds. I assume that the problem must be related to the server rather than to the client, because the client works perfectly against the other server. My question is where I should look for the problem. I have already re-created the exports using exportfs -r, I have restarted the NFS server, and I have compared the rpcinfo output of both servers, all without success. The problem even survives a reboot. Any other ideas are appreciated.

    In answer to Tim's question: I sporadically see the following in dmesg, but do not know whether it is related:

        e1000e 0000:0c:00.0: eth4: Detected Hardware Unit Hang:
          TDH                  <24>
          TDT                  <25>
          next_to_use          <25>
          next_to_clean        <24>
        buffer_info[next_to_clean]:
          time_stamp           <1c3d12940>
          next_to_watch        <24>
          jiffies              <1c3d12940>
          next_to_watch.status <0>
        MAC Status             <80383>
        PHY Status             <792d>
        PHY 1000BASE-T Status  <7800>
        PHY Extended Status    <3000>
        PCI Status             <10>

    Further edit: the message above does not occur on the machine that is working, so it is probably related. Another edit: the error is not on the (software) device that is used for NFS, but on another one. The NFS mount also does not trigger the message.

  • Ubuntu "Connect to Server" -> permanently mount?

    - by myforwik
    I have a system with Ubuntu 9.10 installed. I can connect to remote Windows shares by using "Connect to Server" under the "Places" menu, but I can't figure out where these shares get mounted in the file system. Also, how can I get this to mount automatically on startup? I can't install smbfs or anything else; I need to use only what comes on the live CD, as there is no internet connection and no way to bring in packages.

  • Using Ubuntu "Connect to Server" to mount a Windows share

    - by myforwik
    I have a system with Ubuntu 9.10 installed. I can connect to remote Windows shares by using "Connect to Server" under the "Places" menu, but I can't figure out where these shares get mounted in the file system. Is it possible to mount one at a specific place in the file system? I can't install smbfs or anything else; I need to use only what comes on the live CD, as there is no internet connection and no way to bring in packages.
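
    A hedged pointer that applies to this and the nearly identical question above, not part of the original posts: "Connect to Server" mounts are handled by gvfs, and on a default Ubuntu 9.10 desktop the gvfs FUSE bridge exposes them under ~/.gvfs. Assuming the gvfs-mount tool (from gvfs-bin) is present; server and share names are illustrative:

        gvfs-mount smb://server/share   # mount a Windows share through gvfs from the command line
        ls ~/.gvfs                      # the mounted share appears here as a normal path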

  • Permit any user to mount any CIFS share

    - by A.K
    Essentially, I want the Ubuntu pre-10.10 behaviour back. The setuid method (see notes) does not work anymore. I have searched a lot on the Internet, but I haven't found a satisfying solution. I have read a solution that involves editing the sudoers file (ALL ALL=NOPASSWD:/sbin/mount.cifs), but then the users would also be able to specify as a mount point a directory they normally would not have access to, right? This is not what I want.
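
    One hedged direction, not from the original question: rather than allowing mount.cifs with arbitrary arguments, sudoers can whitelist the exact mount and umount commands with the mount point fixed, so users cannot choose a different target directory. The share, credentials path and mount point are illustrative:

        # /etc/sudoers.d/cifs-share -- only these exact command lines are allowed without a password
        ALL ALL=NOPASSWD: /bin/mount -t cifs //fileserver/share /media/share -o credentials=/etc/cifs-credentials
        ALL ALL=NOPASSWD: /bin/umount /media/share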

  • TrueCrypt System Favorite Volume doesn't mount automatically on boot

    - by Anders Hovgaard
    I've encrypted my system partition using TrueCrypt, and I've read that I can mount my encrypted data partition (a TrueCrypt volume) on boot by making it a "System Favorite" and giving it the same password as the system partition. However, it doesn't work and I have to mount it manually every time. See this example. I've tried enabling "Cache pre-boot authentication password in driver memory (for mounting of non-system volumes)" under System > Settings, but that didn't change anything either. Any ideas?

  • Mounting a USB floppy disk drive in Ubuntu 11.04

    - by Jocky B
    I have tried to mount my USB floppy disk drive in 11.04 Natty. So far I have used the command udisks --mount /dev/sdf in a terminal, but I get a message that I have not stated a filesystem type:

        john@john-desktop:~$ udisks --mount /dev/sdf
        Mount failed: Error mounting: mount: you must specify the filesystem type

    This is the result I get when trying to mount the USB floppy disk drive. Does anyone know how to proceed? Cheers
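
    A hedged workaround: the error just means the kernel could not auto-detect a filesystem, so the type can be named explicitly; most PC floppies are FAT-formatted. The mount point is illustrative:

        sudo mkdir -p /media/floppy
        sudo mount -t vfat /dev/sdf /media/floppy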

  • What are the appropriate mount options for a shared NTFS partition on an SSD in a dual boot Ubuntu/Windows setup?

    - by Andreas Jonsson
    I have Ubuntu 13.10 and Windows 7 installed in dual boot on a single SSD. In addition, they share an NTFS partition where I put all my data and documents. What is the optimal way to mount this NTFS partition in /etc/fstab, considering performance and minimizing wear on the SSD? Similar questions have been asked, but I could not find answers for this particular scenario. As I understand it, the mount option 'discard' is not supported for NTFS and so should not be used (although it is recommended here). Another often-quoted mount option is 'noatime'. I use it for my ext4 partitions; does it apply to NTFS? My current /etc/fstab line is:

        UUID=XXXXXXXXXXXXXXXX /dos ntfs defaults,nls=utf8,uid=1000,gid=1000 0 0
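
    A hedged variant of that line, not an authoritative answer: ntfs-3g accepts the standard noatime flag, which saves a metadata write on every read; everything else is kept from the line above:

        UUID=XXXXXXXXXXXXXXXX  /dos  ntfs  defaults,noatime,nls=utf8,uid=1000,gid=1000  0  0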

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test out glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some hands-on experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first glusterfs server. If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up the volume is not mounted! I searched the Internet and found an article about this, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy workaround, I put

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to get the volume mounted on each node, as far as I can tell, but this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first glusterfs server even though the /etc/hosts of all nodes has been populated with their hostnames; I figured that using the IP address is more robust. --Zack
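
    A slightly more robust version of that rc.local workaround, offered only as a sketch (same address and paths as above): instead of a fixed sleep, retry until the mount succeeds, up to a bounded number of attempts:

        # /etc/rc.local -- retry the glusterfs mount until it succeeds (at most 10 attempts)
        for i in $(seq 1 10); do
            mount.glusterfs 192.168.122.120:/testvol /var/local/testvol && break
            sleep 3
        done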

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, here is the scenario: in my web app, my customers are using Firefox as the target browser. They need to open afp:// folders via Javascript. To make a long story short, this really works: you need to configure Firefox via about:config and set network.protocol-handler.external.afp to true. What happens then is that the operating system (OS X) takes care of the path and correctly opens a Finder window. The problem: OS X creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222, for instance. Furthermore, even if the afp path is 100% identical, a new mount is created. It looks like this is the default behaviour of OS X, regardless of Firefox. So, is there any chance I can tell OS X not to create a new mount for some subdirectories which should be accessed over afp://?

    Update: It looks like there are OS X applications which can change the default handler for network protocols, so you can change "somewhere" which application OS X should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without the afp:// prefix? The question here is: where is that configuration to tell OS X which application to use for a specific protocol? Any help welcome!
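
    A hedged idea along the lines of the update above, not a verified solution: mount the share once at a fixed path with mount_afp and have the handler (or the web app) open local paths instead of afp:// URLs, so Finder reuses the existing mount. Server, credentials and paths are illustrative:

        mkdir -p /Volumes/projects
        mount_afp "afp://user:password@host/path" /Volumes/projects   # mount the AFP share once

        open /Volumes/projects/111   # afterwards open subfolders via the local path, not afp://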
