Search Results

Search found 2630 results on 106 pages for 'mount'.

Page 27 of 106

  • Executed PHP files are stale until "touched" (Symlinked NFS mount as web root)

    - by mmattax
    We have a PHP application served by 3 web servers (running Nginx and Apache). Each web server's document root is a symlinked directory that points to an NFS mount. For example: web01 has an NFS mount at /data/webapp, which is symlinked to /home/webapp, and Apache serves content from /home/webapp/www. We also use APC as our PHP opcode cache. When we deploy code, we SCP an archive file to the NFS server and extract it. Since upgrading to RedHat 6, the web servers execute "stale" PHP files after a deploy until touch is run on the PHP files. We thought that APC might be causing the problem, but the issue persists even after clearing the opcode cache. Any ideas on how to diagnose why the stale PHP code is being executed?
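
    A hedged first diagnostic step, since symlinked document roots on NFS can interact with both NFS attribute caching and PHP's realpath cache: compare what a web server sees with what the NFS server has, and temporarily turn attribute caching off. The paths, hostname and export name below are placeholders, not taken from the post.

        # On one web server: what do the NFS client and the NFS server each report?
        stat /home/webapp/www/index.php                    # client-side view (mtime/ctime/inode)
        ssh nfs-server stat /export/webapp/www/index.php   # server-side view (placeholder path)

        # Temporarily disable NFS attribute caching (actimeo=0 is a standard nfs
        # mount option; it hurts performance, so use it only while diagnosing):
        umount /data/webapp
        mount -t nfs -o actimeo=0 nfs-server:/export/webapp /data/webapp

    If the staleness disappears with actimeo=0, the attribute cache is the likely culprit; if it persists, PHP's realpath cache on the symlinked path is worth ruling out next.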

    Read the article

  • Use network drives as mount points during installation?

    - by ajsie
    Is it possible to use network storage locations as mount points during installation? I want to separate the system (Ubuntu) from the data (personal files). E.g. if I have 5 computers I don't want to recreate /home/david 5 times, so I want to mount networkdrive/home to /home on the local Ubuntu server, so ALL users' home folders could be used, and maybe also networkdrive/projects to /projects. That way it's OK if I accidentally repartition the local Ubuntu server, because the data is not on that server but on the data server. Is separating "data" from "logic" good in this case? And is it possible? What protocol should I use for the mapping over the internet? (Maybe the server is in Sweden and the data is in Norway.) Thanks.
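
    A rough sketch of what the NFS variant could look like once the system is installed (mounting network home directories during the installer itself is a separate question); the server name and export paths are invented placeholders:

        # /etc/fstab on each Ubuntu machine (sketch)
        dataserver:/export/home      /home       nfs  rw,hard,intr  0  0
        dataserver:/export/projects  /projects   nfs  rw,hard,intr  0  0

    Over a WAN link (Sweden to Norway), plain NFS is usually wrapped in a VPN or replaced with something like SSHFS rather than exposed directly.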

    Read the article

  • Need to automount dvd or cdrom at fixed mount point in Ubuntu 11.04

    - by Lindsay Haisley
    Ubuntu 11.04, by default, automounts a cdrom or dvd at /media/<vol_name>. I need to make the automounting system use a fixed name instead of the volume name for all CDs or DVDs inserted into this particular drive, e.g. "/media/op-drive0". A bit of searching turns up pretty much the same solution I used successfully on an older Gentoo box, which is to create an fdi file for hal, along the lines of the instructions at https://bbs.archlinux.org/viewtopic.php?id=91450. This doesn't seem to work on this box. Other sources say to use the gnome-mount utility to set the mounting properties, but Ubuntu 11.04 doesn't know about the gnome-mount program. Any ideas?
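
    As a fallback if the HAL route stays stubborn, a static /etc/fstab entry pins the mount point to a fixed name; the device node and options below are typical defaults, not taken from the question, and the directory /media/op-drive0 has to be created first:

        # /etc/fstab (sketch): first optical drive always at /media/op-drive0
        /dev/sr0   /media/op-drive0   udf,iso9660   ro,user,noauto   0   0

    With user,noauto the disc still has to be mounted explicitly (mount /media/op-drive0), so this trades automounting for a predictable path.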

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
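
    If a live, in-place mount is not strictly required, one storage-driver-agnostic way to inspect a container read-only is docker export, which streams the container's filesystem as a tar archive; the container name and target directory below are placeholders:

        # Copy (not mount) the container's filesystem somewhere inspectable
        docker export my_container > my_container.tar
        mkdir -p /tmp/my_container_fs
        tar -xf my_container.tar -C /tmp/my_container_fs

    The obvious trade-off is that this is a snapshot copy, so later changes inside the running container will not show up.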

    Read the article

  • Oracle application - files missing in the mount point on Unix server

    - by arun_V
    My Oracle application test instance is down. When I browse through the Unix server, I couldn't find any files in the mount points /u01, /u06 or /u10. When I run the bdf command it shows the following:

        $ bdf
        Filesystem            kbytes    used    avail  %used  Mounted on
        /dev/vg00/lvol3       204800    35571   158662   18%  /
        /dev/vg00/lvol1       299157    38506   230735   14%  /stand
        /dev/vg00/lvol8       1392640   1261068 123620   91%  /var
        /dev/vg00/lvol7       1327104   825170  470631   64%  /usr
        /dev/vg00/lvol4       716800    385891  310746   55%  /tmp
        /dev/vg00/lvol6       872448    814943  53936    94%  /opt
        /dev/vg00/lvolssh     32768     13243   18306    42%  /opt/openssh
        /dev/vg00/lvol5       204800    187397  16334    92%  /home
        /dev/vg00/lvolback    512000    472879  36704    93%  /backup
        dg-ora04:/dgora03_u10 204800    167088  35416    83%  /u10
        dg-ora04:/dgora03_u06 204800    167088  35416    83%  /u06
        dg-ora04:/dgora03_u01 204800    167088  35416    83%  /u01

    Why can't I see any files inside the mount points?
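
    The bdf output looks like HP-UX, with /u01, /u06 and /u10 served by an NFS host called dg-ora04 (the only name reused from the question). A couple of generic, read-only checks that may help localize whether the problem sits on the client or the server:

        # Does the NFS server still export the filesystems this client expects?
        showmount -e dg-ora04

        # Do the mount points respond at all, or do simple operations hang/error?
        ls -la /u01
        ls -la /u06
        ls -la /u10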

    Read the article

  • Suddenly cannot mount NFS share from Windows 7

    - by bing
    I recently reinstalled my file server (moved from Fedora to Ubuntu Server). Now I cannot mount my NFS share from Windows 7; mounting from Mac OS X works fine. In Windows I keep getting either "the semaphore timeout period has expired" or "an unexpected error has occurred". Does Ubuntu need some special magic to allow Windows 7 to mount an NFS share? This is my exports file:

        /home/bing/            192.168.1.*(rw,async,insecure,no_subtree_check)
        /home/bing/mnt/EXTRN2  192.168.1.*(rw,async,insecure,no_subtree_check)
        /home/bing/mnt/EXTRN3  192.168.1.*(rw,async,insecure,no_subtree_check)
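
    As a generic sanity check on the server side (nothing here is specific to this exports file): the Windows 7 NFS client tops out at NFSv3, so it is worth confirming after the reinstall that the expected services are registered and that the exports are actually active:

        # On the Ubuntu server
        rpcinfo -p               # nfs (v3), mountd and portmapper should all be listed
        sudo exportfs -ra        # re-read /etc/exports after any edits
        showmount -e localhost   # what clients will be offered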

    Read the article

  • TrueCrypt partition will no longer mount

    - by sparkyuiop
    I am hoping for some advice to help me out of my situation, with luck. I have a computer running Windows 7 Ultimate x64 with 3 hard disks installed. On my 2TB hard disk 2 (non-system disk) I have 4 partitions: one for music, another for video, a downloads partition, and a 500GB RAW TrueCrypt encrypted partition / volume that I had set up to mount with 4 photographs used as keyfiles. The 4 photographs are located in my 'Documents' partition, which is one of four partitions on my 1.5TB hard disk 1 (non-system disk). When I set up the disk encryption I did not (I'm 99% sure) create a password; I only used the 4 photograph keyfiles to mount the volume.

    Recently my 1TB hard disk 0 (system / boot) started to fail so I decided to replace it. I was going to clone the old disk to a new disk but decided that a fresh installation would be more beneficial. Once I had transferred all the required 'User Data' from my old hard disk 0 (C: disk) I discarded it. I reinstalled TrueCrypt, pointed to the partition, selected my 4 keyfile photographs and mounted my encrypted volume with no issues. In fact I mounted it several times after re-installing Windows and after reboots.

    Now all of a sudden when I try to mount it I get the message "incorrect keyfile(s) and/or password or not a TrueCrypt volume". I am not sure why this happened, as I do not recall exactly what I did between last mounting the volume successfully and it not mounting. Here are some of the possible things I may have done to cause it to stop working, but I am at a loss as to where to start to try and resolve the problem:

    1. I swapped the drive letters to a preferred order.
    2. I possibly swapped the physical SATA connectors on the mainboard.
    3. I enabled 'Hot Plugging' for the two non-system hard disk SATA ports and the DVD SATA port in the BIOS.

    I have tried changing the encrypted partition's drive letter as suggested in another post but this does not help. On my old system the encrypted drive was drive "X". I have tried about all the other free drive letters but alas nothing changes. I do not recall what drive letter was allocated to the encrypted partition before I changed them all. I have not tried to change the letter back to what it possibly was to start with, as I am happy with the current layout; I will try this if anyone thinks it would be worthwhile, though.

    I do hope I have managed to convey my situation in an understandable manner and live in hope someone could help me recover years of personal files. Thank you very much for taking the time to read my post and for any suggestions you may offer.

    Regards, Phillip Thorne (UK)

    Anyone???

    Read the article

  • Error: [0.8879153] kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(8,3)

    - by user43069
    I installed Ubuntu using Wubi inside Windows and started working on it. Then, after updating, I got this error: [0.8879153] kernel panic - not syncing: VFS: unable to mount root fs on unknown-block(8,3). I can't use rescue mode either; it gives me another error: no filesystem could mount root. I looked at the grub folder and didn't find any file in disks/boot/grub/, so I tried to use Super Grub to fix it, but it didn't work and it keeps giving me boot/grub/stage1 not found. I didn't edit anything in the grub folder. Any ideas please?
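
    In case it helps with recovery: Wubi installs normally keep the whole Ubuntu filesystem in a root.disk image on the Windows partition, and that image can be loop-mounted from an Ubuntu live CD/USB to get at the files. The device name and the exact path of root.disk below are the usual Wubi defaults, so treat them as assumptions:

        # From an Ubuntu live session: mount the Windows partition, then the Wubi image
        sudo mkdir -p /mnt/windows /mnt/wubi
        sudo mount /dev/sda1 /mnt/windows                   # adjust to your Windows partition
        sudo mount -o loop /mnt/windows/ubuntu/disks/root.disk /mnt/wubi
        ls /mnt/wubi/home                                   # your files, if the image is intact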

    Read the article

  • NFS automounts hang

    - by Yang
    Hi, I have been mounting NFS shares on my x86 Ubuntu with NIS/am-utils fine for a long time, but today my system got into a state where it could no longer access automounted directories and instead frequently got hung up trying to access them, returning either "Input/output error" or "Permission denied" (almost randomly), as well as "stale file handle." I can, however, manually mount that share fine. Restarting am-utils doesn't help get my system out of its funk; is there any other way of getting my system un-stuck?
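
    If restarting am-utils doesn't clear the wedged state, it may be worth querying and poking the automounter directly before resorting to a reboot; the stuck path below is a placeholder:

        amq                                 # ask amd what it currently has mounted
        amq -f                              # ask amd to flush its caches and reload maps
        sudo umount -l /net/server/share    # lazily detach a wedged mount point (placeholder)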

    Read the article

  • Mounting Ubuntu's root.disk in Windows 7

    - by gAMBOOKa
    I've got Ubuntu 9.10 installed in an NTFS partition. After an update, I started getting kernel panics, so I need to reinstall it. But before I do that, I need to retrieve and backup my home directory. I believe Ubuntu's file system is packaged in the root.disk image. So how do I mount it in Windows?

    Read the article

  • Mounting a Nested SSH Location

    - by Brandon Pelfrey
    I have a server that is only SSH-accessible to machines within a network and my only access to that network from the outside world is a single publicly-SSH-accessible node. Is there some way that I can mount the nested machine from the outside? Me - Public SSH-accessible Node - Internal SSH-accessible Machine Thanks!
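
    One common way to do this is SSHFS combined with an SSH jump-host configuration, since sshfs drives the ordinary ssh client underneath; the host aliases, user names and paths below are all placeholders:

        # ~/.ssh/config on your own machine (sketch)
        Host gateway
            HostName public-node.example.com
            User you

        Host internal
            HostName internal-machine
            User you
            ProxyJump gateway    # older OpenSSH: ProxyCommand ssh -W %h:%p gateway

    With that in place, sshfs internal:/some/dir ~/mnt/internal mounts the remote directory locally, and fusermount -u ~/mnt/internal detaches it.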

    Read the article

  • Permissions on DVD folders in redhat10

    - by aryan
    I have written a data DVD with K3b. When I mount the DVD on my system I can't read and write its folders. I tried to set their permissions, but it's not possible: when I set file access to Read and Write and press the "Apply permissions to enclosed files" button, after a few seconds my new settings (Read and Write) are reverted to "---". Can anyone guide me, please?

    Read the article

  • Mounting without -o loop

    - by jumpinjoe
    Hi, I have written a dummy (ram disk) block device driver for the Linux kernel. When the driver is loaded, I can see it as /dev/mybd. I can successfully transfer data onto it using the dd command and compare the copied data successfully. The problem is that when I create an ext2/3 filesystem on it, I have to use the -o loop option with the mount command; otherwise mount fails with the following result:

        mount: wrong fs type, bad option, bad superblock on mybd,
        missing codepage or helper program, or other error

    What could be the problem? Please help. Thanks.
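
    A few generic, read-only checks that often narrow this class of mount failure down, assuming the device node really is /dev/mybd as described:

        # Is there a recognizable ext2/3 superblock on the device?
        sudo file -s /dev/mybd
        sudo dumpe2fs -h /dev/mybd

        # Does the device size the kernel reports match what mkfs was given?
        sudo blockdev --getsize64 /dev/mybd

        # What does the kernel log at the moment the mount fails?
        sudo mount -t ext2 /dev/mybd /mnt; dmesg | tail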

    Read the article

  • Anti-static foam under a motherboard?

    - by user29734
    I am modding a custom-built case/system. I have my motherboard mounted on a metal tray (Dell did this), which has been working great. Now I am modding the case to hold everything, and the way I want to mount the motherboard on the tray leaves a slight gap between the wall of the case and the motherboard/tray. Can I put a piece of thin anti-static foam/packaging in between the tray and the case? That is safe, right?

    Read the article

  • NFS host is not exporting the "share"

    - by user1345260
    I have an NFS server, usanfsd01, and a remote machine, usafssd01. I tried mounting a directory from usafssd01 onto usanfsd01 by adding the following line to /etc/fstab as root:

        usanfsdo1:/home/dblogs /home/data/dblogs nfs rw 0 0

    When I run the following command to see if NFS is exporting the share, the share is not shown:

        showmount -e usanfsdo1

    Can someone please help? Also, a point of interest: there is another mount that works on the same servers, and that's defined as below in /etc/fstab:

        usanfsdo1:/home/files /home/data/files nfs rw 0 0

    /etc/exports:

        /nfs/home/dblogs 'IPADDRESS'(rw,no_root_squash)
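
    As a generic next step (not specific to this configuration), /etc/exports changes only take effect once they are re-read, so it can be worth re-exporting and asking the server itself what it believes it is sharing:

        # On the NFS server (usanfsd01)
        sudo exportfs -ra          # re-read /etc/exports
        sudo exportfs -v           # list active exports with their options
        showmount -e localhost     # what clients will be shown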

    Read the article

  • Making an outside machine visible to private network

    - by William
    Hi, I'm trying to make a server visible to every computer on a separate network without doing anything to the server, but I'm not sure what would be the best way to do this. I really just need to access one folder, but my attempt at NFS mounting failed since I can't NFS mount a mounted folder. Any advice? Thanks.

    Read the article

  • Setting filesystem mounting umask on OS X

    - by Nick
    (Using Snow Leopard.) When I plug in a flash drive formatted with FAT32, the permissions on all files on the drive are set as 0666; between colored ls and my obsessive-compulsive nature, this is annoying. Is there any way to make it automatically mount with a different umask?

    Read the article

  • Disk partition errors after size change

    - by benjamin.d
    I increased the disk size of one of my VMs while it was running. After a reboot, I get the following error message at boot time:

        Mounting local filesystems...failed

    Now the VM is only accessible through the ESX console (not through ssh), and nothing is working anymore. I already tried to run fsck, but without success. The output of mount and blkid and the contents of fstab were attached as screenshots (not reproduced here). Thanks for your help

    Read the article

  • Ask filesystem if it is mounted

    - by Brian
    How can I see if an (ext3) filesystem is mounted by asking the filesystem directly (i.e. the same way the system does when it boots and sees that it was not unmounted cleanly)? Checking the output of mount is no good because the filesystem might be mounted by a virtual machine. I know I can run fsck and it will abort if the filesystem is mounted, but I don't need to actually check the filesystem.
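
    For ext2/3, the superblock itself records whether the filesystem was unmounted cleanly, and that can be read without touching the data; the device name below is a placeholder:

        # Read only the superblock
        sudo dumpe2fs -h /dev/sdb1 | grep -i 'filesystem state'
        # tune2fs reports the same field
        sudo tune2fs -l /dev/sdb1 | grep -i state

    A state other than "clean" means the filesystem is either mounted read-write somewhere right now or was not shut down cleanly; the superblock alone cannot distinguish those two cases.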

    Read the article

  • "connect to server" for KDE

    - by user36309
    Hi. Everybody knows the Gnome program (I can't remember the package name right now, or whether it's Nautilus itself) that gives us the "connect to server" menu, with which we can log in to a remote ftp, ssh, Windows share and much more, and mount it very easily. It looks pretty much like ExpanDrive for macOS. What I need is a tool like that, but for KDE. Does anyone know of one? Thanks!

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system>   <mount point>      <type>  <options>                                            <dump>  <pass>
        proc              /proc              proc    nodev,noexec,nosuid                                  0       0
        LABEL=uec-rootfs  /                  ext4    defaults                                             0       0
        /dev/sda2         /mnt               auto    defaults,nobootwait,comment=cloudconfig              0       2
        /dev/sda3         none               swap    sw,comment=cloudconfig                               0       0
        s3fs#mybucket     /mnt/s3/mybucket   fuse    default_acl=public-read,use_cache=/tmp,allow_other   0       0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    Which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that "transport endpoint is not connected" error. I've also tried copying and pasting the exact command run into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
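
    One variable worth isolating, since Fabric 1.x allocates a pseudo-terminal for sudo by default while a plain ssh command does not, is whether the FUSE mount survives the same command with and without a pty; if it only dies in the pty case, the s3fs process may be getting hung up when the pty closes. The hostname below is a placeholder, and passwordless sudo (typical on EC2 Ubuntu images) is assumed:

        # With a forced pty (closer to Fabric's default behaviour)
        ssh -t ubuntu@ec2-host "sudo /bin/bash -l -c 'mount -a'; mountpoint /mnt/s3/mybucket"

        # Without a pty
        ssh ubuntu@ec2-host "sudo /bin/bash -l -c 'mount -a'; mountpoint /mnt/s3/mybucket"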

    Read the article

  • Hyper-V R2 cannot mount ISO from network location (UNC path)

    - by Entity_Razer
    So, as the name suggests, I'm trying to mount an ISO from a network share using the UNC path on a Hyper-V R2 cluster. This is a pure demo / test case setup with:

        2x Hyper-V R2
        1x NAS/iSCSI CSV

    Cluster management is happening through the MMC with RSAT tools. What I've done so far is: set up the cluster and configure quorum, add CSV shares and disks, and set up 1 virtual machine on the Hyper-V 1 node.

    What I'm trying to do is: go to Settings -> DVD Drive -> Use network location -> pick the ISO file and press "Apply". The error I'm getting is "User account does not have rights to mount iso". I changed that, or rather stopped getting that message, when I went to the Hyper-V node settings and ticked "Use Default Credentials Automatically". Now I no longer get the "user does not have rights..." message, but I get the following instead:

        Error applying DVD Drive changes
        Failed to remove device Microsoft Synthetic DVD Drive: "the specified network resource or device is no longer available"

    I've googled the problem but am unable to find a solution. Is anyone here able to help me out? Much obliged!
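
    One direction often suggested for this class of error is that the Hyper-V hosts' computer accounts (not just the user account) need read access to the share holding the ISO, because the host machine itself fetches the file; the domain, host and path names below are placeholders, and the share-level permissions need an equivalent grant as well:

        REM On the file server, grant each Hyper-V node's computer account read access
        icacls D:\ISOs /grant "CONTOSO\HYPERV01$:(OI)(CI)RX"
        icacls D:\ISOs /grant "CONTOSO\HYPERV02$:(OI)(CI)RX"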

    Read the article
