Search Results

Search found 676 results on 28 pages for 'nfs'.

Page 4/28

  • nfs client on ubuntu 9.10, /etc/init.d/nfs-common does not exist

    - by Denali
    This seems like a trivial problem, but I have not been able to find a solution for several days now. I am trying to configure an NFS client on Ubuntu 9.10 (64-bit). All the tutorials I've read say I need to restart a few things, such as portmap and nfs-common, specifically: sudo /etc/init.d/nfs-common restart. However, this file (/etc/init.d/nfs-common) does not exist. sudo apt-get install nfs-common returns "nfs-common is already the newest version." When I try sudo service nfs restart, I get: nfs: unrecognized service. What am I missing here? Thank you to the kind soul who can help me with this.
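
    One hedged possibility: on Ubuntu 9.10 (Karmic) the NFS client helpers were converted to Upstart jobs, so the nfs-common package no longer ships an init.d script (an assumption worth verifying with dpkg -L nfs-common). A minimal sketch, with the server address and export path as placeholders:

      # Karmic manages the client-side helpers as Upstart jobs (assumption):
      sudo service portmap restart
      sudo service statd restart
      # no nfs-common restart is needed; just mount as usual:
      sudo mount -t nfs 192.168.1.10:/export /mnt/nfs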

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot, using an NFS share as the root file system. I did this some years ago, but for some reason I have been stuck on this for days. The TFTP server itself is running fine and booting a net installer also works. The kernel and initrd are loaded, but the boot process stops with this (screenshot) kernel panic. I'm using the standard Squeeze i386 kernel and I have prepared the initrd with this config:

      MODULES=most
      BUSYBOX=y
      KEYMAP=n
      COMPRESS=gzip
      BOOT=nfs
      DEVICE=
      NFSROOT=auto

    I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this:

      LABEL linux
      KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
      APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the client's network traffic with tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
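
    An editorial note, hedged: the APPEND line above passes the kernel image itself as initrd=, which would panic the boot before any NFS traffic is attempted and would match the tcpdump observation. A corrected sketch (the initrd filename is an assumption):

      LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw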

    Read the article

  • Daemons die with bus error when their binaries live on NFS

    - by mbac32768
    We have some daemons executing on a number of hosts. The daemon executables are very large binaries hosted on NFS. When the binaries are updated on the NFS server, the previously running daemons sometimes drop dead with a bus error. I'm assuming what's happening is that the NFS server replaces the binaries in a way that's invisible to the VFS layer on the NFS clients, so they end up loading pages from the updated binary, which of course leads to madness. We tried moving the new binaries into place with mv instead of cp, but that doesn't seem to fix it. I'm considering simply mlock()'ing the binary in the daemon startup script, but surely there are magic NFS options or semantics that we should be abusing. Is there a better way to fix this?
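
    A minimal sketch of one conventional workaround, assuming the daemons page the binary via mmap: install each release under a fresh, versioned name and swap a symlink, so running processes keep paging from the old inode (all names below are hypothetical):

      # never overwrite or unlink a binary that running daemons may still page from
      cp mydaemon /exports/bin/mydaemon-2.1
      ln -s mydaemon-2.1 /exports/bin/mydaemon.new
      mv -T /exports/bin/mydaemon.new /exports/bin/mydaemon   # rename() atomically replaces the old symlink
      # garbage-collect old versions only after the daemons have restarted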

    Read the article

  • CentOS 6: Can not start NFS

    - by Chris
    I am unable to start the NFS service. When starting, there is no error, but the services are stopped afterwards. There are no messages at all in /var/log/messages. The same happens with the rpcbind service. Any idea what this could be? I also tried disabling iptables.

      [root@server1 ~]# service nfs start
      [root@server1 ~]# service nfs status
      rpc.svcgssd is stopped
      rpc.mountd is stopped
      nfsd is stopped
      rpc.rquotad is stopped
      [root@server1 ~]# service rpcbind start
      [root@server1 ~]# service rpcbind status
      rpcbind is stopped
      [root@server1 ~]# cat /etc/exports
      /tmp *(ro)
      [root@server1 ~]# chkconfig --list | egrep '(rpcbind|nfs)'
      nfs             0:off 1:off 2:on 3:on 4:on 5:on 6:off
      nfslock         0:off 1:off 2:on 3:on 4:on 5:on 6:off
      rpcbind         0:off 1:off 2:on 3:on 4:on 5:on 6:off
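
    Since the init scripts exit silently, tracing them is one way to find the failing step. A hedged diagnostic sketch:

      # trace the init script to see where it bails out without a message:
      bash -x /etc/init.d/rpcbind start 2>&1 | tail -40
      # check what, if anything, registered with the portmapper:
      rpcinfo -p localhost
      # look for kernel-side complaints:
      dmesg | tail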

    Read the article

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. Versions of (I think) the relevant pieces:

      kernel: 2.6.32-5-xen-amd64
      ii  nfs-kernel-server  1:1.2.2-4squeeze2  support for NFS kernel server
      ii  libnfsidmap2       0.23-2             An nfs idmapping library
      ii  nfs-common         1:1.2.2-4squeeze2  NFS support files common to client and server
      ii  portmap            6.0.0-2            RPC port mapper

    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:

      # file: dirname
      # owner: jon
      # group: foogroup
      # flags: -s-
      user::rwx
      user:www-data:rwx
      group::r-x
      group:foogroup:rwx
      mask::rwx
      other::r-x
      default:...

    There are two users, neither of which owns the directory:

      uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
      uid=3005(nic) gid=3005(nic) groups=3005(nic),3999(foogroup)

    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and server. I've verified by packet sniffing that the uids/gids sent via AUTH_UNIX are correct (uid=gid=3005, auxiliary gids=3005,3999) and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?
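
    One way to narrow this down, as a sketch: repeat the write as nic directly on the server's underlying filesystem (the path below is a placeholder). If it fails there too, the problem is in the ACL evaluation itself rather than in NFS:

      su - nic -c 'touch /path/to/dirname/testfile'
      getfacl /path/to/dirname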

    Read the article

  • Debootstrap Ubuntu over NFS leads to mknod I/O error

    - by Aaron B. Russell
    Hi everyone, I'm trying to prepare an Ubuntu environment for a diskless machine that will PXE boot and mount an NFS share as its root. I've currently got another Ubuntu machine mounting the NFS share and I'm trying to debootstrap into it, but it has trouble creating devices over NFS:

      root@kimiko:~# mount | grep Seiuchi
      192.168.0.203:/mnt/user/Seiuchi on /mnt type nfs (rw,addr=192.168.0.203)
      root@kimiko:~# debootstrap --arch i386 maverick /mnt http://gb.archive.ubuntu.com/ubuntu/
      mknod: `/mnt/test-dev-null': Input/output error
      E: Cannot install into target '/mnt' mounted with noexec or nodev

    My NFS rule on the unRAID server is 192.168.0.201/32(rw,no_root_squash,sync). I don't have the noexec or nodev options set. I haven't got much experience with NFS, so I'm probably missing something basic in the way I'm sharing this, but my attempts at Googling for an answer aren't really turning anything useful up. Does anyone have suggestions on what I might have missed, or maybe relevant docs?

    Edit: Creating normal files (and directories) works just fine; I just can't create devices...

      root@kimiko:/mnt# mkdir foo
      root@kimiko:/mnt# cd foo
      root@kimiko:/mnt/foo# touch bar
      root@kimiko:/mnt/foo# mknod quux c 4 64
      mknod: `quux': Input/output error
      root@kimiko:/mnt/foo# ls
      bar
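
    One hypothesis worth testing (it rests on an assumption about unRAID internals): /mnt/user shares are served through a FUSE layer that may refuse device nodes regardless of export options. Trying mknod on the server itself, both through the user share and directly on a member disk, would confirm or rule this out:

      # run on the unRAID server (the /mnt/disk1 path is an assumption):
      mknod /mnt/user/Seiuchi/testdev c 4 64    # through the FUSE user-share layer
      mknod /mnt/disk1/Seiuchi/testdev c 4 64   # directly on the member disk
      # if only the second works, exporting the disk path instead of the user share should help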

    Read the article

  • NFS using FREENAS for ESXi

    - by maruti
    I'm trying to create an NFS share for ESXi 4, using FreeNAS 0.7.1. Once the NFS mount point is set up, can it also be shared to Windows clients using the CIFS/SMB service? That is, can the backed-up VMs on the NFS datastore be shared to Windows clients via CIFS/SMB?

    Read the article

  • OSX Server & Client NFS "timeout" issues?

    - by user36659
    I have a Mac environment in which our server shares an NFS mount, set up via Server Admin. Clients connect to the NFS mount at boot via the Directory Utility built into OS X. Everything works fine with one small exception: the NFS mount seems to time out/drop out every now and then. It seems random and requires a reboot to bring it back up. Has anyone else run into this situation? Any fix would be appreciated.
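
    One knob worth trying, as a sketch rather than a confirmed fix: OS X NFS mounts are often more reliable when forced to use a reserved source port. On the client this can be expressed as a static mount entry instead of the Directory Utility record (server name and path are placeholders):

      # /etc/fstab on the OS X client (read by automountd):
      server.example.com:/export /Volumes/share nfs resvport 0 0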

    Read the article

  • Windows 7 NFS with Linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I did: installed nfs-kernel-server, nfs-common and portmap (as in the FAQ), then set up /etc/exports:

      /var/www 192.168.1.0/255.255.255.0(rw,no_roow_squash,async,subtree_check)

    Then: sudo exportfs -ra
    Then: sudo /etc/init.d/nfs-kernel-server restart

    I checked whether it all works on the same machine: sudo mount 192.168.1.101:/var/www /mnt/test. I then accessed /mnt/test and saw that all the data was present and everything was ok. Next, I tried to connect this folder to Windows 7 using the NFS client. First, I checked that the Linux path was exported successfully:

      showmount -e 192.168.1.101
      /var/www 192.168.1.0/255.255.255.0

    All ok, so on to mounting: mount -o anon 192.168.1.101:/var/www z: The console said that all succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says that the timeout expired. Help me please.
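
    Two things stand out, hedged: the exports line spells no_roow_squash, which the server will reject or ignore, and the Windows 7 NFS client presents an anonymous uid by default, so the export must actually be in effect for it. A sketch of the corrected server side:

      # /etc/exports (note the no_root_squash spelling):
      /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check)
      # re-export and verify what is really active:
      sudo exportfs -ra
      sudo exportfs -v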

    Read the article

  • Conditional dev|nfs mount in Linux

    - by o_O Tync
    I have a mount point — let it be /media/question — and two possible devices: a physical HDD and a remote NFS folder. Sometimes I plug the device in physically, in other cases I mount it via NFS. Is there a way to specify both of them in fstab so that executing mount /media/question will preferably choose physical volume, and when it's not available — NFS?
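
    As far as I know, fstab itself has no fallback syntax, but a noauto entry plus a small wrapper script gives exactly this behaviour. A minimal sketch, with the volume label and server path as assumptions:

      # /etc/fstab keeps only the local disk, marked noauto:
      #   LABEL=question  /media/question  ext4  noauto  0 0

      #!/bin/sh
      # mount-question: prefer the physical disk, fall back to NFS
      if ! mount LABEL=question 2>/dev/null; then
          mount -t nfs server:/export/question /media/question
      fi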

    Read the article

  • Trouble with NFS file sharing on Synology 211 NAS and Ubuntu Client

    - by Aglystas
    I'm attempting to set up NFS file sharing and keep getting this error:

      mount.nfs: access denied by server while mounting 192.168.1.110:/myshared

    Here is the exact command I'm using to mount:

      sudo mount -o nolock 192.168.1.110:/myshared /home/emiller/MyShared

    I have set 'Enable NFS' in DSM and set NFS privileges in the Shares section of the control panel. Here is the /etc/exports entry from the NAS:

      volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)

    I read some things about hosts.allow and hosts.deny, but it seems that if they are empty they aren't used for anything. I can see the share when I run showmount -e 192.168.1.110. Any help would be appreciated in this matter.
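
    One detail worth checking, hedged: the /etc/exports entry is shown without a leading slash, and the client must mount exactly the path the server exports (on Synology that is normally /volume1/myshared, an assumption). A sketch:

      # /etc/exports on the NAS, with the leading slash:
      /volume1/myshared 192.168.1.*(rw,sync,no_wdelay,no_root_squash,insecure_locks,anonuid=0,anongid=0)
      # then mount whatever showmount actually reports:
      showmount -e 192.168.1.110
      sudo mount -o nolock 192.168.1.110:/volume1/myshared /home/emiller/MyShared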

    Read the article

  • XP Client for NFS failure dialog on startup, but drive mapping works

    - by Matt Bennett
    I'm mounting an NFS share to some Windows machines using the tools that come in the Services for UNIX Administration toolkit. I've set up the User Name Mapping service to use local passwd and group files. I had to manually start the User Name Mapping service, and then created an 'advanced map' from the XP machine's user to a uid that exists on my NFS server, like so:

      Windows User: Matt Bennett
      UNIX Domain: PCNFS
      UNIX User: mattbennett
      UID: 10250
      Primary: *

    I can map a network drive without any issues, and it correctly identifies the UID and GID to use, but when I reboot I get this message: "An error occurred while connecting to the NFS server. Make sure that the Client for NFS service has started. If the problem persists make sure Client for NFS service can communicate with User Name Mapping or PCNFS server." After dismissing the dialog, the machine finishes booting and the network drive is there in My Computer with the title "Disconnected Network Drive", but if I open it I can see the network share without a problem, and it then drops the 'disconnected' from its title. It seems like the services are starting in the wrong order or something, so the first attempt to connect fails but subsequent ones work as expected. There don't seem to be any symptoms apart from the dialog box, but obviously something's not quite right. What have I done wrong? Thanks, Matt.
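
    If it really is start ordering, one hedged sketch is to make Client for NFS depend on User Name Mapping so it waits at boot; the service names NfsClnt and MapSvc are assumptions, so confirm them first with sc query:

      rem make Client for NFS start only after RPC and User Name Mapping:
      sc config NfsClnt depend= RpcSs/MapSvc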

    Read the article

  • Missing NFS service link?

    - by Recc
    # ps ax | grep nfs
     1108 ?        S<     0:00 [nfsd4]
     1109 ?        S<     0:00 [nfsd4_callbacks]
     1110 ?        S      0:00 [nfsd]
     1111 ?        S      0:00 [nfsd]
     1112 ?        S      0:00 [nfsd]
     1113 ?        S      0:00 [nfsd]
     1114 ?        S      0:00 [nfsd]
     1115 ?        S      0:00 [nfsd]
     1116 ?        S      0:00 [nfsd]
     1117 ?        S      0:00 [nfsd]
     4437 ?        S<     0:00 [nfsiod]
    16799 ?        S      0:00 [nfsv4.0-svc]
    18091 pts/1    S+     0:00 grep nfs

    But:

      # service nfs status
      nfs: unrecognized service

    That's on Ubuntu 11.04. Am I missing a symlink or something? How can I fix this quickly?
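
    Probably not a missing symlink: on Ubuntu the kernel NFS server is managed as nfs-kernel-server rather than nfs, and the running [nfsd] threads above suggest it is already up. A sketch:

      sudo service nfs-kernel-server status
      sudo service nfs-kernel-server restart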

    Read the article

  • Mounting to external NFS from a KVM VM.

    - by jbfink
    I've got a machine acting as a KVM host and another machine that NFS-exports to that KVM host. I'd like one of the internal VMs on the KVM host to be able to mount the NFS share. I can export to the KVM host IP fine and do a mount, but it doesn't work for the internal VM; the mount just fails with "reason given by server: Permission denied". I've already tried to re-export the NFS share from the host to the VM, but apparently doing two levels of NFS is not a Good Idea. Does anyone know how I might get this working?
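
    A hedged sketch of the usual fix: if the guest is bridged or routed, it reaches the server with its own address, which an export written for the host IP does not cover, so the export needs to include the guest's range as well (192.168.122.0/24 is libvirt's default NAT network, an assumption, as is the export path):

      # /etc/exports on the NFS server:
      /srv/export 192.168.122.0/24(rw,sync,no_subtree_check)
      # re-export without a restart:
      exportfs -ra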

    Read the article

  • Setting up NIS/NFS on Mac OS 10.6

    - by evan
    We have an Ubuntu NIS/NFS server at work and we recently got a few new iMacs. Is there a way to set them up so they can use the Linux user accounts and mount the shared NFS files? Are there any guides on how to do this? I've been googling with no success. I tried getting NFS to work by connecting to the server via Disk Utility, but after I run 'sudo automount' from the command line and ls the directory I tried to mount it to (/Volumes/nfs), it gives a permissions error. If there isn't a way to do this, does anyone know of any not-too-complicated ways to share user accounts and files between Mac and Linux computers (and even, hypothetically, a Windows computer one day)? I know it's kind of a huge question, but I'll greatly appreciate any advice on the topic. Thanks!
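
    For the NFS half, a hedged sketch of the 10.6-native route using automounter maps instead of Disk Utility (server name, export path, and the resvport option are all assumptions):

      # /etc/auto_master: add a direct map
      /-   auto_nfs   -nobrowse

      # /etc/auto_nfs:
      /Volumes/nfs  -fstype=nfs,resvport  server:/export/home

      # reload the maps:
      sudo automount -vc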

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian Squeeze amd64 box which is at the same time an NFSv4 server and client (it mounts itself through NFSv4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

      176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase the speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
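
    A hedged reading of the man pages suggests the middle ground already exists for data, though not for metadata: with a sync export, clients send UNSTABLE writes and a later COMMIT, so ordinary writes are buffered while fsync() is honoured; what stays synchronous on the wire is metadata such as file creation, which is why thousands of small files are slow even though one large file is fast. The two export variants, as a sketch:

      # /etc/exports, alternatives:
      /nfs4exports/mydir 176.9.116.102(rw,sync,no_subtree_check)     # buffered writes, fsync honoured
      # /nfs4exports/mydir 176.9.116.102(rw,async,no_subtree_check)  # faster creates, fsync not durable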

    Read the article

  • Concurrent NFS access

    - by Kristian
    Similar to Concurrent FTP access: how is concurrent file access handled for NFS? Say that one client is updating/overwriting a file on an NFS server, and a process on the server is reading that same file directly from the file system at the same time. Is there some sort of atomic handling of file reads/writes in NFS/Linux, or do I have to work with tmp files to ensure data consistency? I'm worried that the process reading the file will get corrupt data.
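
    NFS offers no atomicity for overlapping reads and writes, so the tmp-file pattern is the usual answer: write the new content to a temporary file on the same filesystem and rename() it into place, so readers see either the whole old file or the whole new one. A minimal sketch (paths and the producer command are placeholders):

      tmp=$(mktemp /export/data/.file.XXXXXX)
      generate_new_content > "$tmp"        # hypothetical producer
      mv "$tmp" /export/data/file          # rename(2): readers never see a partial file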

    Read the article

  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a Drupal 7 site with the following setup: a VMware ESXi 4.1 host server running a web VM and an NFS VM. The web VM is using Apache and mod_php. The site is still in development, so we have to turn off all forms of caching because of the frequently-updated files. Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of the time (normally over 90%) is taken by all the is_dir() and is_file() calls that load up the modules. I've increased PHP's realpath cache size to several megs, and an strace shows that the lstat calls then drop from over 200 to around 6 and stat() decreases a bit (around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10-second-per-request barrier. Is there a way to get better performance out of this setup that doesn't involve caching? Configs and stats:

    VMs: web - CentOS 6 64-bit, 2.5GB RAM, normal CPU/HD prioritisation; nfs - CentOS 6 64-bit, 2GB RAM, normal CPU priority, high HD priority

    PHP: 32M realpath cache size (it's this high for testing purposes)

    NFS:

      ~]# egrep -v '#|^$' /etc/nfsmount.conf
      [ NFSMount_Global_Options ]
      Defaultvers=4
      Ac=False
      Rsize=32k
      Wsize=32k
      Bsize=32k

    Reading speeds via NFS are not an issue; a dd of a 100M test file using 32k blocks returns:

      3200+0 records in
      3200+0 records out
      104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s

      real 0m1.857s
      user 0m0.007s
      sys 0m0.330s

    Strace on Apache process with empty realpath cache:

      % time     seconds  usecs/call     calls    errors syscall
      ------ ----------- ----------- --------- --------- ----------------
       50.78    1.157452         337      3434        28 stat
       32.58    0.742656         628      1182       425 open
        9.29    0.211788         762       278         1 lstat
        3.17    0.072322           0    237865           write
        2.45    0.055839         490       114        13 access
        0.45    0.010262          43       237           brk
        0.34    0.007725          10       811        74 read
        0.28    0.006340           9       679           fstat
        0.22    0.005069          18       281           poll
        0.20    0.004533           6       698           getdents
        0.09    0.001960          10       190           mmap
        0.05    0.001065          14        74           accept4
        0.04    0.001000         333         3           chdir
        0.03    0.000750           4       190           munmap
        0.01    0.000339           0       836           close
        0.01    0.000247           3        75           writev
        0.00    0.000068           0       611           fcntl
        0.00    0.000063           1        77           shutdown
        0.00    0.000000           0         1           lseek
        0.00    0.000000           0         5           rt_sigaction
        0.00    0.000000           0         1           rt_sigprocmask
        0.00    0.000000           0         3           setitimer
        0.00    0.000000           0         5           socket
        0.00    0.000000           0         5         5 connect
        0.00    0.000000           0        74           getsockname
        0.00    0.000000           0        15           setsockopt
        0.00    0.000000           0         5           getcwd
        0.00    0.000000           0         1           futex
      ------ ----------- ----------- --------- --------- ----------------

    Strace after realpaths are cached:

      % time     seconds  usecs/call     calls    errors syscall
      ------ ----------- ----------- --------- --------- ----------------
       60.14    1.371006         484      2831        28 stat
       31.79    0.724705         627      1155       425 open
        3.53    0.080354           0    237865           write
        2.65    0.060433         530       114        13 access
        0.43    0.009913          99       100           brk
        0.38    0.008730          11       804        74 read
        0.35    0.007910          12       675           fstat
        0.30    0.006775          10       654           getdents
        0.13    0.003065          11       281           poll
        0.09    0.002000         333         6         1 lstat
        0.07    0.001545           2       807           close
        0.05    0.001063          14        74           accept4
        0.04    0.001000           6       179           mmap
        0.02    0.000404           2       179           munmap
        0.01    0.000271           4        75           writev
        0.01    0.000212           0       611           fcntl
        0.01    0.000129           2        77           shutdown
        0.00    0.000022           0        74           getsockname
        0.00    0.000000           0         1           lseek
        0.00    0.000000           0         5           rt_sigaction
        0.00    0.000000           0         1           rt_sigprocmask
        0.00    0.000000           0         3           setitimer
        0.00    0.000000           0         3           socket
        0.00    0.000000           0         3         3 connect
        0.00    0.000000           0        15           setsockopt
        0.00    0.000000           0         5           getcwd
        0.00    0.000000           0         3           chdir
      ------ ----------- ----------- --------- --------- ----------------

    Mount:

      nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs (rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx)

    Any help is, naturally, appreciated.
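
    One detail in the mount line stands out: noac disables attribute caching entirely, so every stat()/is_dir() becomes a GETATTR round trip to the NFS VM. If briefly stale metadata is tolerable during development (a judgment call), dropping noac in favour of a short actimeo should remove most of that per-request latency. A hedged fstab-style sketch:

      nfs.xxx.xxx.xxx:/path/to/website/files /path/to/website/files nfs rw,hard,intr,vers=4,actimeo=3 0 0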

    Read the article

  • Ubuntu hangs on boot when NFS-mounting entries in /etc/fstab, but they mount cleanly otherwise

    - by lorin
    I'm managing several Ubuntu 9.10 servers that NFS-mount several folders (including /home). I'd like these folders to be mounted at boot time, so I would like to have several entries in my /etc/fstab to accomplish this, e.g.

      192.168.1.100:/home /home nfs rw 0 0
      192.168.1.100:/usr/ansys_inc /ansys_inc nfs ro 0 0

    Unfortunately, with this configuration, the servers usually (although not always) hang during the boot sequence when trying to do the NFS mount. If I comment out these fstab entries, reboot the machine, then uncomment them and mount them manually from the shell, the folders mount cleanly. I'm not sure how to go about debugging this problem. It seems like it has something to do with the boot sequence: some relevant process hasn't been started by the time the OS tries to mount the folders.
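
    This looks like the mounts racing the network coming up. One low-risk experiment, as a sketch rather than a confirmed fix, is the bg option, which backgrounds retries if the first attempt fails so the boot sequence is not blocked:

      192.168.1.100:/home          /home       nfs rw,bg,intr 0 0
      192.168.1.100:/usr/ansys_inc /ansys_inc  nfs ro,bg,intr 0 0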

    Read the article

  • NFS compound failed for server foosrv: error 7 (RPC: Authentication error)

    - by automatthias
    I'm setting up an Ubuntu NFS server with a Solaris 10 client. The basic configuration looks okay to me, and it was also working for some time. I'm getting an "RPC: Authentication error" message on the client.

    Server /etc/exports:

      /export/opencsw-future 192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)
      /export/opencsw-current 192.168.3.0/24(rw,nohide,insecure,no_subtree_check,async)

      $ ls -ld /export/opencsw-current
      drwxr-xr-x 7 maciej maciej 4096 2012-02-05 14:55 /export/opencsw-current

    Client:

      $ grep opencsw /etc/vfstab
      foosrv:/opencsw-current - /export/opencsw-current nfs - yes -

      $ sudo mount /export/opencsw-current
      NFS compound failed for server foosrv: error 7 (RPC: Authentication error)
      (...repeated...)
      nfs mount: mount: /export/opencsw-current: Permission denied

    My server host name resolves to both IPv4 and IPv6 addresses.
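
    The last sentence may be the whole story, as a hypothesis: if the Solaris client picks the IPv6 address, the request will not match the 192.168.3.0/24 entries in /etc/exports and can be rejected as an authentication error. Pinning the IPv4 address is a quick test (the address below is a placeholder):

      # /etc/hosts on the client:
      192.168.3.2   foosrv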

    Read the article

  • Limit NFS block size from server side?

    - by paulw1128
    Is it possible to enforce a maximum rsize/wsize in nfsd? I'm having issues related to IP fragmentation (yes, I'm stuck with NFS-over-UDP, contrary to the warnings in the manpage), and have no practical access to the client mount command (it's buried in one of many TFTP boot images). http://nfs.sourceforge.net/nfs-howto/ar01s05.html lists a kernel source parameter limiting the maximum block size, but I'm not going to get away with recompiling the nfsd kernel module, so that's not really an option either :-(
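
    On reasonably recent kernels there may be a runtime knob, so it is worth checking whether the server has /proc/fs/nfsd/max_block_size; it caps the rsize/wsize the server will negotiate and can only be written while nfsd is stopped. A hedged sketch:

      # stop nfsd first (the service name varies by distro), then:
      echo 8192 > /proc/fs/nfsd/max_block_size
      cat /proc/fs/nfsd/max_block_size    # verify
      # start nfsd again; clients will now negotiate transfers of at most 8k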

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently running a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5. All of them mount the directory for all the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance, using ZFS. The problem is that every NFS client trying to write something to a file over NFS gets stuck. This is inside the apache2 process:

      open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
      fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
      flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting for it... forever. The effect? All the apache2 processes get used up and stuck waiting, so my servers can't process any other requests because there are no processes left. I don't know where to find a solution. To me it looks like it's on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me what would help you :)
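
    flock() over NFSv3 goes through the Network Lock Manager, so a hang like this usually means lockd/statd are unreachable or wedged on one side (a hypothesis, not a confirmed diagnosis). Checking the RPC registrations is the first step, and a nolock mount, which makes flock() purely client-local, is a diagnostic workaround:

      # is the lock manager registered on the filer? (hostname is a placeholder)
      rpcinfo -p filer | egrep 'nlockmgr|status'
      # diagnostic workaround on one client: local-only flock semantics
      mount -o nolock filer:/websites /nfs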

    Read the article
