Search Results

Search found 2282 results on 92 pages for 'filesystem'.

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user) and received:

        total 0
        d????????? ? ? ? ? ? sl

    I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looked like the command had done anything. I notice no files missing, but I want to be sure - is there a way I can inspect the filesystem log on my Mac to see whether any files were actually removed?
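    One hedged approach, assuming Time Machine (or another snapshot backup) covers the home directory: diff the live tree against the most recent snapshot and look for files that exist only in the backup. tmutil ships with OS X 10.7 and later; the volume and user names below are placeholders, so treat this as a sketch rather than a recipe.

        # list available Time Machine snapshots
        tmutil listbackups
        # compare the latest backup of the home directory against the live tree;
        # "Only in <backup>" lines are candidates for files the rm may have removed
        diff -rq "$(tmutil latestbackup)/Macintosh HD/Users/jake" "$HOME" | grep '^Only in'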

  • Getting FC11 to run under VMware Server, converted from physical machine

    - by Kristian
    I have an FC11 installation that I have converted to a VMware disk image to run on my VMware Server. I converted it with qemu-img, as the VMware Converter software apparently only converts Linux hosts to VMware Infrastructure servers. The disk image boots fine (grub is loaded and boots the kernel), but it seems like the disk is not found by the kernel, and the boot process stalls. Hotplugging USB devices works (the kernel prints debug information) and I'm able to press keys (Ctrl-Alt-Delete, for instance). The VMware guest OS is set to Red Hat Enterprise Linux 5 (32 bit), and I have tried the LSI Logic, LSI Logic SAS and VMware Accelerated SCSI controllers, to no avail. I'm able to boot an installer disk, get into rescue mode and mount the filesystem, so my question is: what do I need to do to the guest kernel / initrd image to make it recognize the virtual disk?
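    A hedged sketch of the usual fix: from rescue mode, rebuild the initrd so it contains the driver for the virtual SCSI controller (mptspi is the module for the LSI Logic adapter). The kernel version below is an assumption - substitute whatever ls /lib/modules shows inside the chroot.

        # from the rescue environment, with the root filesystem mounted at /mnt/sysimage
        chroot /mnt/sysimage
        # assumed kernel version -- check with: ls /lib/modules
        KVER=2.6.29.4-167.fc11.i586
        # rebuild the initrd, forcing inclusion of the VMware SCSI driver
        mkinitrd --with=mptspi -f /boot/initrd-$KVER.img $KVER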

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations as the apache user (id=40). The problem is that the site was set up on a subdomain belonging to user ID 10001 (i.e. my main FTP account), so the filesystem belongs to that user ID. I keep getting errors like this:

        warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect.
        The script whose uid is 10001 is not allowed to access
        /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48
        in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579.

    I've tried changing the Apache group in httpd.conf to apache:psacln (psacln being the default group for all web users), but that hasn't helped. The situation now is:

        ..../files/images/     = 777, chown = ftplogin:psacln
        ..../files/images/user = 775, chown = apache:psacln
        ..../files/tmp         = 777, chown = ftplogin:psacln

    So apparently uid 40 and 10001 both have permission to write to any of the three directories involved, but still can't. Am I missing something here? Can anyone help? Thanks!
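    Worth noting: PHP safe mode compares the uid of the running script (10001) with the uid of the file it touches, so directory permission bits alone can never satisfy it. One hedged workaround sketch, using the names and paths from the error message above (verify them on your system): make the upload tree match the script owner.

        # give the files created by Apache back to the FTP user that owns the scripts,
        # so safe mode's uid comparison passes (run as root)
        chown -R ftplogin:psacln /var/www/vhosts/domain.com/httpdocs/sites/default/files
        # the longer-term fix is usually disabling PHP safe_mode for this vhost;
        # where that switch lives varies by Plesk version, so check your panel's docs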

  • OpenAFS on Fedora/CentOS

    - by Michael Pliskin
    I am trying to see if OpenAFS fits my needs as a distributed filesystem and am a bit stuck. There are docs, but they're all quite hard to understand, so I'm asking for some expert advice here. My questions:

      - Which version should I install? I need Windows client support, so I need 1.5 - right? But is it stable? And I don't see any pre-built rpms for it, so do I compile from sources?
      - I tried to compile and it worked, but it created a non-"mp" kernel module while my kernel needs an mp one - how do I work around that?
      - Do I really need a fresh partition to start with, or can I re-use an existing one and just make it available via AFS?
      - Any nice HOWTOs around?

  • Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?

    - by Don MacAskill
    We use RAID1+0 with md on Linux (currently 2.6.37) to create an md device, then use LVM to provide volume management on top of the device, and then use ext4 as our filesystem on the LVM volume groups. With SSDs as the drives, we'd like to see the TRIM commands propagate through the layers (ext4 - LVM - md - SSD) to the devices. It looks like recent 2.6.3x kernels have had a lot of new SSD-related TRIM support added, including lots more coverage of Device Mapper scenarios, but we still can't seem to get it to cascade down properly. Is this possible yet? If so, how? If not, is any progress being made?
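    A hedged way to check each layer from the bottom up (device and volume names are placeholders): confirm the SSD itself advertises TRIM, then see whether the block layer exposes a non-zero discard granularity for the stacked device before enabling the ext4 discard option.

        # does the drive itself support TRIM?
        hdparm -I /dev/sda | grep -i trim
        # on kernels that expose it, a non-zero value means the block layer will
        # pass discards through for this device
        cat /sys/block/sda/queue/discard_granularity
        # once the lower layers cooperate, mount ext4 with the discard option
        mount -o remount,discard /dev/vg0/lv_data /data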

  • Plesk error "pmm-ras error (Error code = -6)" during restore - do I need to increase /tmp?

    - by eric
    I had to re-install Plesk on a CentOS 6 system after a crash. The full backup file is 11 GB, but at the beginning of the backup restore I get the error:

        Error: pmm-ras error (Error code = -6)

    Argh! My disk organization is like this:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/xvda1             3.7G  801M  2.9G  22% /
        /dev/mapper/vg00-usr    14G  1.5G   12G  12% /usr
        /dev/mapper/vg00-var   155G   14G  134G  10% /var
        /dev/mapper/vg00-home  3.9G  136M  3.6G   4% /home
        none                  1000M  7.5M  993M   1% /tmp

    I suppose I have to increase my /tmp folder to accept the backup size, but I don't know how to. I'm on a 1&1 cloud server. Thanks for your help. You can imagine the urgency of this situation...
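    A hedged sketch, assuming /tmp is a tmpfs mount (which the "none ... 1000M" line suggests): grow it temporarily, or point Plesk's backup scratch directory at the roomy /var filesystem. DUMP_TMP_D is the setting Plesk's backup manager uses for scratch space; verify that key exists in your /etc/psa/psa.conf before relying on it.

        # grow the tmpfs for the duration of the restore (lasts until reboot);
        # tmpfs lives in RAM/swap, so only do this if the server has the memory
        mount -o remount,size=15G /tmp
        # alternative: point Plesk's backup scratch directory at /var (134G free)
        mkdir -p /var/tmp-pmm
        sed -i 's|^DUMP_TMP_D.*|DUMP_TMP_D /var/tmp-pmm|' /etc/psa/psa.conf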

  • How to convert ext3 partition to use encrypted file system without losing data?

    - by User1
    My embedded Linux device has two partitions:

      - a small root partition containing the OS
      - a big data partition using ext3

    I want to encrypt the data partition with an encrypted file system, without losing any of the data on it. The root partition is too small to hold all the data from the data partition, and it is not possible to attach any external storage. Is there any tool that can convert the data partition from ext3 to an encrypted fs in place, without copying all the files somewhere else first?
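    One hedged possibility, if the device can run a recent cryptsetup: cryptsetup-reencrypt (shipped with cryptsetup 1.5 and later) can wrap an existing partition in LUKS in place. It needs the filesystem shrunk by enough to fit the LUKS header, and an interrupted run can destroy the data, so treat this as a sketch and back up anything irreplaceable first. The device name is a placeholder.

        # check the filesystem, then shrink it to make room for the LUKS header
        e2fsck -f /dev/mmcblk0p2
        resize2fs -M /dev/mmcblk0p2
        # encrypt in place, reserving 4096 sectors for the header
        cryptsetup-reencrypt --new --reduce-device-size 4096S /dev/mmcblk0p2
        # afterwards, open the LUKS device and grow the fs back to full size
        cryptsetup luksOpen /dev/mmcblk0p2 data
        resize2fs /dev/mapper/data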

  • secure synchronization of large amount of data

    - by goncalopp
    I need to automatically mirror a large amount (terabytes) of files in two unix machines over a slow link (1 Mbps). This needs to be done frequently, but the data doesn't change too much (delta transmission doesn't saturate the link). The usual solution would be rsync, but there's an additional requirement: it's undesirable, from a security standpoint, that either the source or destination machines have (keyless) ssh keys to each other, or any kind of filesystem access. All communication between the two machines should thus be initialized (and mediated) through a third machine. I've asked a separate question about rsync in particular here. Are there other obvious solutions I'm missing?
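    A hedged sketch of the mediated pattern: a third machine C holds credentials for both ends, pulls from the source and pushes to the destination, so A and B never authenticate to each other (at the cost of C needing enough disk to stage the tree). Hostnames and paths are placeholders; rsync still transmits only deltas across each leg.

        # on the mediator C: pull changes from the source...
        rsync -az --delete sourcehost:/data/ /staging/data/
        # ...then push them on to the destination
        rsync -az --delete /staging/data/ desthost:/data/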

  • How do I mount an HFS+ dd image in OS X?

    - by Paul McMillan
    I had an HFS+ formatted drive that was going bad and wouldn't mount at all on OS X. I created an image using ddrescue on Linux and was able to save most of it. I can mount the image and see the data just fine on Linux using this:

        mount -o loop -t hfsplus dd_image /Volumes/mountpoint

    This doesn't work on my OS X system, since hfsplus isn't a valid filesystem type there. If I try:

        mount -t hfs dd_image mountpoint

    it complains that it needs a block device. What's the fix here?
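    A hedged answer sketch: on OS X, hdiutil can attach a raw image as a device and hand it to the normal mounter, which sidesteps mount's block-device requirement. The -imagekey option tells it to treat the file as a raw device image; the slice name below is an assumption.

        # attach the raw image; OS X probes the partition map and mounts HFS+ slices
        hdiutil attach -imagekey diskimage-class=CRawDiskImage dd_image
        # if automounting fails (e.g. a dirty journal from the failing drive),
        # attach without mounting and mount the slice read-only by hand
        hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount dd_image
        mount -t hfs -o rdonly /dev/disk2s2 /Volumes/mountpoint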

  • Windows 7 boot manager issue

    - by L.ppt
    I had Windows 7 installed on my laptop; yesterday I tried to install the openSUSE operating system. During its installation I chose an NTFS partition and formatted it to the ext4 filesystem. An error then came up saying the mount point could not be created on that partition, and I aborted the installation. Then, on reboot, a message appeared: "BOOTMGR is missing". I reinstalled Windows, but when the setup rebooted the system it stopped at a blank screen with a blinking cursor. I went through many forums and tried many startup repairs and commands, but it continues to hang at a blank screen with the cursor blinking. Reinstalling Windows 7 again has no effect either. I urgently need to repair my laptop for very important work. Please help.

  • Bash Script to Back Up Filesystem Backs Up Itself

    - by Jay LaCroix
    I have the following bash script that creates a tar.gz of my filesystem on a Kubuntu PC. The problem is that it also tries to back up the tar.gz backup file itself, even though I am storing the backup in /tmp and omitting /tmp from the backup. I am wondering why it's backing up the file in /tmp even though I told it not to.

        #!/bin/bash
        # init
        DATE=$(date +20%y%m%d)
        sudo tar -cvpzf /tmp/`hostname`_$DATE.tar.gz \
        --exclude=/proc \
        --exclude=/lost+found \
        --exclude=/sys \
        --exclude=/mnt \
        --exclude=/media \
        --exclude=/dev \
        --exclude=/tmp \
        --exclude=/home/jlacroix/Desktop \
        --exclude=/home/jlacroix/Documents \
        --exclude=/home/jlacroix/Music \
        --exclude=/home/jlacroix/Pictures \
        --exclude=/home/jlacroix/Projects \
        --exclude=/home/jlacroix/Roms \
        --exclude=/home/jlacroix/Videos \
        --exclude=/home/jlacroix/.VirtualBox\ VMs \
        --exclude=/home/jlacroix/.SpiderOak \
        /
        scp /tmp/`hostname`_$DATE.tar.gz jlacroix@Pluto:/share/Recovery/Snapshots
        sudo rm /tmp/`hostname`_$DATE.tar.gz
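    A hedged diagnostic, assuming GNU tar: write the archive to /dev/null and list what would be included, to confirm whether anything under /tmp really lands in it (tar prints member names without the leading slash).

        # list would-be members under /tmp; an empty result means --exclude=/tmp
        # works and the stray copy got there some other way
        sudo tar -cvpzf /dev/null --exclude=/tmp / 2>/dev/null | grep '^tmp/'

    One other thing worth knowing: when the output archive sits inside the tree being archived, GNU tar normally detects this and skips it ("file is the archive; not dumped"), so a copy showing up at the destination may be a leftover from an earlier run rather than freshly included.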

  • What can I do to give some more love and disk space to my database on Ubuntu?

    - by Yaron Naveh
    I'm new to Linux. I've deployed a DB to an Ubuntu server on Amazon and found out I'm low on disk space. I ran df (see below) and found I'm at 89% capacity on one filesystem, but less on others. What does this mean? Do I have a few partitions and can now utilize others besides /dev/xvda1? Also, /dev/xvdb seems large - is it safe to put the DB on it and only use that? If so, do I need to mount it or do something special?

        $> df -lah
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      8.0G  6.7G  914M  89% /
        proc               0     0     0   -  /proc
        sysfs              0     0     0   -  /sys
        none               0     0     0   -  /sys/fs/fuse/connections
        none               0     0     0   -  /sys/kernel/debug
        none               0     0     0   -  /sys/kernel/security
        udev            3.7G  8.0K  3.7G   1% /dev
        devpts             0     0     0   -  /dev/pts
        tmpfs           1.5G  164K  1.5G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            3.7G     0  3.7G   0% /run/shm
        /dev/xvdb       414G  199M  393G   1% /mnt
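    A hedged sketch: the df output shows /dev/xvdb is already formatted and mounted at /mnt with ~393G free, so moving the database there and symlinking back is one option (paths below assume MySQL; adjust for your DB). One loud caveat: on many EC2 instance types a device like /dev/xvdb is ephemeral instance storage, so anything on it is lost when the instance stops - verify before trusting it with a database, or use an EBS volume instead.

        sudo service mysql stop
        sudo mv /var/lib/mysql /mnt/mysql
        sudo ln -s /mnt/mysql /var/lib/mysql
        sudo service mysql start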

  • Rebuilding /etc/rc?.d/ links

    - by timday
    A regular filesystem check on a Debian Lenny system triggered an fsck, and that nuked a handful of links in the /etc/rc?.d hierarchy (unfortunately I didn't keep a list). The system seems to boot and run normally, but I'm worried it's storing up trouble for the future. Is there an easy (fairly automatic) way of rebuilding this piece of the system? As I understand it, the links are generally manipulated by package postinst scripts using update-rc.d (and I haven't made any changes from the installed defaults). Without any better ideas, my plan is one of:

      - Diff a listing against another similar system to identify which packages need their links repaired.
      - Wait until the system is upgraded to Squeeze (hopefully not too long :^) and assume the mass package upgrade will restore all the missing links.
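    A hedged middle path between the two options above: for each affected service, wipe whatever partial link set remains and recreate it, or reinstall the owning package so its postinst re-runs update-rc.d. Service and package names below are examples; note that "defaults" may differ from the priorities the package's postinst would have chosen.

        # wipe the remaining links for one init script, then recreate the defaults
        update-rc.d -f ssh remove
        update-rc.d ssh defaults
        # or reinstall the owning package so its postinst re-runs update-rc.d
        apt-get install --reinstall openssh-server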

  • Unable to mount portable hard drive on Ubuntu

    - by VoY
    My portable hard drive (a WD My Passport), which used to work correctly, no longer automounts on my Ubuntu system. It does work on a Windows machine, and even when plugged into a WD HD TV, which is a Linux-based device. There's one NTFS partition spanning the whole drive. When I plug the disk in, I see the following in dmesg:

        [269259.504631] usb 1-2.2: new high speed USB device using ehci_hcd and address 20
        [269259.604674] usb 1-2.2: configuration #1 chosen from 1 choice

    However, it does not mount in GNOME, and I don't see it when I type:

        sudo fdisk -l

    Any suggestions why this might be? I repaired the partition using chkdsk on Windows, so the issue is probably not filesystem-related.
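    A hedged reading of that dmesg output: the device enumerates on the USB bus, but there is no follow-up line about usb-storage attaching or a new sd* disk appearing, which is why fdisk sees nothing. Some diagnostics to narrow it down (module and device names are the usual ones, not verified for this system):

        # is the mass-storage driver loaded, and does it claim the device?
        lsmod | grep usb_storage
        dmesg | grep -i -e usb-storage -e 'sd[a-z]'
        # list the block devices the kernel actually created
        ls -l /dev/sd*
        # a powered USB hub or a dual-headed USB cable is also worth trying:
        # Passport drives are known to brown out on low-power ports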

  • Dealing with different usernames when mounting removable media in Linux

    - by dimatura
    I have a laptop on which my username is, say, "foo". I have an external drive, formatted with ext4, on which all files are owned by "foo" (at the filesystem level). Now, I have a desktop on which my username is, say, "bar". If I mount this external drive on that computer, the files are considered not to be owned by "bar". This makes sense, but it is annoying because their mode bits are set so that only the owner can modify/delete them. What's the cleanest way to deal with this? Create a group with "foo" and "bar" and add group modification permissions?
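    Ownership on ext4 is stored as numeric uids/gids, so the real mismatch is between the two machines' uid numbers. A hedged sketch of the group approach from the question (the GID and mount point are arbitrary examples), run on each machine with the local username:

        # create a shared group with the same numeric gid on both machines
        sudo groupadd -g 1050 extdrive
        sudo usermod -aG extdrive bar        # use "foo" on the laptop
        # hand the drive's tree to that group and let the group write
        sudo chgrp -R extdrive /media/external
        sudo chmod -R g+rwX /media/external

    An alternative with the same effect is simply giving "foo" and "bar" the same numeric uid on both machines, if nothing else depends on those uids.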

  • How can I get rid of / hide :2eDS_Store files on my linux netatalk server?

    - by Douglas Mayle
    I'm running a netatalk server process on my linux server that serves files up to Mac client machines. Whenever you use Mac's Finder to access foreign filesystems over netatalk, it creates '.DS_Store' files to store information about the folder. Normally, these files would be hidden by default, and I wouldn't care. Unfortunately, netatalk doesn't allow access to local hidden files, so when the Mac writes and reads these, it renames them :2eDS_Store on the local filesystem. When you have a deep tree, you end up with these littered all over the place, and other Windows and Linux clients have to deal with them. How do I make these available to Mac clients and hidden from everyone else?
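    A hedged sketch, assuming netatalk 2.x: the hex-encoding of leading dots is controlled per volume, and the usedots option makes netatalk store dot-files literally (as .DS_Store), so Windows and Linux clients just see ordinary hidden files. In AppleVolumes.default, something like:

        # store dot-files as-is instead of :2e-encoding them
        ~/ "Home Directory" options:usedots

    Existing :2eDS_Store files would still need a one-off rename (share path is a placeholder):

        find /srv/share -name ':2eDS_Store' -execdir mv {} .DS_Store \;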

  • df -h showing wrong output in GB

    - by Anurag Uniyal
    If I list df output for KB, MB and GB, they do not match, e.g.:

        $ df -k | grep xvdb
        /dev/xvdb1   12796048  732812  11413172   7% /xxx
        $ df -m | grep xvdb
        /dev/xvdb1      12497     716     11146   7% /xxx
        $ df -h | grep xvdb
        /dev/xvdb1        13G    716M       11G   7% /xxx

    12796048 KB = 12496.14 MB, so that is slightly off but OK. But 12796048 KB = 12.2 GB, and 12497 MB is also 12.2 GB, so why is df showing 13 GB? Or am I missing something? Here is the full df listing:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda1            7.5G  1.7G  5.5G  24% /
        none                  5.8G  128K  5.8G   1% /dev
        none                  5.8G     0  5.8G   0% /dev/shm
        none                  5.8G   44K  5.8G   1% /var/run
        none                  5.8G     0  5.8G   0% /var/lock
        none                  5.8G     0  5.8G   0% /lib/init/rw
        /dev/xvdb1             13G  716M   11G   6% /xxx

    The coreutils version seems to be 7.4, as info coreutils shows: "This manual documents version 7.4 of the GNU core utilities".
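    This is consistent with how GNU df scales human-readable sizes: it rounds up (ceiling) rather than to the nearest unit, so free and total space are never overstated by truncation. A quick check of the arithmetic (shell integer division truncates; the comments note the ceiling df applies):

        echo $((12796048 / 1024))        # 12496 MiB truncated; df -m ceilings to 12497
        echo $((12796048 / 1024 / 1024)) # 12 GiB truncated; the true value is ~12.2,
                                         # which df -h ceilings to 13G

    So 12.2 GiB printing as 13G is rounding policy, not corruption; the df -m column being one higher than the truncated value fits the same rule.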

  • How do I get nginx to issue 301 requests to HTTPS location, when SSL handled by a load-balancer?

    - by growse
    I've noticed that there's functionality enabled in nginx by default, whereby a url request without a trailing slash for a directory which exists in the filesystem automatically has a slash added through a 301 redirect. E.g. if the directory css exists within my root, then requesting http://example.com/css will result in a 301 to http://example.com/css/. However, I have another site where the SSL is offloaded by a load-balancer. In this case, when I request https://example.com/css, nginx issues a 301 redirect to http://example.com/css/, despite the fact that the HTTP_X_FORWARDED_PROTO header is set to https by the load balancer. Is this an nginx bug? Or a config setting I've missed somewhere?
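    This looks less like a bug than nginx building absolute redirects from its own listener scheme; out of the box it does not consult X-Forwarded-Proto. One hedged sketch for nginx 1.11.8 and later: emit relative redirects, so the client keeps whatever scheme it arrived on at the load balancer.

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;
            # send "Location: /css/" instead of an absolute http:// URL
            absolute_redirect off;
        }

    On older versions, a common workaround is to stop relying on the implicit slash redirect and issue it yourself with $http_x_forwarded_proto as the scheme; the exact matching rule depends on the site layout, so treat that as an assumption to verify.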

  • ubuntu server slowly filling up

    - by Crash893
    We had our Samba server (Ubuntu 8.04 LTS) share fill up the other day, but when I went to look at it I can't see that any of the shares have too much on them. We have 5 group shares, and each user has an individual share: one user has 22 GB of stuff, a few others have 10-20 MB, and everyone else is empty - so maybe 26 GB total. I deleted a few files yesterday and freed up about 250 MB of space; today when I checked, it was completely full again, so I deleted some older files and freed up about 170 MB, but I can watch the free space slowly creep down. I keep running df -h:

        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda1            241690180 229340500    169200 100% /
        varrun                  257632       260    257372   1% /var/run
        varlock                 257632         0    257632   0% /var/lock
        udev                    257632        72    257560   1% /dev
        devshm                  257632        52    257580   1% /dev/shm
        lrm                     257632     40000    217632  16% /lib/modules/2.6.24-28-generic/volatile

    What can I do to try to hunt down what's taking up so much of my HDD? (I'm fairly new to Unix in general, so I apologize if this is not well explained.)
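    A hedged hunt, using only tools an 8.04 box should have: walk the root filesystem with du (staying on one filesystem with -x so mounts don't confuse the totals), and check for deleted-but-still-open files, which consume space without appearing in any directory listing.

        # biggest top-level directories in KB, sorted; repeat into whichever is largest
        sudo du -kx --max-depth=1 / | sort -n
        # files that are deleted but still held open by a process (their space is
        # only freed when the process closes them or is restarted)
        sudo lsof +L1

    A runaway log under /var/log is a classic cause of exactly this "fills back up after deleting things" pattern.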

  • Apache whitelist a single location, but require basic auth for everything else

    - by Chris Lawlor
    I'm sure this is simple, but Google is not my friend this morning. The goal is:

      - /public... is openly accessible
      - everything else (including /) requires basic auth

    This is a WSGI app with a single WSGI script (it's a Django site, if that matters). I have this:

        <Location /public>
            Order deny,allow
            Allow from all
        </Location>

        <Directory />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Directory>

    With this configuration, basic auth works fine, but the Location directive is totally ignored. I'm not surprised, as according to this (see How the Sections are Merged), the Directory directive is processed first. I'm sure I'm missing something, but since Directory applies to a filesystem location, and I really only have the one Directory at /, and it's a Location that I wish to allow access to, but Directory always overrides Location... EDIT: I'm using Apache 2.2, which doesn't support AuthType None.
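    A hedged sketch for Apache 2.2: move the auth into a Location block so both sections merge in the same phase, and give /public a Satisfy any so passing the host-based Allow is enough to skip the password prompt.

        <Location />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Location>

        <Location /public>
            Order deny,allow
            Allow from all
            Satisfy any
        </Location>

    Satisfy any tells 2.2 that either the access control (Allow) or the auth requirement is sufficient; the default, Satisfy all, demands both.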

  • 3½" PATA Western Digital Caviar SE (250MB) makes steady ticking sound when idle

    - by intuited
    I've started to notice a ticking sound emanating from my WD2500JB. It is not alarmingly loud. The sound seems to occur only when the drive has been idle for some time, and will cease upon (some?) disk activity. The sound has a regular, steady cadence of somewhere between about 4 and 6 ticks per second. I'm not entirely certain that it just started making these sounds, since I previously had the drive — mounted in a USB enclosure — stored out of earshot, and only recently moved it to where I can hear it. The SMART attributes for the drive do not indicate any problems. I did have some errors to clean up recently (since I started noticing the sounds). The errors occurred on an ext3 filesystem. The drive had been powered down while mounted a few times prior to that fsck. Is this cause for alarm? Should I scrap the drive on principle?
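    A hedged check before deciding: smartmontools can confirm the attributes you've read and run a self-test; a steady idle-time tick on drives of this era is sometimes periodic offline data collection or head parking rather than impending failure, though that is an assumption, not a diagnosis. The device name is a placeholder.

        # dump SMART attributes and the error log
        sudo smartctl -a /dev/sdb
        # run a short self-test, then re-read the results a few minutes later
        sudo smartctl -t short /dev/sdb
        sudo smartctl -l selftest /dev/sdb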

  • chown: changing ownership of `.': Invalid argument

    - by Pierre
    I'm trying to install some new files on our new server while our sysadmin is on holiday. Here is my df:

        # df -h
        Filesystem              Size  Used Avail Use% Mounted on
        /dev/sdb3               273G   11G  248G   5% /
        tmpfs                    48G  260K   48G   1% /dev/shm
        /dev/sdb1               485M  187M  273M  41% /boot
        xxx.xx.xxx.xxx:/commun   63T  2.2T   61T   4% /commun

    As root, I can create a new directory and run chown under /home/lindenb:

        # cd /home/lindenb/
        # mkdir X
        # chown lindenb X

    but I cannot run the same command under /commun:

        # cd /commun/data/users/lindenb/
        # mkdir X
        # chown lindenb X
        chown: changing ownership of `X': Invalid argument

    Why? How can I fix this? Updated - mount:

        /dev/sdb3 on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw)
        /dev/sdb1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
        xxx.xx.xxx.xxx:/commun on /commun type nfs (rw,noatime,noac,hard,intr,vers=4,addr=xxx.xx.xxx.xxx,clientaddr=xxx.xx.xxx.xxx)

    Version:

        $ cat /etc/redhat-release
        CentOS release 6.3 (Final)
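    A hedged diagnosis: /commun is mounted with vers=4, and NFSv4 passes owners as name@domain strings instead of raw uids, so chown fails with "Invalid argument" when the idmap domains (or the user databases) on client and server don't line up. Two sketches to try:

        # 1) check that both ends agree on the NFSv4 idmapping domain
        grep -i '^ *Domain' /etc/idmapd.conf     # compare with the same file on the server
        service rpcidmapd restart                # after fixing it (CentOS 6 service name)
        # 2) or fall back to NFSv3, which transmits numeric uids directly
        umount /commun
        mount -o rw,noatime,noac,hard,intr,vers=3 xxx.xx.xxx.xxx:/commun /commun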

  • Unmounting a zfs pool while it is shared with sharenfs

    - by Ted W.
    I have a Solaris (OpenIndiana) system which is getting poor disk write performance. In order to enable the ZIL in this version of ZFS I need to add a line to /etc/system. This will not take effect until I've unmounted and remounted the zpool. The trick is that this pool is shared via NFS to about 200 other servers to host users' home directories. I can guarantee that no users will be accessing the disks during this period of maintenance, but I would like to avoid having to issue an unmount on 200 systems in order to unmount the disk on the Solaris box. My question is: with sharenfs, is it necessary to have all systems disconnected before unmounting the filesystem on the host? If it's possible, how do you go about it? I've already tried unmounting the normal way, and it reports the disk is busy. There is no lsof on Solaris, and pfiles (I think that's what it was) does not show anything obviously using the mounts.
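    A hedged sketch: the clients don't have to unmount first - with hard NFS mounts they simply block and retry until the share comes back. The usual sequence is to unshare, find and clear whatever local process holds the mount busy (fuser stands in for the missing lsof on Solaris), then unmount and remount. The dataset name is a placeholder.

        # stop serving the filesystem (clients with hard mounts will block and retry)
        zfs unshare tank/home
        # who on THIS box holds it open? (-c reports processes using the mount point)
        fuser -cu /tank/home
        # then unmount and remount the dataset
        zfs umount tank/home
        zfs mount tank/home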
