Search Results

Search found 2834 results on 114 pages for 'filesystem corruption'.

Page 99/114 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • Attempted hack on VPS, how to protect in future, what were they trying to do?

    - by Moin Zaman
    UPDATE: They're still here. Help me stop or trap them! Hi SF'ers, I've just had someone hack one of my clients' sites. They managed to modify a file so that the checkout page on the site writes payment information to a text file. Fortunately or unfortunately they stuffed up: they had a typo in the code, which broke the site, so I came to know about it straight away. I have some inkling as to how they managed to do this: my website CMS has a file-upload area where you can upload images and files to be used within the website. The uploads are limited to two folders. I found two suspicious files in these folders, and on examining the contents it looks like these files allow the hacker to view the server's filesystem, upload their own files, modify files and even change registry keys?! I've deleted some files and changed passwords, and I'm in the process of trying to secure the CMS and limit file uploads by extension. Anything else you guys can suggest I do to try and find out more details about how they got in, and what else I can do to prevent this in future?
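
    Beyond hardening the CMS, a few hedged starting points for reconstructing how they got in; the paths and log names below are placeholders for wherever the site and the web-server logs actually live:

        # script files changed recently under the web root:
        find /var/www -type f -name '*.php' -mtime -14 -ls
        # common web-shell fingerprints in the upload folders:
        grep -RlE 'eval\(base64_decode|shell_exec\(|passthru\(' /var/www/uploads
        # POST requests hitting the upload folders around the time the files appeared:
        grep 'POST /uploads' /var/log/apache2/access.log* | less

    Matching upload-folder POSTs against the timestamps of the two suspicious files usually pins down the entry point, and shows whether the upload form accepted a script disguised with a harmless-looking extension.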

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not try, and fail, to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read through the .zfs/snapshot directory. It has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
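
    For what it's worth, the usual cause of this particular message is a dependent dataset whose origin is the stubborn snapshot, sometimes one that zfs list does not show by default (for example, one left behind by an interrupted zfs receive). A hedged double-check, using the names from the post:

        # any dataset whose origin property points at the snapshot?
        zfs get -r origin mysql | grep wibble
        # if a clone turns up, promoting (or destroying) it should unblock the destroy:
        zfs promote <clone-name>

    The clone name is a placeholder; if nothing shows an origin of @wibble, the problem lies elsewhere.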

    Read the article

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements: any file readable by the user can be tagged freely; a user can search for files matching one or several tags; files can be moved around without losing the previously associated tags; the system could be backed up easily; no dependencies on any desktop environment; and if any GUI is involved, there must be a CLI fallback. I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about it hard enough yet. Meanwhile I'll review Beagle and Tracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish, but still has some dependencies I don't like... I've been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I'd like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
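
    A minimal sketch of the extended-attribute route mentioned above, using the user.* namespace (setfattr/getfattr come from the attr package; ext3 needs the user_xattr mount option, ext4/XFS/btrfs support user xattrs out of the box); file names and tags are just examples:

        # tag a file, then read the tags back:
        setfattr -n user.tags -v "flac,live" concert.flac
        getfattr -n user.tags concert.flac

        # crude CLI search: print files whose user.tags mention "flac"
        find ~/Music -type f -exec sh -c \
          'getfattr --only-values -n user.tags "$1" 2>/dev/null | grep -q flac && printf "%s\n" "$1"' _ {} \;

        # xattrs survive renames on the same filesystem, but copies and backups must be told to keep them:
        cp -a concert.flac /backup/          # -a implies --preserve=xattr
        rsync -aX ~/Music/ /backup/Music/

    The find loop is slow on large trees; anything fancier needs an index, which is where tools like Tracker come back in.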

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS over LVM on that to create 100 TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.
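
    Since GlusterFS over per-server ZFS (or btrfs) bricks is already on the table, here is a minimal sketch of what that combination looks like operationally; host names, pool paths and the volume name are all placeholders:

        # on each server: one ZFS dataset per brick, e.g. tank/brick1
        # from one server, form the trusted pool and create a distributed volume:
        gluster peer probe storage2
        gluster volume create labdata transport tcp \
            storage1:/tank/brick1 storage2:/tank/brick1
        gluster volume start labdata
        # clients (or the compute nodes) mount it with the FUSE client:
        mount -t glusterfs storage1:/labdata /mnt/labdata

    A plain distributed volume only adds capacity; surviving the loss of a whole server needs "replica 2" bricks, which doubles the raw storage required on top of the per-server RAID.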

    Read the article

  • How to allow users to transfer files to other users on linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1PB) controlled by traditional unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", which is a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

        > give username-to-give-to filename-to-give ...

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

        > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional unix permissions are available?
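
    One low-tech alternative sometimes used where a give/take tool isn't available is a group-owned handoff directory with the setgid bit. It doesn't change file ownership (quota stays with the sender), but a mv within the same filesystem is just a rename, so petabyte-scale data never gets copied. The paths and group name below are hypothetical:

        # one-time setup by an admin:
        mkdir /scratch/handoff
        chgrp xfergrp /scratch/handoff
        chmod 2770 /scratch/handoff      # setgid: new files inherit the group; note no sticky bit,
                                         # so any group member can remove files here (the point, and the risk)

        # sender (same filesystem, so this is a rename, not a copy):
        mv /scratch/alice/huge.dat /scratch/handoff/
        chmod 660 /scratch/handoff/huge.dat

        # receiver moves it into their own tree:
        mv /scratch/handoff/huge.dat /scratch/bob/

    Whether that is acceptable depends on how strictly ownership and quota need to follow the file, which is exactly what the suid give program solves.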

    Read the article

  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500GB disks. The partition tables look like this:

        /dev/sda1   primary    465.52GB
        /dev/sda2   extended   243.17MB
          -> /dev/sda5   logical   243.14MB
        /dev/sdb1   primary    465.76GB

    sda1 and sdb1 are in a single LVM physical volume group containing a single logical volume containing a single logical filesystem which is mounted as /. sda5 is mounted as /boot. The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink the logical volume group, but I'm not at all clear on how to clear out the end part of sda1 so that I can reduce the physical volume group. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch or find out whether GRUB2 supports using LVM for /boot.
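
    A hedged outline of the resize dance, run from live media so / is unmounted, and assuming the root filesystem is ext4. Every step can destroy data if the sizes are wrong, so a verified backup comes first; vgname/rootlv are placeholders for whatever vgdisplay/lvdisplay report, and the sizes are illustrative:

        e2fsck -f /dev/vgname/rootlv                      # filesystem must be clean
        resize2fs /dev/vgname/rootlv 450G                 # shrink the fs below the target LV size
        lvreduce -L 455G /dev/vgname/rootlv               # shrink the LV, leaving slack above the fs
        pvresize --setphysicalvolumesize 460G /dev/sda1   # fails if allocated extents sit past the new end
        # if pvresize complains, pvmove can relocate extents off /dev/sda1 (or off a specific extent range) first
        # then shrink sda1 and grow sda2/sda5 with parted/gparted (moving the extended partition's start
        # is the fiddly part; gparted handles it), reboot, and reclaim the slack:
        pvresize /dev/sda1
        lvextend -l +100%FREE /dev/vgname/rootlv
        resize2fs /dev/vgname/rootlv

    The alternative that avoids moving partition boundaries altogether is to confirm whether the installed GRUB 2 can read /boot from LVM, as the question already suggests.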

    Read the article

  • Why is there no /usr/bin/ in Windows? Would it be dangerous to add the entire Program Files to the path?

    - by dotancohen
    I am a Linux user spending some time in Windows and I'm trying to understand some of the Windows paradigms instead of fighting them. I notice that each program installed in the traditional manner (i.e. via orgasmic installers: Yes, Yes, Yes, Finish) adds the executables to C:/Program Files/foo/bar.exe and then adds a shortcut to the Desktop / Start Menu containing the entire path. However, there is no common directory with links to the software, i.e. C:/bin/bar.exe which would link to C:/Program Files/foo/bar.exe. Therefore, after installing an application the only way to use the application is via the clicky-clicky menus or by navigating to the executable in the filesystem. One cannot simply Win-R to open the run dialogue and then type bar or bar.exe as is possible with notepad or mspaint. I realize that Windows 8 improves on this with the otherwise horrendous Start Screen which does support typing the name of the app, but again this depends on the app having registered itself for such. Would I be doing any harm by adding C:/Program Files recursively to the Windows path? I do realize that there will be name collisions (i.e. uninstall.exe) but could there be other issues?

    Read the article

  • GRUB reporting wrong partition type

    - by plok
    It all started when I had to replace one of the disks that the software RAID 1 on this machine currently uses. From that moment on I have not been able to boot to the Windows XP that is installed on the fourth hard drive, /dev/sdd. I am almost positive that the problem is related not to Windows but to GRUB, as if I unplug all the other hard drives so that the Windows XP disk is now /dev/sda it boots with no problem. The problem seems to be that GRUB detects a wrong partition type, which I understand suggests that something is really messed up. This is what I get when I try to follow the steps that until now had worked like a charm:

        grub> map (hd0) (hd3)
        grub> map (hd3) (hd0)
        grub> root (hd3,0)
         Filesystem type is ext2fs, partition type 0xfd

    0xfd? That doesn't make sense. /dev/sdb and sdc are 0xfd (Linux raid), but not /dev/sdd:

        edel:~# fdisk -l
        [...]
        Disk /dev/sdd: 250.0 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x00048d89

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1   *           1       30400   244187968+   7  HPFS/NTFS

        edel:/boot/grub# cat device.map
        (hd0)   /dev/sda
        (hd1)   /dev/sdb
        (hd2)   /dev/sdc
        (hd3)   /dev/sdd

    I have been trying to work this out for hours, to no avail. Can anyone point me in the right direction?
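
    Since the Windows disk boots fine when it is the only drive, one hedged workaround (not a fix for whatever is confusing the device map) is to chainload the NTFS partition without asking GRUB legacy to probe the filesystem at all; rootnoverify skips exactly the check that is reporting 0xfd:

        grub> map (hd0) (hd3)
        grub> map (hd3) (hd0)
        grub> rootnoverify (hd3,0)
        grub> makeactive
        grub> chainloader +1
        grub> boot

    If that boots XP, the underlying issue is most likely a stale device.map or a BIOS drive order that changed when the RAID disk was replaced, rather than anything on the Windows disk itself.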

    Read the article

  • Deploying multiple identical copies of a virtual machine for compute tasks

    - by Reid
    I have a compute task which has a large number of library dependencies. I would like to deploy it on some of my company's large Linux clusters, where I do not have root. I could probably track down, compile, and install the right versions of all the libraries, but this looks to be quite tedious and would have to be repeated if I deployed it again somewhere else. On the other hand, it's pretty easy to install on current Ubuntu. This led me to wonder about a virtual machine approach. Could I put together a virtual machine which booted up, ran the computation (with parameters from and results to the host), and then shut down? In other words, I'd like a command like this that I could run on the host:

        $ ./run-vm --ram N --task /path/on/host/foo.sh --results /another/host/dir/

    This would boot the VM, run foo.sh, and put the (relatively small) results of the computation in /another/host/dir/. It's important to start up many instances of the VM simultaneously, both on a single node and multiple nodes of the cluster. So it would be nice if I didn't have to make many copies of the VM virtual disk and metadata. As the task instances are completely independent, the VMs would not need any network support once deployed, or any outside communications beyond reading and writing the host filesystem. Is this possible, and if so, how might I go about doing it? Are there assumptions I've made above which are bogus?
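
    One way this is commonly approached is QEMU/KVM with a shared read-only base image plus a 9p share for parameters and results; a hedged sketch, with all image paths and tags as placeholders, and assuming /dev/kvm is accessible to your user (without it, plain emulation works but is much slower):

        # every instance shares one read-only base image; -snapshot diverts writes to a
        # temporary file, so no per-instance disk copies or metadata are needed
        qemu-system-x86_64 -enable-kvm -m 2048 -snapshot \
            -drive file=/images/compute-base.qcow2,if=virtio \
            -virtfs local,id=job,path=/another/host/dir,mount_tag=job,security_model=none \
            -nographic
        # inside the guest, an init script would mount the share and run whatever it finds there:
        #   mount -t 9p -o trans=virtio job /job && sh /job/foo.sh && poweroff

    A thin run-vm wrapper around that command line gives roughly the interface described above; the main bogus-assumption risk is whether the cluster nodes expose /dev/kvm to unprivileged users at all.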

    Read the article

  • grub refuses to install to raid array

    - by ronno
    I have a software raid 0 setup with dual booting Windows 7 and Ubuntu 12.04. The GRUB bootloader that is already on the hard drive seems to work fine. However, since the latest package update for grub, it refuses to install the new version to the hard disk. grub-install throws the following error:

        /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9.  Check your device.map.
        Auto-detection of a filesystem of /dev/mapper/<raid name>_RAID0p9 failed.
        Try with --recheck.
        If the problem persists please report this together with the output of
        "/usr/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub"
        to <[email protected]>

    update-grub pops the same error every alternate line:

        /usr/sbin/grub-probe: error: cannot find a GRUB drive for /dev/mapper/<raid name>_RAID0p9.  Check your device.map.

    I don't understand what exactly is going on. I'm afraid to reinstall the grub package because it might mess up the boot, which currently works fine. Is it safe to just ignore this?
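
    A hedged diagnostic path, since the currently installed GRUB works: regenerate the device map, let grub-install re-probe, and only reboot once it completes without errors. The device name below is whatever /dev/mapper actually shows for the fakeraid array:

        sudo mv /boot/grub/device.map /boot/grub/device.map.bak
        sudo grub-mkdevicemap
        sudo grub-install --recheck /dev/mapper/<raid name>_RAID0
        sudo update-grub

    If grub-probe still cannot resolve the mapper device, leaving the old, working GRUB in place and holding the grub packages is the safer course than forcing the install.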

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs, and Cron for a backup routine. I have ssh keys between the boxes setup to allow for passwordless login as the root user on the local machine. Cron is set to run the following script (in Root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all \
            <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet, when run by Cron the script fails. The path to sshfs is identical to the value of which sshfs. Here is the email root receives from the Cron Daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>

        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving No such file or directory in this instance. It further seems odd given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script. I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
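
    Cron's stripped-down environment is the usual suspect here: with PATH=/usr/bin:/bin, sshfs itself is found (it is called by full path) but FUSE cannot exec its mount helper, which on FreeBSD typically lives under /usr/local/sbin when installed from ports; that matches the "failed to exec mount program" message. A hedged fix is simply to widen PATH, either in the crontab or at the top of the script:

        # in root's crontab, before the job line:
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

        # or equivalently in the script itself:
        #!/bin/sh
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all \
            <remote user>@<remote domain>.com: /mnt/remote_server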

    Read the article

  • How do I add a second disk to my zfs root pool

    - by ankimal
    I am trying to add a new disk to my zfs root pool. Here is my current config:

        zpool status
          pool: rpool
         state: ONLINE
         scrub: none requested
        config:
                NAME      STATE     READ WRITE CKSUM
                rpool     ONLINE       0     0     0
                  c0d0s0  ONLINE       0     0     0
        errors: No known data errors

        bash-3.00# df -h
        Filesystem                      Size  Used Avail Use% Mounted on
        rpool/ROOT/s10x_u7wos_08        311G   18G  293G   6% /
        swap                             14G  384K   14G   1% /etc/svc/volatile
        /usr/lib/libc/libc_hwcap1.so.1  311G   18G  293G   6% /lib/libc.so.1
        swap                             14G   52K   14G   1% /tmp
        swap                             14G   40K   14G   1% /var/run
        rpool/export                    293G   19K  293G   1% /export
        rpool/export/home               430G  138G  293G  32% /export/home
        rpool                           293G   36K  293G   1% /rpool

        # format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
               0. c0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 63>
                  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
               1. c2d0 <Hitachi- JK1181YAHL0YK-0001-16777216.>
                  /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0

    Disk 1 above is the new disk I need to attach to expand my root pool (give /export/home some extra space). If I try to attach my new disk to the pool:

        # zpool attach -f rpool c0d0s0 c2d0s0
        cannot attach c2d0s0 to c0d0s0: new device must be a single disk

        # uname -a
        SunOS dsol1 5.10 Generic_139556-08 i86pc i386 i86pc Solaris

    Any ideas?
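
    A hedged note on what Solaris 10 allows here: a root pool cannot have a second top-level vdev added, so zpool add will not grow rpool, and zpool attach only creates a mirror (redundancy, not extra space). The attach error is commonly reported when the new disk carries an EFI label; relabelling it with an SMI label and a slice usually lets the attach proceed. A sketch of the mirror route, with the device names from the post (and assuming the new disk is at least as large as the old one):

        # relabel c2d0 with an SMI label first if it has an EFI label (format -e, label, choose SMI)
        # copy the slice layout from the existing root disk:
        prtvtoc /dev/rdsk/c0d0s0 | fmthard -s - /dev/rdsk/c2d0s0
        # attach as a mirror of the existing root slice:
        zpool attach -f rpool c0d0s0 c2d0s0
        # make the second disk bootable once the resilver finishes:
        installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0

    If the goal really is more space for /export/home rather than redundancy, a separate non-root pool on c2d0 with /export/home moved onto it is the simpler option.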

    Read the article

  • How to safely use grub rescue> in Fedora 16? System does not boot anymore

    - by YumYumYum
    When I boot my PC I get this on my Fedora 16 install. I have tried the following, but none of it lets me boot any more. Any help please? I am blocked completely.

        Grub loading.
        Welcome to GRUB!
        error: file not found.
        Entering rescue mode...
        grub rescue>

        grub rescue> ls
        (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)

        grub rescue> ls (hd0,gpt2)/
        ./ ../ lost+found/ memtest86+-4.20 grub2/
        System.map-3.1.0-0.rc3.git0.0.fc16.i686 config 3.1.0.0.rc3.git0.0.fc16.i686
        grub/ vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686 elf-memtest86+-4.20
        initramfs-3.1.0.0.rc3.git0.0.fc16.i686.img initramfs-3.1.0.0.rc4.git0.0.fc16.i686.img
        System.mpa-3.1.0.0.rc3.git0.0.fc16.i686 config-3.1.0.0.rc3.git0.0.fc16.i686
        vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686

        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        error: unknown filesystem.    (or sometimes "error: file not found.")
        grub rescue> normal
        unknown command normal
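
    Judging by the ls output, (hd0,gpt2) is the /boot partition itself (grub2/ and the kernels sit at its top level), so the prefix above points one level too deep. A hedged sequence to try from the rescue prompt; it only gets you to a boot menu, after which GRUB should be reinstalled from the running system:

        grub rescue> set prefix=(hd0,gpt2)/grub2
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        grub rescue> normal

    Once booted, something like grub2-mkconfig -o /boot/grub2/grub.cfg followed by grub2-install /dev/sda (assuming /dev/sda is the boot disk) should make the fix permanent on Fedora 16.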

    Read the article

  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come os shows 6.5G used but I see only 3.6G in files/directories? Running as root on an Amazon Linux AMI (seems like Centos), lots of free memory available, no swapping going on, no apparent file descriptors issue. The only thing I can think of is a log file that was deleted while applications append to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min with very small decreases from time to time). Any explanation? Solution?

        du --max-depth=1 -h /
        1.2G    /usr
        4.0K    /cgroup
        22M     /lib64
        11M     /sbin
        19M     /etc
        52K     /dev
        2.1G    /var
        4.0K    /media
        0       /sys
        4.0K    /selinux
        du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
        du: cannot access `/proc/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/fdinfo/4': No such file or directory
        0       /proc
        18M     /home
        4.0K    /logs
        8.1M    /bin
        16K     /lost+found
        12M     /tmp
        4.0K    /srv
        35M     /boot
        79M     /lib
        56K     /root
        67M     /opt
        4.0K    /local
        4.0K    /mnt
        3.6G    /

        df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  6.5G  1.4G  84% /
        tmpfs           3.7G     0  3.7G   0% /dev/shm

        sysctl fs.file-nr
        fs.file-nr = 864        0       761182
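
    The deleted-but-still-open log file theory is easy to confirm: such space shows up in df but not in du. A quick hedged check (lsof comes from the lsof package):

        # open files whose link count is zero, i.e. deleted but still held open:
        lsof +L1
        # or, more bluntly:
        lsof | grep -i deleted

    Restarting, or asking it to reopen its logs, whichever daemon shows up (commonly via logrotate's postrotate signal) releases the space.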

    Read the article

  • How to correctly set up home directories and permissions on a mounted partition.

    - by user36505
    I'm setting up a Fedora 12 server. I have a root (/) partition, a /boot partition, and then a separate partition (/files) for keeping home directories and shares away from the other partitions. The filesystem mounts fine, and users can be created with home directories in /files/home/[user] just fine. However, when I log in as one of those users, I get an error saying "Cannot chdir in to /files/home/[user]: permission denied". If I create a user under the default /home using the same process, everything works fine. The same goes for when I try to browse a share in Windows: I can see the shares, but cannot access them. The permissions and owners on /files and /files/home are the same as on /home. When the user is created, the user directory owner and permissions are also the same. How can I set the /files partition up so that it can be used for home directories and Samba sharing rather than using the root (/) partition? Thanks.
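
    One common cause on Fedora when the classic permissions already match is SELinux: a home tree outside /home keeps the generic default_t context, which blocks both logins and Samba. A hedged sketch, assuming SELinux is in enforcing mode (semanage comes from policycoreutils-python):

        # check the current labels:
        ls -ldZ /home /files/home
        # tell SELinux that /files/home is equivalent to /home, then relabel:
        semanage fcontext -a -e /home /files/home
        restorecon -Rv /files/home
        # for the Samba side, the home-directory boolean is also needed:
        setsebool -P samba_enable_home_dirs on

    If ls -Z already shows home_root_t/user_home_dir_t under /files/home, SELinux is not the culprit and the mount options of /files (noexec, nosuid, etc.) are the next thing to check.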

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, it may quietly deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you effectively have three corrupted copies. Three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block. To its inner mechanism, a metadata block is no different from a normal data block, because there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this with regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The difference is that most implementations require you to activate it explicitly, whereas certain devices do it by default or by design and you don't know about it. That said, given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool, even when your pool consists of just one disk or just a striped set of disks. However, when your device is doing dedup internally, it may remove that redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your storage admin happy about the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason to spend some extra thought before putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. And when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid-storage-pool setups is the already mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool; but as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies unknowingly. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by SSDs that use dedup-like mechanisms internally, because they really store the data on the SSD instead of using it merely as an accelerating device. In the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancies, by dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
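
    For completeness, the ditto-block mechanism discussed above is just a per-dataset property; a minimal sketch, with pool and dataset names as placeholders:

        # ask ZFS to keep two copies of every block in this dataset, on top of any vdev redundancy:
        zfs set copies=2 tank/important
        zfs get copies tank/important
        # the uberblock/rootblock copies mentioned in the article can be inspected with zdb:
        zdb -uu tank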

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web-root folder and is running under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean that we'd have to buy a multi-domain SSL certificate, but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), and there we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one v-host with one main domain and several alias domains the right approach in this case? Or may I have misunderstood something?
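
    One hedged observation: a multi-domain certificate is only forced on you if all the names share a single vhost. Several SSL vhosts can point at the same web root (and, with the FastCGI wrapper configured for the same user) while each keeping its existing certificate, provided clients support SNI. A bare-Apache sketch, outside of what ISPConfig generates, with certificate paths as placeholders:

        <VirtualHost *:443>
            ServerName my_site.com
            DocumentRoot /var/www/clients/client1/web1/web
            SSLEngine on
            SSLCertificateFile    /etc/ssl/my_site.com.crt
            SSLCertificateKeyFile /etc/ssl/my_site.com.key
        </VirtualHost>
        <VirtualHost *:443>
            ServerName my_site.net
            DocumentRoot /var/www/clients/client1/web1/web
            SSLEngine on
            SSLCertificateFile    /etc/ssl/my_site.net.crt
            SSLCertificateKeyFile /etc/ssl/my_site.net.key
        </VirtualHost>

    Whether ISPConfig will let several sites share one web root and one FastCGI user without hand-editing its templates is a separate question worth checking before committing to either approach.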

    Read the article

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
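
    GNU ddrescue (often packaged as gddrescue) behaves the way the question hopes dd would: a fast first pass that skips trouble spots, then targeted retries driven by a mapfile, which also makes a pause-and-resume workflow possible. A sketch, with the device and paths as placeholders and the filesystem assumed to be the ext3 mentioned above:

        # first pass: copy everything that reads cleanly, don't dwell on bad areas
        ddrescue -n /dev/sdX /mnt/newdisk/failing.img /mnt/newdisk/failing.map
        # second pass: retry only the bad areas, up to 3 times, with direct I/O on the source
        ddrescue -d -r3 /dev/sdX /mnt/newdisk/failing.img /mnt/newdisk/failing.map
        # then check the image and mount it read-only to pull the important directories out first
        e2fsck -f /mnt/newdisk/failing.img
        mount -o loop,ro /mnt/newdisk/failing.img /mnt/rescue

    Because the image lives on the new drive, prioritising directories happens after the risky reads are done, which sidesteps the mount-and-list-first dilemma described above.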

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what is done, I keep getting the following:

        File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the '3063-3e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk.  [404, #0]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy.  [404, #160013]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch'  [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server this is occurring on. The permissions on this repository match every other repository, and disk space is not an issue.
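
    Before digging into the nginx-to-Apache proxying of the COPY request, a hedged sanity check that the repository itself is healthy and has no stale transactions left behind by failed commits (the repository path is a placeholder):

        svnadmin verify /var/svn/p070361
        # list and, if any are present, remove abandoned transactions:
        svnadmin lstxns /var/svn/p070361
        svnadmin rmtxns /var/svn/p070361 <txn-name>

    If the repository verifies cleanly, the fact that file:/// works but the proxied DAV COPY fails points at how the frontend rewrites or forwards the COPY request's Destination header.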

    Read the article

  • centos install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install CentOS 6.2 via KVM / IPMI. I'm going to install cPanel as well, and they recommend this layout:

        /boot  (99MB)
        swap   (2x server RAM)
        /      (remainder)

    In the o/s install profile within the NOC-PS software, it shows as this:

        part /boot --fstype ext2 --size 250
        part pv.01 --size 1 --grow
        volgroup vg pv.01
        logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root
        logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp
        logvol swap --vgname=vg --recommended --name=swap

    By the time the default partition setup was done installing CentOS, I get this:

        [root@server005 ~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/vg-root   532G  907M  504G   1% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1             243M   28M  202M  13% /boot
        /dev/mapper/vg-tmp   1008M   34M  924M   4% /tmp

        [root@server005 ~]# cat /etc/fstab
        #
        # /etc/fstab
        # Created by anaconda on Fri Dec  7 18:47:24 2012
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg-root   /          ext4    discard,noatime   1 1
        UUID=58b31aaf-5072-4fb1-a858-33bc316fa793 /boot ext2 defaults 1 2
        /dev/mapper/vg-tmp    /tmp       ext4    discard,noatime   1 2
        /dev/mapper/vg-swap   swap       swap    defaults          0 0
        tmpfs                 /dev/shm   tmpfs   defaults          0 0
        devpts                /dev/pts   devpts  gid=5,mode=620    0 0
        sysfs                 /sys       sysfs   defaults          0 0
        proc                  /proc      proc    defaults          0 0

    My question is: how should the NOC-PS install profile look to get the recommended cPanel partitioning? The server has 16GB RAM, dual 600GB SAS drives, and will be used for cPanel shared hosting.
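
    If the target really is the cPanel-recommended trio (small /boot, swap at twice RAM, everything else in /), a hedged variant of the same profile might look like the sketch below; the swap size assumes the 16 GB of RAM mentioned, and whether cPanel is happy with / living on LVM is worth confirming separately:

        part /boot --fstype ext2 --size 250
        part pv.01 --size 1 --grow
        volgroup vg pv.01
        logvol swap --vgname=vg --size=32768 --name=swap
        logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root

    The only real changes from the stock profile are dropping the separate /tmp volume and pinning swap to 2x RAM instead of --recommended.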

    Read the article

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd, but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs, which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN. The SAN can handle this workload right now, but our FW traffic is going to grow by at least a factor of 5000% (really) in the coming months, and that will be a pain for the SAN; I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held off from the 'physical' disks in some way, so that the VMs fire off larger, but less frequent, writes. There's no way of avoiding these writes, but there's no need to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime, but that hasn't made much of a dent in the problem. Obviously I'm investigating other filesystems, but I thought I'd throw this out in case others have the same problem in the future. Oh, and I can't just forward these messages to Splunk; our firewall team insists they stay in their original format for diag purposes.
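
    Two hedged knobs that can coalesce those tiny writes without changing the log format: classic sysklogd (and rsyslog in compatibility mode) fsyncs after every line unless the file name is prefixed with "-", and ext3 can be told to hold dirty data longer before committing. Both snippets below use placeholder facilities, paths and devices:

        # /etc/syslog.conf -- the leading "-" disables the per-line sync
        local0.*    -/var/log/remote/firewalls.log

        # /etc/fstab -- let ext3 batch journal commits up to 120s instead of the default 5s
        /dev/mapper/vg-logs  /var/log/remote  ext3  noatime,nodiratime,data=writeback,commit=120  0 2

    The trade-off is crash-consistency: a VM crash can lose up to the last couple of minutes of messages, which may be acceptable given that the same stream is also forwarded to Splunk.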

    Read the article

  • Windows 7 - ignore security when reading external drive

    - by w-
    Hi, my system hard drive on an XP computer has partly failed (random corrupt sectors), so I got a new hard drive and am trying to recover the files. The filesystem is NTFS. The system I'm using for the recovery is Windows 7, and I'm obviously an admin on this box. The last data I'm trying to recover is the stuff in the Documents and Settings folder. I'm using a SATA-to-USB cable thingy so that I can just plug the old drive in as an external hard drive. The problem: in Windows Explorer, when I try to copy the data, I keep getting prompted with security warnings and error messages. It keeps telling me I have to change the owner permissions of the folder and all its contents. If I tell it to change all the file and folder permissions, it takes a really long time because it has to recurse through all the folder contents to change the permissions. Is there a way for me to ignore the file permissions when doing this? Thanks
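
    A hedged command-line alternative to Explorer: robocopy run from an elevated prompt can copy in backup mode, which sidesteps the old drive's ACLs without rewriting ownership on every file first. Drive letter and destination below are examples:

        :: copy data and timestamps only (no ACLs), bypassing permission checks via backup mode
        robocopy "E:\Documents and Settings" "C:\Recovered" /E /B /COPY:DT /R:1 /W:1 /XJ

        :: the slower alternative is the recursive ownership route Explorer is offering:
        takeown /F "E:\Documents and Settings" /R /D Y
        icacls "E:\Documents and Settings" /grant Administrators:F /T

    /XJ skips junction points, which matters under Documents and Settings, and the low retry/wait values keep robocopy from stalling on the drive's bad sectors.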

    Read the article

  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone, I'm a complete rookie when it comes to all things Linux-related, so please treat me as such and assume I know nothing. That being said, my top says this:

        top - 12:08:03 up 11 days, 15:36,  0 users,  load average: 5.47, 5.53, 5.46
        Tasks: 296 total,   2 running, 294 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.3%us,  1.4%sy,  0.0%ni, 71.3%id, 20.6%wa,  0.0%hi,  0.3%si,  0.0%st
        Mem:   8176880k total,  8118236k used,    58644k free,    89312k buffers
        Swap:  1004052k total,        0k used,  1004052k free,  7235652k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1229 root      15  -5     0    0    0 D    1  0.0 199:28.63 kjournald
         2946 root      20   0  1716  676  552 D    1  0.0 145:02.94 syslogd
        14553 root      20   0  2644 1268  876 R    1  0.0   0:00.34 top
        14609 postfix   20   0  7896 1884 1460 D    1  0.0   0:00.02 bounce
        14630 postfix   20   0  7896 1876 1452 R    0  0.0   0:00.00 bounce

    And my hard drives say:

        > df -k
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda3              4925556   4474836    200508  96% /
        /dev/sda5               489992     36090    428602   8% /tmp
        /dev/sda6            377951852 236171160 122581816  66% /var
        none                   4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now... I don't know what is causing the high server load (normally around 1.3). Can anyone give any tips on how to track down the culprit? Many thanks,
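
    The D-state kjournald and syslogd plus the ~20% iowait suggest the load average is disk wait rather than CPU. A few hedged first steps (iostat comes from the sysstat package; iotop only works if this kernel has the needed accounting enabled):

        # per-device utilisation and wait times, sampled every 5 seconds:
        iostat -x 5 3
        # which processes are issuing the I/O, if iotop runs on this kernel:
        iotop -o
        # / is also 96% full (see df above), so finding what grew is worthwhile:
        du -xk --max-depth=2 / | sort -n | tail -20

    Heavy mail/bounce activity plus a syslog daemon that syncs after every line is a common combination behind this pattern, so the mail queue and /var/log are reasonable places to look first.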

    Read the article

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems, via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day, and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshots correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine. So the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?
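
    One hedged thing to check before assuming breakage: the per-snapshot directories are mounted on demand, and some views (notably over NFS/SMB, or a bare ls of .zfs/snapshot) can show them empty until something actually descends into a specific snapshot. The snapdir property controls visibility as well; dataset and snapshot names below are placeholders:

        # is .zfs hidden or visible for this filesystem?
        zfs get snapdir tank/share
        zfs set snapdir=visible tank/share
        # touching a specific snapshot path should trigger its automount:
        ls -la /tank/share/.zfs/snapshot/<snapname>/

    If a direct ls of a named snapshot on the appliance itself is still empty while zfs list -t snapshot shows non-zero referenced sizes, then something really is wrong and a support case is warranted.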

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELINUX? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, and everything is executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option for serving the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELINUX, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives, for financial reasons (cheapest HDD @ €30 times 15 is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
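
    PXELINUX itself does not decompress the initrd; the kernel does, so the format just has to be one the kernel was built to unpack (gzip is essentially always available, xz/LZMA needs the corresponding CONFIG_RD_* option, and xz initramfs images must use the crc32 check). A sketch of repacking an unpacked initramfs tree, with paths as placeholders:

        cd /build/initramfs-root
        # smallest, if the kernel supports xz initramfs:
        find . | cpio -o -H newc | xz -9 --check=crc32 > /tftpboot/ocl/initrd.img.xz
        # safe fallback:
        find . | cpio -o -H newc | gzip -9 > /tftpboot/ocl/initrd.img.gz

    The APPEND initrd= line in pxelinux.cfg only changes by file name. How much the boot actually speeds up depends on how compressible the 450M payload is; raising the TFTP block size or serving the initrd over HTTP (gPXE/iPXE) tends to help at least as much as compression.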

    Read the article
