Search Results

Search found 859 results on 35 pages for 'filesystems'.

Page 19/35 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP and contains generated PHP code, but for some reason the file cannot be read and parsed by PHP. These are the symptoms: the file actually exists on the file system; using Terminal you can navigate to it, view its contents (which are fully intact), etc. PHP's file_exists() reports that the file exists, which is correct, since it does :) Then I include() the file. But when actually parsing the file, PHP just treats it as empty. No fatal error, just no PHP code actually executed. Again, it's as if the file were completely empty (which I assure you it is not)... It is not a permissions issue; permissions are set as needed. Workaround: open the file in Terminal via 'nano' or some other text editor and just save it to disk again. After that (despite no changes to the content) PHP runs it just fine... As a clarification, this happens rarely, but frequently enough to be a problem. And even when it does, there are hundreds of other similar files on the same system that work without a problem... If this were an issue affecting only my own scripts, I would assume there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). The issue has been present on various servers, Debian and Ubuntu, running Zend Community Server. Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE): could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
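
    If the opcode cache (or a concurrent request) really is observing the file mid-write, the usual defence is to never write the cache file in place: write to a temporary file in the same directory and rename it over the target, so readers see either the old or the new version but never a partial one. Below is a minimal Python sketch of that write-temp-then-rename pattern (file names are hypothetical; the same idea applies to the PHP code generator):

        import os
        import tempfile

        def write_atomically(path, data):
            """Write data so that readers never see a partially written file."""
            directory = os.path.dirname(path) or "."
            os.makedirs(directory, exist_ok=True)
            # The temp file must live in the same directory so the final
            # rename stays on one filesystem and is therefore atomic.
            fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".cache-")
            try:
                with os.fdopen(fd, "w") as handle:
                    handle.write(data)
                    handle.flush()
                    os.fsync(handle.fileno())
                os.replace(tmp_path, path)  # atomic on POSIX filesystems
            except BaseException:
                os.unlink(tmp_path)
                raise

        # Hypothetical usage: regenerate the cache file that include() will read.
        write_atomically("cache/generated.php", "<?php return ['ok' => true];\n")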

    Read the article

  • Why are the sizes different, and what do they mean?

    - by Ramy
    I have a 1 TB hard drive that consists of one NTFS partition, which I use to back up my data (no operating system on it). The size of all the data on it is 726 GB, the size on disk is 728 GB, and the used space when I check the properties is 731 GB. There's a 5 GB difference between the size and the used space. Why is that difference there? What's the difference between these sizes (size, size on disk, and used space)? Is there a way to calculate the difference and be sure the HDD is not messing around? Is that normal?
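
    Loosely speaking, "size" is the sum of the files' byte lengths, "size on disk" rounds every file up to whole allocation units (clusters), and "used space" additionally counts filesystem metadata such as the MFT, so used space >= size on disk >= size is expected. A small Python sketch of the rounding, assuming the default 4 KB NTFS cluster size (the real cluster size of the volume may differ):

        import math

        CLUSTER = 4096  # assumed default NTFS allocation unit, in bytes

        def size_on_disk(file_size_bytes):
            """A file occupies whole clusters, so its on-disk size is rounded up."""
            if file_size_bytes == 0:
                return 0
            return math.ceil(file_size_bytes / CLUSTER) * CLUSTER

        # Hypothetical example: three small files and the slack they produce.
        files = [1_500, 4_097, 10_000_000]
        print("size:        ", sum(files))
        print("size on disk:", sum(size_on_disk(f) for f in files))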

    Read the article

  • Can I use Veritas Storage Manager to provide HA storage using server-local storage?

    - by Paul
    I need to provide a high-availability FTP/HTTP file repository. Uploads will happen to one server, but the uploaded file must be immediately visible on all other servers. I can handle the failover of the servers themselves using load balancers, but in the event of failure of one server, the other servers must see the same contents of the repository. Normally I'd use a SAN for this, but in this case the data centre standards do not allow SAN/external storage - all storage will be local to the servers. Can I use Veritas Storage Manager (or any other product) to mirror the contents between servers in this way, or does that require a SAN? I couldn't tell either way from a quick look at the data sheets etc.

    Read the article

  • Is it possible to view the contents of an underlying NFS mount without unmounting the NFS content?

    - by Brent
    I have a shared directory on a server - let's call it /home/shared - which is mounted with content from another server via NFS. When it is unmounted, /home/shared is supposed to be empty; however, running du -x on the directory indicates that it is not. I cannot unmount the NFS content to inspect the mount point, since it is in use by others. Is there any way that I can view/edit the contents of the actual mount point (not the NFS content) while leaving the NFS content mounted for others to use?
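
    One common technique (my suggestion, not from the question) is to bind-mount the filesystem that contains the mount point at a second location; a plain bind mount does not carry the NFS mount with it, so the files hidden underneath become visible there. A minimal sketch, assuming root, that /home lives on the root filesystem, and a hypothetical scratch mount point:

        import subprocess

        # Bind-mount the root filesystem at a scratch location. The bind mount
        # does not include the NFS submount, so the real, underlying
        # /home/shared directory is visible at /mnt/underneath/home/shared.
        subprocess.run(["mkdir", "-p", "/mnt/underneath"], check=True)
        subprocess.run(["mount", "--bind", "/", "/mnt/underneath"], check=True)
        subprocess.run(["ls", "-la", "/mnt/underneath/home/shared"], check=True)
        # Unmount the bind mount when done inspecting:
        subprocess.run(["umount", "/mnt/underneath"], check=True)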

    Read the article

  • Keeping a folder of files in sync across 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them which I need to keep in sync - a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system...? This is for web servers, so we don't want a slow / IO-hungry process. 3 servers... user content could be added to 1 and needs to be moved to the other two.
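
    Since the new content only lands on one box at a time, one workable approach (an assumption on my part, not a true multi-master sync) is to push from the box that received the upload to the other two with rsync --delete, which propagates removals as well as additions; tools such as unison or csync2 are the usual suggestions when changes really can originate on any node. A minimal sketch with hypothetical hostnames and paths:

        import subprocess

        # Run on the server where uploads land; peers and path are hypothetical.
        PEERS = ["web2.example.com", "web3.example.com"]
        CONTENT = "/var/www/user-content/"  # trailing slash: sync the contents

        for peer in PEERS:
            # -a preserves permissions and times, -z compresses over the wire,
            # --delete removes files on the peer that were deleted locally.
            subprocess.run(
                ["rsync", "-az", "--delete", CONTENT, f"{peer}:{CONTENT}"],
                check=True,
            )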

    Read the article

  • How to get the Win32 executable file association back

    - by Ahmed Ezz
    I did something wrong... I right-clicked an .exe file, chose 'Open with', selected 'Choose a program', and picked the wrong program to open it with... and I checked the checkbox labelled [Always use the selected program to open this kind of file]... The problem? All .exe files are now associated with the wrong program I selected, so every .exe file opens with that program... My question: how do I get all .exe files back to working as before? Thanks in advance :)

    Read the article

  • Win 2003 Junction Point to Remote Unix Share

    - by Pogrindis
    Env: Windows Server 2003 with already-established shared folders over the local domain via a Windows DC and AD. A Linux box is being used as a file server, with the folder /files/share readable and writable by all domain users - this is not a problem. I have already transferred the files from the Windows box to /files/share on the Linux box; however, I now want to create a junction point in order to prevent users saving to the Windows box. I have tried the File Server Administration tool on Windows Server 2003, but it will not allow me to create junctions to remote servers. I have also tried mounting the remote filesystem as a drive and proceeding that way, with no joy. Anyone have any suggestions?

    Read the article

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2. W1:

        $ cd ~/test
        $ ls
        sample
        $

    In W2 I run "make" from a parent directory that recreates file test/sample:

        $ make project
        . .
        $ cd test
        $ ls
        sample
        $

    Now, returning to W1:

        $ ls
        $ cd ../test
        $ ls
        sample
        $

    In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the 2nd window until I cd ../test back into the directory whereupon it reappears. I can give more details if required, but just wondering if this is a well-known behavior.

    Read the article

  • Why can I not edit or delete directories inside of this directory?

    - by user43053
    Hello there. At first I thought this was PHP-related, but maybe it isn't. My original post, which may be irrelevant now, is located at the bottom. The problem: I have a directory, /articles/, with 10 subdirectories in it. I have been changing the permissions lately, and now all the permissions of the parent folder, subfolders and files are either chmod 755 or 777. I cannot move, delete or edit files inside this parent directory or its subdirectories with my FTP client. I can, however, edit, delete, create new files and directories and change them with PHP functions without problems. What may the problem be? OLD POST. Ignore everything below this line: If I create a directory with mkdir(), or create a file with fopen(), file_put_contents() or SimpleXMLElement::asXML(), I am unable to access the file with my FTP client or the cPanel File Manager. If I try to delete or edit them, I get errors. Dreamweaver suggests it is a permission problem or a network or filesystem fault (but I've set the permissions with chmod() to 0777, and when I check in cPanel, it confirms chmod 777). I also tried fileowner(), and the function returns int(99), the same owner as the files that I could access with my FTP client. It seems files and directories created with PHP can only be modified or deleted with PHP. I thought this must be a server-setup-related issue, so I'm asking it here. I am on a shared server, and I have no idea about setting up servers. EDIT: It seems the problem is different. I cannot move files with my FTP client into the parent or subdirectories either, so this problem may not be PHP-related; it seems to apply to any directory, regardless of whether it was created by PHP. EDIT 2: The parent directory has chmod 755. Thank you for your time. Kind regards, Marius

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16 GB Class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it; neither seems to solve the problem. I have tried Windows and Linux (Ubuntu), and both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and putting data on it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no programs can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w", and neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking characters (random UTF-8 symbols, I suppose) in folder and file names, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3-4 GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before that) are all gibberish. Does anybody have any clue what might be going on here?

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
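
    For the hard-link idea described above, here is a minimal Python sketch (paths are hypothetical) that mirrors a tree with hard links; as noted, this only behaves like copy-on-write if the tools replace files rather than editing them in place. Filesystems with reflink support (btrfs, or XFS with reflinks) can give true per-file COW copies via cp --reflink instead.

        import os

        def hardlink_copy(src_root, dst_root):
            """Recreate src_root's directory tree under dst_root, hard-linking files.

            Directories are created for real; files share the same inode, so the
            "copy" is nearly free. Editing a linked file in place changes both
            trees - only delete-and-recreate behaves like copy-on-write.
            """
            for dirpath, dirnames, filenames in os.walk(src_root):
                rel = os.path.relpath(dirpath, src_root)
                target_dir = os.path.join(dst_root, rel)
                os.makedirs(target_dir, exist_ok=True)
                for name in filenames:
                    os.link(os.path.join(dirpath, name),
                            os.path.join(target_dir, name))

        # Hypothetical usage for the patch-testing workflow:
        hardlink_copy("project-clean", "project-patch-1234")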

    Read the article

  • Can a hard poweroff / outage / crash corrupt VMware snapshots?

    - by basic6
    Assume a host system is running virtual machines (in VMware Workstation) and all their data is on reliable storage (so no data corruption due to HDD failure). If that host crashes (kernel panic) while a VM is running, files on the virtual filesystem could be corrupted. But there's a snapshot of the VM, taken before the crash. Is it safe to assume that after reverting to the snapshot the VM will be back in a clean state, or is there any way that this snapshot could have been corrupted by the crash?

    Read the article

  • Is there such a thing as a file hosted container which deduplicates data held within?

    - by Mallow
    Background: I have backups of a website which stores all of its data in a single file. This file is several gigs large, and I have many different backups of it. Most of the data within is the same from backup to backup, plus whatever was added or changed. I want to keep all the backups I've made through the years in case I find a horrible surprise of data corruption along the line. However, storing a 10 GB file every month gets expensive. Seeking a solution: I've often thought about different ways of alleviating this problem. One thought that comes up very often is a deduplicating file system which doesn't require its own partitioned volume on a hard drive - something like TrueCrypt's "file-hosted containers", which the TrueCrypt program allows you to mount and dismount as a regular hard drive. Question: Is there a virtual hard drive mounter which uses a file-hosted container with a deduplicating file system? (This question is a little awkward to put into words; if you have a better idea of how to ask it, please feel free to help out.)
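
    Not a pointer to an existing product, but a minimal sketch of the idea such deduplicating stores are built on (tools like BorgBackup, or ZFS with dedup enabled, do this in far more sophisticated form): split the data into chunks, address each chunk by its hash, and store each distinct chunk only once. Chunk size and paths here are hypothetical.

        import hashlib
        import os

        CHUNK_SIZE = 4 * 1024 * 1024   # 4 MB chunks, an arbitrary choice
        STORE = "chunk-store"          # hypothetical directory of unique chunks

        def backup(path):
            """Store a file as a list of chunk hashes; identical chunks are kept once."""
            os.makedirs(STORE, exist_ok=True)
            recipe = []
            with open(path, "rb") as src:
                while chunk := src.read(CHUNK_SIZE):
                    digest = hashlib.sha256(chunk).hexdigest()
                    chunk_path = os.path.join(STORE, digest)
                    if not os.path.exists(chunk_path):  # deduplication happens here
                        with open(chunk_path, "wb") as out:
                            out.write(chunk)
                    recipe.append(digest)
            return recipe  # enough information to reassemble the file later

        # Hypothetical usage: monthly backups of the same large file share most chunks.
        print(len(backup("site-backup-2012-01.dat")))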

    Read the article

  • Apache .shared folder

    - by Kevin
    There are already a bunch of rules in my Apache configuration. What I want to add is the following. There are some shared folders (.shared): /var/www/.shared/, /var/www/.include/.shared/ and /var/www/.include/(.*)/.shared/. Now when someone visits http://domain.com/test.png, it should first execute the existing Apache rules and (when the file/folder was not found) then look in those .shared folders. So suppose I've got this filesystem: /var/www/.shared/dog.png, /var/www/.shared/test.gif and /var/www/domain.com/dog.png. When someone visits http://domain.com/test.gif, it must load test.gif from the .shared folder. When someone visits http://domain.com/dog.png, it must load dog.png from the domain.com folder (because the existing Apache rules are executed first).

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS that I have recently set up and am migrating to from Media Temple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to give multiple people access to different domains on a server. I have almost everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com but have to modify files as root in order to add/change files for the domains. I would like to set up access with the following criteria: each domain can have a user assigned to it (and allow the same user to manage multiple domains - I could even create symlinks in their home folder to their domains); certain users will have shell access and may be chrooted to the domain directory they control; FTP needs to be set up so that it can correctly access the domains and content editors for each domain can upload/download without permission issues. I am relatively new to Linux sysadmin and have searched for a good guide to help solve these issues but haven't been able to find one yet. Thanks in advance for your help.

    Read the article

  • Strange loss of format on pen drive

    - by Kiewic
    Hi, here is a screenshot of my pen drive. The files are impossible to open, and the names have been replaced by strange characters. In Ubuntu it's worse, and in Windows the system crashes. What can I do to recover my information?

    Read the article

  • Ask filesystem if it is mounted

    - by Brian
    How can I see if an (ext3) filesystem is mounted by asking the filesystem directly (i.e. the same way the system does when it boots and sees that it was not unmounted cleanly)? Checking the output of mount is no good because the filesystem might be mounted by a virtual machine. I know I can run fsck and it will abort if the filesystem is mounted, but I don't need to actually check the filesystem.
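
    One way (my suggestion, not from the question) is to read the state field of the ext3 superblock with tune2fs -l or dumpe2fs -h: a cleanly unmounted filesystem reports "clean", while one that is currently mounted read-write (or crashed without unmounting) reports "not clean", which is the same field fsck consults at boot. A minimal Python sketch, assuming root access to the device node:

        import subprocess

        def superblock_state(device):
            """Read the ext2/3/4 superblock state field via tune2fs (needs root)."""
            out = subprocess.run(
                ["tune2fs", "-l", device],
                check=True, capture_output=True, text=True,
            ).stdout
            for line in out.splitlines():
                if line.startswith("Filesystem state:"):
                    return line.split(":", 1)[1].strip()
            raise RuntimeError("no state line found; not an ext filesystem?")

        # "clean" -> unmounted cleanly; "not clean" -> mounted right now,
        # or not shut down cleanly the last time it was mounted.
        print(superblock_state("/dev/sdb1"))  # hypothetical device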

    Read the article

  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix a website. One of its issues is that on one page the images have broken links - the images are not showing, and clicking on an image (i.e. the direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be. The directory where the images are stored has the following permissions: drwxrws--- www "group" 10240 Aug 2008 "image directory name" (I had to hide the names). I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that recently there have been some changes to the server. I'm assuming that there is no fault in the source code, and the permissions are - or used to be - correct (since the site has been working before, and no recent changes to the site itself have been made). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (executable for other users), or b) there is a change in the server settings that I don't know of. Is there anything else I should check?

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://... , to fail. It also broke Redmine. After a chmod 644 it works fine, but that doesn't persist across reboots. So my question: why is this? I see no security risk, because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm; in that case I totally understand why it's not accessible, but I assume I can "fix" it the same way as /dev/urandom.

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 mounted on my server and for whatever reason it started showing this:

        jason@box2:/mnt/raid1/cra$ ls -alh
        ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
        ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
        total 0
        drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
        drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
        ?????????? ? ?    ?      ?                ? 257ad35ee0b12a714530c30dccf9210f
        drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
        ?????????? ? ?    ?      ?                ? e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names, not part of the error. The question marks are odd, and any directory with a question mark throws an I/O error when you attempt to use/delete it. I was unable to umount the drive due to "busy". Rebooting the server "fixed" it, but it was throwing some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both are using the following config:

        mkfs.xfs -l size=128m -d agcount=32
        mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?

    Read the article

  • How to calculate proper inode/block sizes for a Linux filesystem

    - by Donatello
    I have an old ReiserFS filesystem which I'm going to convert to ext3. The problem I have is determining the proper block and inode sizes for this partition. The partition is 44 GB and has to hold 3,000,000+ files with sizes between 1 KB and 10 KB; how can I figure out the best ratio of inodes and block size? The command below is what I tried; it seems OK but makes copying files incredibly slow.

        mkfs.ext3 -t ext3 -c -c -b 1024 -i 4096 -I 128 -v -j -O sparse_super,filetype,has_journal /dev/sdb1

    Thanks.
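
    As a rough worked example (my own arithmetic, not from the question): mke2fs creates one inode per -i bytes-per-inode of partition size, so the ratio just has to leave comfortably more inodes than the expected 3M+ files, and a smaller block size wastes less slack space on 1-10 KB files at some cost in throughput. Note also that the doubled -c -c requests a much slower read-write bad-block scan, which makes filesystem creation itself take a very long time. A small sketch of the arithmetic:

        PARTITION_BYTES = 44 * 1024**3   # 44 GiB
        EXPECTED_FILES  = 3_000_000

        # Inodes created for a given -i (bytes-per-inode) value:
        for bytes_per_inode in (4096, 8192, 16384):
            inodes = PARTITION_BYTES // bytes_per_inode
            enough = "enough" if inodes > EXPECTED_FILES else "NOT enough"
            print(f"-i {bytes_per_inode}: about {inodes:,} inodes ({enough} for 3M+ files)")

        # Average slack is roughly half a block per file, so smaller blocks
        # waste less space across millions of tiny files.
        for block in (1024, 2048, 4096):
            slack = EXPECTED_FILES * block // 2
            print(f"block size {block}: roughly {slack / 1024**3:.1f} GiB of slack")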

    Read the article

  • SSD: NTFS vs EXT4

    - by Joshua
    Whenever I read about SSD usage under Linux, the advice is to disable journaling in ext4 (or use ext2), since the extra journal writes are supposedly bad for your SSD. But in all the articles about SSD tweaks for Windows, I never see any mention that you should disable NTFS journaling, or that you should stick to FAT32. I know ext4's journaling is more advanced, but is it so much more damaging to an SSD than NTFS's? Or are Linux users just a little bit more cautious?

    Read the article

  • How to format a pen drive from FAT32 to ext3 in Windows 7

    - by newb
    I am trying to make a live USB of Ophcrack and tried to boot from a FAT32 pen drive. But after making the live USB and booting from it, Ophcrack didn't work. After searching a while I came to understand that Ophcrack will not work from a FAT32 pen drive and I have to convert it to ext3. But I am having a hard time finding a method or software that can be used to convert a FAT32 pen drive to ext3 in Windows 7. Can you suggest any method or software for this purpose?

    Read the article

  • How to quickly remove hundreds of thousands of files? [closed]

    - by Nick
    Possible Duplicate: Doing an rm -rf on a massive directory tree takes hours I'm running a simulation program on a computing cluster (Scientific Linux) that generates hundreds of thousands of atomic coordinate files. But I'm having a problem deleting the files because rm -rf never completes and neither does find . -name * | xargs r Isn't there a way to just unlink this directory from the directory tree? The storage unit is used by hundreds of other people, so reformatting is not an option. Thanks
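
    You cannot detach a non-empty directory from the tree without removing its entries, so deletion cost inevitably scales with the number of files; one practical trick is to rename the directory out of the way (instantly freeing the path) and delete it in the background, and another commonly cited one is rsync -a --delete from an empty directory. Below is a minimal Python sketch of a straightforward bottom-up delete (the directory name is hypothetical); on Python 3.5+ os.walk uses scandir, so it avoids an extra stat() per entry:

        import os

        def fast_remove_tree(root):
            """Delete a huge tree bottom-up: unlink files, then remove empty dirs."""
            for dirpath, dirnames, filenames in os.walk(root, topdown=False):
                for name in filenames:
                    os.unlink(os.path.join(dirpath, name))
                for name in dirnames:
                    os.rmdir(os.path.join(dirpath, name))
            os.rmdir(root)

        fast_remove_tree("coords-run-42")  # hypothetical simulation output dir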

    Read the article
