Search Results

Search found 11262 results on 451 pages for 'important directories'.


  • how to have files created by CMS have the same ownership as SSH user

    - by Cam
    I am having difficulty on our Ubuntu server. I have an SSH user, and when I create files with this user their ownership is web_user:www-data. The problem is when a file is uploaded or created through a content management system like Joomla. When files are uploaded through Joomla - such as components or modules - the ownership is set to www-data:www-data, which means I then need to chown all new files to web_user:www-data before we can edit them. Is there a way to set, for a directory and its sub-directories, that all new files created have the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.
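
    A sketch of the usual setgid-plus-default-ACL approach, assuming the Joomla root is /var/www/site (a hypothetical path) and web_user is a member of the www-data group. Note that setgid fixes the group, not the owner: uploads stay owned by www-data, but they become group-editable, so the chown step is no longer needed.

        # make every directory setgid so new files inherit the www-data group
        chgrp -R www-data /var/www/site
        find /var/www/site -type d -exec chmod g+s {} +

        # default ACL: the group gets rwX on anything created later
        # (requires the filesystem to be mounted with ACL support)
        find /var/www/site -type d -exec setfacl -d -m g:www-data:rwX {} +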

    Read the article

  • Zsh, directory tab-completion with prefix

    - by nifty
    I have a directory where I put all my projects; let's say it's ~/projects as an example. I've made a command called s which takes one argument and moves me into that directory, e.g. s foo moves me to ~/projects/foo. What I'd like is some sort of completion command which would act like cd, so I could keep hitting Tab to go further into the ~/projects/... directories. Basically, cd with a prefix which is always present. I've looked into zstyle completion in man zshcompsys, but realized I just don't know enough about it to understand it properly.
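
    For reference, a minimal sketch of such a completer, reusing the s command and ~/projects from the question (assumes compinit is already loaded): _files -W completes relative to a fixed prefix, and -/ restricts matches to directories, so repeated Tab presses walk deeper into ~/projects.

        # ~/.zshrc
        s() { cd ~/projects/"$1"; }
        _s() { _files -W ~/projects -/; }   # complete dirs under ~/projects
        compdef _s s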

    Read the article

  • Linux: Can I link multiple destinations via softlinks?

    - by kds1398
    Attempting to end up with something similar to this:

        $ ls -l
        lrwxrwxrwx 1 user group 4 Jun 28 2010 foo -> /home/bar
        lrwxrwxrwx 1 user group 4 Jun 29 2010 foo -> /etc/bar

    The intention is to be able to move a file to foo and have it go to both destination directories for now. The goal is to eventually unlink the /home/bar link after confirming there are no issues with moving the files to /etc/bar. I am restricted in that I am unable to change or add to the process that moves the files.
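
    A symlink cannot have two targets, so a listing like the one above is unachievable directly. One workaround sketch, given that the moving process can't be changed: leave foo pointing at /home/bar and mirror arrivals into /etc/bar with an inotify watch (the loop is an assumption and needs inotify-tools).

        ln -s /home/bar foo    # the real destination stays /home/bar

        # copy anything that lands in /home/bar into /etc/bar as well
        inotifywait -m -e create -e moved_to --format '%f' /home/bar |
        while read -r f; do
            cp -a "/home/bar/$f" /etc/bar/
        done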

    Read the article

  • DOSBox 8.3 filenames disagree with Windows 7

    - by wes
    When I compare a dir in DOSBox 0.74 against a dir from the Windows 7 command prompt, the 8.3 filenames differ. Long format (both drives and directories):

        2012-07-30_abcdefg-abcde
        2012-07-30_abcdefg-abcde.7z
        2012-08-06_abcdefg-abcde
        2012-08-06_abcdefg-abcde.7z
        2012-10-22_IIS-LogFiles
        2012-10-22_IIS-LogFiles.zip
        2012-11-14_selective-abcde

    DOSBox 0.74 (dir):

        2012-0~1         2012-0~3         2012-1~1        2012-1~3
        2012-0~2     7Z  2012-0~4     7Z  2012-1~2    ZIP

    Windows 7 (dir /x):

        2012-0~1  2012-0~1.7Z  2012-0~2  2012-0~2.7Z  2012-1~1  2012-1~1.ZIP  2012-1~2

    So, for instance, if I pass a path into DOSBox, sometimes the short names don't match and whatever I'm trying to automate fails. Why the difference, and can I change any settings to help DOSBox generate the correct short names?
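
    Worth knowing: DOSBox generates its own 8.3 aliases for mounted directories, while dir /x shows the short names NTFS recorded at file-creation time, so the two sets are computed independently and need not agree. On the Windows side the stored short names can at least be inspected, and even pinned per file (a sketch; run elevated, and the name below is only an example):

        REM is 8.3 name generation enabled on the volume?
        fsutil 8dot3name query C:

        REM pin an explicit short name on one file
        fsutil file setshortname "2012-07-30_abcdefg-abcde.7z" 2012-0~1.7Z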

    Read the article

  • How do I fix my recycle bin that doesn't show the deleted items?

    - by Jasper
    The icon shows that it's full, and I just deleted some files, so I know it is. But when I open it, it doesn't show the deleted items; the only options I have are to restore them or empty the bin. Help me out - it's VERY important for me to fix this problem, since I use this workstation for my studio work. P.S. It's a Windows 7 Ultimate (x86) machine.
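
    A commonly suggested repair is to delete the hidden $Recycle.Bin store so Windows rebuilds its metadata on next use. This discards whatever is currently in the bin, so it is only a sketch, and only if nothing needs restoring (run from an elevated command prompt, once per drive):

        rd /s /q C:\$Recycle.Bin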

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time increased significantly - it takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of available storage, so I removed unnecessary files (got to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts, I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance - faster booting (my system has a limited time to boot up)? It would be great if this optimization could be done remotely - I have no physical access to the boards.
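
    The slow part is typically the JFFS2 scan of every erase block at mount time. If the kernel is built with CONFIG_JFFS2_SUMMARY, an image post-processed with sumtool (from mtd-utils) carries summary nodes that make that scan nearly instant; a sketch, with the erase-block size as an assumption:

        # add summary nodes to an existing image (128 KiB erase blocks assumed)
        sumtool -i rootfs.jffs2 -o rootfs-sum.jffs2 -e 128KiB
        # then flash the summarized image under a CONFIG_JFFS2_SUMMARY=y kernel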

    Read the article

  • Running telnet standalone - possible?

    - by Lanz
    So, this is what I want to do: there is a local non-superuser account, and it can upload files into /tmp. Using this account, I download a telnet server package equivalent to what is already installed. I modify some settings, pointing all file directories into /tmp, then compile and run it as a standalone telnet server. Is this possible? If not, what makes it impossible? Or, as a non-privileged user, would there be any other way to enable telnet?
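
    In principle yes: ports above 1024 need no root, so an unprivileged daemon on a high port can listen. The real obstacle is that a proper login(1) needs root, so you'd be limited to sessions running as your own user. A sketch with BusyBox's bundled telnetd (port and paths are assumptions):

        # copy or build a static busybox under /tmp, then:
        /tmp/busybox telnetd -F -p 2323 -l /bin/sh
        # -F stay in foreground, -p unprivileged port, -l program to run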

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error that I have too many open files from the process. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add
        # Need to add parent directories of each file first or adding files fails

    The problem with this solution is that, from the documentation, there doesn't seem to be a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
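
    One workaround that sidesteps the open-files limit without needing a non-recursive add: feed the file list to git annex add in fixed-size batches through xargs (a sketch; the batch size is arbitrary):

        cd /
        # add files 1000 at a time instead of all at once
        find . -type f -print0 | xargs -0 -n 1000 git annex add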

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, automatically in response to the detected upload of an mp3. Probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/ftp the mp3 files, convert them to ogg, and ftp the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS which I could upload without needing root access, but I've given up asking that one - though I'm always open to suggestions!
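
    For the conversion step itself, the loop is tiny; a sketch of what would run on whichever box ends up doing the work (paths are assumptions, and SoX must be built with MP3 and Vorbis support):

        # convert every mp3 under the upload directory to ogg, same names
        find /path/to/uploads -name '*.mp3' | while IFS= read -r f; do
            sox "$f" "${f%.mp3}.ogg"
        done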

    Read the article

  • Protect individual sites on Ubuntu/Apache server

    - by Christoffer
    Hi, I need to set up an Apache server configuration for some client sites that run on the same Ubuntu 9.10 machine. All sites are allowed to run PHP, Python and Ruby on Rails. I do not control the source code of these sites, so I need to set up a filter in order to prevent one user from reaching files on another user's account. If I run a script to list files in "/" from one account, I can browse some files and directories in the actual server root. I want to set the root for each account to /var/usersite.com/www/ instead, so that listing files in "/" shows the files in the client's root. How is this most easily configured? Cheers! /Christoffer
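
    For the PHP side, a per-vhost open_basedir is the usual containment; a sketch for mod_php, reusing the path from the question (this confines PHP only - the Python and Rails processes would additionally need suexec or per-user accounts):

        # inside the <VirtualHost> for usersite.com
        php_admin_value open_basedir /var/usersite.com/www/:/tmp/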

    Read the article

  • Preventing Windows from automatically removing broken desktop shortcuts

    - by hkBattousai
    I have two external hard drives which I'm using for archiving purposes; because of that, they are turned off most of the time. I have some shortcuts on the desktop to directories on these external hard disks. Windows occasionally removes these desktop shortcuts. It happens when the hard disks are turned off: Windows apparently decides the shortcuts are broken and no longer needed, and tries to clean up the desktop. How do I prevent this behavior? (OS version: Windows 7 Ultimate x64 SP1)

    Read the article

  • CDPATH in Windows command prompt?

    - by barlop
    The accepted answer to the question "Fast Ways of Cd'ing on *nix?" mentions bash having CDPATH. Is there an equivalent in Windows? So that from any directory, e.g. C:\windows, I could do cd compbar* and it'd take me to m:\a\b\c\d\e\compbar. What if there are many compbar directories? Well, CDPATH is one solution: I suppose you order them, and it'd search through the CDPATH environment variable and choose the first match. I'd like that for Windows.
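
    cmd.exe has no CDPATH equivalent, but a doskey macro gets part of the way: a short jump command, though without cd-style Tab completion (a sketch; the target path is the question's example):

        doskey s=cd /d m:\a\b\c\d\e\$1
        REM usage from anywhere:  s compbar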

    Read the article

  • How to troubleshoot git "unable to set permission" on adding project?

    - by Brian Knoblauch
    Finally decided to move from Subversion to Git, but am having problems with my first project. Did my "git init" and am trying to do a "git add" of my project, but it's failing with:

        $ git add .
        error: unable to set permission to '.git/objects/6b/6018c1c76dc5ec159d5cb65bab72fa300d52f6'
        error: build.xml: failed to insert into database
        error: unable to index file build.xml
        fatal: adding files failed

    I have full permissions to the directories in question. The only odd thing about it is that the drive is mounted (and mapped) from a server over CIFS. No problems creating/editing files or permissions with other applications. The host is Windows Vista x64 and I'm running git under Cygwin; the server is Windows 2008. Any other ideas on what I might be doing wrong?
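
    Under Cygwin the usual culprit is POSIX-ACL emulation failing on the CIFS share, so every chmod git attempts is refused. A common workaround (a sketch; the drive letter is hypothetical) is to mount the share without ACL emulation and tell git to stop tracking modes:

        # /etc/fstab (Cygwin): remount the mapped drive with noacl
        # Z: /z ntfs binary,noacl 0 0

        # and in the repository:
        git config core.fileMode false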

    Read the article

  • rsync : Read input from a file and sync accordingly

    - by Dheeraj
    I have a text file which contains the list of files and directories that I want to copy (one per line). Now I want rsync to take this input from my text file and sync it to the destination that I provide. I've tried playing around with the "--include-from=FILE" and "--files-from=FILE" options of rsync, but it is just not working. I also tried prefixing "+" to each line in my file, but still it is not working. I have tried coming up with various filter PATTERNs as outlined in the rsync man page, but it is not working. Could someone provide me the correct syntax for this use case? I've tried the above on Fedora 15, RHEL 6.2 and Ubuntu 10.04 and none worked, so I am definitely missing something. Many thanks.
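
    For reference, the option is --files-from, and it expects plain paths, one per line, with no "+" prefixes; entries are read relative to the source argument, and --relative is implied. A sketch with hypothetical paths:

        # list.txt holds paths relative to /data, one per line
        rsync -av --files-from=/tmp/list.txt /data/ user@remote:/backup/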

    Read the article

  • Where does gcc keep its built-in include directory paths

    - by Charles
    GCC has built-in include directories for certain standard headers; I just need to know where this list is. My newly compiled gcc will not compile my little test C++ program because it cannot find the standard headers. I think it fails because of some configure options I used to make my file system more organized: I set the bindir and libdir, which I think might have screwed up the built-in include paths for some reason.

    Program (dummy.c):

        #include <iostream>
        void main(){}

    Command:

        g++ dummy.c

    Error:

        dummy.c:1:20: fatal error: iostream: No such file or directory
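
    The list isn't stored in a readable config file; it's baked in at configure time, but the compiler will print it on request. A standard trick:

        # dump the active include search directories
        echo | g++ -x c++ -E -v - 2>&1 |
            sed -n '/search starts here/,/End of search list/p'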

    Read the article

  • Best practice to create an ftp administrator account on vsftpd

    - by jtd
    Background: my manager would like me to create an administration account for our FTP server. When logged in via FTP, it should instantly display all of the home directories of the users, and be able to modify any directory or file in any way. What would be the best way to go about this? I planned on chrooting this FTP admin to /home, but I don't know how to properly set up the permissions. Maybe make a group called ftp_admins and chgrp the /home folder? But then wouldn't that affect the users accessing their own folders? Any help is appreciated.
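
    One way to sketch this is vsftpd's per-user configuration: give only the admin account a local_root of /home and leave everyone else chrooted to their own homes (the account name below is hypothetical). The account still needs filesystem rights over the user directories, e.g. group-write via an ftp_admins group, since the chroot alone grants nothing:

        # /etc/vsftpd.conf
        user_config_dir=/etc/vsftpd/user_conf

        # /etc/vsftpd/user_conf/ftpadmin
        local_root=/home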

    Read the article

  • MySQL database int overflow and can't log in

    - by Ryan Smith
    I have a MySQL database on my server, and I'm pretty sure it's an int overflow on one table with an auto_increment field that's crashing it. I can delete the table; it's not very important. But I can't get into the server. Is there any way to delete that database from the file system, or without logging into MySQL? HELP! THE WORLD IS ENDING!
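
    If the table is MyISAM, it can be removed at the file level with the server stopped - an InnoDB table lives in the shared tablespace and can't be - so this is only a sketch, with typical default paths and made-up names:

        /etc/init.d/mysql stop
        # a MyISAM table is exactly these three files under the datadir;
        # removing the whole mydb directory would drop the entire database
        rm /var/lib/mysql/mydb/bigtable.{frm,MYD,MYI}
        /etc/init.d/mysql start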

    Read the article

  • How do I change the NGINX user?

    - by danielfaraday
    I have a PHP script that creates a directory and outputs an image to it. This was working just fine under Apache, but we recently decided to switch to NGINX to make better use of our limited RAM. I'm using the PHP mkdir() command to create the directory:

        mkdir(dirname($path['image']['server']), 0755, true);

    After the switch to NGINX, I'm getting the following warning:

        Warning: mkdir(): Permission denied in ...

    I've already checked all the permissions of the parent directories, so I've determined that I probably need to change the NGINX or PHP-FPM user, but I'm not sure how to do that (I never had to specify user permissions for Apache). I can't seem to find much information on this. Any help would be great! (Note: besides this little hang-up, the switch to NGINX has been pretty seamless; I'm using it for the first time and it literally only took about 10 minutes to get up and running. Now I'm just ironing out the kinks.)
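
    Since PHP, not nginx, executes mkdir(), the identity that usually matters is the PHP-FPM pool user; nginx's own worker user is set separately. A sketch of both knobs (file locations vary by distro):

        # /etc/nginx/nginx.conf
        user www-data;

        # /etc/php5/fpm/pool.d/www.conf  (the process actually running mkdir)
        user = www-data
        group = www-data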

    Read the article

  • Need help with an .htaccess URL redirector

    - by AlexV
    I'm trying to do another SEO system with PHP/.htaccess... I need the following rules to apply:

    1. Must catch all URLs that do not end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch).
    2. Must catch all URLs that end with .php* (.php, .php4...); these are the exceptions to rule #1.
    3. All rules must only apply in some directories and not in their subdirectories (/ and /framework so far).
    4. The .htaccess must send the typed URL in a GET value so I can work with it in PHP.

    Can any mod_rewrite wizard help me?
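
    A sketch of rules along those lines (the handler name index.php and the GET key url are assumptions; per-directory scoping would be done by copying the .htaccess into / and /framework and disabling rewriting in their subdirectories):

        RewriteEngine On
        # don't loop on the handler itself
        RewriteCond %{REQUEST_URI} !index\.php
        # no extension at the end ...
        RewriteCond %{REQUEST_URI} !\.[A-Za-z0-9]+$ [OR]
        # ... or a .php/.php4-style extension
        RewriteCond %{REQUEST_URI} \.php[0-9]?$
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]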

    Read the article

  • Is it possible to avoid umask 0002?

    - by Anatoly
    Is it possible to give one user the automatic ability to modify files (folders and everything, recursively) created by another user within one specified folder (let's say "shared"), on the basis of both users belonging to the same secondary group (let's say "coworkers")? I've tried to achieve this using ACLs, but with no success; it seems that the umask wipes out the corresponding bits. I'm on FreeBSD 8.1 (but this problem seems relevant to other *NIX systems too). Googling the problem (people often refer to it as the "umask per directory" problem) gives this most relevant link: http://old.nabble.com/ACLs,-umask-and-shared-directories-td27820947.html which is not very promising... I want to ask the ServerFault community - is it possible at all?
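
    For reference, the combination that usually works where POSIX.1e ACLs are available: setgid keeps the group, and a default ACL supplies group write regardless of each user's umask, since files created under a default ACL are masked by that ACL rather than by the umask (a sketch; the path is the question's example name):

        chgrp coworkers /path/to/shared
        chmod 2770 /path/to/shared        # setgid: new files inherit the group
        setfacl -d -m g:coworkers:rwx /path/to/shared   # default ACL beats umask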

    Read the article

  • Multiple public keys for one user

    - by Russell
    This question is similar to "SSH public key authentication - can one public key be used for multiple users?" but the other way around. I'm experimenting with ssh, so any ssh server would work for your answers. Can I have multiple public keys linked to the same user? What are the benefits of doing so? Also, can different home directories be set for the different keys (all of which link to the same user)? Please let me know if I'm unclear. Thanks.
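
    Yes - authorized_keys accepts any number of keys, one per line, and per-key options let each behave differently even though all land in the same account. The home directory is a property of the account, not of the key, but a forced command can approximate per-key working directories (a sketch; the keys and comments are placeholders):

        # ~/.ssh/authorized_keys - two keys for the same user
        ssh-rsa AAAA...keyone... laptop
        command="cd /srv/projects && exec $SHELL" ssh-rsa AAAA...keytwo... work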

    Read the article

  • How to bulk-rename files with invalid encoding or bulk-replace invalid encoded characters?

    - by qdoe
    I have a Debian server and I'm hosting music for an internet radio station. I have trouble with file names and paths because a lot of files have invalid encoding, for example:

        ./music/Bändname - Some Title - additional Info/B?ndname - 07 - This Title Is Cörtain, The EncÃ?ding Not.mp3

    Ideally, I would like to remove everything that is not a letter A-Z/a-z, a number 0-9, a dash - or an underscore _. The result should look something like this:

        ./music/Bndname-SomeTitle-additionalInfo/Bndname-07-ThisTitleIsCrtain,TheEnc?dingNot.mp3

    How can I achieve this for a batch of many files and directories? I've seen this similar question: "bulk rename (or correctly display) files with special characters". But that only fixes the encoding; I would prefer a stricter approach, as described above.
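
    A sketch of a strict whitelist rename: walk depth-first so entries are renamed before their parent directories, and delete every byte outside the allowed set. tr -cd keeps only the listed characters, and mv -n refuses to overwrite should two names collapse to the same result:

        find ./music -depth -mindepth 1 -print0 |
        while IFS= read -r -d '' p; do
            d=$(dirname "$p"); b=$(basename "$p")
            nb=$(printf '%s' "$b" | LC_ALL=C tr -cd 'A-Za-z0-9._-')
            [ "$nb" = "$b" ] || mv -n "$p" "$d/$nb"
        done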

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
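
    Short of reaching into the storage driver, docker export streams a container's entire filesystem as a tar archive, which can be listed or unpacked for read-only browsing - not a live mount, but supported and driver-agnostic (a sketch; the container name is a placeholder):

        docker export mycontainer | tar -tvf -      # just list the contents
        docker export -o rootfs.tar mycontainer     # or dump a browsable copy
        mkdir rootfs && tar -xf rootfs.tar -C rootfs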

    Read the article

  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11 GB) in a Subversion repository that I'm migrating to Alfresco with rsync; Lucene indexes new files as they hit the file system. I'm using a dav mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way I could logically separate the rsync into identically-sized batches (say 500 MB each) so I could schedule them in cron. At the moment, I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
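
    A sketch of one way to batch it: list every file with its size, let awk cut the list into ~500 MB chunks, and hand each chunk to rsync with --files-from; each chunk can then run on its own cron schedule (paths are placeholders, and GNU find is assumed):

        cd /data/svn-export
        find . -type f -printf '%s %p\n' |
        awk 'BEGIN { limit = 500*1024*1024 }
             { size += $1
               file = sprintf("/tmp/batch.%03d", batch)
               sub(/^[0-9]+ /, ""); print > file
               if (size > limit) { size = 0; batch++; close(file) } }'
        for b in /tmp/batch.*; do
            rsync -a --files-from="$b" . /mnt/alfresco-dav/
        done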

    Read the article
