Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • Zsh, directory tab-completion with prefix

    - by nifty
    I have a directory where I put all my projects, let's say ~/projects as an example. I've made a command called s which takes one argument and moves me into that directory; e.g. s foo moves me to ~/projects/foo. What I'd like is a completion function of some sort that acts like cd's, so I could keep hitting tab to go further into the ~/projects/... directories. Basically, cd with a prefix which is always present. I've looked into zstyle completion in man zshcompsys, but realized I just don't know enough about it to understand it properly.
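
    One possible approach, offered as a sketch rather than the asker's actual setup: register a completion function for s that offers directories relative to ~/projects (the command name s and the base path come from the question; everything else is an assumption). _files -W completes paths relative to the given prefix, and -/ restricts it to directories, so nested completion comes for free:

        # requires the completion system: autoload -Uz compinit && compinit
        _s() {
            _files -W ~/projects -/
        }
        compdef _s s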

    Read the article

  • One samba directory is very slow

    - by Tim Rosser
    We have a RHEL server running as a samba server for our Windows network which has been running fine for ages. All of a sudden this morning one specific folder became really slow and sometimes inaccessible. It's the /home/ directory (containing all of the user specific stuff, like their windows desktops, documents etc). It's not only really slow over the network, but when I try to use ls to view the directory it just hangs. I'm getting loads of messages like the following in /var/log/messages:

        Mar 20 09:53:32 zeus smbd[32378]: [2012/03/20 09:53:32, 0] smbd/service.c:set_current_service(184)
        Mar 20 09:53:32 zeus smbd[32378]: chdir (/opt/shares/home/tim.rosser) failed

    Read the article

  • Deny directory browsing in a Proftpd / Ubuntu Installation

    - by skylarking
    I used this guide to set up a ProFTPD installation on an Ubuntu 8.04 server. It works well, but the generic user (userftp) can run ls and is able to change to any directory and browse freely on the server, from the root / upwards. I added the line /bin/false to /etc/shells in the hope that it would prevent this. I really only want the userftp account to be able to upload to the generic /home/FTP-Shared directory and to do nothing else on the server. How is this accomplished? This is a headless Ubuntu box and I am using the CLI only, no GUI admin tools.
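
    A common way to do this with ProFTPD, given as a sketch under stated assumptions (the group name "ftpusers" is made up, and the Debian/Ubuntu proftpd.conf layout is assumed): jail the user into its home directory with DefaultRoot and keep the real shell disabled instead of whitelisting /bin/false in /etc/shells.

        # /etc/proftpd/proftpd.conf (sketch)
        # chroot members of the "ftpusers" group into their home directory,
        # and allow logins even though the account's shell is /bin/false
        DefaultRoot ~ ftpusers
        RequireValidShell off

        # shell side: home = the upload area, no real shell, member of ftpusers
        usermod -d /home/FTP-Shared -s /bin/false -aG ftpusers userftp

    With that in place, /bin/false no longer needs to be listed in /etc/shells, and userftp cannot cd above /home/FTP-Shared over FTP.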

    Read the article

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like:

        export PATH=/usr/local/mysql/bin:$PATH

    If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get:

        $ echo $PATH
        /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:....

    I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
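
    A minimal sketch of such a function (the name add_to_path comes from the question; the prepend behaviour and the case test are assumptions about the desired semantics):

        # Prepend a directory to PATH only if it is not already present.
        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;              # already there, do nothing
                *) PATH="$1:$PATH" ;;     # otherwise prepend it
            esac
            export PATH
        }

        # usage, e.g. in .bashrc:
        add_to_path /usr/local/mysql/bin

    Wrapping $PATH in colons makes the substring test exact, so /usr/local/mysql/bin won't be mistaken for /usr/local/mysql/bin2.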

    Read the article

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configurations are as follows:

        <Directory "/var/www/folder">
            <Files "index.php">
                order deny,allow
                Allow from all
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>

    I visit my server at 127.0.0.1/folder but I can GET and POST the file just like normal. I've also tried reversing the order, order allow,deny, require, limitexcept and limit. How can I only allow POST requests to be processed by one file in a folder?
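
    One likely explanation, offered as an assumption: with Order deny,allow the Allow directives are evaluated last, so the unconditional Allow from all wins over the Deny inside LimitExcept for every method. A sketch that only admits POST under Apache 2.2 semantics (the same Directory/Files scope as above):

        <Directory "/var/www/folder">
            <Files "index.php">
                Order allow,deny
                <Limit POST>
                    Allow from all
                </Limit>
            </Files>
        </Directory>

    With Order allow,deny, any request that matches no Allow directive is denied by default, so GET and HEAD requests for index.php are rejected while POST is allowed.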

    Read the article

  • Windows 7 logon script net use fails

    - by Bryan
    Our network PCs currently consist of Windows XP Professional machines on a mixed 2008/2003 domain, with the exception of one machine: a new Windows 7 PC we have bought for testing before we deploy the operating system. We have discovered a problem with our logon script, which automatically maps network drives for our users. The logon scripts are assigned via user GPOs, but the script itself is just a .cmd file using net use. The permissions are perfectly fine, as the same user can log on to a Windows XP machine and get their drives mapped without problem, but this one drive mapping constantly fails. This is repeatable using the net use command and fails every time - it actually prompts the user for a username and password when executed interactively, yet if we enter \\server\share from a Run dialog, the contents of the network share appear and are accessible without any further authentication. The Windows 7 PC, just like the XP systems, is a domain member, and the account being used is a domain account which does have access to the share (as stated, it works fine on XP). I fail to understand what is happening here, as other shares on the server get mapped on the Windows 7 system. More info: the effective permissions of the share in question only grant the user 'list' permission on the root directory; the share permissions are 'everyone, full control'. I've created a new share with the same permissions just to test if it was down to the 'list' permissions on the root directory, but the Windows 7 machine maps this one fine.

    Read the article

  • Setting per-directory umask using ACLs

    - by Yarin
    We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following result:

    1. All sub-directories created underneath foo will have 775 permissions
    2. All files created underneath foo and subdirectories will have 664 permissions
    3. 1 and 2 will happen for files/dirs created by all users, including root, and all daemons.

    Assuming that ACL is enabled on our partition, this is the command we've come up with:

        setfacl -R -d -m mask:002 foo

    This seems to be working - I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
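
    For comparison, a sketch of the same goal expressed with explicit default ACL entries rather than a mask entry (the directory name foo is from the question; the particular entry values are an assumption about what yields 775/664):

        # Default (inherited) ACL entries: owner and group rwx, others r-x.
        # New sub-directories then come out 775; new files, which applications
        # normally request with mode 666, come out 664.
        setfacl -R -d -m u::rwx,g::rwx,o::rx foo

    The default ACL is applied at creation time regardless of which user or daemon creates the file, which is what point 3 above asks for; it does not retroactively change anything that already exists under foo.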

    Read the article

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like to have our internal users authenticate via LDAP so we don't have to create and manage a set of user accounts separate from the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to avoid having our externally hosted machines out in the cloud access our internal network directly. If we can replicate only a subset of the user accounts, and in one direction only, then a compromise of that external server isn't necessarily a critical security breach into our internal network.
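
    If the internal directory happens to be OpenLDAP, one hedged option is a read-only consumer on the EC2 host that uses syncrepl to pull just the user-accounts subtree. The DNs, hostname and credentials below are placeholders, not values from the question, and the directive belongs inside the consumer's database section:

        # slapd.conf on the EC2 consumer (sketch, all values hypothetical)
        syncrepl rid=001
            provider=ldaps://ldap.internal.example.com
            type=refreshOnly
            interval=00:01:00:00
            searchbase="ou=Users,dc=example,dc=com"
            scope=sub
            bindmethod=simple
            binddn="cn=replica,dc=example,dc=com"
            credentials=secret

    Jabber then authenticates against the local consumer, and replication flows one way, from the office out to EC2.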

    Read the article

  • Changing Vim Home Directory

    - by mcaaltuntas
    Previously I'd been using Vim without any problems, but a few months ago our company made some network and security updates. Since then, whenever I plug a network cable into my laptop, it maps a network drive "H" with my company name, and when I open Vim it doesn't load plugins and other things that are in my Vim home directory. I have found the reason but I don't know how to solve it: these network updates changed our HOME directory. When I run echo $HOME it prints H. Before plugging in a network cable my home was C:\Users\blabla. How can I change my HOME variable? When I check the environment I get:

        C:\Windows\System32>set | findstr /R "^HOME"
        HOMEDRIVE=H:
        HOMEPATH=\
        HOMESHARE=\\companyname\blabla\username$
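
    One hedged option: Vim on Windows checks an explicit HOME environment variable first and only falls back to HOMEDRIVE/HOMEPATH, so defining HOME at the user level should pin it regardless of the mapped drive, unless company policy overwrites HOME itself. The path below is the example path from the question, not a verified one:

        :: run once in cmd, then open a new console before restarting Vim
        setx HOME "C:\Users\blabla"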

    Read the article

  • /usr/bin/mandb: can't search directory

    - by tfe
    Today I got this email from my Debian server:

        test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        /etc/cron.daily/man-db:
        /usr/bin/mandb: can't search directory /usr/local/share/man/man1/: Permission denied

    Can somebody tell me what this means? I didn't change any permissions:

        drw---S--- 2 root staff 4096 Jun 28 14:05 man1

    P.S. The directory /usr/local/share/man/man1 contains one file, csf.1. Yesterday (Jun 28) CSF/LFT was updated automatically. How do I fix this problem?
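
    For what it's worth, the listing above shows man1 with no read or execute bits for anyone (only a setgid bit), which is exactly what mandb is complaining about since it does not run as root. A hedged fix is to restore the usual mode for a directory under /usr/local on Debian (2775, root:staff, is assumed to be the convention here):

        chmod 2775 /usr/local/share/man/man1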

    Read the article

  • Unable to list contents/remove directory (linux ext3)

    - by RedKrieg
    System is CentOS 5 x86_64, completely up to date. I've got a folder that can't be listed (ls just hangs, eating memory until it is killed). The directory size is nearly 500k:

        root@server [/home/user/public_html/domain.com/wp-content/uploads/2010/03]# stat .
          File: `.'
          Size: 458752    Blocks: 904    IO Block: 4096    directory
        Device: 812h/2066d    Inode: 44499071    Links: 2
        Access: (0755/drwxr-xr-x)  Uid: ( 3292/ user)   Gid: ( 3287/ user)
        Access: 2012-06-29 17:31:47.000000000 -0400
        Modify: 2012-10-23 14:41:58.000000000 -0400
        Change: 2012-10-23 14:41:58.000000000 -0400

    I can see the file names if I use ls -1f, but it just repeats the same 48 files ad infinitum, all of which have non-ascii characters somewhere in the file name:

        La-critic\363-al-servicio-la-privacidad-300x160.jpg

    When I try to access the files (say to copy them or remove them) I get messages like the following:

        lstat("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Sebast\355an-Pi\361era-el-balc\363n-150x120.jpg", 0x7fff364c52c0) = -1 ENOENT (No such file or directory)

    I tried altering the code found on this man page and modified the code to call unlink for each file. I get the same ENOENT error from the unlink call:

        unlink("/home/user/public_html/domain.com/wp-content/uploads/2010/03/Marca-naci\363n-Madrid-150x120.jpg") = -1 ENOENT (No such file or directory)

    I also straced a "touch", grabbed the syscalls it makes and replicated them, then tried to unlink the resulting file by name. This works fine, but the folder still contains an entry by the same name after the operation completes, and the program runs for an arbitrarily long time (strace output ended up at 20GB after 5 minutes and I stopped the process).

    I'm stumped on this one; I'd really prefer not to have to take this production machine (hundreds of customers) offline to fsck the filesystem, but I'm leaning toward that being the only option at this point. If anyone's had success using other methods for removing files (by inode number, I can get those with the getdents code) I'd love to hear them. (Yes, I've tried find . -inum <inode> -exec rm -fv {} \; and it still has the problem with unlink returning ENOENT.)

    For those interested, here's the diff between that man page's code and mine. I didn't bother with error checking on mallocs, etc. because I'm lazy and this is a one-off:

        root@server [~]# diff -u listdir-orig.c listdir.c
        --- listdir-orig.c      2012-10-23 15:10:02.000000000 -0400
        +++ listdir.c   2012-10-23 14:59:47.000000000 -0400
        @@ -6,6 +6,7 @@
         #include <stdlib.h>
         #include <sys/stat.h>
         #include <sys/syscall.h>
        +#include <string.h>

         #define handle_error(msg) \
             do { perror(msg); exit(EXIT_FAILURE); } while (0)
        @@ -17,7 +18,7 @@
             char d_name[];
         };

        -#define BUF_SIZE 1024
        +#define BUF_SIZE 1024*1024*5

         int main(int argc, char *argv[])
         {
        @@ -26,11 +27,16 @@
             struct linux_dirent *d;
             int bpos;
             char d_type;
        +    int deleted;
        +    int file_descriptor;

             fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
             if (fd == -1)
                 handle_error("open");

        +    char* full_path;
        +    char* fd_path;
        +
             for ( ; ; ) {
                 nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);
                 if (nread == -1)
        @@ -55,7 +61,24 @@
                     printf("%4d %10lld %s\n", d->d_reclen,
                             (long long) d->d_off, (char *) d->d_name);
                     bpos += d->d_reclen;
        +            if ( d_type == DT_REG )
        +            {
        +                full_path = malloc(strlen((char *) d->d_name) + strlen(argv[1]) + 2); //One for the /, one for the \0
        +                strcpy(full_path, argv[1]);
        +                strcat(full_path, (char *) d->d_name);
        +
        +                //We're going to try to "touch" the file.
        +                //file_descriptor = open(full_path, O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666);
        +                //fd_path = malloc(32); //Lazy, only really needs 16
        +                //sprintf(fd_path, "/proc/self/fd/%d", file_descriptor);
        +                //utimes(fd_path, NULL);
        +                //close(file_descriptor);
        +                deleted = unlink(full_path);
        +                if ( deleted == -1 ) printf("Error unlinking file\n");
        +                break; //Break on first try
        +            }
                 }
        +        break; //Break on first try
             }
             exit(EXIT_SUCCESS);

    Read the article

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty trying to redirect a root directory while at the same time redirecting pages in a subdirectory to a different URL. For example, http://test.example.com/olddir/sub/page.php?v=one should go to http://test.example.com/new/one, while any request to the root of the olddir folder is also redirected. I have tried:

        RewriteCond %{QUERY_STRING} v=one
        RewriteRule ^/olddir/sub/page.php /new/? [R=301]

    and

        RedirectMatch /oldir "test.example.com"
        RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one"

    Any help at this point will be extremely appreciated... Thanks!
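
    A sketch of one possible .htaccess at the document root, assuming Apache with mod_rewrite. Two details worth noting: in per-directory (.htaccess) context the pattern is matched without a leading slash, and the query string can only be tested via RewriteCond. The target of the catch-all rule is an assumption, since the question doesn't say where olddir requests should land:

        RewriteEngine On

        # /olddir/sub/page.php?v=one  ->  /new/one (trailing ? drops the query string)
        RewriteCond %{QUERY_STRING} (^|&)v=one(&|$)
        RewriteRule ^olddir/sub/page\.php$ http://test.example.com/new/one? [R=301,L]

        # everything else under /olddir  ->  site root (assumed target)
        RewriteRule ^olddir(/.*)?$ http://test.example.com/ [R=301,L]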

    Read the article

  • Directory "Bookmarking" in Linux

    - by Jason R. Mick
    Aside from aliasing and links, is there an easy way in Linux to tag commonly used directories and to navigate to a commonly used directory from the terminal? To be clear, these are the disadvantages I see with the alternative approaches, and why I want a bookmark/favorites-like system:

    - alias: Too specific (every new favorite requires a new alias... although you could in theory make an alias that appends your current dir as a new alias, which would be sort of clever). Can't nest favorites in folders (can't think of a simple solution to this outside of heavy config scripting).
    - links: Clutter the directory and make ls a headache.
    - pushd/popd: Non-permanent (without shell config file scripting), can't nest favorites in directories, etc.

    Granted, I have multiple ideas for making my own non-standard solution, but before I have at it I wanted to get some perspective on what's out there and, if there is nothing, what a recommended approach would be. Does anyone know of such a favorites/bookmark-like terminal solution?
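
    A minimal sketch of a home-grown version, assuming bash and a bookmarks file under $HOME; the function names bkm/goto/bkl and the file path are made up for illustration, not an existing tool:

        # save the current directory under a name:   bkm projects
        bkm() { printf '%s\t%s\n' "$1" "$PWD" >> ~/.dir_bookmarks; }

        # jump to a saved name:                      goto projects
        goto() { cd "$(awk -F'\t' -v n="$1" '$1 == n { d = $2 } END { print d }' ~/.dir_bookmarks)"; }

        # list all bookmarks in two columns
        bkl() { column -t ~/.dir_bookmarks; }

    Tab completion of bookmark names and "nested" bookmarks could be layered on top, but even this flat version avoids the clutter and non-permanence problems listed above.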

    Read the article

  • How to block access to files in the current directory with .htaccess

    - by kfir
    I have a few private files in a public folder and I want to block access to them. For example, let's say I have the following file tree:

        DictA/
            FileA
        FileA
        FileB
        FileC

    I want to block access to FileB and FileA in the current directory and allow access to the FileA in the DictA directory. The first thing that came to mind was to use the FilesMatch directive as follows:

        <FilesMatch "^(?:FileA)|(?:FileB)$">
            Deny from all
        </FilesMatch>

    The problem here is that FileA inside DictA will also be blocked, which is not what I wanted. I could override that by adding another .htaccess file to DictA, but I would like to know if there is a solution which won't involve that. P.S.: I can't move the private files to a separate folder.
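
    One hedged alternative that stays in the top-level .htaccess, assuming mod_rewrite is available: Files/FilesMatch match by filename in every subdirectory, but a per-directory rewrite rule only sees the path relative to the directory whose .htaccess it lives in, so it can tell ./FileA apart from DictA/FileA:

        RewriteEngine On
        # Forbid FileA and FileB in this directory only; DictA/FileA is not
        # matched because its relative path is "DictA/FileA".
        RewriteRule ^(FileA|FileB)$ - [F,L]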

    Read the article

  • How to set/keep directory permissions?

    - by Dylan
    I'm using CwRsync to connect from my Windows development machine to a Linux webserver:

        rsync -avuz -e ./ssh --exclude=".svn" /cygdrive/c/xampp/htdocs/project123/ [email protected]:/home/user123/public_html

    This syncs my development project directory nicely and fast to the server. But after doing this, all directory properties are reset to the local user user123 only, so the website is not available anymore. I need to manually reset those properties. Why is this happening, and how can I prevent it?

    P.S. Coming from a Windows environment, I'm having a really hard time understanding rsync. I copied the above command from some examples... just need to get this one small thing working too...
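
    A hedged guess at the cause: -a implies --perms, --owner and --group, so rsync copies the Cygwin-side permission bits (often far more restrictive than the web server needs) onto the destination. A sketch that keeps the same transfer but turns those parts off, letting the server's existing modes and umask win:

        rsync -avuz --no-perms --no-owner --no-group \
              -e ./ssh --exclude=".svn" \
              /cygdrive/c/xampp/htdocs/project123/ [email protected]:/home/user123/public_html

    If the modes still need forcing rather than just leaving alone, rsync's --chmod option can set them explicitly, but for a deploy-style sync the three --no-* flags are usually enough.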

    Read the article

  • How to change the download save directory in ktorrent from a remote host

    - by Garethj94
    So I have ktorrent running on a server where I can't see the X11 display for the client, and I need to change my download directory since I like to keep everything very organized. I need a way to change the download directory for the program as a whole, not just decide where to put each and every torrent that I download. This can't be done through the webui preferences, so I'm guessing I'll have to do it through ssh somehow, but from what I've read there really isn't a command for it. Also, the way I run the application is that my server runs the command ktorrent on startup, so it will already be running when I want to change the download location, and I assume I will have to restart the program as well. If anyone knows how to do this it would be much appreciated; I can't think that I'm the only person to want this feature.

    Read the article

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory content with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know if there is a way of getting the same behaviour in apache directory browsing. If apache is not capable of it as I suspect, is there an application (FLOSS) providing that kind of behaviour?

    Read the article

  • processing of Group Policy failed only on 2008 Servers and Name Resolution failure on the current domain controller

    - by Ken Wolfrom
    We spent the last 3 months upgrading from a 2003 domain to a 2008R2 domain. Our last DC (of 5 total) was rebuilt and brought back online. After it was put online, some of our 2008 and 2008R2 servers (10 now) started getting these errors in the event logs:

        The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
        a) Name Resolution failure on the current domain controller.
        b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive (\directory\shared) on an affected server, they get this error: "There are currently no logon servers available to service the logon request."

    This is only affecting the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK, and we are able to log in with domain accounts at the console and via RDP. We can log onto an affected machine, get to \domainname\sysvol and see the GPOs. We have checked the replication topology of the domain and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, removed it from the domain, waited 24 hours, and the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior.

    Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine - HTTP and TCP calls work, and SQL and Oracle DBs connect and process. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers on a 2008R2 domain.

    Read the article

  • Web-Server directory permissions

    - by MLS
    Hello all, I would like some help understanding web-server directory permissions (Apache, CentOS, PHP, MySQL). For example, I have multiple sites in /var/www/html, in paths like /var/www/html/www_domainname_com. Inside each site I might have a path like /lib/mysql/ holding PHP connection code, database config, etc. What should my permissions be so that someone cannot just browse to that directory? Should I just .htaccess them? I have apache:apache as the owner of all my web directories. Can I prevent someone from crawling certain directories of my web server? I have a robots.txt, but what is to say a crawler obeys it? So to sum up (see the sketch below):

    1. What is the best owner/permission set for sensitive files that the web server, PHP or MySQL needs, but I don't want people browsing to?
    2. Can I prevent outright crawling of portions of the site?
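
    As a sketch of one common arrangement, under stated assumptions: the "deploy" user is hypothetical, Apache 2.2 syntax is assumed, and the lib/ directory is assumed to contain nothing that should ever be served directly.

        # code owned by a non-Apache deploy user, readable by Apache's group only
        chown -R deploy:apache /var/www/html/www_domainname_com
        find /var/www/html/www_domainname_com -type d -exec chmod 750 {} \;
        find /var/www/html/www_domainname_com -type f -exec chmod 640 {} \;

    On top of that, a two-line .htaccess in lib/ (Order deny,allow followed by Deny from all, in Apache 2.2 terms) stops that directory from being served at all, which also keeps both polite and impolite crawlers out; robots.txt only ever helps with the polite ones.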

    Read the article

  • permission for "users" directory for a mounted vmdk file

    - by rajmalhotraml
    I mounted a vmdk file on my Windows 8 machine and I am able to access all the folders and files except those in the "users\" directory. When I try to open it, it says I don't have permission, and I am not able to grant the permission either. Can anyone tell me how to open the users directory? I have very important files in the desktop folder, which would normally be accessed through \users\\desktop. Is there an alternate way of accessing the folder? I lost the password to boot up the VM image.
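
    A hedged sketch of the usual approach from an elevated command prompt on the host: take ownership of the mounted Users tree and then grant yourself access. The drive letter X: and the account name YourUser are placeholders, and this assumes you are a local administrator on the Windows 8 machine:

        :: take ownership recursively, answering "yes" to prompts
        takeown /f "X:\Users" /r /d y
        :: grant your own account full control, recursively
        icacls "X:\Users" /grant YourUser:F /t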

    Read the article

  • sa2 -A /var/log/sa/sa13: No such file or directory

    - by user53925
    I have sysstat version 7.0.2 and /etc/sysconfig/sysstat has the entry HISTORY=27. This is on a Red Hat Enterprise server 5.6. The cron setup for this is:

        # run system activity accounting tool every minute
        * * * * * root /usr/lib64/sa/sa1 1 1
        # generate a daily summary of process accounting at 23:53
        53 23 * * * root /usr/lib64/sa/sa2 -A

    I get the following error from the cron sa2 -A run:

        find: /var/log/sa/sa13: No such file or directory

    Looking at the directory /var/log/sa, the files run from sa01 through sa10 (sa1 created on Sep 1, sa2 created on Sep 2 and so on), then the rest of the files are from sa14 through to sa31 (created from Aug 14 to Aug 31). I have not made any changes on the server, so I am not sure why I am getting these error messages. Is there a way to fix this? Someone suggested creating empty files from sa11 through sa14 to fix this, but I am not sure if that might mess something up.

    Read the article

  • How do you delete a directory you don't own in an NFS directory you do?

    - by John Ellinwood
    There should be a simple answer to this, but I can't find it.

        ~me/work>ls -la
        drwxrwxr-x 3 me      mygroup .
        drwxrwxr-x 3 me      mygroup ..
        drwxrwxr-x 3 me      mygroup folder1
        drwxr-xr-x 3 person2 mygroup folder2

    This is in my home directory, which is an automounted NFS. Somebody in my group created folder2 in my home directory and then left for vacation. I can't delete the folder... I can't move it... can't change permissions on it. How can I get rid of it? My sysadmin has no clue.
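
    For what it's worth, a hedged reading of the permissions above: removing folder2 itself only needs write permission on the parent directory (which the listing shows you have), but removing anything inside it needs write permission on folder2 itself (which the group does not have). So, roughly:

        rmdir folder2   # should succeed if folder2 is empty
        rm -r folder2   # will fail on any contents owned by person2

    If it is not empty, someone with access on the NFS server side (person2 or the admin, since root_squash usually stops local root from helping) has to clear it out first.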

    Read the article

  • How can I find which logon script is being run?

    - by user2517266
    I'm having an issue with network drives. Suddenly some computers and users aren't getting their mapped network drives from the logon script. I am NOT a domain admin, I don't have permission to log in to the domain controller, and I know very little about Active Directory. The issue seems random: some users one day, different users the next. Some computers run fine and some won't map no matter who logs in. They are mixed OSes: XP (SP3), Vista, and 7. Looking at the domain in Windows Explorer, I have found the batch file(s) that map the drives in several locations; how do I know which one is actually being run? The .bat file is located in:

        \DOMAIN\NETLOGON\script.bat
        \DOMAIN\SYSVOL\DOMAIN\scripts\script.bat
        \DOMAIN\SYSVOL\DOMAIN\policies\GUID(Right? It's a crazy string)\User\Scripts\Logon\script.bat

    So, how can I figure out which one is actually being run per computer or user? They are all slightly different from each other and one of them doesn't map properly. Do all the files in NETLOGON get run? There are 15+ files in there. Or is it specified in Group Policy which one(s) get run? EDIT: I am able to access a program called Active Directory Users and Computers, but the logon script field on the Profile tab is blank for every user.
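
    A hedged way to see which script a given machine and user actually received, with no domain-admin rights needed - run it as the affected user on the affected PC (the /r and /h switches exist on Vista/7; on XP use gpresult /v instead):

        :: summary of the GPOs applied to this user and computer
        gpresult /r
        :: Vista/7: full HTML report, including the Logon Scripts section
        gpresult /h gpo-report.html

    Roughly speaking, a script named on the user's Profile tab in Active Directory Users and Computers runs from \DOMAIN\NETLOGON, while scripts assigned through a GPO live under \DOMAIN\SYSVOL\...\policies\{GUID}\User\Scripts\Logon, so the paths you found correspond to different mechanisms; nothing in NETLOGON runs just because it happens to sit there.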

    Read the article
