Search Results

Search found 3168 results on 127 pages for 'directories'.


  • Mod_rewrite is ignoring the extension of a file

    - by ngl5000
    This is my entire mod_rewrite configuration:

        <IfModule mod_rewrite.c>
        <Directory /var/www/>
            Options FollowSymLinks -Multiviews
            AllowOverride None
            Order allow,deny
            allow from all

            RewriteEngine On

            # force www. (also does the IP thing)
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
            RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

            RewriteCond %{REQUEST_URI} ^system.*
            RewriteRule ^(.*)$ /index.php?/$1 [L]

            RewriteCond %{REQUEST_URI} ^application.*
            RewriteRule ^(.*)$ /index.php?/$1 [L]

            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]

            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond %{REQUEST_URI} !^(/index\.php|/assets|/robots\.txt|/sitemap\.xml|/favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]

            # Block access to "hidden" directories or files whose names begin with a period. This
            # includes directories used by version control systems such as Subversion or Git.
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]
        </Directory>
        </IfModule>

    It is supposed to allow access only to mysite.com(/index.php|/assets|/robots.txt|/sitemap.xml|/favicon.ico). The error was noticed with mysite.com/sitemap vs. mysite.com/sitemap.xml: both of these addresses resolve to the XML file, while the first URL should resolve to mysite.com/index.php/sitemap. For some reason mod_rewrite is completely ignoring the lack of an extension. It sounded like a MultiViews problem to me, so I disabled MultiViews, but it is still happening. A different rule will eventually take the index.php out; I am also having another problem with an extra '/' being left behind when that happens. This httpd file is set up for my CodeIgniter PHP framework.
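
    One detail worth checking, offered as a hedged aside rather than a confirmed fix: depending on the Apache version, an Options line that mixes bare keywords with +/- prefixed ones is either rejected or handled inconsistently, which could leave MultiViews effectively still enabled here.

        # a sketch only: the all-relative form of the same Options line
        Options +FollowSymLinks -MultiViews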

    Read the article

  • Moving MySQL directory on an Amazon EC2 machine

    - by Traveling Tech Guy
    I'm trying to have MySQL point to a directory on an EBS volume mounted on my EC2 machine. I took the following steps:

    1. Stopped MySQL (/etc/init.d/mysqld stop) - successful
    2. Created a MySQL directory on my volume, mounted on /vol (mkdir /vol/mysql)
    3. Copied the contents of /var/lib/mysql to /vol/mysql (cp -R /var/lib/mysql /vol/mysql)
    4. Changed the owner and group of that directory to match the original (chown -R mysql:mysql /vol/mysql) - after this step, the two directories are identical.
    5. Edited the /etc/my.cnf file (commenting out the two original lines):

        [mysqld]
        #datadir=/var/lib/mysql
        #socket=/var/lib/mysql/mysql.sock
        datadir=/vol/mysql
        socket=/vol/mysql/mysql.sock

    6. Started MySQL (/etc/init.d/mysqld start) - FAILED

    The error file /var/log/mysqld.log contains the following lines:

        100205 20:52:54  mysqld started
        100205 20:52:54  InnoDB: Started; log sequence number 0 43665
        100205 20:52:54 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.45'  socket: '/vol/mysql/mysql.sock'  port: 3306  Source distribution

    No other errors are available. What am I doing wrong? Where can I find the error(s) encountered by MySQL? If I restore the original lines, MySQL starts, leading me to believe it may be a permissions issue - but permissions are the same for both directories. Thanks!
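
    A hedged observation: the log above actually ends with "ready for connections", so mysqld may have started fine and only the client side is failing because it still looks for the socket at the old path. Under that assumption, a minimal sketch:

        # point the client at the new socket explicitly
        mysql --socket=/vol/mysql/mysql.sock -u root -p

        # or add a [client] section to /etc/my.cnf so every client picks it up;
        # the [mysqld] section alone does not change client behaviour
        # [client]
        # socket=/vol/mysql/mysql.sock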

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server (32-bit). It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions store. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK after a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia.

    For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from the Run prompt, as my admin creds do not get me in. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server? Regards, Gus

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and sub-directories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old, it will be deleted even if its sub-folders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all sub-folders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all sub-folders and files are seven days or older, then the parent folder can be deleted. Slightly obscure problem I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. Also, as an added bonus, it would be nice if it could generate a log of what it deletes and e-mail it out post deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse

    Read the article

  • Move postfix maildir files from one mail server to another

    - by Tauren
    I have a new mail server configured as described in this howto: http://howtoforge.com/virtual-users-domains-postfix-courier-mysql-squirrelmail-ubuntu-9.10 I also have an ancient mail server configured very similarly (using the same HOWTO, just for Fedora Core 6, if I recall correctly). Earlier today I had to switch from the old server to the new one, and the old one is no longer online. However, after I had migrated everything and switched it all over, I discovered a bunch of undelivered mail in the queue. It got delivered to the local mailboxes on the old server, so now there are a bunch of messages on it that I'd like to move to the new server. The new server has already received new messages, so I need to merge the files together somehow. For each user with an email of [email protected], there are files like this on both servers:

        /home/vmail/customer.com/username/maildirsize
        /home/vmail/customer.com/username/courierpop3dsizelist
        /home/vmail/customer.com/username/new/1271481177.Vca01I6006bM580357.mailhost.mydomain.com

    Can I simply copy the hundreds of files in the various new directories on the old server to the corresponding new directories on the new server? Will the maildirsize and courierpop3dsizelist files get updated automatically, or do I need to do something to update them?
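
    A hedged sketch of the merge (the vmail owner and the /root/oldmail staging path are assumptions, not taken from the howto text above): copy only the message files and let the size caches be rebuilt.

        # merge just the message files from the old server's maildirs
        rsync -av /root/oldmail/customer.com/username/new/ \
                  /home/vmail/customer.com/username/new/
        chown -R vmail:vmail /home/vmail/customer.com/username/new/

        # maildirsize is a Maildir++ quota cache; if it is simply removed,
        # Courier recalculates it on the next delivery/login (assumption)
        rm /home/vmail/customer.com/username/maildirsize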

    Read the article

  • sudo or acl or setuid/setgid ?

    - by Xavier Maillard
    Hi, for a reason I do not really understand, everyone wants sudo for anything and everything. At work we even have as many sudoers entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting. Our servers have %admin, %production, %dba, %users - i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files or even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions. But I do not want to usermod all users, and I do not want to modify standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, that would look like this:

        setfacl d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permission system? Thank you
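
    For what it's worth, a sketch of the same idea with the flags setfacl normally expects (-m to modify entries, -R to recurse, capital X so only directories gain the execute bit); treat it as illustrative rather than a recommendation:

        setfacl -R -m g:production:rX,d:g:production:rX /var/log/mysql /etc/mysql
        getfacl /var/log/mysql    # verify the resulting entries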

    Read the article

  • How can I tell SELinux to give vsftpd write access in a specific directory?

    - by Arcturus
    Hello. I've set up vsftpd on my Fedora 12 server, and I'd like to have the following configuration. Each user should have access to: his home directory (/home/USER); the web directory I created for him (/web/USER). To achieve this, I first configured vsftpd to chroot each user to his home directory. Then, I created /web/USER with the correct permissions, and used mount --bind /web/USER /home/USER/Web so that the user may have access to /web/USER through /home/USER/Web. I also turned on the SELinux boolean ftp_home_dir so that vsftpd is allowed to write in users' home directories. This works very well, except that when a user tries to upload or rename a file in /home/USER/Web, SELinux forbids it because the change must also be done to /web/USER, and SELinux doesn't give vsftpd permission to write anything to that directory. I know that I could solve the problem by turning on the SELinux boolean allow_ftpd_full_access, or ftpd_disable_trans. I also tried to use audit2allow to generate a policy, but what it does is generate a policy that gives ftpd write access to directories of type public_content_t; this is equivalent to turning on allow_ftpd_full_access, if I understood it correctly. I'd like to know if it's possible to configure SELinux to allow FTP write access to the specific directory /web/USER and its contents, instead of disabling SELinux's FTP controls entirely.
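
    One possibility, sketched here as an assumption rather than a verified recipe: give /web a writable public-content label of its own instead of relaxing the FTP policy globally.

        # label /web and everything under it as public content that ftpd may write
        semanage fcontext -a -t public_content_rw_t "/web(/.*)?"
        restorecon -Rv /web

        # public_content_rw_t still needs the corresponding write boolean
        setsebool -P allow_ftpd_anon_write on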

    Read the article

  • Running multiple sites on a LAMP with secure isolation

    - by David C.
    Hi everybody, I have been administering a few LAMP servers with 2-5 sites on each of them. These are basically owned by the same user/client, so there are no security issues apart from attacks through vulnerable daemons or scripts. I am building my own server and would like to start hosting multiple sites. My first concern is... ISOLATION. How can I prevent a c99 script from defacing all the virtual hosts? Also, how should I prevent that c99 from being able to read/write the other sites' directories? (It is easy to "cat" a config.php from another site and then get into the MySQL database.) My server is a VPS with 512M burstable to 1G. Among the free hosting control panels, is there any small one which works for my VPS (and which is maybe compatible with the security approach I would like to have)? Currently I am not planning to host over 10 sites, but I would not accept that a client/hacker could navigate into unwanted directories or, worse, run malicious scripts. FTP management would be fine; I don't want to complicate things with SSH isolation. What is the best practice in this case? Basically, what do hosting companies do to sleep well? :) Thanks very much! David
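
    A partial mitigation worth mentioning, as a sketch only (it assumes PHP runs as mod_php, and the vhost names are made up): open_basedir at least stops one site's scripts from reading another site's config files, though it is not real isolation.

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
            php_admin_value open_basedir "/var/www/site1:/tmp"
            php_admin_flag allow_url_include off
        </VirtualHost>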

    Read the article

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new Likewise Open), which works well. Now I'm trying to get the server to automount each user's AD home directory, located at //ad.server.dom/shares/home directories (yeah, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it going. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (and thus CentOS) version 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command:

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips in the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead towards more Linux adoption there.
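
    A rough sketch of a workaround that avoids pam_mount entirely, assuming a root-readable credentials file per user is acceptable (the file path and domain name below are made up, and this is a static mount rather than true on-login automounting):

        cat > /etc/cifs-creds-testuser <<'EOF'
        username=testuser
        password=CHANGEME
        domain=AD
        EOF
        chmod 600 /etc/cifs-creds-testuser
        mount.cifs "//ad.server.dom/Shares/home directories/testuser" \
            /home/local/AD/testuser/nfs_mount/ \
            -o credentials=/etc/cifs-creds-testuser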

    Read the article

  • Cannot connect to FTP sites anymore

    - by Wayne M
    I have the FTP service running on Server 2003, and I am hosting websites through Apache. I have users configured to point to certain directories on the server. I am using FileZilla to FTP in remotely, but it never seems to connect to the directory. The command window says:

        Command: USER wayne
        Response: 331 Password required for wayne
        Command: PASS: *****
        Response: 230 User wayne logged in
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/wayne" is current directory
        Command: TYPE I
        Response: 200 Type set to I.
        Command: PASV

    And that's it. It doesn't display any directories at all, and the pane says "Not connected to any server". Sometimes it will display the folder, but nothing happens when I click on it to expand it. It was working fine, and I have another FTP server set up the same way that does work. How can I fix this?

    EDIT: I've tried changing it to active FTP, and it says:

        Command: LIST
        Command: 150 Opening BINARY mode data connection for /bin/ls
        Response: 425 Can't open data connection.
        Error: Failed to retrieve directory listing.

    I also noticed that I'm not able to browse the site in IIS's management console anymore; it just shows a blank screen when I click on one of the names and says "There are no items to show in this view", although the name has permissions to view the folder and everything. Could it be because I have the Web Publishing service disabled (as I'm not using IIS to host websites)? That shouldn't cause anything, should it?

    Read the article

  • Ubuntu questions - important

    - by asdasd
    They have installed some modified Edubuntu machines at school, so I have some questions about setting a few things up:

    1. How can we play HD videos? They are made for Windows machines and are in .wmv format, but we need to play them in our multimedia class and don't know how - which player, which codecs, etc.
    2. How do we properly edit the /etc/apt/sources.list file? Anything we try to install via apt-get just fails with an "E: ... is not available" error. Please tell me which repositories to put in there so we can install some tools.
    3. Where are viruses/trojans usually put in Ubuntu? I mean, in which directories? Our computers are behaving really slowly and we need to check for malware manually - we are not even allowed to install any kind of AV software. So tell me the usual directories and places for hiding such files, how they are hidden, how to recognize them, etc.
    4. Any other nice tricks/tips that we need to know.

    Thank you very much in advance.
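
    For the sources question, a sketch of a stock /etc/apt/sources.list, assuming the release codename is lucid (check with lsb_release -sc and substitute the real codename), followed by sudo apt-get update:

        deb http://archive.ubuntu.com/ubuntu lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse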

    Read the article

  • Tripwire help Required

    - by ramaperumal
    I have created the policy file in Tripwire and I have also created the rules, as mentioned below:

        /opt/jboss/server/gis/conf   -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
        /usr/local/gtech/eseries/    -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M ;

    After running the integrity check, the output should cover a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file") and M (MD5 hash value). I am getting the output below:

        [root@xxsi1242 tripwire]# tripwire --check
        Parsing policy file: /etc/tripwire/tw.pol
        *** Processing Unix File System ***
        Performing integrity check...
        Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

        Open Source Tripwire(R) 2.4.1 Integrity Check Report

        Report generated by:          root
        Report created on:            Wed 06 Nov 2013 05:38:12 AM EST
        Database last updated on:     Wed 06 Nov 2013 05:31:17 AM EST

        ===============================================================================
        Report Summary:
        ===============================================================================
        Host name:                    xxsi1242.gtk.gtech.com
        Host IP address:              156.24.65.171
        Host ID:                      None
        Policy file used:             /etc/tripwire/tw.pol
        Configuration file used:      /etc/tripwire/tw.cfg
        Database file used:           /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
        Command line used:            tripwire --check

        ===============================================================================
        Rule Summary:
        ===============================================================================
        -------------------------------------------------------------------------------
        Section: Unix File System
        -------------------------------------------------------------------------------
        Rule Name                       Severity Level    Added    Removed  Modified
        ---------                       --------------    -----    -------  --------
        Invariant Directories           66                0        0        0
        Temporary directories           33                0        0        0
        * Tripwire Data Files           100               0        0        1
        Tech Stack                      100               0        0        0
        User binaries                   66                0        0        0
        Tripwire Binaries               100               0        0        0
        * CLPS bins                     100               0        0        2
        CLPS Configuration files        100               0        0        0
        ESCommon                        100               0        0        0
        Shell Binaries                  100               0        0        0
        OS executables and libraries    100               0        0        0
        Security Control                100               0        0        0
        ESCommon Configuration          100               0        0        0
        (/etc/gtech/escommon)

        Total objects scanned:  12358
        Total violations found: 3

        ===============================================================================
        Object Summary:
        ===============================================================================
        -------------------------------------------------------------------------------
        # Section: Unix File System
        -------------------------------------------------------------------------------
        -------------------------------------------------------------------------------
        Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
        Severity Level: 100
        -------------------------------------------------------------------------------
        Modified:
        "/etc/tripwire/tw.pol"

        -------------------------------------------------------------------------------
        Rule Name: CLPS bins (/opt/jboss/server)
        Severity Level: 100
        -------------------------------------------------------------------------------
        Modified:
        "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
        "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

        ===============================================================================
        Error Report:
        ===============================================================================
        No Errors

        -------------------------------------------------------------------------------
        *** End of report ***

    Note: In the output I am only getting the files which were modified. I need the detailed output for this, but unfortunately I am not getting what I expected. Please help me to proceed further.
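
    A hedged pointer for the detail question: Open Source Tripwire can re-render an existing report at a higher detail level with twprint (levels 0-4), which shows the changed properties per file rather than just the file names; this is a sketch, not verified against 2.4.1 specifically.

        twprint -m r -t 4 \
            -r /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr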

    Read the article

  • php on ubuntu 13.10 won't get parsed

    - by fefe
    I'm facing a strange issue with a freshly installed Ubuntu 13.10 running apache2 with MySQL and PHP. My PHP won't get parsed, and I have been making every change I found during my research.

        PHP 5.5.3-1ubuntu2 (cli) (built: Oct 9 2013 14:49:12)

        sudo a2enmod php5

    apache2.conf:

        <FilesMatch \.php$>
            SetHandler application/x-httpd-php
        </FilesMatch>

    /etc/apache2/mods-enabled/php5.conf:

        </FilesMatch>
        # Deny access to files without filename (e.g. '.php')
        <FilesMatch "^\.ph(p[345]?|t|tml|ps)$">
            Order Deny,Allow
            Deny from all
        </FilesMatch>

        # Running PHP scripts in user directories is disabled by default
        #
        # To re-enable PHP in user directories comment the following lines
        # (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
        # prevents .htaccess files from disabling it.
        #<IfModule mod_userdir.c>
        #    <Directory /home/*/public_html>
        #        php_admin_value engine Off
        #    </Directory>
        #</IfModule>
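
    A minimal checklist sketch for 13.10, assuming mod_php (rather than php-fpm) is what's intended:

        sudo apt-get install libapache2-mod-php5
        sudo a2enmod php5
        sudo service apache2 restart
        apache2ctl -M | grep -i php    # should list php5_module
        # also make sure the test file is reached via http://host/file.php,
        # not opened as file:///var/www/file.php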

    Read the article

  • Exchange backup verification shows no files

    - by Olaf
    [SBS 2003 SP2] If I read the Exchange backup log, it shows that the backup contains no files, just folders, and the total size seems to be OK. If I try to restore, the folders are empty... But the 14 files that were backed up disappeared in the verification log?! Other backups on the same medium turned out to be fine. Any idea what's wrong here? This is my log:

        Backup Status
        Operation: Backup
        Active backup destination: File
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Volume shadow copy creation: Attempt 1.
        Backup of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Backup Type: Normal
        Backup started on 2-6-2010 at 11:26.
        Backup completed on 2-6-2010 at 12:21.
        Directories: 4
        Files: 14
        Bytes: 26.842.932.104
        Time: 55 minutes and 38 seconds

        Verify Status
        Operation: Verify After Backup
        Active backup destination: File
        Active backup destination: \backup\Server1\Backup Files\testbackup.bkf
        Verify of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Verify started on 2-6-2010 at 12:21.
        Verify completed on 2-6-2010 at 12:47.
        Directories: 4
        Files: 0
        Different: 0
        Bytes: 26.842.932.104
        Time: 25 minutes and 46 seconds

    Read the article

  • Errors in Homebrew on OS X Lion

    - by Bo Tian
    I just ran the Homebrew script as described on the installation page. I then ran brew doctor in Terminal, and it returned several errors. I'm not sure how to fix those errors; please help.

        brew doctor
        Error: Some directories in /usr/local/share/man aren't writable.
        This can happen if you "sudo make install" software that isn't managed
        by Homebrew. If a brew tries to add locale information to one of these
        directories, then the install will fail during the link step.
        You should probably `chown` them:
            /usr/local/share/man/de
            /usr/local/share/man/de/man1

        Error: You have Xcode 4.2, which is outdated.
        Please install Xcode 4.3.

        Error: Unbrewed dylibs were found in /usr/local/lib.
        If you didn't put them there on purpose they could cause problems when
        building Homebrew formulae, and may need to be deleted.
        Unexpected dylibs:
            /usr/local/lib/libcdt.5.dylib
            /usr/local/lib/libcgraph.6.dylib
            /usr/local/lib/libgraph.5.dylib
            /usr/local/lib/libgvc.6.dylib
            /usr/local/lib/libgvpr.2.dylib
            /usr/local/lib/libpathplan.4.dylib
            /usr/local/lib/libxdot.4.dylib

        Error: Unbrewed .pc files were found in /usr/local/lib/pkgconfig.
        If you didn't put them there on purpose they could cause problems when
        building Homebrew formulae, and may need to be deleted.
        Unexpected .pc files:
            /usr/local/lib/pkgconfig/libcdt.pc
            /usr/local/lib/pkgconfig/libcgraph.pc
            /usr/local/lib/pkgconfig/libgraph.pc
            /usr/local/lib/pkgconfig/libgvc.pc
            /usr/local/lib/pkgconfig/libgvpr.pc
            /usr/local/lib/pkgconfig/libpathplan.pc
            /usr/local/lib/pkgconfig/libxdot.pc

        Error: /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of those
        provided by Homebrew. The following tools exist at both paths:
            2to3
        Consider amending your PATH so that /usr/local/bin is ahead of /usr/bin
        in your PATH.
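
    A sketch of the fixes the doctor output itself suggests (the graphviz guess below is an assumption based on the library names):

        # make the man directories writable by your user, as suggested
        sudo chown -R "$USER" /usr/local/share/man/de /usr/local/share/man/de/man1

        # the stray dylibs/.pc files look like a non-Homebrew graphviz install;
        # move them aside rather than deleting, in case something still links to them
        mkdir -p ~/unbrewed && mv /usr/local/lib/libcdt.5.dylib ~/unbrewed/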

    Read the article

  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB server and a config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All directories for log and lib are split, so one set goes to mongodb and the other to mongoconf. Each process can be started on its own without any problems:

        start mongodb
        stop mongodb
        start mongoconf
        stop mongoconf

    But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs:

        Jan  6 12:44:12 mongo4 init: event_finished: Finished started event
        Jan  6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop
        Jan  6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping

    man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try, or how to get output on why it is 'terminated with status 1'? Thanks
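
    Upstart's log only reports the exit status; a sketch for surfacing the real error, with the config path and user below being assumptions about your layout:

        # run the failing instance by hand, in the foreground, as the same user
        sudo -u mongodb mongod --config /etc/mongoconf.conf

        # and check mongod's own log file rather than /var/log/syslog
        tail -n 50 /var/log/mongoconf/mongoconf.log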

    Read the article

  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of the site, and I'd like Apache to be able to easily see and manipulate these files. I'll admit: I'm not as familiar with Fedora as I'd like (I run Ubuntu on my home box), and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue this route, and perhaps mount web directories in user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www like it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this? I'm using vsftpd right now to host web directories, which is why I like the home directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access and such, to another post. I'm interested right now in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user as the FTP account is saving as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP access default to giving this group read/write access to everything like Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
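
    On the permissions postscript, a sketch of the usual shared-group arrangement (the group name and paths are made up; local_umask is a standard vsftpd option):

        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev customer1
        chgrp -R webdev /var/www/site1
        chmod -R g+rwX /var/www/site1
        find /var/www/site1 -type d -exec chmod g+s {} \;   # new files inherit the group

        # in /etc/vsftpd/vsftpd.conf, so FTP uploads come out group-writable:
        #   local_umask=002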

    Read the article

  • ksh Auto-Completion PuTTY Configuration

    - by Nitrodist
    I'm having a bit of a problem configuring my PuTTY client to work with the auto-completion feature in the ksh shell. I do a listing on the root with the directories /home and /homeroot, and it returns the directories in a list just fine. I can't select one, though, by hitting X = (where X is the number):

        /home/nitrodist>ls /h  # hits esc + =
        1) home/
        2) homeroot/
        # hits 2 + = for the 'homeroot' dir
        1) home/
        2) homeroot/
        # hits just the '=' key.
        1) home/
        2) homeroot/

    Any ideas? I've su -'d to another user who can actually do it with their PuTTY session, and I can't do it there either, which makes me think it's a PuTTY configuration issue. This is running on a ksh93 shell on HP-UX, if that makes any difference. Here's my ksh config:

        /home/campbelm>set -o
        Current option settings
        allexport        off
        bgnice           on
        emacs            off
        errexit          off
        gmacs            off
        ignoreeof        off
        interactive      on
        keyword          off
        markdirs         off
        monitor          on
        noexec           off
        noclobber        off
        noglob           off
        nolog            off
        notify           off
        nounset          off
        privileged       off
        restricted       off
        trackall         off
        verbose          off
        vi               on
        viraw            on
        xtrace           off
        /home/campbelm>

    Read the article

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory. Of course I need to keep it, so I added --protect 'lost+found'. The problem is that tmpreaper outputs:

        error: chdir() to directory 'lost+found' (inode 11) failed: Permission denied
        (PID 30604) Back from recursing down `lost+found'.
        Entry matching `--protect' pattern skipped. `lost+found'

    I have tried other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges: I own all the files (except lost+found). Am I forced to run tmpreaper as root, or are my shell skills not as good as I thought? I guess the problem is this (from the documentation):

        tmpreaper will chdir(2) into each of the directories you've specified for cleanup,
        and check for files matching the <shell_pattern> there. It then builds a list of
        them, and uses that to protect them from removal.

    Any thoughts and/or advice?

    Edit: The command I'm trying to run is something like:

        $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null
        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied

    Edit 2: I read the source code for tmpreaper-1.6.13 and found this:

        if (safe_chdir (dirname))
            exit(1);

    and

        if (chdir (dirname)) {
            message (LOG_ERROR, "chdir() to directory `%s' (inode %lu) failed: %s\n",
                     dirname, (u_long) sb1.st_ino, strerror (errno));
            return 1;
        }

    So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see three options left:

    1. Run tmpreaper as root
    2. Move the download directory
    3. Find an alternative tool (tmpwatch?)

    I will do some more research before I make a choice.
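
    If an alternative tool ends up being the answer, a rough find-based sketch run as the directory owner never needs to enter lost+found at all (-atime is used here to roughly match tmpreaper's access-time default; swap in -mtime if modification time is what you actually want):

        find /mydir -mindepth 1 -name lost+found -prune -o \
            -type f -atime +30 -print0 | xargs -0 -r rm -f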

    Read the article

  • Debian 6: setting up FTP just for website editing

    - by David Oliver
    I have a VPS using Debian 6.0. Currently, SSH is set to not accept password logins, and only key-based ones. A person who needs to work on one particular website (a vhost) wishes to use FTP. He doesn't need/want SSH. How can I set up FTP access for him, enabling him to have write permissions for all files in the relevant directory, and only the relevant directory? The directory is /srv/www/domainname.com/public_html Currently, all directories and files in that directory belong to www-data:www-data and are 644/755. I've installed vsftpd and have been reading some guides, but they all seem to deal with allowing multiple users to have their own user-named directories which isn't what I'm after. I can't seem to work out how to simply define one FTP user with a password that has access to one directory of my choosing. This is my first experience of setting up an FTP server. Thanks. Edit: have also found this - maybe I should be using ProFTPd, or can vsftpd also do what I want?
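
    A sketch of the single-purpose-user approach with vsftpd, under the assumption that the client may own the files as a dedicated user added to the www-data group (the username is made up):

        # dedicated FTP-only user, no shell, homed in the site's public_html
        useradd --home-dir /srv/www/domainname.com/public_html \
                --shell /usr/sbin/nologin --no-create-home ftpclient
        passwd ftpclient
        adduser ftpclient www-data
        # if PAM rejects the nologin shell, adding /usr/sbin/nologin to
        # /etc/shells is the usual workaround (assumption)

        # minimal /etc/vsftpd.conf additions:
        #   local_enable=YES
        #   write_enable=YES
        #   chroot_local_user=YES
        #   local_umask=002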

    Read the article

  • Delete on windows vista and seven -- discovery process

    - by M'vy
    Hi SUs! I've recently encountered a problem. Using svn at work, I needed to clear some space. As you may know, svn directories are full of sub-directories and files, so the delete process begins with a step of discovering the items to be deleted (I guess this is for displaying the progress bar). But in my case it was still running after I had watched Braveheart (off-topic: a good film in my opinion; on-topic: it lasts 2h50) and was still counting at 440,000+ files. I finally decided to kill the process and use good old cmd with a del <directory> to do the job (done in a few minutes). So I'm wondering if someone knows how to override the system to make it actually begin deleting while it is still scanning the other items? In the end, I just want the files to be deleted, and I don't care about the number of files to be deleted; on the contrary, I care about the time it takes. Thanks

    Read the article

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order Allow,Deny
            Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
            <IfModule mod_authz_core.c>
                # Apache 2.4
                <RequireAny>
                    Require ip 127.0.0.1
                    Require ip ::1
                </RequireAny>
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Deny,Allow
                Allow from All
                Allow from 127.0.0.1
                Allow from ::1
            </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #    <Directory /usr/share/phpMyAdmin/>
        #        SecRuleInheritance Off
        #    </Directory>
        #</IfModule>

    When I go to the phpMyAdmin web page, I am not prompted for a user and password; I just get the error message "Forbidden: You don't have permission to access /phpmyadmin on this server." My system is Fedora 20.
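
    For what it's worth, Fedora 20 ships httpd 2.4, where the 2.2-style Order/Allow lines in that block are only honoured through mod_access_compat; a sketch of the 2.4-native form for the main directory block, followed by systemctl restart httpd:

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            <IfModule mod_authz_core.c>
                # Apache 2.4
                Require all granted
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Allow,Deny
                Allow from all
            </IfModule>
        </Directory>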

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories: # all customers have group 'customer' Match group customer ChrootDirectory /home/%u # jail in home directories AllowTcpForwarding no X11Forwarding no ForceCommand internal-sftp # force SFTP PasswordAuthentication yes # for non-customer accounts we use keys instead Our servers are running Ubuntu 12.04 LTS.
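
    A sketch of one way to do it with stock OpenSSH, assuming group-based permissions are acceptable even though new files will still be owned by the individual uid (the -u umask argument to internal-sftp needs OpenSSH 5.4+; 12.04 ships 5.9, and the public_html subdirectory below is a placeholder):

        # extra logins that share the customer's primary group and home
        useradd -g customer -d /home/customer -M -s /usr/sbin/nologin customer_developer1
        passwd customer_developer1

        # group-writable content *below* the chroot; the chroot directory itself
        # must stay root-owned and not group-writable or sshd will refuse it
        chgrp -R customer /home/customer/public_html
        chmod -R g+rwX /home/customer/public_html
        find /home/customer/public_html -type d -exec chmod g+s {} \;

        # in sshd_config, inside the existing Match block:
        #   ForceCommand internal-sftp -u 0002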

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance): It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free). It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC) It must work in all directions (changes made on /any/ machine should be pushed to the others) All data sent must be encrypted (I have ssh keypairs set up already) It must work even when not all machines are available (changes should be pushed to a machine when it comes back online) It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted) This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted. It must be immediate (using an event-based system, not a periodic one like cron) It must be flexible (if more machines are added, I should be able to incorporate them easily) I also have some preferences that I would like to be fulfilled, but do not have to be: It should notify me somehow if there are conflicts or other errors. It should recognize symbolic and hard links and create corresponding ones. It should allow me to create a list of exceptions (subdirectories which will not be synced at all). It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding. If you have a solution that meets some, but not all of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work: rsync and lsyncd do not support bidirectional synchronization csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X
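
    Not a full match for every requirement, but as a hedged starting point: unison handles the bidirectional, SSH-encrypted part between each laptop and the Debian server, and an event trigger such as fswatch (OS X) or inotifywait (Linux) can provide the immediacy. Versions must match across machines, and the paths below are placeholders.

        # two-way sync of one laptop replica against the server over ssh
        unison /Users/me/shared ssh://debian-server//srv/shared \
            -auto -batch -ignore 'Name .DS_Store'

        # rough event-driven trigger on the Mac side
        fswatch -o /Users/me/shared | while read _; do
            unison /Users/me/shared ssh://debian-server//srv/shared -auto -batch
        done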

    Read the article
