Search Results

Search found 17710 results on 709 pages for 'portable home directories'.

Page 313 of 709

  • Allow PHP to write file without 777

    - by camerongray
    I am setting up a simple website on webspace provided by my university. I do not have database access, so I am storing all the data in a flat file. The issue I am experiencing is related to file permissions. I need PHP to be able to read and write the data file, but I don't really want to set the file to 777, as anybody else on the system could then modify it; they already have read access to everyone's web directories. Does anyone have any ideas on how to accomplish this? Thanks in advance.
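    One commonly suggested approach, sketched here under the assumption that the web server (and therefore PHP) runs as the www-data user and that the filesystem supports POSIX ACLs, is to keep the data file unreadable to other users and grant write access only to the web server account:

        # assumption: PHP runs as www-data; adjust the user name to match the server
        chmod 600 data.txt                  # owner only; other students cannot read it
        setfacl -m u:www-data:rw data.txt   # grant just the web server read/write

    If the university's PHP runs under suEXEC/CGI as your own account instead, a plain chmod 600 on the file is usually enough.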

    Read the article

  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of Apache environment variables:

        #!/bin/sh
        # atrun uid=33 gid=33
        # mail www-data 0
        umask 22
        APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR
        APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE
        PATH=/usr/local/bin:/usr/bin:/bin; export PATH
        APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR
        LANG=C; export LANG
        APACHE_RUN_USER=www-data; export APACHE_RUN_USER
        APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP
        APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR
        PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD
        cd /home/jordanarseno/webroot/public\_html/myapp || {
            echo 'Execution directory inaccessible' >&2
            exit 1
        }
        curl -k http://localhost/myapp/crons/this_action/3

    The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?

    Read the article

  • Facing difficulty with migrating from wordpress to Drupal

    - by rakibtg
    One of my blogs was built on WordPress, but now I want to use Drupal as its CMS. To do so, I deleted all the WordPress files from my server along with the database and the MySQL user associated with the WordPress blog, and uploaded the Drupal files into the server directory where the WordPress files were. But when I open the blog, it still shows the WordPress site, even though it has been deleted and the Drupal installation interface should appear instead. I have re-checked my server directories and database: there are no WordPress files and the wp database is gone; only the Drupal files are there. Yet when I go to the blog to install Drupal, the WordPress blog still appears. I have checked the blog in several web browsers, so it is not a browser cache problem. My hosting server is Linux based. I can't understand what to do. Any ideas? Thanks

    Read the article

  • recommendations for disk -> usb backup software

    - by TWood
    Recently I lost a tape drive, and rather than repair the unit I decided that backups to USB external drives would be cheaper. In the past I used NTBackup and figured that the new Server 2008 R2 backup utility (wbadmin) would be able to meet my needs. It does not. I am looking for recommendations for another utility that I can use. My requirements are:

    - back up a local disk in addition to files on a network share
    - scheduled task integration (or some GUI options to manage the schedule)
    - non-incremental backup

    Basically I could do this all with wbadmin if it just supported network shares. I saw some links that described attaching a VHD pointed to a network share, but I am trying to avoid hacks like that. If I'm going to go to all that trouble, I might as well manually copy the directories over myself. If anyone has any software suggestions that might make this task easier for me, please let me know. I am considering BackupAssist but can only find a few reviews here and there for it.

    Read the article

  • How can I get command prompt to merge my files in name order?

    - by Anastasia
    I'm using the copy command in command prompt to merge all the files in a directory, for a number of directories. The problem is, I need to edit the first file in each directory before I merge. This means that when I put in the command "copy /b *.mp3 name.mp3", the joined file has part 2 at the start and part 1 at the end, presumably because it was created last. Is there a way of using the copy command so that the files merge in name order? Each folder has a different number of parts, anywhere from 2 to 1000 so I don't want to list each file with a "+" in between. Ideally, I'd like to find something to insert into the copy command I'm already using. Otherwise, is there a way of rearranging the files in a folder so that if you enter "DIR", part 1 shows up first even if it was edited last? I'm using Windows 7 by the way.

    Read the article

  • Server 2008 R2 file access permissions

    - by Napster100
    I'm finding it awkward to sort out permissions for file sharing and access on my LAN. I've created an account on the server node (as a normal user) and shared a drive that has two folders at the root: one is for personal file storage and the other for shared files. If I connect to the shared area from a workstation running Windows 7 and log in using the account I created on the server, I can look through directories but can't look in some of them (which is what I wanted, as I changed the permissions for that to happen). My problem is that although the permissions are set for this user account to have full control of the specific folder, I can't create a folder in that area or upload files to it. Could someone explain why this is? Thanks in advance.

    Read the article

  • Linux that restores itself on each reboot

    - by jettero
    I'm looking for methods and software to help create a variant of lubuntu that will restore itself to an install state and/or update on every boot. I'm thinking of doing things like putting the root filesystem on a squashfs and using unionfs and tmpfs to make root writable, but automagically restorable. I'm thinking of updating the squashfs with rsync. Perhaps there are other ways to approach the problem. Perhaps root needn't be writable at all. All thoughts welcome. The home dir would be writable in the usual way. The goal, if it matters, is a Linux that's simple to maintain from the home office, but that functions correctly for customers. We have some custom software that we wish for customers to be able to run trivially on equipment we provide. Ideally these devices would have a "restore to factory" function that would put it back the way we intended. If this is part of the normal boot cycle, so much the better. Why lubuntu? Personal preference for this application. It has a usable desktop, but doesn't take up much ram.
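    One way to prototype the restore-on-boot idea, sketched here as an untested outline rather than the project's actual method (overlayfs is used below in place of unionfs, and the image path is hypothetical):

        # read-only factory image, with a throwaway tmpfs layer for writes
        mount -t squashfs -o loop /images/factory-root.squashfs /ro
        mount -t tmpfs tmpfs /rw
        mkdir -p /rw/upper /rw/work
        mount -t overlay overlay -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot
        # /home would be a separate, persistent mount inside /newroot

    Rebuilding the squashfs during an update (for example with mksquashfs from an rsync'd staging tree) then amounts to replacing a single file, which fits the "restore to factory" requirement.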

    Read the article

  • Weblogic WLST classpath

    - by user43736
    When I run the WLST setWLSEnv.sh script to set the environment, as shown below, why can't I see the updated path when I echo it?

        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:
        /directory/ols_wls/utils/config/10.3/config-launch.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:
        /directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:
        /directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:
        /directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:

        PATH=/directory/ols_wls/wlserver_10.3/server/bin:
        /directory/ols_wls/modules/org.apache.ant_1.7.0/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:
        /directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin

        Your environment has been set.

        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:
        /usr/local/bin:
        /bin:
        /usr/bin:
        /usr/X11R6/bin:
        /usr/java/j2sdk1.4.2_11/bin/bin:
        /home/oracle/bin:
        /directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:
        /directory/ccanywhere81/bin:
        /directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
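    A frequent explanation for this symptom, offered here as a sketch rather than a confirmed diagnosis: running ./setWLSEnv.sh starts a child shell, so the variables it exports disappear when that shell exits. Sourcing the script runs it in the current shell instead:

        . ./setWLSEnv.sh        # or: source ./setWLSEnv.sh
        echo $CLASSPATH         # the WebLogic entries should now be visible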

    Read the article

  • "private" directory not accessible in Apache

    - by janeden
    The directory private lives under my DocumentRoot, and despite its name, it should be accessible just like any other dir. But if I add the following RewriteRule to httpd.conf:

        RewriteRule ^/([^\.]+)$ /$1.html [L]

    Apache returns 403 for http://server/private/2201. The error log states:

        client denied by server configuration: /private/2201.html

    If I then rename private to foo, or if I request 2201.html directly, the file is served:

        127.0.0.1 - - [21/Nov/2011:10:24:45 +0100] "GET /private/2201 HTTP/1.1" 403 214
        127.0.0.1 - - [21/Nov/2011:10:24:58 +0100] "GET /foo/2201 HTTP/1.1" 200 3068
        127.0.0.1 - - [21/Nov/2011:10:27:39 +0100] "GET /private/2201.html HTTP/1.1" 200 3068

    This is confusing. Is there any special rule for directories named private? If so, why does the direct request for 2201.html work (although the denied request seems to handle the same resource, at least according to the error log entry)?
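    Apache has no built-in rule for directories named "private", so a reasonable first step (a diagnostic sketch, not taken from the original post; $DOCROOT stands in for the actual DocumentRoot path) is to look for an existing directive that happens to match the name:

        grep -rni "private" /etc/apache2 /etc/httpd 2>/dev/null       # Deny/Allow or <Location> rules mentioning it
        find "$DOCROOT" -name .htaccess -exec grep -Hi "deny" {} \;   # per-directory overrides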

    Read the article

  • GRUB installation failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner at Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive), and it was running Linux Mint:

        /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
        /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
        /dev/sdb1 - SSD partition 1 - 100 GB (OS)
        /dev/sdb2 - SSD partition 2 - 20 GB (swap)

    The operating system was Linux Mint and was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB partition as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of Elementary OS. Everything went fine with the install until it stated that the GRUB install had failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is. My biggest pain is the files on /dev/sda2; I want to save these before I try something drastic like wiping off /dev/sda completely. So I have the following questions:

    1. Can I use a live CD/USB to save these files? I can see /dev/sda2 but was unable to access the files from the live CD.
    2. Last but not least, how do I fix the main issue here? Why could the OS not install GRUB?
    2b. Why is my SSD /dev/sdb and not /dev/sda? Does it have something to do with the fact that my master boot record sits on the HDD /dev/sda and not /dev/sdb?
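    A live-session sketch along these lines is often suggested for both halves of the question; it is offered here as an outline only, and the device names are assumptions that should be confirmed with lsblk or sudo fdisk -l before running anything:

        # 1. rescue the files on the backup partition first
        sudo mkdir -p /mnt/backup
        sudo mount /dev/sda2 /mnt/backup         # then copy the files off to an external drive

        # 2. reinstall GRUB from the live session, assuming the OS root is /dev/sdb1
        sudo mount /dev/sdb1 /mnt
        for d in dev dev/pts proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sda   # or /dev/sdb, whichever disk the BIOS boots first
        sudo chroot /mnt update-grub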

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- test now running from localhost to rule out the network... I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80

        Document Path:          /home/test/
        Document Length:        5 bytes

        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0      73
        Processing:    81 2489  665.7   2500    4057
        Waiting:       80 2443  654.0   2445    4057
        Total:         85 2496  653.5   2500    4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)

    Read the article

  • Windows 7 Account Settings Vanished

    - by Roy Tang
    So I came home and stumbled upon a bit of a mystery. When I got home my brother was using my desktop PC that was running Windows 7 (I have accounts for my 2 brothers and my mom on the machine, but mine is the only admin account). After he finished his game he logged out and I logged in to my account, but found only strangeness. My Windows account seems to have been somewhat "reset", meaning:

    - my quick launch shortcuts were gone
    - my Dropbox account did not automatically log in
    - my Pidgin accounts were no longer there
    - I had to re-login to Steam
    - iTunes could not launch (I had hooked up my iDevice before logging in)
    - the Documents/Pictures/Music shortcuts in the start menu no longer work

    However, despite that:

    - my desktop wallpaper was still correct
    - my documents folder was still there in c:\Users\my account name\My Documents as expected
    - Google Chrome settings seem to have been retained
    - other accounts on the same machine seem to be fine

    I asked my brother if he had installed anything strange during the day; he only installed Yahoo Messenger. I last used the machine around 24 hours ago and it was fine then. I'm not sure what else has been affected. I'm inclined to just create a new admin user for me to use, but I'd like to have some idea of what actually happened.

    Read the article

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server. I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log into it at a terminal (no computers are part of my domain). Creating the accounts is a manual process. I did write a PHP script to query the university's AD, copy the information, and write it to my AD. I then have to create what is basically the user's home directory. I tried having AD do it, but since the user never physically logs in, it never creates the directory. Permissions on this folder are set to: User - full, Instructors (group) - full, Users (group) - read, IUSER - read. Inside the user's folder there is a "Private" folder with permissions: User - full, Instructors (group) - full. Next step is IIS: I create a virtual directory in the default web site pointed to the user's home directory so they have a website. Same goes for an FTP virtual directory in the default FTP configuration, to allow the users to upload files to their website. For MySQL I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructors account, so instructors can reach the students' databases. All of this is done manually and takes me a week to do. The closest description is maybe a shared hosting environment. Is there a better way to do this? Scripting-wise, or a better structured setup?

    Read the article

  • Tomcat directly serve static (css, js) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework which has a bulk of js and css files that are used for all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve ExtJS js and css directly from the web server, in my case Tomcat6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files that are directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However files in lower-level directories, for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css, return a 404. TL/DR Tomcat serves static files only at top-level directory. A 404 is returned for files deeper in the hierarchy. Config file contents: server.xml application's web.xml

    Read the article

  • Regex working in RedHat is not giving any result in Ubuntu

    - by Supratik
    My goal is to match specific files from specific subdirectories. I have the following folder structure:

        `-- data
            |-- a
            |-- a.txt
            |-- b
            |-- b.txt
            |-- c
            |-- c.txt
            |-- d
            |-- d.txt
            |-- e
            |-- e.txt
            |-- org-1
            |   |-- a.org
            |   |-- b.org
            |   |-- org.txt
            |   |-- user-0
            |   |   |-- a.txt
            |   |   |-- b.txt

    I am trying to list the files only inside the data directory. I am able to get the correct result using the following command in RHEL:

        find ./testdir/ -iwholename "*/data/[!/].txt"
        a.txt
        b.txt
        c.txt
        d.txt
        e.txt

    If I run the same command in Ubuntu it is not working. Can anyone please tell me why it is not working in Ubuntu?
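    As a hedged alternative (not from the original question), roughly the same files - the .txt entries sitting directly inside data - can be selected without -iwholename, which sidesteps any differences in how the two distributions' find builds interpret that pattern:

        find ./testdir/data -maxdepth 1 -type f -iname "*.txt"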

    Read the article

  • rsync invocation to replace symlinks pointing to source?

    - by bdbaddog
    Currently I'm moving a big filesystem to a new server as the original fileserver is no longer able to handle the filesystem writes. To make this quick I made symlinks at the target filesystem pointing to the original filesystem.

    Initially:

        /company/release (mountpoint of the original filesystem)

    After migration:

        /company/release.old (points to original filesystem after automount map update)
        /company/release (points to new fileserver/filesystem after automount map update)

    In /company/release there are symlinks like the following:

        /company/release/product-1.0.tar.gz -> /company/release.old/product-1.0.tar.gz
        /company/release/product-1.0 -> /company/release.old/product-1.0 (this is a tree of files)

    Using symlinks allowed me to move the writes to the new filesystem quickly. Now I'd like to slowly migrate the existing files and directories to the new filesystem. The problem I'm running into is that since the symlinks point back at the original files, rsync doesn't see any difference and so it doesn't actually copy the file(s) or directory(s) and remove/overwrite the symlinks. Is there a set of rsync flags which will do what I want?
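    One workable per-item pattern, sketched here as an assumption about the setup rather than the single flag combination the question asks for, is to drop each placeholder symlink and then copy the real tree into its place:

        rm /company/release/product-1.0
        rsync -aH /company/release.old/product-1.0/ /company/release/product-1.0/

    With the symlink removed first, rsync sees an ordinary copy job instead of a link that already resolves to the original data.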

    Read the article

  • SMB/CIFS connection: attempting to change the permissions within RHEL 5 to comply with the client's needs

    - by Skreemer
    I can get the mount to work as written in /etc/fstab:

        //pcsprdvhost.prod.tsh.mis.mckesson.com/sftphome /sftphome2 cifs username=myuser,workgroup=domain,password=mypassword,noserverinfo,uid=tmadmin,gid=tibco,nounix,file_mode=0777,dir_mode=0777 0 2

    This means that every directory under /sftphome2 looks like:

        drwxrwxrwx 1 tmadmin tibco 0 Jul 6 2010 D0000001

    When I issue:

        chown -R D0000001:D0000001_admin D0000001

    nothing happens. When I pull the uid and gid specifications out, I get the system owner/group of root:sys. What I need to be able to do is change the sub-directories under /sftphome2 to whatever owner, group (and permissions) I desire, rather than the ones that are getting specified. How do I do this?

    Read the article

  • Opensolaris, ZFS shares

    - by Random
    I've recently updated to OpenSolaris 2010.05 from 2009.06. It's currently set up to share one of the ZFS zpools to the other machines on my network. However, only a single account is able to access these shares; my other accounts get refused when attempting to connect. File permissions are all correct - when logging into the machine directly with the other logins, I can see and play with the files and directories as normal - it's just the share that gets refused. The interesting thing is, I could access the shares prior to upgrading. Nothing else was changed other than the straight upgrade. I am pretty sure there is no user-specific sharing going on. What could be preventing others from accessing this zpool?
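    A couple of read-only checks that are sometimes suggested for this kind of post-upgrade regression, sketched here with a hypothetical pool/dataset name:

        zfs get sharesmb,sharenfs tank/shared   # confirm how the dataset is actually being shared
        svcs -a | grep -i smb                   # confirm the SMB server service came back online after the upgrade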

    Read the article

  • Can I split one ethernet line coming out of my wall into multiple separate lines?

    - by Burteçin 'Turk' Sapta
    Hi all, and thanks in advance. I'll start with some background. I live in an apartment which provides internet service included in the rent. They use a company called Pavlov for the internet (http://pavlovmedia.net/). Wireless seems to be working fine, but a wired connection is at least 30% faster. An Ethernet (Cat5) outlet is built into the wall, and there is only one outlet in each room. I would like to take this one outlet coming out of the wall and split it into 4 lines - for desktop, PlayStation, TV and laptop - without losing any internet bandwidth. I have absolutely no idea whether this line is coming from a switch or a router, but I have been researching Ethernet splitters, routers, switches and hubs and haven't found a solid answer. What is the best solution for me? Thank you once again!

    EDIT: OK, this picture cleared a few things up: http://www.home-network-help.com/images/home-network-expanded.jpg - so it seems that an Ethernet switch is to Ethernet as a USB hub is to USB. What really is a 10/100Mbps network switch, and what is the cap?

    Read the article

  • What characters can safely be used for naming files on Unix/Linux?

    - by Eric DANNIELOU
    Before yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)?

    ps: I can't believe this question hasn't been asked already on this site, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).

    Edit #1: (btw, I don't use upper-case letters for file names. I don't remember why. But for a few months now, I have had production problems with upper-case letters: some OSes do not support ASCII!) Here's what happened yesterday at work: as usual, I had to create a self-signed SSL certificate. As usual, I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then came the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt and example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in Apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworkers/vendors/... could run a script using rm, find, xargs that would lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and already one answer talks about disaster.

    Edit #2: Just figured out that ":" does not need to be escaped. Does anyone know why it is not used in file names?
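    A quick way to audit an existing tree against a conservative character set (a sketch; the class below is the POSIX "portable filename character set" of letters, digits, dot, underscore and hyphen):

        find . -name '*[!A-Za-z0-9._-]*'   # prints every path whose name contains any other character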

    Read the article

  • Computer Turns on Briefly then right back off again.

    - by goddamnyouryan
    So yesterday I came home from work and went to turn my computer on... it turned on for about 5 seconds then promptly turned right back off again - before I ever saw anything on the screen. I tried again, same result. After several attempts, I've found that the length of time it stays on differs. After trying multiple times in a row, it only stays on for about 3 seconds. If I let it rest for a bit it sometimes will stay on for up to a minute (though it never boots; the screen stays black the whole time). I'm not sure what is causing this issue. I built this computer a little more than 2 years ago and this is the first issue I have ever had with it. I did all the usual checks:

    - it's not the power switch
    - the capacitors on the motherboard all seem to be in working order
    - the PSU seems to be fine, as it lights up, the fan spins, and it will sometimes stay on for about a minute

    My hope is that the thermal paste on the CPU has degraded and just needs to be re-applied. Does that seem like a reasonable assumption? I'm going to tear the thing apart and do a minimum system build when I get home, but any heads up as to what I should be looking for would be much appreciated. Any thoughts?

    Read the article

  • With Ubuntu 12.04, unlike 11.04, Wine-installed applications' start menu links are missing

    - by Ron Whites
    With Ubuntu 12.04 and Wine 1.4, unlike Ubuntu 11.04 with Wine 1.2.2, installed applications' start menu links are missing. For instance, with Ubuntu 11.04 including Wine, I can install one of our Windows applications and then go to Applications > Wine > Programs > Semantic Designs > TestCoverage > Documentation to bring up the documentation for how to run our tool. Unfortunately, with Ubuntu 12.04 the Applications menu is gone, and going to Dash I do see "Recent Apps and more apps", but my installed Wine application and its related documentation link are not shown, even though the Wine uninstaller shows them as present. I found this online suggestion and tried using the GNOME "Main Menu" tool:

    - press the Windows key to launch the Dash
    - enter "Main Menu" in the search field and open the old Edit Main Menu app
    - select the Category (aka Unity Dash filter) you want the item in
    - name the Dash/Launcher item
    - add the Command to launch said app

    With "Main Menu" I then got down to the TestCoverage Documentation entry and could see a command link in its properties:

        env WINEPREFIX="/home/sdtest/.wine" wine C:\windows\command\start.exe /Unix /home/sdtest/.wine/dosdevices/c:/users/sdtest/Start\ Menu/Programs/Semantic\ Designs/Test\ Coverage/Java\ 1.7\ Documentation.lnk

    BUT I could not execute this link to view the installed documentation. So I copied the link properties into a file, set it as executable, and ran it as a bash script, and the documentation came up! So why can't I use this link under Main Menu?

    Read the article

  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS7 on a computer running Windows 7 Home, 64-bit. Here is what I did:

    - Installed IIS 7. Accessed the server and got the IIS welcome page.
    - Created a directory named d:\MySite and copied the MVC application to it. (The MVC app is just the standard app that is created when you create a new MVC3 project in Visual Studio. It just displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site and it works fine there.)
    - Started the IIS management console. Stopped the default site. Added a new site named "MySite" with a physical directory of d:\MySite.
    - Changed the application pool named MySite to use .NET Framework 4.0, integrated pipeline.

    When I access the site in the browser I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?

    Read the article

  • Can Apache be configured to specify more than one docroot per virtualhost?

    - by syn4k
    I have a vhost which specifies:

        <VirtualHost *:80>
            DocumentRoot "/private/var/www/html/cms/sites/"
            ServerName localhost.com
        </VirtualHost>

    I would like to know if localhost.com can also point to /private/var/www/html/wordpress/. This seems like a no brainer but Apache is like black magic; these things are always possible. Anyway, I already know that I could specify a new ServerName entry and set a new docroot. The problem is, both directories need to be available as roots. If I need to provide more info, I will gladly do so.
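    Strictly speaking a VirtualHost has a single DocumentRoot, but a second tree is commonly exposed under the same ServerName with mod_alias. A sketch follows, wrapped in a shell heredoc so it can be dropped into the existing configuration; the include file name is hypothetical and the Order/Allow syntax assumes Apache 2.2:

        sudo tee /etc/apache2/extra/wordpress-alias.conf >/dev/null <<'EOF'
        Alias /wordpress "/private/var/www/html/wordpress/"
        <Directory "/private/var/www/html/wordpress/">
            Order allow,deny
            Allow from all
        </Directory>
        EOF

    The file would still need to be Include-d from httpd.conf, and this makes the second tree a sub-path (localhost.com/wordpress) rather than a true second root.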

    Read the article

  • Script apparently changing file permissions on Mac OS to 000

    - by half_bit
    I wrote a little shell script that helps with installing a web application. The script itself just downloads a zip archive, extracts it, and changes the permissions of the extracted files to the ones needed to run the webapp. The problem now is that some users reported that after running my script, the permissions of every file in their home directory, or even on their whole computer, changed to 000 (except the actual unzipped files, which do have the correct permissions). The only lines in my script actually doing IO are these:

        URL="http://foo.com/"
        FILENAME="some.zip"
        curl --silent "$URL$FILENAME" -o $FILENAME > /dev/null
        echo "Unzipping...\c"
        if unzip -oqq $FILENAME > /dev/null
        then
            chmod -R 777 app/tmp app/webroot app/Config/database* app/configuration*
            chown -R www:www *
            rm $FILENAME
            echo "\t\t\tOK"
            exit 0
        else
            echo "\t\t\tERROR"
            exit 1
        fi

    I seriously can't explain this to myself. How can this even be possible? It is entirely possible that the users accidentally ran the script in their home directory, but that still wouldn't explain why the permissions were set to 000 rather than www/777.
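    A defensive rewrite of the IO section is sketched below; it is a hardening suggestion, not a diagnosis of the 000 mystery. The idea is to fail fast and to touch only paths inside an explicit install directory instead of whatever the current working directory happens to be:

        #!/bin/sh
        set -eu                                   # abort on any error or unset variable
        INSTALL_DIR="/var/www/webapp"             # hypothetical target; adjust as needed
        URL="http://foo.com/"
        FILENAME="some.zip"
        cd "$INSTALL_DIR"
        curl --silent "$URL$FILENAME" -o "$FILENAME"
        unzip -oqq "$FILENAME"
        chmod -R 777 ./app/tmp ./app/webroot ./app/Config/database* ./app/configuration*
        chown -R www:www ./app                    # scope the chown to the unpacked tree only
        rm "$FILENAME"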

    Read the article
