Search Results

Search found 24334 results on 974 pages for 'directory loop'.

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures, and I'm trying to figure out the best way to store and serve them. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image. Inside it you can have different sizes of that same image. That folder has an ID on it, and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the picture count grows to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008 if that makes a difference. I'm not so worried about load; I can load-balance my server pretty easily and replicate the images across the servers. The nature of the site means it won't have a lot of users either, but it could have a lot of pics. Thanks. -Nate EDIT: After some thought I think I'm going to create a directory for each user under a root image folder, then keep each user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big I will expand the above method to build a bigger tree. That would take a LOT of content though.
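    A minimal sketch of that fallback bucketing scheme, assuming a Unix-like shell and hashing on the filename (the hash choice and the two-level depth are illustrative, not anything Umbraco does itself):

        #!/bin/bash
        # Hypothetical sharding sketch: derive a two-level bucket from the
        # MD5 of the filename so no single directory holds millions of entries.
        f="image123.png"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)   # e.g. "a1b2"
        dest="/media/${h:0:2}/${h:2:2}"              # -> /media/a1/b2
        mkdir -p "$dest"
        mv "$f" "$dest/$f"

    Hashing spreads names evenly, so each level tops out at a predictable 256 subdirectories instead of depending on how user names happen to be distributed.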

    Read the article

  • Mac and windows 7 file sharing specific user

    - by all-R
    Hi guys, I'm trying to share a specific directory with my Windows 7 computer, but I want it to use a specific user that I created on my Mac to connect to it. I saw this tutorial: http://www.trickyways.com/2010/06/how-to-access-mac-files-from-windows-7/ which is exactly what I want to do, but it isn't working. For some reason, Windows 7 never prompts me for a username/password when I try to connect to my Mac. On top of that, when I set the permission "No Access" for the "Everyone" user on my Mac, my Windows computer simply doesn't see the directory. If I set the permission to "Read/Write" or "Read only" it works. I simply don't want everyone in my workgroup to be able to read my files. I want to create specific users on my Mac and share with just the people I choose... Any thoughts?

    Read the article

  • How to compile VTK on a Mac Air

    - by Ilan Tal
    I am trying to compile VTK on a Mac after successfully doing so on Linux. I am using CMake 2.8.9, and for a long time I had a problem where it couldn't find jni.h. If anyone else hits this, the answer is to have JAVA_INCLUDE_PATH2 point to "-framework JavaVM". After that it successfully finds JNI and does a "Configure" followed by "Generate". According to the instructions I have, the next step should be easy: in a terminal, run "make -j 2". I get the message "-bash: make: command not found". Strange... I am in the vtkBuild directory, which seems the most reasonable place. I even tried the parent directory, where I have the VTK source, but that didn't work either. I ran make under Linux and had no trouble, so what do I need to do on the Mac? Thanks, Ilan
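    If the missing piece is simply a compiler toolchain (a guess, since CMake's Configure and Generate both succeeded), installing Apple's command-line developer tools would put make on the PATH. On OS X 10.9 and later that is one command; on older releases the same tools come from Xcode's "Command Line Tools" package:

        xcode-select --install    # prompts to install the command-line tools
        cd vtkBuild && make -j 2  # then re-run the build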

    Read the article

  • Unexpected Access Denied error while accessing EFS encrypted file

    - by pozi
    I am getting an Access Denied error when I try to access some files. The ACL is OK, all ACEs are inherited, I have full access to these files and I am the owner of them. The ACEs are exactly the same as those on other files in the same directory which are accessible without problems (double-checked through the Security tab on file properties and the cacls command). The files are EFS encrypted; however, I should have access to them, because they were encrypted by the same user account I am trying to access (decrypt) them with. The EFS settings are exactly the same as those on other files in the same directory which are also encrypted and accessible without problems (double-checked through the cipher command and the efsdump command (SysInternals)). In the ProcMon utility (SysInternals) I get an Access Denied entry while accessing these files. The files are not in use (locked), checked with the Unlocker utility. Until now I thought I understood NTFS ACLs and EFS mechanisms fairly well, but now I am completely stuck and I do not know how to access these files. Any thoughts?
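    One more check worth running (hedged: the /c switch exists in the cipher shipped with Vista and later; on XP the resource-kit efsinfo tool answers the same question): list exactly which certificate each file is encrypted to, since a reissued or recovered certificate with the same user name but a different thumbprint produces exactly this symptom.

        cipher /c lockedfile.ext
        rem Compare the thumbprint shown against the certificates in the
        rem user's Personal store (certmgr.msc); if it matches none of them,
        rem the file is keyed to an old certificate despite the name matching.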

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called the "Hamachi bridge", I believe), then rebooting the server. This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way? I have tried a "hub-and-spoke" configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (I could not log on using domain user accounts at the remote site if they were not already cached on that machine). I only tried a "gateway" network because LogMeIn support told me: "If you can get the Active Directory to work it would only be through a Gateway network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IPs/DNS settings you prefer and at that point AD would be able (all things being equal) to communicate to the client node when it attaches. We do not have any Active Directory configuration info as that's outside the scope of our support. I hope this helps." It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is going on here.

    Read the article

  • How do I tar dot files but not dot directories

    - by bjackfly
    The following tar command will exclude all dot files and dot directories:

        tar -cvzf /media/bjackfly/bkup/bkup.gz --exclude '.*' --one-file-system /home/bjackfly

    In my case I want the dot files in the home directory (.vimrc, .bashrc, etc.) to be backed up, but not the dot directories (.config, .cache, .eclipse, etc.). Any Linux gurus with a command for this? Or do I need to run a find into a tar, or do two different tar commands (one for dot files in the home directory and one for everything else), which is non-ideal?
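    One possible approach (a sketch assuming GNU find and GNU tar): enumerate the top level of the home directory yourself, filter out only directories whose names begin with a dot, and let tar recurse into whatever survives:

        cd /home/bjackfly
        find . -mindepth 1 -maxdepth 1 ! \( -type d -name '.*' \) -print0 \
            | tar -cvzf /media/bjackfly/bkup/bkup.gz --null -T - --one-file-system

    Top-level dot files pass the filter, top-level dot directories are dropped, and ordinary directories are still archived recursively, so a single tar run covers everything.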

    Read the article

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e. Here are the error messages. And here is my crontab file under `/usr/bin/':

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        30 * * * *  root  rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/
        #

    I notice that the last task ('rsync') NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run. Edit 1: In light of Masi's comment, I commented out lines 17 thru 25 of my crontab file with #. Now when I run sudo crontab -e, all I get is:

        /usr/bin/crontab: 11: 17: not found
        /usr/bin/crontab: 12: 25: not found
        (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory
        (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory

    What in the world?
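    A hedged observation: the file quoted above is in the system crontab format, whose sixth column is a username, but per-user crontabs installed with crontab -e take no username column at all. If that content was pasted into a user crontab, the rsync entry would need to lose the root field, e.g.:

        # user crontab (crontab -e): minute hour dom mon dow command
        30 * * * * rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/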

    Read the article

  • Remove IP address from the URL of website using apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it is happily serving my pages if I type domain.com in the URL. However, when a page is served, the URL resolves to 1.1.1.10/directory/page.php. Using Apache I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times while experimenting with the configuration. The route53 entries I have are:

        domain.com  A    1.1.1.10
        domain.com  NS   ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com  SOA  ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
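    If the IP is creeping in because Apache itself generates self-referential URLs (directory redirects and the like) from the address the client connected to, a hedged tweak is to force it to build them from ServerName instead; this only helps if Apache, rather than the application's own links, is producing the IP-based URLs:

        <VirtualHost 1.1.1.10:80>
            ServerName domain.com
            UseCanonicalName On   # self-referential URLs use ServerName,
                                  # not the host the client supplied
            # ... rest of the vhost as above ...
        </VirtualHost>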

    Read the article

  • Install proprietary NVIDIA drivers on 14.04 (Steam segmentation issue)

    - by allthosemiles
    Recently, I finally got the official drivers for my NVIDIA 560 Ti card installed on Ubuntu 14.04 (hooray). However, I started looking into installing Steam, and I'm getting segmentation errors when I try to run the software. I tried installing 32-bit libs and it seemed like they weren't available or were already installed. Upon further investigation, I found that a suggested solution is to install the proprietary drivers, install Steam, then switch back to the other drivers. I'm not really sure what "proprietary drivers" are, in all honesty. Has anyone gone through this process who could provide some insight here? (I installed the official 64-bit driver from the NVIDIA site for my 560 Ti, just for reference, and the Ubuntu version installed is 64-bit as well.) Update: This is the error text I get when trying to run Steam after installing it via the Ubuntu store:

        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  3943 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"
        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
        Restarting Steam by request...
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME has been set by the user to: /home/dbrewer/.steam/ubuntu12_32/steam-runtime
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        /home/dbrewer/.steam/steam.sh: line 755:  4066 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$PLATFORM/$STEAMEXE" "$@"

    What I get when I run "steam --reset":

        mv: cannot stat ‘/home/dbrewer/.steam/registry.vdf’: No such file or directory
        Installing bootstrap /home/dbrewer/.steam/bootstrap.tar.xz
        Reset complete!
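    For background (hedged, since the post doesn't say exactly which installer was used): on Ubuntu, "proprietary drivers" usually means NVIDIA's closed-source driver as packaged in the Ubuntu repositories, as opposed to both the open-source nouveau driver and the .run installer downloaded from nvidia.com. Mixing the nvidia.com installer with repository packages is a common source of library mismatches; one conventional way to move to the packaged driver on 14.04 would be:

        sudo apt-get purge 'nvidia-*'       # clear out packaged driver bits first
                                            # (also run the .run file's --uninstall
                                            # if that is how the driver went on)
        sudo apt-get install nvidia-331     # packaged proprietary driver for trusty
        sudo apt-get install libgl1-mesa-glx:i386   # 32-bit GL libs Steam links against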

    Read the article

  • Understanding the Linux Root

    - by Zac
    I've been using Linux (Ubuntu) for about 2 weeks now and am still struggling with some basic concepts surrounding the root user: (1) Some terminal operations (such as making subdirectories inside an FHS directory such as /opt) require me to prefix the command with sudo - why? I guess what I'm choking on is: if I'm already logged in as a valid system user, why do I have to be a superuser/root in order to modify things that the sysadmin has already deemed me worthy of accessing? (2) Is there a GUI (GNOME, KDE) equivalent to sudo? Is there a way to assume a superuser role through a graphical context, rather than from inside a new shell? (3) I can't access the /root directory logged in as myself... but I installed the system to begin with and was never asked to create a root account! How do I log in as root and gain access to /root?!? Thanks for all feedback & input!
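    Hedged pointers for the two concrete parts: Ubuntu installs with the root account's password locked, so all root-level work is meant to go through sudo and its graphical cousins (the commands below are the usual ones for this era of Ubuntu/GNOME):

        gksudo nautilus   # (2) run a graphical app with root privileges
                          #     (gksu package; KDE uses kdesudo)
        sudo -i           # (3) open an interactive root shell; from there
                          #     /root is yours to browse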

    Read the article

  • Configuring Weblogic Server 10.3.6 from 32-bit mode to 64-bit mode

    - by Ekta Malik
    This post pertains to the configuration of WebLogic Server from 32-bit mode to 64-bit mode on Solaris OS - just in case you have WLS 10.3.6 running in 32-bit mode while the JDK being used is installed for 64-bit mode. [On Solaris OS, a 64-bit JDK installation comprises installing the 32-bit JDK followed by a patch for the 64-bit JDK.] Verification of the mode being used: one can verify the mode of WebLogic Server in the following ways. (1) Check the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, where $MIDDLEWARE_HOME refers to the install directory of Middleware. Look for the patterns SUN_ARCH_DATA_MODEL and JAVA_USE_64BIT in the file. For 32-bit mode, the parameters would appear as shown below:

        SUN_ARCH_DATA_MODEL="32"
        JAVA_USE_64BIT=false

    (2) Check the server console logs for which JDK is being used during start-up. (3) Check which JDK is used by the running process of WebLogic Server. Configuration steps: take a backup of the commonEnv.sh script located at $MIDDLEWARE_HOME/wlserver_10.3/common/bin, then modify the script so the two parameters read 64 and true respectively for 64-bit mode:

        SUN_ARCH_DATA_MODEL="64"
        JAVA_USE_64BIT=true

    Restart the WebLogic server. One can confirm that the JDK being used is 64-bit by looking at the WebLogic console logs during server start-up or by looking at the running process.
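    A hedged shell sketch of the backup-and-edit step (assumes GNU sed is available; with stock Solaris sed, make the same two changes in a text editor instead):

        cd $MIDDLEWARE_HOME/wlserver_10.3/common/bin
        cp commonEnv.sh commonEnv.sh.bak
        sed -i 's/SUN_ARCH_DATA_MODEL="32"/SUN_ARCH_DATA_MODEL="64"/' commonEnv.sh
        sed -i 's/JAVA_USE_64BIT=false/JAVA_USE_64BIT=true/' commonEnv.sh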

    Read the article

  • Copying files to zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is full of films and TV shows currently. I thought zfs sounded good, so after installing the ppa I created a raidz zpool:

        sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd

    and set the mountpoint to /mnt/store:

        sudo zfs set mountpoint=/mnt/store /store

    I checked it was successful - I think it was:

        sudo zfs list
        NAME    USED  AVAIL  REFER  MOUNTPOINT
        store   266K  7.16T   170K  /mnt/store

    Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered:

        sudo cp -R * /mnt/store
        cp: cannot create directory `/mnt/store/media': No space left on device

    It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. I will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
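    A couple of hedged sanity checks before retrying: confirm the dataset is actually mounted at the target, and that the target path isn't silently sitting on the 250GB root disk (which would explain "No space left on device" coming from a 7 TB pool):

        zfs get mounted,mountpoint store
        df -h /mnt/store   # should report ~7.2T from the pool,
                           # not the root filesystem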

    Read the article

  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux for uploading files so that they can be downloaded from the web. I was recently granted SSH access for extra permissions and figured that it would be quicker to download files directly to the server, instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file over SSH to the server and then placed it in the public_html directory, it was not visible from the web. The permissions (from SSH and the FTP client) were the same as all the other files that are visible, but it was not visible in the directory listing, and if I typed the filename into my browser I would get a 403 error. Obviously, when I FTP a file to the server, something else happens that makes it web-visible, something I am not currently privy to. What am I missing that is causing the file to be invisible from the web?
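    A hedged place to start: ownership, rather than the permission bits, is the usual difference between an FTP-uploaded file and one fetched in an SSH session (the FTP daemon may write files as a different user, and suEXEC-style Apache setups refuse to serve files with the wrong owner). Comparing a visible file against the invisible one would confirm or rule that out:

        ls -l public_html/visible-file.zip public_html/invisible-file.zip
        # filenames are hypothetical; if owner or group differ, adjust the
        # new file to match the working ones, e.g.:
        chmod 644 public_html/invisible-file.zip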

    Read the article

  • Cron not able to successfully change background

    - by Solenoid
    I'm running 12.04 with a custom XML background (a modification of Day of Ubuntu) that changes based on the time of day. I've noticed that there's a significant delay between when the changes are scheduled to take place in the XML file and when they actually show up on the background. I've also noticed that when I resume from suspend I don't get the correct background image either. I've found that cycling the wallpaper manually will fix this, and I've written a script to automate the process. If I execute the script manually it works fine. However, when I schedule the script to run in cron, cron doesn't change the background. To make sure that the script was being run properly by cron, I had it create a directory in my home folder after changing the background, and the directory is created successfully, so I know cron is running and executing the script. My script:

        #!/bin/bash
        sleep 5
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU2.xml
        sleep 1
        gsettings set org.gnome.desktop.background picture-uri file:///home/zak/Pictures/Wallpaper/DOU.xml
        sleep 1
        mkdir /home/zak/iscronworking
        exit

    Is cron just not able to access gsettings? The job is on my user crontab, so it shouldn't be running as root. Alternately, is there any way to make Precise play nicely with XML wallpaper?
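    A hedged guess at the cause: gsettings writes go through the session's D-Bus, and cron jobs don't inherit DBUS_SESSION_BUS_ADDRESS, so the calls can quietly go nowhere. A common workaround is to borrow the address from the running GNOME session at the top of the script (a sketch; assumes a single session for the user):

        PID=$(pgrep -u "$LOGNAME" gnome-session | head -n 1)
        export "$(tr '\0' '\n' < /proc/$PID/environ | grep ^DBUS_SESSION_BUS_ADDRESS=)"
        # ...then the gsettings calls as before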

    Read the article

  • A specific user is unable to log in to vsftpd

    - by HackToHell
    I am setting up a new user - let his name be ftpguy. He has access to only one directory, /var/www/xxx. I have already chowned the directory so that he has write and read privileges. The user is also unable to log in via ssh, as I have disabled that by changing his shell to /sbin/nologin. Also, in the vsftpd config, I have enabled chroot_local_user. Now whenever I log in over ftp, I get an auth error:

        Connect socket #1008 to xxxxxxxx, port 21...
        220 Welcome to blah FTP service.
        USER ftpguy
        331 Please specify the password.
        PASS **********
        530 Login incorrect.

    I changed the password to something different several times, using the passwd command; nothing changes, I still get the above error. However, I am able to log in to my ftp server with my own ssh credentials without any problems. (I do not use a key.)
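    A hedged suspicion given the /sbin/nologin shell: on many distributions vsftpd authenticates through PAM, and the PAM stack includes pam_shells, which rejects any account whose shell is not listed in /etc/shells. Checking for (and if necessary adding) the nologin shell is a quick test:

        grep nologin /etc/shells \
            || echo '/sbin/nologin' | sudo tee -a /etc/shells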

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know (a) why the file system is modified at all, and (b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris
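    A hedged explanation: with -c, e2fsck runs a badblocks scan and then writes the result into the filesystem's bad-blocks inode, and any write, including storing an empty list, makes it report FILE SYSTEM WAS MODIFIED and exit non-zero. The list it actually recorded can be inspected afterwards:

        dumpe2fs -b /dev/sdx1   # prints the blocks recorded as bad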

    Read the article

  • How to install SIP+PyQt with apt-get + pip + virtualenv?

    - by kjo
    [I originally posted this question, under a different title, on StackOverflow (here), but later I realized that my problem is very specific to apt-get, hence I am re-posting it here. Sorry for the duplication.] I'm trying to install PyQt on Ubuntu (and within a virtualenv). The list of obstacles I'm dealing with is far too long to include here, but the one I'm currently trying to get past is this:

        % workon myvenv
        (myvenv)% cd ~/.virtualenvs/myvenv/build/pyqt
        (myvenv)% python ./configure.py
        Traceback (most recent call last):
          File "./configure.py", line 32, in <module>
            import sipconfig

    OK, so let's install sipconfig...

        (myvenv)% pip install SIP
        Downloading/unpacking SIP
          Downloading sip-4.14.8-snapshot-02bdf6cc32c1.zip (848Kb): 848Kb downloaded
          Running setup.py egg_info for package SIP
            Traceback (most recent call last):
              File "<string>", line 14, in <module>
            IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        Complete output from command python setup.py egg_info:
        Traceback (most recent call last):
          File "<string>", line 14, in <module>
        IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'
        ----------------------------------------
        Command python setup.py egg_info failed with error code 1 in /home/yt/.virtualenvs/myvenv/build/SIP
        Storing complete log in /home/yt/.pip/pip.log

    The only recipe I've found so far for installing SIP is this:

        % python configure.py
        % make
        % sudo make install

    ...but this recipe goes against my policy of doing all my Ubuntu installations through apt-get (or through pip in the case of Python modules). Is there some way that I can install SIP with apt-get (and possibly pip)?
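    A hedged option: both SIP and PyQt4 are packaged in the Ubuntu repositories, though as system-wide packages; a virtualenv can then see them if it is created with access to the system site-packages:

        sudo apt-get install python-sip python-qt4
        virtualenv --system-site-packages myvenv   # new venv that can import PyQt4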

    Read the article

  • Permission denied installing CiviCRM on Joomla

    - by Tim
    Dear All, I am trying to install CiviCRM on a Joomla 1.5.17 web server running Ubuntu 9.10. Uploading the package to the tmp directory in /var/www/[site name]/tmp and installing produces these errors:

        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: fopen(/var/www/trbcp/administrator/components/com_civicrm/civicrm/templates/CRM/common/civicrm.settings.php.tpl) [function.fopen]: failed to open stream: Permission denied in /var/www/trbcp/libraries/joomla/filesystem/file.php on line 240
        Warning: include_once(/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php) [function.include-once]: failed to open stream: Permission denied in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: include_once() [function.include]: Failed opening '/var/www/trbcp/administrator/components/com_civicrm/civicrm.settings.php' for inclusion (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 115
        Warning: require_once(DB.php) [function.require-once]: failed to open stream: No such file or directory in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140
        Fatal error: require_once() [function.require]: Failed opening required 'DB.php' (include_path='.') in /var/www/trbcp/administrator/components/com_civicrm/configure.php on line 140

    Initially I got a permission denied error and thought that Joomla did not have permissions to all its directories, but looking at Help - System Information, all the necessary directories are writable. I then decided to chmod 777 all the directories and try again, but it still fails. Looking at the directories afterwards, it seems that the new directories being created are not being created 777. By changing them I can get at least one step further before the error appears again. My question is: does anyone know how to get round this? I am thinking that the new directories being created will require sudo privileges for mv and create actions to be carried out, hence the permission denied errors. Can this be configured in Joomla? Or is there a way to specify that new directories created in /var/www/[site name] get 777 by default? Any help greatly appreciated! EDIT: P.S. if anyone could give me a clue as to how the insert code feature works as well, that would be great! It might make this post a bit more readable! EDIT 2: Well, I have had a bash at changing the permissions and ownership. I ran:

        sudo chown -R www-data:www-data /var/www/trbcp

    I then tried changing the whole /var directory (insecure, I know, but this is a test and dev server for me to find my feet on) to 777 and am still getting permission errors. It seems to be an error opening a stream? I'm not a php guy so I'm not sure what that is, but could it be that the permissions php scripts run under need to change? Any thoughts greatly appreciated.
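    A hedged sketch for the ownership angle: if Apache runs as www-data (the Ubuntu default), handing the web root to that user and setting the setgid bit on directories keeps files that PHP creates under the same group, which usually removes the need for 777 anywhere:

        sudo chown -R www-data:www-data /var/www/trbcp
        sudo find /var/www/trbcp -type d -exec chmod g+ws {} \;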

    Read the article

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory start having an impact on reads to those files? If so, what factors would exacerbate or help the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance on some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.
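    One well-documented factor (hedged as to how much of the slowdown it explains here): NTFS generates an 8.3 short name for every file, and short-name generation gets markedly slower in directories holding tens of thousands of files with similar long names. Where no legacy software needs the short names, it can be disabled:

        fsutil behavior set disable8dot3 1
        rem takes effect after a reboot, and only for newly created files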

    Read the article

  • Subversion error: (405 Method Not Allowed) in response to MKCOL

    - by Sergio del Amo
    I am getting the following error while trying to commit a new directory addition:

        svn: Commit failed (details follow):
        svn: Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '....

    I have never seen this error before. Can someone help me? Solution: I managed to solve the problem. It turned out a folder with the same name as the new one already existed in the repository, so: (1) delete the parent directory of the folder giving the problem; (2) do an SVN Update, which brings down the pre-existing folder; (3) delete that folder; (4) SVN Commit; (5) copy the new folder back in, schedule it for addition, and SVN Commit.
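    The same workaround expressed as shell commands (a sketch; the folder name is hypothetical):

        svn update                          # brings down the same-named folder
                                            # that already lived in the repository
        svn delete conflicting-folder
        svn commit -m "Remove old copy of folder"
        cp -r /path/to/new/conflicting-folder .
        svn add conflicting-folder
        svn commit -m "Re-add folder"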

    Read the article

  • How to create a folder in the SharePoint 2010 root folder and set permissions on it

    - by ybbest
    If you need to create a folder in the SharePoint 2010 root folder and set permissions on it, here is a piece of code that does it. In the script, I create a folder called temp in the LOGS folder under the SharePoint 2010 root, then grant read/write access to the Windows group WSS_WPG and full access to the group WSS_ADMIN_WPG for that folder.

        $Folder = New-Item "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\temp" -Type Directory -Force
        $acl = Get-Acl $Folder
        ## The following line is commented out; if you would like to break the
        ## permission inheritance from the parent folder, uncomment it.
        #$acl.SetAccessRuleProtection($True, $False)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("WSS_ADMIN_WPG", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("WSS_WPG", "Modify", "ContainerInherit, ObjectInherit", "None", "Allow")
        $acl.AddAccessRule($rule)
        Set-Acl $Folder $acl

    References:
        http://technet.microsoft.com/en-us/library/ff730951.aspx
        http://msdn.microsoft.com/en-us/library/tbsb79h3.aspx
        http://blogs.technet.com/b/josebda/archive/2010/11/12/how-to-handle-ntfs-folder-permissions-security-descriptors-and-acls-in-powershell.aspx
        http://chrisfederico.wordpress.com/2008/02/01/setting-acl-on-a-file-or-directory-in-powershell/

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes: "Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. me@example.com will have a mail directory of /var/vmail/example.com/me." But when he gives instructions about configuring Postfix Admin, he suggests this be contained in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/user@domain.tld
        //   NO:  /usr/local/virtual/user@domain.tld
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
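    For what it's worth (hedged, going by Postfix Admin's documented options rather than the tutorial itself): a per-domain layout like /var/vmail/example.com/me would normally correspond to

        $CONF['domain_path'] = 'YES';
        $CONF['domain_in_mailbox'] = 'NO';  // mailbox part without the domain repeated

    so the excerpt as quoted does look inconsistent with the layout described earlier in the tutorial.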

    Read the article

  • Missing APR on apache2 ./configure

    - by arby
    I want to build the latest stable version of apache2. I downloaded the source and put APR & APR-util in the srclib folder, then changed directories to ./srclib/apr and ran:

        ./configure --prefix=/usr/local/apr
        sudo make
        sudo make install

    This seemed to install APR OK, but when I run ./configure from the apr-util directory, I receive the error:

        configure: error: APR could not be located. Please use the --with-apr option.

    Using ./configure --prefix=/usr/local/apr-util --with-apr=/usr/local/apr, the error becomes:

        checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.

    Why can't it find APR?
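    A hedged check, based on the hint in the error message itself: --with-apr can also point directly at the apr-1-config script, and verifying that the script exists confirms the APR install really landed under the prefix:

        ls /usr/local/apr/bin/apr-1-config   # should exist after make install
        cd srclib/apr-util
        ./configure --prefix=/usr/local/apr-util \
                    --with-apr=/usr/local/apr/bin/apr-1-config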

    Read the article

  • Separate php.ini file for each Apache virtual host?

    - by Calvin L
    Is it possible to have a separate php.ini file that overrides the default php.ini file for each virtual host? I'm running Apache/2.2.14 and PHP 5.3.2-1. For example, I have several vhosts pointing to domains in my /var/www/ directory:

        /var/www/website1.com
        /var/www/website2.com

    What I'd like is to be able to place a custom php.ini file in each directory that would override the default values only for that vhost, but keep the original defaults if a value isn't specified:

        /var/www/website1.com/htdocs/
        /var/www/website1.com/php.ini

    EDIT: I found more info on the topic here for those interested: http://serverfault.com/questions/34078/how-do-i-set-up-per-site-php-ini-files-on-a-lamp-server-using-namevirtualhosts
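    One common pattern (hedged: this applies when PHP runs as mod_php; CGI/FastCGI setups use per-site ini files instead): override individual directives inside each VirtualHost rather than swapping the whole php.ini:

        <VirtualHost *:80>
            ServerName website1.com
            DocumentRoot /var/www/website1.com/htdocs
            php_admin_value memory_limit 64M    # locked: .htaccess cannot override
            php_value upload_max_filesize 8M    # overridable per directory
        </VirtualHost>

    Anything not set here keeps its value from the global php.ini, which matches the "defaults unless specified" behaviour asked about.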

    Read the article
