Search Results

Search found 18333 results on 734 pages for 'temporary directory'.


  • Why is it good to have website content files on a drive other than the system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would gain by moving all website content files from the default inetpub directory (C:) to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IUSR/IIS_IUSRS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what I would gain. The environment looks like this:

        VMware, Windows Server 2008 R2 (64-bit), IIS 7.5
        C:\inetpub\site1
        C:\inetpub\site2

    Also, as this article (moving the iis7 inetpub directory to a different drive) points out, I'm not sure the migration is worth the trouble: "Please be aware of the following: Windows servicing events (i.e. hotfixes and service packs) would still replace files in the original directories. The likelihood that files in the inetpub directories have to be replaced by servicing is low, but for this reason deleting the original directories is not possible."
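
    If you do decide to relocate a site, a minimal sketch of the move (assuming the site names above; run from an elevated prompt and test on one site first):

        rem copy content with ACLs intact
        robocopy C:\inetpub\site1 D:\wwwroot\site1 /E /COPYALL
        rem point the IIS application at the new path
        %windir%\system32\inetsrv\appcmd.exe set vdir "site1/" /physicalPath:"D:\wwwroot\site1"
        rem make sure the IIS built-in group can read the new location
        icacls D:\wwwroot\site1 /grant "IIS_IUSRS:(OI)(CI)RX"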

    Read the article

  • How to troubleshoot whether a zip file is invalid or just too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file 2GB in size and I get the following error:

        unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
        End-of-central-directory signature not found.  Either this file is not
        a zipfile, or it constitutes one disk of a multi-part archive.  In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of CLTE_C_08.zip or
                CLTE_C_08.zip.zip, and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say this error means the file is too big, others say the file is corrupt, others say it may not be a unix archive. So my question: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
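
    A sketch of how one might tell the two cases apart (assuming the Info-ZIP tools):

        # test the archive's integrity; a corrupt file fails here too
        unzip -t CLTE_C_08.zip
        # check what the file actually is (zip, gzip, tar, ...)
        file CLTE_C_08.zip
        # check the installed unzip version
        unzip -v | head -1

    Older Info-ZIP releases (before unzip 6.0) lack Zip64 support, so archives larger than 2GB can produce exactly this end-of-central-directory error even when the file is fine; in that case upgrading unzip or extracting with 7-Zip (7za x CLTE_C_08.zip) is a common workaround.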

    Read the article

  • Apache + Plesk vhost problem: .htaccess ignored!

    - by DaNieL
    Hi guys, I have a problem with a simple Apache configuration. When a user asks for https://mydomain.com I have to redirect to https://www.mydomain.com, because my https certificate is only valid for the domain with www. I created vhost.conf in my /var/www/vhosts/mydomain.com/conf/ directory, containing:

        <Directory /var/www/vhosts/mydomain.com/httpsdocs>
            AllowOverride All
        </Directory>

    And my .htaccess file in /var/www/vhosts/mydomain.com/httpsdocs/ is:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^mydomain\.com
        RewriteRule ^(.*)$ https://www.mydomain.com/$1 [R=301,L]

    But the .htaccess seems to be completely ignored. Any idea?
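
    Two Plesk-specific details worth checking (a sketch; the websrvmng path and flags vary by Plesk version, so treat them as assumptions): the HTTPS side of a domain reads conf/vhost_ssl.conf rather than conf/vhost.conf, and custom vhost files are only picked up after regenerating the web server configuration:

        # place the <Directory> block in the SSL-specific include
        cp /var/www/vhosts/mydomain.com/conf/vhost.conf /var/www/vhosts/mydomain.com/conf/vhost_ssl.conf
        # tell Plesk to rebuild the Apache config for the vhost, then reload
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=mydomain.com
        /etc/init.d/httpd reload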

    Read the article

  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off a USB stick, and it seems the right way to do that is via Tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository and then ran the 'install' script (building it is apparently not an option, since the build script just tries to install Windows things), but qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down, and let's just say that if there's a way to do that easily, I ain't seein' it. So then I downloaded the prebuilt Linux binary, the most recent of which is tuxboot-linux-25. Trying to run it fails because libpng12.so.0 isn't found. OK, so I installed that via instructions I found on the web but which Firefox seems to have already deleted from my history (yay!). Then I added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened: 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless: download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
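
    A sketch of how one might narrow this down (assuming a 64-bit Ubuntu and the binary name above):

        # see which libraries the binary actually resolves, and which are missing
        ldd ./tuxboot-linux-25 | grep 'not found'
        # a 32-bit binary will not use a 64-bit libpng12; check both
        file ./tuxboot-linux-25
        file /usr/local/lib/libpng12.so.0
        # if /usr/local/lib was added to the search path, re-run the cache update
        sudo ldconfig

    A common cause of "library is in ldconfig -p but still not found" is an architecture mismatch: a 32-bit executable needs a 32-bit libpng12 (on Ubuntu, the libpng12-0:i386 package, where it is still available).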

    Read the article

  • Tried to install some software; it says some packages are damaged and cannot fix them

    - by lempira
    So, I go to the Ubuntu Software Center and as soon as it opens, a window pops up with the following text: "Items cannot be installed or removed until the package catalog is repaired. Do you want to repair it now?" I click the "Repair" button, then a new window pops up: "Package operation failed. The installation or removal of a software package failed." Clicking the "Details" button returns the following:

        installArchives() failed: Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        Can't exec "locale": No such file or directory at /usr/share/perl5/Debconf/Encoding.pm line 16.
        Use of uninitialized value $Debconf::Encoding::charmap in scalar chomp at /usr/share/perl5/Debconf/Encoding.pm line 17.
        Preconfiguring packages ...
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (2)
        dpkg: warning: 'ldconfig' not found in PATH or not executable.
        dpkg: error: 1 expected program not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.

    What should I do?
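
    Both dpkg errors point at root's PATH missing /sbin (where ldconfig lives), not at damaged packages. A sketch of a fix (assuming a standard Ubuntu layout):

        # become root with a clean login environment
        sudo -i
        # check what root's PATH actually contains
        echo $PATH
        # restore a sane PATH for this session, then let dpkg finish its work
        export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        dpkg --configure -a
        apt-get install -f

    If that succeeds, the permanent fix is finding what clobbered PATH (often a broken line in /root/.bashrc, /root/.profile, or /etc/environment).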

    Read the article

  • 389 DS Architecture for Multiple Sites

    - by Kyle Flavin
    I'm looking to deploy 389 Directory Server in my environment to replace an existing iPlanet installation. I would be using it primarily to store user account data for authentication purposes. I have two physically separate data centers that I would like to share the same directory tree. My initial thinking is to set up 389 DS as follows:

        - a master/consumer pair in data center A
        - a master/consumer pair in data center B
        - a replication agreement between both masters, to mirror the directory tree in both environments

    Does this sound like a reasonable approach? Is there a better way to do it (e.g., four masters)? Is there documentation for best practices when setting up 389 DS in situations such as this? Thanks.
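
    For reference, each cross-site link in that design is one replication agreement entry under cn=config. A minimal sketch (suffix, hostnames, and bind DN here are hypothetical, not from your environment):

        dn: cn=to-dc-b,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
        objectClass: top
        objectClass: nsds5replicationagreement
        cn: to-dc-b
        nsds5replicahost: master-b.example.com
        nsds5replicaport: 389
        nsds5replicabinddn: cn=replication manager,cn=config
        nsds5replicabindmethod: SIMPLE
        nsds5replicacredentials: secret
        nsds5replicaroot: dc=example,dc=com

    Loaded with ldapmodify -a -D "cn=Directory Manager" -W -f agreement.ldif on the data center A master, with a mirror-image agreement on the B master. Two masters plus a local consumer per site is a standard 389 DS topology; four masters mainly buys write availability during a master outage, at the cost of twice the agreements to monitor.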

    Read the article

  • Apache: Assign SSL server / client certs to directories

    - by Daniel Amaya
    I have multiple directories on my system, e.g.:

        /var/www/dir1
        /var/www/dir2
        /var/www/dir3

    What I'd like to do is generate a server/client SSL certificate pair for each directory, and then set up each directory such that the client cert must match the server's in order to access it. So if someone has the client cert for /var/www/dir2 and tries to access /var/www/dir1, they should be unable to do so, since those directories use different certs. Each of these directories is hosted on the same domain (i.e., domain.com/dir1, domain.com/dir2). The problem is that I am not exactly sure how to accomplish this in Apache. (Also, I don't really need domain.com itself to require SSL, but I do want the directories to require it.)
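
    One way to sketch this with mod_ssl (assuming one CA per directory, which is what actually partitions the clients; file paths and CA names are hypothetical):

        SSLCACertificateFile /etc/ssl/all-dir-cas.pem
        <Directory /var/www/dir1>
            SSLRequireSSL
            SSLVerifyClient require
            SSLVerifyDepth 1
            # only clients whose cert was issued by dir1's CA get in
            SSLRequire %{SSL_CLIENT_I_DN_CN} eq "dir1-ca"
        </Directory>

    Per-directory SSLVerifyClient forces a TLS renegotiation the first time such a directory is hit, which some older clients handle badly; issuing each directory's client certs from its own CA (checked via SSL_CLIENT_I_DN_CN above) keeps one vhost-wide handshake config while still separating access.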

    Read the article

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, all uploaded files (flash video, images, etc.) live in the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my fast main drive (a 15k rpm SCSI drive). I have another drive with much more capacity, but spinning at 10k rpm (so still fast, but not as good for random reads/writes as the main drive); that entire drive is mounted at /backup, and right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
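
    Apache will follow the link as long as Options FollowSymLinks (or SymLinksIfOwnerMatch) is in effect for the docroot, and PHP's filesystem calls follow symlinks transparently. A sketch of the move (paths as above):

        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files
        # confirm the link and the target's permissions
        ls -ld /backup /backup/files

    One caveat: the web server's user needs execute (traverse) permission on /backup itself, not just on /backup/files.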

    Read the article

  • Access a Plesk website before propagation?

    - by RCNeil
    My web host uses Plesk, and I want to know if there is any way to access and view a website (with PHP and other processes functional) before the domain name has propagated. I have found countless forums on this, but they are all pretty old (circa '01-'04) and involve either tricking your localhost, or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, with the PHP processes carrying out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com The only solution I could think of is storing the site in a new directory of an already-functional site, then setting up databases and testing the site once it's complete. Once tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, set up new databases, and then propagate the domain name. I can't believe that Plesk v10+ still does not have a site preview method that includes PHP, JS, and Flash ability.
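
    The usual trick is to skip DNS locally rather than on the server: point the domain at the new server's IP in your own hosts file, and the full vhost (PHP included) is reachable under its real name before propagation. A sketch (the IP is hypothetical):

        # Linux/Mac: /etc/hosts   Windows: C:\Windows\System32\drivers\etc\hosts
        203.0.113.10  mydomain.com www.mydomain.com

    Remove the line once DNS has propagated; only your machine sees the override.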

    Read the article

  • Delete directories recursively with the ftp command in Bash

    - by Fake4d
    I have a problem with my infrastructure here. I am in a closed DMZ and have to access an FTP server in another DMZ from a headless SUSE Linux 10.1 box, so I think all I have is the ftp command. But I have to delete a directory with about 100 subdirectories and endless files in it. When I type del directory it returns "It's not empty", so I would have to delete each subdirectory and file manually. Oh please tell me a way to do this automatically :)
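
    Plain ftp has no recursive delete, so the options are either a smarter client or generated commands. A sketch with lftp, if it can be installed (hostname and path are hypothetical):

        lftp -u user,pass ftp.other-dmz.example -e 'rm -r /path/to/directory; bye'

    If only the stock ftp client is available, one workaround is to capture a recursive listing (ls -R) first, turn it into a sequence of delete and rmdir lines ordered deepest-first with awk or sort -r, and feed that script back to ftp on its standard input.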

    Read the article

  • Change XAMPP's htdocs web root folder to another one

    - by vitto
    I'm trying to change XAMPP's default web root directory /opt/lampp/htdocs to another one like /home/me/Dropbox/public_html, without success. I've edited the file /opt/lampp/etc/httpd.conf:

        # old line: DocumentRoot "/opt/lampp/htdocs"
        DocumentRoot "/home/me/Dropbox/public_html"
        # ...etc...
        # old line: <Directory "/opt/lampp/htdocs">
        <Directory "/home/me/Dropbox/Work/public_html">
        #
        # Possible values for the Options directive are "None", "All",
        # or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # etc...

    I did this as described in this article: Using Ubuntu One to synchronise htdocs? Then I restarted Apache and got a 403 permission error on every page I called in the web browser. So I changed folder and file permissions to 755, as described in this article: What file permissions should I set on web root? The problem remains the same: a 403 error on every page I try to reach. I have the same problem on a Mac using XAMPP. Everything works fine if the folder remains the original /opt/lampp/htdocs. How can I change it correctly?
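
    Two things stand out in the snippet above. First, the DocumentRoot and <Directory> paths differ (public_html vs. Work/public_html); they must point at the same directory. Second, a 403 with correct file permissions is usually Apache lacking traverse rights somewhere along the path. A sketch (assuming the Dropbox location above):

        # every parent directory needs the execute bit for Apache to descend
        chmod o+x /home/me /home/me/Dropbox /home/me/Dropbox/public_html
        # and the <Directory> block itself needs an explicit allow:
        #   Options Indexes FollowSymLinks
        #   Require all granted      (Apache 2.4; on 2.2: Order allow,deny / Allow from all)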

    Read the article

  • How do I change the NGINX user?

    - by danielfaraday
    I have a PHP script that creates a directory and outputs an image to it. This was working just fine under Apache, but we recently decided to switch to NGINX to make better use of our limited RAM. I'm using the PHP mkdir() command to create the directory:

        mkdir(dirname($path['image']['server']), 0755, true);

    After the switch to NGINX, I'm getting the following warning:

        Warning: mkdir(): Permission denied in ...

    I've already checked the permissions of the parent directories, so I've determined that I probably need to change the NGINX or PHP-FPM 'user', but I'm not sure how to do that (I never had to specify user permissions for Apache). I can't seem to find much information on this. Any help would be great! (Note: besides this little hang-up, the switch to NGINX has been pretty seamless; I'm using it for the first time and it literally took only about 10 minutes to get up and running. Now I'm just ironing out the kinks.)
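
    Under nginx the PHP code runs as PHP-FPM's user, not nginx's, so that is the account that needs write access. A sketch of where each is set (Debian-style paths; adjust for your distro):

        # /etc/nginx/nginx.conf -- the web server's own worker user
        user www-data;

        # /etc/php-fpm.d/www.conf (or /etc/php5/fpm/pool.d/www.conf) -- the user PHP runs as
        user = www-data
        group = www-data

    After editing, restart both services and make the upload root writable by that user, e.g. chown -R www-data:www-data /path/to/uploads.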

    Read the article

  • Is it inefficient to have symbolic links to symbolic links?

    - by Ogre Psalm33
    We're setting up a series of Makefiles where we want to have a project-level include directory that will have symbolic links to sub-project-level include files. Many sub-project developers have chosen to have their include files also be symbolic links to yet another directory where the actual software is located. So my question is, is it inefficient to have a symbolic link to a symbolic link to another file (for, say, a C++ header that may be included dozens or more times during a compile)? Example directory tree:

        /project/include/
            x_header1.h -> /project/src/csci_x/include/header1.h
            x_header2.h -> /project/src/csci_x/include/header2.h
        /project/src/csci_x/
            include/
                header1.h -> /project/src/csci_x/local_1/cxx/header1.h
                header2.h -> /project/src/csci_x/local_2/cxx/header2.h
            local_1/cxx/
                module1.cpp
                header1.h
            local_2/cxx/
                module2.cpp
                header2.h
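
    Each extra link level costs one more in-kernel readlink during path resolution, and resolved entries are cached in the dentry cache, so for a compile the overhead is usually noise. One way to inspect the chain is util-linux's namei:

        # prints every path component and each symlink hop in turn
        namei /project/include/x_header1.h

    which shows x_header1.h resolving through include/header1.h to local_1/cxx/header1.h. The practical limit to watch is the kernel's symlink-traversal cap (Linux returns ELOOP after about 40 link traversals in one lookup), and two levels are nowhere near it.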

    Read the article

  • Tracking the Linux config with git: how?

    - by Pierre
    I'd like to track my Linux configurations with git. My idea is to have a branch for each server. /etc is not the only directory to be tracked (so I won't just git init in /etc). As far as I can see, it should be possible to init a git repository with a distant work tree. I tried this:

        # mkdir -p /git/.git
        # cd /git
        # git --work-tree=/ --git-dir=/git/.git init
        Initialized empty Git repository in /git/.git/

    1) Creating a new branch before any commit is not possible:

        # git branch server1
        fatal: Not a valid object name: 'HEAD'.

    2) Adding a file in master/HEAD is not possible:

        # touch README.md
        # git add README.md
        fatal: Unable to create '//.git/index.lock': No such file or directory

    How should I properly set up git to track my system config? Thanks. P.
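
    Both failures follow from plain git commands not knowing about the split layout: a branch needs a first commit to point at, and without --git-dir git looks for /.git. A sketch of the usual pattern (the alias name is arbitrary):

        alias cfg='git --git-dir=/git/.git --work-tree=/'
        cfg config status.showUntrackedFiles no   # otherwise status lists the whole filesystem
        cfg add /etc/fstab /etc/ssh/sshd_config
        cfg commit -m 'initial config snapshot'
        cfg branch server1                        # works now that HEAD exists

    This is the same idea as the common "bare repo for dotfiles" setup; a per-server branch then just means checking out that branch's view of the tracked files.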

    Read the article

  • Taking a user out of the MACHINENAME\Users group does not disallow them from authenticating with an IIS site

    - by jayrdub
    I have a site that has anonymous access disabled and uses only IIS basic authentication. The site's home directory grants permissions only to the MACHINENAME\Users group. I have one user that I don't want to be able to log in to this site, so I thought all I would need to do is take that user out of the Users group, but doing so still allows him to authenticate. I know it is the Users group allowing authentication, because if I remove that group's permissions on the directory, he is not allowed to log in. Is there something special about the Users group that makes you effectively always a member of it? Is the only solution to revoke the Users group's permissions on the site's home directory and grant access to a new group containing only the allowed users?
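
    There is something special: on Windows the implicit NT AUTHORITY\Authenticated Users identity is by default a member of the local Users group, so any account that can log on counts as Users regardless of explicit membership. A sketch of the dedicated-group approach (group name is hypothetical; run elevated):

        net localgroup Site1Users /add
        net localgroup Site1Users alloweduser1 /add
        icacls C:\inetpub\site1 /grant "Site1Users:(OI)(CI)RX"
        icacls C:\inetpub\site1 /remove "Users"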

    Read the article

  • Best practices: PHP MVC routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution. There's a long-standing debate with a co-worker about how the routing should work. Consider the following directory structure:

        /core/Router.php
        /mvc/Controllers/{public controllers}
        /mvc/Controllers/Private/{controllers requiring a valid user}
        /mvc/Controllers/CMS/{controllers requiring a valid user and specific roles}

    The question is: "Where should the current user's authentication be established: in the Router, when choosing which controller/directory to load, or in each Controller?" My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap; and the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing; leave it to the Controller to handle authentication on a case-by-case basis, which is more modular and allows more flexibility should the router need changes. "PHP MVC - Custom Routing Mechanism" alluded to this, but that topic was of a different nature. Alternative suggestions would be welcome as well.
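
    A middle ground worth considering keeps the check out of both camps: the router only maps a directory to a declared requirement, and a small guard runs before dispatch. A sketch in PHP (all names here are hypothetical, not from the framework above):

        <?php
        // Directory-to-role map, derived from the layout above.
        $requirements = ['Private' => 'user', 'CMS' => 'editor'];

        // $hasRole is however the framework answers "does the current user
        // hold this role?"; everything else here is plain routing.
        function resolve(string $dir, array $requirements, callable $hasRole): string
        {
            $needed = $requirements[$dir] ?? null;   // public controllers need nothing
            if ($needed !== null && !$hasRole($needed)) {
                return 'ErrorController';            // router stays ignorant of auth rules
            }
            return "Controllers\\{$dir}";            // namespace to load the controller from
        }

        // usage: resolve('CMS', $requirements, fn($role) => $role === 'editor');

    The directory convention still documents the access level, as the first argument wants, while the role logic lives in one guard instead of being scattered across controllers.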

    Read the article

  • Unable to access or make directories (and files) with FTP

    - by Kriem
    I'm having trouble with my new server and accessing its directories. I updated my proftpd.conf with:

        DefaultRoot /

    Now I'm able to see the root directory of my server, but trying to access some directories gives different results. For example, I can access /vars but I can't access /home or /root. How can I overcome this? This is what my FTP client says after trying to access /root:

        Server said: /root: No such file or directory
        Error -125: remote chdir failed

    And after trying to create a new directory in /:

        Server said: untitled folder: Permission denied
        Error -140: remote mkdir failed
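
    Those two errors are ordinary Unix permissions showing through FTP: the login account simply cannot read /root (mode 700, owned by root) or write to /. A sketch of how one might confirm from a shell:

        ls -ld / /root /home /home/*
        # drwx------  root root  /root    <- no access for other users
        # drwxr-xr-x  root root  /        <- readable by all, writable only by root

    If the intent was to confine FTP users to their own directories rather than expose /, the usual proftpd setting is DefaultRoot ~ (chroot each user to their home) instead of DefaultRoot /.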

    Read the article

  • How to copy symbolic links?

    - by Basilevs
    I have a directory that contains some symbolic links:

        user@host:include$ find .. -type l -ls
        4737414 0 lrwxrwxrwx 1 user group 13 Dec 9 13:47 ../k0607-lsi6/camac -> ../../include
        4737415 0 lrwxrwxrwx 1 user group 14 Dec 9 13:49 ../k0607-lsi6/linux -> ../../../linux
        4737417 0 lrwxrwxrwx 1 user group 12 Dec 9 13:57 ../k0607-lsi6/dfc -> ../../../dfc
        4737419 0 lrwxrwxrwx 1 user group 17 Dec 9 13:57 ../k0607-lsi6/dfcommon -> ../../../dfcommon
        4737420 0 lrwxrwxrwx 1 user group 19 Dec 9 13:57 ../k0607-lsi6/dfcommonxx -> ../../../dfcommonxx
        4737421 0 lrwxrwxrwx 1 user group 17 Dec 9 13:57 ../k0607-lsi6/dfcompat -> ../../../dfcompat

    I need to copy them to the current directory. The resulting links should be independent of their prototypes and lead directly to their target objects.

        cp -s        creates links to links, which is not the behavior I want
        cp -s -L     refuses to copy links to directories
        cp -s -L -r  refuses to copy relative links to a non-working directory

    What should I do?
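
    Since the goal is fresh links pointing straight at the final targets, one approach is to re-create each link from its fully resolved path. A sketch (run in the destination directory; assumes GNU readlink for -f):

        for l in ../k0607-lsi6/*; do
            # only touch symlinks; name the new link after the old one
            [ -L "$l" ] && ln -s "$(readlink -f "$l")" "$(basename "$l")"
        done

    readlink -f collapses the whole chain (link to link to target), so each new symlink is independent of the originals; plain readlink without -f would instead copy just one level of indirection.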

    Read the article

  • How to restore qmail backup files

    - by Maysam
    We are using qmail as the mail application on a Linux server. A few weeks ago our server crashed, we reinstalled everything from scratch, and our users started to send and receive email again. The problem is they have lost their old emails. We have a backup of the whole qmail directory, but I don't know how to restore the old emails without losing the new ones. It's worth mentioning that I have no problem restoring old sent mail: when I copy email files into the .sent-mail/cur directory, they show up in the users' sent boxes. But restoring files into the /cur directory doesn't work for inbox emails; I can't get them restored.
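
    Maildir restores are normally just file copies, so when cur/ works for one folder but not the inbox, the usual suspects are ownership and the IMAP server's UID cache. A sketch (paths are hypothetical; assumes a Maildir layout and a Courier-style IMAP server, which the .sent-mail folder naming suggests):

        # copy old inbox messages alongside the new ones, keeping timestamps
        cp -p /backup/qmail/mailnames/user/Maildir/cur/* /var/qmail/mailnames/user/Maildir/cur/
        # restored files must belong to the mail user (name varies by setup)
        chown popuser:popuser /var/qmail/mailnames/user/Maildir/cur/*
        # force the IMAP server to rescan instead of trusting its cached index
        rm /var/qmail/mailnames/user/Maildir/courierimapuiddb

    The courierimapuiddb file is Courier-specific and is rebuilt on the next login; if a different IMAP server is in use, its own index cache is the thing to clear.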

    Read the article

  • Make a symlink on Windows of a whole tree without modifying the original folder

    - by DarkGhostHunter
    I'm trying to do this: make a symlink of a whole directory, C:\Master, inside different folders like C:\Projects\Alpha\, C:\Projects\Beta\, and so on. The Master directory's files and data change frequently. I work in Projects\*, where every project folder uses the Master files, but each one also has new files of its own. Let's say I point to the car engine in every project folder, and inside each one I add a different kind of wheels. The problem I'm having, as a Windows 8 user, is that a symlink (junction) acts as a window onto Master: I'm not allowed to add any file inside it without changing Master itself. I'm looking for a way to reference the entire Master directory and still add new files, without editing any of the Master ones. It's as described here, but on Windows.
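
    A directory junction can't act as an overlay, but linking each file individually gives the wanted effect: the project folder is a real directory (new files land there) whose Master entries are links. A sketch for cmd.exe, run as administrator (paths as above; double the % signs in a batch file):

        for %f in (C:\Master\*) do mklink "C:\Projects\Alpha\%~nxf" "%f"

    This handles top-level files only (%~nxf keeps just name and extension); for a tree with subfolders, mklink /D per subfolder or a tool like Link Shell Extension is the more usual route, and hard links (mklink /H) also work if everything stays on one volume.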

    Read the article

  • using "touch" to create directories?

    - by user66732
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • using "touch" to create directories?

    - by user62367
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • Loading images in XNA 4.0; "Cannot Open File" Problems

    - by user32623
    Okay, I'm writing a game in C#/XNA 4.0 and am utterly stumped at my current juncture: sprite animation. I understand how it works and have all the code in place, but my ContentLoader won't open my file. Basically, my directory looks like this:

        //WindowsGame1
            - Game1.cs
            - //Classes
                - NPC.cs
            - Content Reference
                - //Images
                    - Monster.png

    Inside my NPC class I have all the essential drawing functions, i.e. LoadContent, Draw, Update. I can get the game to find the correct file and attempt to open it, but when it tries, it throws an exception and tells me it can't open the file. This is how the code in my NPC class looks:

        Texture2D NPCImage;
        Vector2 NPCPosition;
        Animation NPCAnimation = new Animation();

        public void Initialize()
        {
            NPCAnimation.Initialize(NPCPosition, new Vector2(4, 4));
        }

        public void LoadContent(ContentManager Content)
        {
            NPCImage = Content.Load<Texture2D>("_InsertImageFilePathHere_");
            NPCAnimation.AnimationImage = NPCImage;
        }

    The rest of the code is irrelevant at this point, because I can't even get the image to load. I think it might be a directory problem, but I also know little to nothing about spriting or working with images and animations in code. Any help is appreciated; let me know if more information is needed! Also, what would be the correct way to point that Content.Load at Monster.png given the current directory layout? Right now I just have it using the full path from the C:\ drive. Thanks in advance!
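
    The XNA content pipeline doesn't open image files from disk paths at all: Monster.png is compiled into an .xnb asset at build time, and Content.Load takes the asset name relative to Content.RootDirectory, with no extension and no drive letter. A sketch, assuming the Images folder in the content project above:

        public void LoadContent(ContentManager Content)
        {
            // asset name = folder inside the content project + file name, no ".png"
            NPCImage = Content.Load<Texture2D>("Images/Monster");
            NPCAnimation.AnimationImage = NPCImage;
        }

    For this to work, Monster.png must be part of the content project (so it builds to Images\Monster.xnb), and Content.RootDirectory, usually set to "Content" in the Game1 constructor, must match.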

    Read the article

  • Samba / smbd on CentOS 6.5

    - by Satalink
    I've installed Samba 4 and have the smb.conf file as follows:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server
        realm = REXIALO.COM
        netbios name = REXIALO.COM
        security = user
        map to guest = Bad Password
        bind interfaces only = no
        interfaces = lo venet0
        log file = /var/log/samba/samba.log
        max log size = 1000

        [webroot]
        path = /usr/local/apache/htdocs
        comment = Example.com webroot directory
        read only = No

    I can connect from the same server with smbclient. Localhost:

        # smbclient -L localhost -U root
        Enter root's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename   Type   Comment
            ---------   ----   -------
            webroot     Disk   RexiAlo webroot directory
            IPC$        IPC    IPC Service (RexiAlo Samba Server)

            Server      Comment
            ---------   -------

            Workgroup   Master
            ---------   -------

    Network:

        # smbclient -L rexialo.com -U
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename   Type   Comment
            ---------   ----   -------
            webroot     Disk   RexiAlo webroot directory
            IPC$        IPC    IPC Service (RexiAlo Samba Server)

            Server      Comment
            ---------   -------

            Workgroup   Master
            ---------   -------

    The problem is when I try to map to the smb webroot from Windows 7: it asks for user/pass but just times out and then prompts for credentials again. The samba.log file shows no activity other than the startup of the smbd process. Any help would be appreciated.
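
    Since smbclient works locally but Windows never reaches smbd (nothing in the log), the first suspects are the transport and the account database rather than smb.conf. A sketch of things to check on CentOS 6 (user name is hypothetical):

        # Samba passwords are separate from system passwords; Windows needs one to exist
        smbpasswd -a someuser
        # confirm smbd listens on the address Windows connects to (ports 139/445)
        netstat -tlnp | grep smbd
        # open the ports if iptables is filtering them
        iptables -I INPUT -p tcp -m multiport --dports 139,445 -j ACCEPT
        # SELinux: shares outside the default Samba paths need a boolean or context
        getenforce && setsebool -P samba_export_all_rw on

    The interfaces = lo venet0 line is also worth a second look: with bind interfaces only = no it is mostly advisory, but on a VPS the venet device sometimes differs from the address Windows actually reaches.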

    Read the article
