Search Results

Search found 33650 results on 1346 pages for 'line break'.

  • How can I use wildcards in an Nginx map directive?

    - by Ian Clelland
    I am trying to use Nginx to serve cached files produced by a web application, and have spotted a potential problem: the URL space is wide and will exceed the Ext3 limit of 32,000 subdirectories. I would like to break up the subdirectories, making, say, a two-level filesystem cache. So, where I currently cache a file at /var/cache/www/arbitrary_directory_name/index.html, I would instead store it at something like /var/cache/www/a/r/arbitrary_directory_name/index.html. My trouble is that I can't get try_files, or even rewrite, to make that mapping. My searching on the subject leads me to believe that I need to do something like this (heavily abbreviated):

        http {
            map $request_uri $prefix {
                /aa*  a/a;
                /ab*  a/b;
                /ac*  a/c;
                ...
                /zz*  z/z;
            }

            location / {
                try_files /var/cache/www/$prefix/$request_uri/index.html @fallback;
                # or
                # if (-f /var/cache/www/$prefix/$request_uri/index.html) {
                #     rewrite ^(.*)$ /var/cache/www/$prefix/$1/index.html;
                # }
            }
        }

    But I can't get the /aa* pattern to match the incoming URI. Without the *, it will match an exact URI, but I can't get it to match just the first two characters. The Nginx documentation suggests that wildcards should be allowed, but I can't see a way to get them to work. Is there a way to do this? Am I missing something simple? Or am I going about this the wrong way?
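
    A sketch of the regex form, in case it is useful: map does not take shell-style wildcards, but it does accept regular expressions when the pattern is prefixed with ~, and named captures from the pattern can be reused in the value (combining captures with literal text in the value needs a reasonably recent nginx). The capture names $a and $b below are purely illustrative.

        # untested sketch: derive a two-level prefix from the first two
        # characters of the request path with a regex map
        map $request_uri $prefix {
            default                            "";
            "~^/(?<a>[a-z0-9])(?<b>[a-z0-9])"  "$a/$b";
        }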

  • How do I move a linked file on Unix?

    - by r3mbol
    I have a bunch of files in one directory and links to each one of those files in another directory, so ls -l looks something like this:

        lrwxrwxrwx 1 rembol rembol   89 Jan 25 10:00 copyright.txt -> /home/rembol/solr/target/deploy/data/core/copyright.txt
        lrwxrwxrwx 1 rembol rembol   92 Jan 25 10:00 jar-versions.xml -> /home/rembol/solr/target/deploy/data/core/jar-versions.xml
        lrwxrwxrwx 1 rembol rembol   85 Jan 25 10:00 lgpl.html -> /home/rembol/solr/target/deploy/data/core/lgpl.html
        lrwxrwxrwx 1 rembol rembol   79 Jan 25 10:00 lib -> /home/rembol/solr/target/deploy/data/core/lib
        lrwxrwxrwx 1 rembol rembol   87 Jan 25 10:00 readme.html -> /home/rembol/solr/target/deploy/data/core/readme.html
        drwxr-xr-x 3 rembol rembol 4096 Jan 25 10:00 server
        drwxr-xr-x 2 rembol rembol 4096 Jan 25 10:00 startup

    Now I want to move those linked files from /home/rembol/solr/target/deploy to /home/rembol/output/. If I do that by simply calling mv, the links will break. I don't want to re-link each file separately, because there are hundreds of them (they are generated automatically). Is there some clever way to move linked files, rather than writing a script that unlinks, moves and relinks recursively for each file in each subdirectory?
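
    A minimal sketch of the "fix up the links afterwards" approach, in case no dedicated tool turns up. It assumes the deploy directory's contents have already been moved so they now live under /home/rembol/output, and that it is run from the directory containing the symlinks shown above.

        # rewrite each symlink in the current directory to point at the new location
        find . -maxdepth 1 -type l | while read -r link; do
            new_target=$(readlink "$link" | sed 's|/home/rembol/solr/target/deploy/|/home/rembol/output/|')
            ln -sfn "$new_target" "$link"
        done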

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive and my iTunes set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location of my home directory user path. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members – all with their own logins – using this same gob of music. I don't want four copies of the library, only one, with all libraries referencing it. So, what I want to do is:

    1. Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to this directory.
    2. Get all four accounts to reference this library instead of the external or local home locations.

    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues, either by restricting access to the files to my account only, or by removing write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?
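
    For reference, a minimal sketch of the permissions side from the command line, assuming the shared folder lives under /Users/Shared and that the built-in staff group covers all four accounts (both are assumptions; adjust to the directory actually used):

        # make the shared music folder readable and writable by all local users
        sudo chown -R youradminuser:staff "/Users/Shared/iTunes Media"   # "youradminuser" is a placeholder
        sudo chmod -R ug+rwX "/Users/Shared/iTunes Media"
        ls -le "/Users/Shared/iTunes Media"   # -e lists any ACLs that could still override these mode bits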

  • Synchronize two directories on Linux PCs

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition. This flag should be replicated.
    - efficient sync strategy: some of my files are 20GB, they are changed quite often, but in very little parts (VirtualBox images). Delta transmissions are welcome.
    - efficient handling of space: no history for files, files shouldn't be copied to temp directories "just in case you break it".
    - it must propagate deletions of files
    - modification can happen on any of the 4 PCs; changes should be propagated when the other PCs are connected.

    Other specs of my solution are: sync is over a LAN, the total amount of data to be synced is around 180GB, in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

    - unison: it is the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on.
    - SparkleShare: doesn't handle large files nicely. It keeps a history of all your changes that grows indefinitely. They promise it will be fixed in the next releases, but at the moment it still doesn't fit my needs.
    - ownCloud: keeps a history of each file I change.
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: transforms all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.

    What to try next?
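
    Since unison is the closest fit so far, a sketch of an invocation that matches the "last one wins" and unattended requirements (hostname and paths are placeholders, and this does not by itself address the hashing slowdown):

        # two-way sync between a local and a remote copy: newest version wins,
        # no interactive prompts, modification times propagated
        unison /home/gab/data ssh://otherpc//home/gab/data -prefer newer -batch -times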

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days; the website is a Rails app using Thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }

        server {
            listen 80;  # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
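
    One thing that might be worth ruling out, sketched on the assumption that the if (!-f ...) block is only there to serve static files directly: the same check can be expressed with try_files and a named location, which keeps the proxying out of an if block entirely.

        location / {
            try_files $uri @thin;
        }

        location @thin {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_read_timeout 150;
            proxy_pass http://domain1;
        }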

  • Using Windows 8 as a webserver

    - by Jason
    I have a few hobby websites that I currently host on CentOS 6: Apache, mail serving, PHP, MySQL, nothing special. In the past I used Windows XP to do this same task, for years, and I was OK. I switched to Linux, and for the last few years it has been such a pain: updates break things, and certain apps only support certain distros without compiling from source. It prevents me from working on my hobby sites more because I am always fixing something. With Windows I locked it down, ran a hardware firewall and packet analyser, kept up on updates and A/V, and never had a problem. I don't allow RDC from outside the local LAN, there is no FTP open, and I run OpenSSH on an obscure port. I am considering switching to Windows 8 (since it is a cheaper license now than Windows 7) and running Apache, hMailServer, PHP and MySQL, just like my CentOS install. My questions:

    1. I am not familiar with Windows 8, so can the above be done like on XP? Are there any new security restrictions, or anything in the OS preventing this from happening?
    2. The machine is an Athlon 64-bit X2 with 32GB of RAM. Will Windows 8 see all of the RAM?

    Technically the machine came with Windows 7, and there is a serial number on it, but I am sure I wiped away the Windows 7 recovery partition when I switched to Linux...

  • Nginx terminating SSL for WordPress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate SSL on the nginx side. Our current nginx config is:

        upstream admin_nossl {
            server 192.168.100.36:80;
        }

        server {
            listen 192.168.71.178:443;
            server_name host.domain.com;
            ssl on;
            ssl_certificate /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key /etc/nginx/wild.domain.com.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
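
    A sketch of one commonly needed piece when terminating SSL in front of WordPress: the backend has no way of knowing the original request was HTTPS unless the proxy says so, so a forwarded-protocol header is usually passed along and honoured by the application (the header name below is the conventional one; WordPress needs matching configuration or a plugin to act on it).

        location / {
            proxy_set_header X-Forwarded-Proto https;
            # ... existing proxy_set_header / proxy_pass directives as above ...
        }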

  • Macros in Excel 2010 hang

    - by Ahmad
    I have a spreadsheet with several macros. Generally, when previously using Excel 2007, a user clicks a button and everything works as expected (calculations, some email sending and file I/O). Typically, the expected run time is about 90 seconds. The spreadsheet is an xlsm file created with Excel 2007. With Excel 2010, however, the same user process results in a non-responsive Excel and forces us to kill Excel from the task manager. Some notes that I have gathered so far in trying to debug this issue:

    1. When monitoring CPU usage, it seems that Excel does start the macro. CPU usage increases as expected to about 47% for a few seconds. Excel.exe then drops to 0% usage and I now have a non-responsive Excel (even after 1 hour).
    2. If I set debug breakpoints across modules and different functions and step through the code (after clicking the button), the process works as expected, albeit much slower. To add, there were no exceptions.

    I am at a complete loss as to what the issue may be. I initially thought it may be the add-in that is being used, but that was debunked by point 2. This seems to be a very odd situation. I can provide more information if required, but I'm at wits' end about what the root cause could be. I need help in diagnosing and resolving this issue.

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled version) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, OpenSSH, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this will conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is because of issues with PHP cURLing to external servers when built against the latest OpenSSL libs, which would otherwise require edits to our PHP code. Would love some guidance on how best to accomplish this.
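
    A sketch of the usual way to do this, assuming the alternate OpenSSL was installed under /opt/openssl-1.0.1c (adjust the paths to the actual prefix): point Apache's build at that tree and bake the library path into the binary with an rpath, so /etc/ld.so.conf is never touched and everything else keeps resolving the system 0.9.8e libraries.

        # rebuild httpd against the alternate OpenSSL; prefixes are assumptions
        ./configure --prefix=/usr/local/apache2 \
                    --enable-ssl \
                    --with-ssl=/opt/openssl-1.0.1c \
                    LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && sudo make install

        # confirm which libssl/libcrypto the new httpd actually links against
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'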

  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzip'd text file. I'd like to do something like:

        zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 > smallerPartFile.gz

    For the awk part up there, I basically want it to take 10,000 lines and send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original against the ones that were split and then merged, lines are missing. So, something is wrong with the awk part and I'm not sure what part is broken. Here's the code. Can someone tell me why this doesn't yield a file that can be split and merged and then diff'd to the original successfully?

        # Generate files part0.dat.gz, part1.dat.gz, etc.
        # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)
        prefix="foo"
        count=0
        suffix=".sql"
        lines=10000   # Split every 10000 lines.

        zcat /home/foo/foo.sql.gz |
        while true; do
            partname=${prefix}${count}${suffix}

            # Use awk to read the required number of lines from the input stream.
            awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}

            if [[ -s ${partname} ]]; then
                # Compress this part file.
                gzip -9 ${partname}
                (( ++count ))
            else
                # Last file generated is empty, delete it.
                rm -f ${partname}
                break
            fi
        done
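
    For what it's worth, a likely explanation and an alternative, both offered tentatively: each awk invocation reads ahead from the shared pipe and buffers more than the 10,000 lines it prints, so the next loop iteration starts mid-stream and the buffered lines are lost. A sketch that sidesteps the loop entirely, assuming GNU coreutils 8.13 or later for split's --filter option:

        # chunk the decompressed stream into 10,000-line pieces and gzip each one;
        # inside --filter, $FILE is the output name split chooses (foo_part_00, ...)
        zcat /home/foo/foo.sql.gz | split -l 10000 -d --filter='gzip -9 > "$FILE.gz"' - foo_part_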

  • Vim clobbering scrollback buffer outside of screen

    - by dotancohen
    If I'm not in a screen session, then when exiting Vim I get a bash prompt below the remnants of the Vim window. A side effect of this is that my scrollback buffer is clobbered, especially if I have paged through a long file in Vim. The problem only occurs if I'm not in screen; inside a screen window, Vim exits to show the bash prompt and the previous lines just as before. I tried adding set t_ti= t_te= to my .vimrc to fix the problem, but the only effect it had was to break Vim such that the problem occurs inside screen as well as outside, so I removed the line. For good measure, I do have altscreen on in .screenrc. This is on Ubuntu Server 12.04.1 LTS, with Bash 4.2.24, Screen 4.00, and Vim 7.3 (not vim-tiny), accessed over SSH in Cygwin version NT-6.1-WOW64 on a Windows 7 laptop. Thanks.

    EDIT: Note that in the same Cygwin install I can SSH into a different server (CentOS) and there Vim does not clobber the scrollback buffer. Therefore, I do not suspect a Cygwin issue. The CentOS machine does not have screen installed, and I did not have to add set t_ti= t_te= to .vimrc.
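
    A small diagnostic sketch that may help narrow down the difference between the two servers: the behaviour on exit is governed by the terminal's alternate-screen capabilities (smcup/rmcup), which come from $TERM and the local terminfo database, so comparing those on both machines is a cheap first check.

        # run on both the Ubuntu and the CentOS server, over the same Cygwin SSH session
        echo "$TERM"
        infocmp | grep -oE '(smcup|rmcup)=[^,]*'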

  • nginx giving 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the WordPress plugin WP Super Cache, which adds static files of my blog posts. To serve the static file I need to make sure that some cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on a URL that isn't rewritten. The location block of the configuration:

        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;

            if ($request_method = POST) {
                set $supercache_ok 0;
            }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") {
                set $supercache_ok '0';
            }
            if ($supercache_ok = '1') {
                set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz';
            }
            if (-f $supercache_file) {
                rewrite ^(.*)$ $supercache_file break;
            }

            try_files $uri $uri/ @wordpress;
        }

    The above doesn't work, and if I remove all the ifs above and add

        if ($http_host = 'mydomain.tld') {
            set $supercache_ok = 1;
        }

    then I get the exact same message in errors.log, namely:

        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"

    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/

        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65
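
    For comparison, a sketch of the pattern often used for WP Super Cache on nginx, which leans on try_files instead of an if (-f ...) plus rewrite; the cache path is relative to the site root, and the cookie list below is illustrative rather than exhaustive.

        set $cache_uri $request_uri;

        if ($request_method = POST) {
            set $cache_uri 'null cache';
        }
        if ($query_string != "") {
            set $cache_uri 'null cache';
        }
        if ($http_cookie ~* "comment_author_|wordpress_logged_in|wp-postpass_") {
            set $cache_uri 'null cache';
        }

        location /blog {
            index index.php;
            try_files /blog/wp-content/cache/supercache/$http_host$cache_uri/index.html $uri $uri/ @wordpress;
        }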

  • Can't delete ntuser.dat file to remove profiles after reboot

    - by Matrix Mole
    I've run into an issue where some servers will not release the handle on the ntuser.dat file even after a reboot. Or, quite possibly, after the reboot the ntuser.dat file is getting re-loaded into memory. The user accounts are definitely not being accessed (some of them belong to users that have not been with the company in over a year). It seems to be on Windows 2003 servers, but I can't be 100% certain that there aren't some 2000 servers showing this issue as well. When I try to use Process Explorer or handle.exe from Sysinternals to kill the handle on these ntuser.dat files, the handle remains open and connected; handle.exe even reports that the handle was broken while it remains in use. I've even taken ownership of the file and tried to kill the handle, to no effect (Windows shows I have ownership of the file, but still refuses to release the handle). I have looked in the registry to see if I can discover where the files may be getting loaded, but unfortunately the username appears in too many places for me to be certain which one is actually loading their reg file into memory. Any suggestions on how I can either break the handle on the files, or prevent them from getting re-loaded after a reboot?

  • JavaScript doesn't seem to be able to POST form data (nginx server w/ PHP-FPM)

    - by Jones
    So the situation is like so: I have an nginx server with PHP-FPM installed. All is well, and the site scripts all work perfectly. I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. Problem is, nothing happens when you hit submit. I do know the setup works because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }

        # Main website url
        server {
            listen 80;
            server_name server.com;

            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;

            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }

            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
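
    A quick way to separate the JavaScript from the server side, sketched with a placeholder endpoint (replace /auth.php and the field names with whatever the login widget actually posts to):

        # send the same POST the widget would send and inspect status, headers and body;
        # if this works, the problem is in the JavaScript, not in the nginx/PHP-FPM setup
        curl -i -d 'username=test&password=test' http://server.com/auth.php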

  • 85 Hz on the old/new driver looks the same as 75 Hz on the previous one?

    - by jon
    I have an old Philips 107T5 CRT and an Nvidia graphics card. I used an old Nvidia driver (though it wasn't a 'legacy' one when I installed it) for a few years, but recently I decided to install another Linux distribution. I used a 75 Hz refresh rate and 1024x768 resolution on my previous distribution. After I installed the new distribution I had to install an Nvidia driver, so I downloaded one from the Nvidia site (this time only the legacy driver supported my card, so I downloaded the legacy one and installed it). It wasn't automatically updating xorg.conf, but I had a copy of my previous xorg.conf and I used that. When I run X I can only choose 85 and 75 Hz, with 85 Hz checked as default. And now what shocks me: that default 85 Hz looks identical to how 75 Hz looked on the previous driver (at least to me). I tried 75 Hz out of curiosity and it's too bright, it hurts, etc., but on the previous driver 75 Hz wasn't hurting my eyes. Why is it different? It's the same number after all, so it should always give the same results, right? That's my first question. Second question: is 85 Hz OK for that monitor model? Would it break it? I tried to find the optimal refresh rate for this model but couldn't find it.

  • Why is XSP/Mono giving me file/write errors with Lucene?

    - by acidzombie24
    I am using Mono 2.6.7, XSP 2.8.?, and I am not sure what else; I'll try downgrading XSP and whatever else. Today I noticed my server randomly gets HTTP Error 500, internal server error. I looked in my log files (http://www.pastie.org/1426236) and it appears that the problem is locking Lucene, which it does by creating a file called write.lock. This used to work with no problem, but now it gives me pain. It will fix itself if I refresh the page several times, but it will break just as easily. My other site will not work every time I restart Apache, and I need to delete the file by hand (at least it doesn't get random 500 errors). However, now it just says

        Lock obtain timed out: NativeFSLock@/var/www/SITE_2/App_Data/LuceneIndex_a/write.lock

    no matter what. I even chmod -R 777 the directory and still no luck; write.lock doesn't even exist. I have no clue what's going on. Any ideas?
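
    A small diagnostic sketch that might help pin down which process is holding (or racing to recreate) the lock; the path is taken from the error message above.

        # show any process with open files under the index directory
        sudo lsof +D /var/www/SITE_2/App_Data/LuceneIndex_a

        # alternative view: which processes have the lock file open right now
        sudo fuser -v /var/www/SITE_2/App_Data/LuceneIndex_a/write.lock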

  • Broken Vista. Can't open Windows settings.

    - by serena
    My neighbor has a Lenovo laptop with Windows Vista Home Basic. She's a noob and just uses the laptop for internet purposes. She said she had to shut down Windows improperly (some time ago, maybe 6 months) because of a system freeze. She realized there was something wrong with her Windows when she tried to open the Windows Update settings. I took a look at the system and found the following errors:

    - When I click on Windows Update, a bare white window opens for a second and closes immediately.
    - When I try to open Computer Properties, the same thing happens. (Windows+Break doesn't work either.)
    - When I try to open Bluetooth settings, the same thing happens.

    So Vista won't let me open any Windows settings, but installed programs work correctly (games, applications, etc.). She has no Windows Vista discs since the laptop came with preinstalled genuine Vista, and she has no recovery discs either. I don't think there is a system restore point from a time when the system was stable. Now what can we do to solve this big problem?

  • Persistent PuTTY sessions for multiple windows

    - by Tgr
    I'm working in various Linux environments through PuTTY connections which break from time to time. I'm looking for a solution to make the PuTTY windows persist (e.g. if I was editing a file, then after reconnecting I should be in the same editor with the same file open at the same place), with the following requirements:

    - it shouldn't require any manual setup at the beginning of the session or after reconnection (I don't want to type in screen or anything like that)
    - I have several windows open to the same machine with the same user, which tend to disconnect at the same time
    - the number/role of windows is not constant (it's not like I have an mc window, a mysql window and a "script runner" window; sometimes I use one window for search or for SVN commands, other times I need several at the same time)
    - sometimes I need to change the properties of the windows for a task (large window for grepping/editing, small windows because I need to see two of them at the same time, red background because I am modifying the live database in MySQL, etc.), so I need to get the same console back in the same window after a reconnect

    Is there a way to achieve this? I suppose I should use screen or something equivalent, but how does it know which window I am reconnecting from? Is there some way to pass a unique window identifier to the shell from PuTTY?

  • FTP account ownership on vhost directory makes Apache not run the website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need to create an FTP account that's not that sudo-able account, so I created a no-login account just for that purpose. I've set up VSFTPd correctly, also enabling the "userlist" feature to specify which users are permitted to use FTP. Then I created an empty directory under my sudo-able account and gave ownership permissions to the second account. To make it easier to understand, let's say the main account (the one I use to manage my VPS) is called ubuntu and the FTP user is named ftpuser; I created a directory /home/ubuntu/mywebsite, giving ownership to ftpuser:ftpuser. Then I uploaded a WordPress website, whose default permissions are 755 and 644. The issue is that Apache is not given any privilege to run the website. How can I make the website run properly, and which approach is the most secure? Should I run that virtual host as another user (if that's possible)? Should I force the FTP user to use the www-data group (if that's possible) and run with permissions like 775 and 664? How can I solve this issue? Any help is appreciated. I'd like to run it using the default permissions, so any update won't break anything (because of permissions being reset).
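
    One of the options mentioned above, sketched concretely (the ftpuser name, the www-data group and the path all come from the question; whether this is the right trade-off for the site is a separate decision):

        # let ftpuser own the files while Apache's group can read them;
        # the setgid bit on directories keeps new uploads in the www-data group
        sudo chown -R ftpuser:www-data /home/ubuntu/mywebsite
        sudo find /home/ubuntu/mywebsite -type d -exec chmod 2750 {} +
        sudo find /home/ubuntu/mywebsite -type f -exec chmod 640 {} +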

  • Did my hard drive fail or is it something else?

    - by Julian
    Last night while I was watching a movie on my laptop, the external monitor just went blank and the built-in display froze. Weird, I thought, so I restarted it, only to be greeted with this heart-breaking message: "No Operating System Found". After a few panicked restarts I accepted the fact that my hard drive might be done :(. Being the resourceful techie that I am, I whipped out Ubuntu Live on my old flash drive and was up and running before daybreak. I cannot access the hard drive through Ubuntu (which I expected), but I also cannot access my DVD drive! This got me thinking that it might not be the hard drive but some other component that both the HDD and the DVD drive use. Hopefully this is the case. Which component is the most likely culprit? What tools can I use from Ubuntu Live on my USB flash drive to find out? I'm in a bad place without my HDD; thanks in advance for any assistance provided!

    P.S. My laptop makes a weird noise when I try to access or eject my DVD within the slot. Also my HDD makes a weird noise sometimes. Not sure how to describe it.

    System specs: Dell 1558
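
    A couple of checks that can be run straight from the Ubuntu Live session, sketched on the assumption that the internal drive shows up as /dev/sda (check with lsblk first and adjust):

        sudo apt-get install smartmontools        # the live image may not ship it
        lsblk                                     # does the kernel see the drive at all?
        sudo smartctl -a /dev/sda                 # SMART health, reallocated sectors, error log
        dmesg | grep -iE 'ata|sata|error|fail'    # controller/link errors point away from the disk itself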

  • How to import an existing VM into the VMware Workstation 8 inventory

    - by Wimmel
    I'd like to add existing VMware (Player) virtual machines to the VMware Workstation 8 inventory on Linux. When I create a new virtual machine, it is stored in /var/lib/vmware/Shared VMs/, but copying new directories into that folder does not make them appear in the Workstation window. I found out that the inventory is stored in /etc/vmware/hostd/vmInventory.xml:

        <ConfigRoot>
          <ConfigEntry id="0000">
            <objID>1</objID>
            <vmxCfgPath>/var/lib/vmware/Shared VMs/test 1234/test 1234.vmx</vmxCfgPath>
          </ConfigEntry>
        </ConfigRoot>

    But I don't know if I would break anything by adding entries myself and giving each a unique ID. Besides, adding a large number of VMs this way is a bit cumbersome. On ESX, it was possible to use vmware-cmd -s register, but I don't have vmware-cmd installed. In another question it was suggested to use VMware Converter, but VMware Converter 5 (on Windows) only allows a destination file location when I select Workstation as the destination type. When I select VMware Infrastructure as the destination type, it says the destination is unsupported; it requires VMware vCenter Server.

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I have tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory which is owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/

        # sym-link to my_application from my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/

        # sym-link to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), and there we also had only one web directory and multiple domains. So if this was possible, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?

  • Router intermittently failing

    - by nomen
    My old Asus router died a few weeks ago, so I thought I'd set up my Debian box to handle routing for my home network. I have a few complications, but I adapted my configuration from a previously working one, and I don't see why I am having intermittent problems. But I am having them! Every so often, my SSH connections to the router (and to the Xen virtual machines hosted by the router) just drop, I am unable to use the router's DNS server, I can't ping the router, etc. All of these things work most of the time, but break down intermittently, for a few minutes at a time. (I can provide more details, but I'm not sure what will be helpful.)

        # /etc/network/interfaces

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # Gigabit ethernet, internal network
        auto eth0
        allow-hotplug eth0
        iface eth0 inet manual

        # USB ethernet, internet
        auto eth1
        allow-hotplug eth1
        iface eth1 inet dhcp

        # Xen Bridge
        auto xlan0
        iface xlan0 inet static
            bridge_ports eth0
            address 10.47.94.1
            netmask 255.255.255.0

    As I understand it, this is sufficient to create the network interfaces and even do some switching between Xen hosts and my eth0 interface. I installed and configured Shorewall to manage routing between the bridge and my internet-facing interface:

        # /etc/shorewall/zones
        fw    firewall
        net   ipv4
        lan   ipv4

        # /etc/shorewall/interfaces
        net   eth1    detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians
        lan   xlan0   detect   dhcp,tcpflags,nosmurfs,routefilter,logmartians,routeback,bridge

        # /etc/shorewall/policy
        net   all    DROP     info
        fw    net    ACCEPT   info
        all   all    REJECT   info

        # /etc/shorewall/rules
        DNS(ACCEPT)    fw    net
        DNS(ACCEPT)    lan   fw
        Ping(ACCEPT)   lan   fw
        ...

    ... and so on; these all work, when the router is accepting traffic at all.

        # /etc/shorewall/masq
        eth1   10.47.94.0/24

    Also, the router is currently "working", and I checked on a problematic client:

        arp infrastructure
        infrastructure.mydomain (10.47.94.1) at 0:23:54:bb:7d:ce on en0 ifscope [ethernet]

    I tried it when the router was down, and I (eventually) got the same response. It took about 30 seconds to return, though.
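
    Since the failures are intermittent, a small logging sketch that could be left running on the router to capture its state around an outage (interface and bridge names are taken from the configuration above):

        # append link statistics and the bridge's neighbour table once a minute
        while true; do
            date
            ip -s link show eth1
            ip neigh show dev xlan0
            sleep 60
        done >> /var/log/router-watch.log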

  • Restricting access to one controller of an MVC app with Nginx

    - by kgb
    I have an MVC app where one controller needs to be accessible only from several IPs (this controller is an OAuth token callback trap for Google/FB API tokens). My conf looks like this:

        geo $oauth {
            default          0;
            87.240.156.0/24  1;
            87.240.131.0/24  1;
        }

        server {
            listen 80;
            server_name some.server.name.tld default_server;
            root /home/user/path;
            index index.php;

            location /oauth {
                deny all;
                if ($oauth) {
                    rewrite ^(.*)$ /index.php last;
                }
            }

            location / {
                if ($request_filename !~ "\.(phtml|html|htm|jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|xlsx)$") {
                    rewrite ^(.*)$ /index.php last;
                    break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    It works, but does not look right. The following seems more logical to me:

        location /oauth {
            allow 87.240.156.0/24;
            deny all;
            rewrite ^(.*)$ /index.php last;
        }

    But this way the rewrite happens every time, and the allow and deny directives are ignored. I don't understand why...
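
    A sketch of a middle ground, based on the assumption that the allow/deny rules are being skipped because rewrite ... last runs in an earlier processing phase and re-enters location matching before the access checks for /oauth are ever evaluated; handing the request to a named location after the access phase avoids that (the @oauth_front name is made up):

        location /oauth {
            allow 87.240.156.0/24;
            allow 87.240.131.0/24;
            deny  all;
            # the first argument is a file that never exists,
            # so everything falls through to the named location
            try_files /nonexistent @oauth_front;
        }

        location @oauth_front {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }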

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there are a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace of it left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove:

        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
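
    For reference, a sketch of the kind of ownership check mentioned above ("a quick script to check all of the files"), assuming a Debian/Ubuntu-style system where dpkg -S reports which package owns a path:

        # list files under the python2.6 trees that no installed package claims;
        # note that dpkg never manages /usr/local, so everything there shows as unowned
        find /usr/bin/ipython2.6 /usr/lib/python2.6 /usr/local/lib/python2.6 -maxdepth 2 2>/dev/null |
        while read -r f; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "unowned: $f"
        done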
