Search Results

Search found 14709 results on 589 pages for 'root permission'.


  • nginx: how do I add new site/server_name in nginx?

    - by Neo
    I'm just starting to explore Nginx on my Ubuntu 10.04. I installed Nginx and I'm able to get the "Welcome to Nginx" page on localhost. However, I'm not able to add a new server_name, even when I make the changes in the sites-available/default file. I tried reloading/restarting Nginx, but nothing works. One interesting observation: "http://mycomputername" in the browser works, so somewhere a directive like 'server_name $hostname' is overriding my rule.

    File: sites-available/mine.enpass

        server {
            listen 80;
            server_name mine.enpass;
            access_log /var/log/nginx/localhost.access.log;
            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }
        }

    File: nginx.conf

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
            # multi_accept on;
        }
        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
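
    A minimal sketch (not part of the original question) of the usual way to bring a new vhost online with Ubuntu's packaged nginx, assuming the stock sites-available/sites-enabled layout; the file name mine.enpass is taken from the question and the web root is illustrative:

        # /etc/nginx/sites-available/mine.enpass -- minimal virtual host
        server {
            listen 80;
            server_name mine.enpass;
            location / {
                root /var/www/mine.enpass;   # illustrative path
                index index.html;
            }
        }

        # enable it with a symlink, test the configuration, then reload
        sudo ln -s /etc/nginx/sites-available/mine.enpass /etc/nginx/sites-enabled/mine.enpass
        sudo nginx -t
        sudo /etc/init.d/nginx reload

    If the default vhost in sites-enabled is still active, any request that does not match mine.enpass falls back to it, which may explain why "http://mycomputername" answers.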

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    I am using Nginx as a reverse proxy to an Apache server that uses HTTP auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache; it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic, so I don't want to hard-code it in my nginx config. My nginx config is:

        server {
            listen 80;
            server_name example.co.uk;
            access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log;
            gzip on;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 120;
            location / {
                proxy_pass http://localhost:81/;
            }
            location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ {
                if (!-f $request_filename) {
                    break;
                    proxy_pass http://localhost:81;
                }
                root /var/www/example;
            }
        }

    Anyone know why this is happening? Update: it turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question is a Django site, and it turns out that Apache does get the auth variables passed through; however, mod_wsgi filters them out. The resolution is to use WSGIPassAuthorization On. See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details.
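
    For context, a hedged sketch (not taken from the post) of where the WSGIPassAuthorization directive usually sits in an Apache/mod_wsgi vhost; the backend port and script path are assumptions:

        <VirtualHost *:81>
            ServerName example.co.uk
            # let mod_wsgi pass the Authorization header through to the Django application
            WSGIPassAuthorization On
            WSGIScriptAlias / /var/www/example/django.wsgi
        </VirtualHost>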

    Read the article

  • Jenkins shell command isn't executing

    - by Dmitro
    In a Jenkins project I added a command to execute: rm /var/www/ru.liveyurist.ru/tmp/* But when I build the project I get this error:

        Started by user anonymous
        Building in workspace /var/www/ru.myproject.ru
        Updating https://subversion.assembla.com/svn/myproject/trunk
        At revision 1168
        no change for https://subversion.assembla.com/svn/liveexpert/trunk since the previous build
        [ru.myproject.ru] $ /bin/sh -xe /tmp/hudson7189633355149866134.sh
        FATAL: command execution failed
        java.io.IOException: Cannot run program "/bin/sh" (in directory "/var/www/ru.myproject.ru"): java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
            at hudson.Proc$LocalProc.<init>(Proc.java:244)
            at hudson.Proc$LocalProc.<init>(Proc.java:216)
            at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
            at hudson.Launcher$ProcStarter.start(Launcher.java:338)
            at hudson.Launcher$ProcStarter.join(Launcher.java:345)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82)
            at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
            at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703)
            at hudson.model.Build$RunnerImpl.build(Build.java:178)
            at hudson.model.Build$RunnerImpl.doRun(Build.java:139)
            at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473)
            at hudson.model.Run.run(Run.java:1410)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:238)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
            ... 16 more
        Build step 'Execute shell' marked build as failure
        Finished: FAILURE

    I started Jenkins as the root user. Please advise: what could be the reason for this error?
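
    error=12 (Cannot allocate memory) from fork() generally means the machine ran out of memory or swap at the moment Jenkins tried to spawn the shell. A hedged check-and-workaround sketch, not from the thread; the swap file size and path are illustrative and the commands must run as root:

        # how much memory and swap is available?
        free -m
        # stopgap: add a 1 GB swap file so fork() has headroom
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        mkswap /swapfile
        swapon /swapfile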

    Read the article

  • Can't open cocoa emacs from terminal using open -a

    - by Shane
    I installed Emacs on my MacBook Air running Mac OS X 10.6.5 from this site: http://emacsformacosx.com/. I believe this is what used to be called Cocoa Emacs. I dragged it into my Applications folder and it works fine when I run it from there. I want to be able to run it from the Terminal. After some googling, I tried open -a /Applications/Emacs.app foo.txt (foo.txt was an existing file). I got two Emacs windows: one with the welcome screen and one with foo.txt loaded. I tried a few applications in the /Applications directory and they did not seem to behave like this. I had installed it using my own account (an admin account), so after doing ls -l on /Applications I noticed that the owner and group were different from the other entries in this folder. I recursively changed the owner and group to root and wheel, like the others, but this did not help. The only thing that looks funny now is that ls -l shows a @ character, which has something to do with extended attributes, but I don't know how to check these. Any suggestions on what to check next? Is using the open command the only way to run the program? Can I simulate what it does using a shell script?
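
    A hedged sketch (not from the question) of launching the bundled Emacs from a shell without going through open, by calling the binary inside the app bundle directly; the path assumes the standard install location:

        # run the GUI Emacs straight from the bundle, passing the file to visit
        /Applications/Emacs.app/Contents/MacOS/Emacs foo.txt &

        # or wrap it in an alias for day-to-day use
        alias emacs='/Applications/Emacs.app/Contents/MacOS/Emacs'

    The extended attributes behind the @ flag can be listed with ls -l@ or xattr -l on the bundle.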

    Read the article

  • Troubleshooting a Windows 7 PC that wouldn't sleep

    - by NPE
    I have a new Windows 7 PC that won't sleep (not just automatically, but also when specifically told to). The screen goes black momentarily, but within two seconds the machine comes back as if nothing had happened. I tried powercfg -energy. This produces some errors quoted at the bottom of this post, plus some warnings about timer resolution. There are no USB devices connected other than a wireless keyboard + mouse (Logitech MK250); I tried unplugging them to no effect. The motherboard is an Asus P7P55D-E. powercfg -lastwake says "Wake History Count - 0", which I take to mean that it never actually went to sleep. I dual-boot into Ubuntu and was having exactly the same problem on the Linux side. That turned out to be related to USB 3.0, which I've now disabled in the BIOS. This has solved the problem on the Ubuntu side of things, but made no difference to Windows 7. Any suggestions?

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name:              Generic USB Hub
            Host Controller ID:       PCI\VEN_8086&DEV_3B34
            Host Controller Location: PCI bus 0, device 29, function 0
            Device ID:                USB\VID_8087&PID_0020
            Port Path:                1

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name:              USB Root Hub
            Host Controller ID:       PCI\VEN_8086&DEV_3B34
            Host Controller Location: PCI bus 0, device 29, function 0
            Device ID:                USB\VID_8086&PID_3B34
            Port Path:

        USB Suspend: USB Device not Entering Suspend
        The USB device did not enter the Suspend state. Processor power management may be prevented if a USB device does not enter the Suspend state when not in use.
            Device Name:              USB Composite Device
            Host Controller ID:       PCI\VEN_8086&DEV_3B34
            Host Controller Location: PCI bus 0, device 29, function 0
            Device ID:                USB\VID_046D&PID_C52E
            Port Path:                1,8
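
    A hedged diagnostic sketch (not from the post): list which devices are currently allowed to wake the machine and, if the wireless receiver shows up, disarm it. The quoted device name is illustrative; use whatever powercfg reports:

        powercfg -devicequery wake_armed
        powercfg -devicedisablewake "HID Keyboard Device"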

    Read the article

  • Facing application redirection issue on nginx+tomcat

    - by Sunny Thakur
    I am facing a strange issue with an application which is deployed on Tomcat, with Nginx in front of Tomcat to access the application from a browser. I deployed the application on Tomcat and then set up the virtual host on Nginx under the conf.d directory (the file I created is virtual.conf); below is the content I am using:

        server {
            listen 81;
            server_name domain.com;
            error_log /var/log/nginx/domain-admin-error.log;
            location / {
                proxy_pass http://localhost:100;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

    Now the issue: when I use rewrite ^(.*) http://$server_name$1 permanent; in the server section and access the URL, it redirects to https://domain.com, and I am able to log in to the app and access the links (I am not using SSL redirection in this host file and I don't know why this is happening). When I remove the rewrite from the server section, I can reach the application on :81 and log in, but when I click on any link in the app it redirects me back to the login page. I am not getting anything in the application logs or the Tomcat logs. Please help if this is a redirection issue with Nginx. Thanks, Sunny

    Read the article

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on Apache 2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and which IP it should correspond to. So I have

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar who is also hosting the site's data. Then I did an update, I reinstalled mysql-server and mysql-common, and I did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site. Now I noticed that the other 2 sites are offline. Pointing a browser to them just hangs until timeout. They used to function, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"? Output of the command "apache2ctl -S":

        VirtualHost configuration:
        92.243.20.169:80   Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80   xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80    xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80               is a NameVirtualHost
                           default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
                           port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
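
    For background (not part of the original question): Debian/Ubuntu's convention is that sites-available holds every vhost file and sites-enabled holds only symlinks to the ones that should be active; the a2ensite/a2dissite helpers manage those links. A short sketch, with the file name taken from the question:

        # keep the real file in sites-available ...
        sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        # ... and enable it, which creates the symlink in sites-enabled, then reload
        sudo a2ensite 001-www.lapf.eu
        sudo /etc/init.d/apache2 reload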

    Read the article

  • Linux And NTFS Permissions

    - by VGE IT
    I'm trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Even so, people who are not in the special AD group can access the directory and modify files. When a user modifies documents within the directory, the group changes to "Domain Users", and I have to manually change the documents' default group back to my AD group. I have tried to create another AD group and modify permissions to deny write access. When doing so through Windows Explorer, the settings seem to take effect until I go back in and look at the permissions for the restricted group: no permissions show when I view them the second time. Please assist. Samba share properties:

        [MyShare]
        comment = "blah blah blah"
        browseable = yes
        guest ok = no
        read only = no
        path = /xxx/xxxxx/
        create mask = 0640
        directory mask = 0750
        admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        nt acl support = Yes
        inherit acls = yes
        inherit owner = yes
        inherit permissions = yes
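
    A hedged sketch (not from the thread) of two common ways to keep new files in the intended group rather than "Domain Users": set the setgid bit on the directory so group ownership is inherited, and/or force the group at the Samba level. The group name is illustrative and depends on the winbind separator in use:

        # filesystem side: new files inherit the directory's group
        chgrp "domain\group A" /xxx/xxxxx/
        chmod 2770 /xxx/xxxxx/

        # smb.conf side, inside the [MyShare] section
        force group = "domain\group A"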

    Read the article

  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should be starting after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text:

        ### BEGIN INIT INFO
        # Provides:          foo
        # Required-Start:    postgresql httpd
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Foo init script
        ### END INIT INFO

    And then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected. (The other runlevels are also as expected.) I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels. Any thoughts? I could just use # chkconfig: 2345 86 50, or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
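
    For reference, a sketch (not from the question) of the traditional Red Hat-style header the poster mentions as a fallback; chkconfig honours the numeric start/stop priorities on this line even when LSB dependency ordering does not kick in:

        #!/bin/sh
        # chkconfig: 2345 86 50
        # description: Foo init script
        ### BEGIN INIT INFO
        # Provides:       foo
        # Required-Start: postgresql httpd
        ### END INIT INFO

        # re-register the service after editing the header
        chkconfig --del foo && chkconfig --add foo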

    Read the article

  • Limit copssh users to home directory Windows 7

    - by Siriss
    Hello all. I have found these two sites: "CopSSH SFTP -- limit users access to their home directory only" and http://blogs.windowsnetworking.com/wnadmin/2006/11/07/copssh-restricting-users-access/, as well as the CopSSH website, but upon completion they do not seem to work. I have CopSSH installed, and I have a separate Windows account "sftpuser" created that is used to connect. The connection works just fine, but I want to limit that user to just their home directory and sub-folders. I have 3 hard drives, C:, W: and S:, and I want the FTP account to only be able to access the W: drive and its contents (the root of the W: drive is the FTP home directory). Right now "sftpuser" can access all folders, including jumping to the C: and S: drives. The linked tutorials do not seem to work because, it seems, when I create a group "ftpusersgroup", add "sftpuser" to the group, and then deny "ftpusersgroup" access to the C: drive, the service breaks and I can no longer log in. I have undone everything and am ready to start fresh. Does anyone know how to do this, or is there a better tutorial that someone has or has found? I hope this makes sense. Thank you very much for any help!
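
    A hedged sketch (not from the post) of how an OpenSSH-based setup such as CopSSH is often locked down for SFTP-only users, by chrooting a group in sshd_config; the Cygwin-style path for the W: drive and the group name are assumptions, and the chroot target must not be writable by the restricted users:

        # sshd_config
        Match Group ftpusersgroup
            ChrootDirectory /cygdrive/w
            ForceCommand internal-sftp
            AllowTcpForwarding no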

    Read the article

  • Why do I have to log into hotmail twice?

    - by Tony Lee
    I just recently noticed that I have to attempt a login to Hotmail twice before it succeeds. Although I'm using Google Chrome (3.0.195.21), the symptoms are well described in a Mozilla thread. In short, I'm told: "The e-mail address or password is incorrect. Please try again." The thread on Mozilla's site that is supposed to describe the latest details (and the first hit on Google when I search for "hotmail login twice") requires an account to read, so I'm hoping someone here has a good synopsis of what the cause is. I normally start at hotmail.com, which redirects to login.live.com/.... I can log in by starting at mail.live.com, by using IE8, or by attempting a second login. Oddly, if I start at login.live.com, Chrome tells me there is a redirect loop. Does anyone know, or have a public link to, the root cause of the double login? (It is a Hotmail account I'm logging into.) EDIT: Caused by my restricting how 3rd parties can use cookies. If I allow all cookies, it works the first time.

    Read the article

  • Resize a RAID 1 volume on OS X Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OS X-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640 GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable), and when it synced I removed the 640 GB disk, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640 GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

        -> diskutil list
        /dev/disk0
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:     GUID_partition_scheme               *1.0 TB    disk0
           1:                       EFI               209.7 MB   disk0s1
           2:                Apple_RAID               999.9 GB   disk0s2
           3:                Apple_Boot Boot OSX      134.2 MB   disk0s3
        /dev/disk1
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:     GUID_partition_scheme               *1.0 TB    disk1
           1:                       EFI               209.7 MB   disk1s1
           2:                Apple_RAID               999.9 GB   disk1s2
           3:                Apple_Boot Boot OSX      134.2 MB   disk1s3
        /dev/disk2
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:     GUID_partition_scheme               *640.1 GB  disk2
           1:                       EFI               209.7 MB   disk2s1
           2:                 Apple_HFS Mac Disk 2    536.7 GB   disk2s2
           3:      Microsoft Basic Data BOOTCAMP      103.1 GB   disk2s3
        /dev/disk3
           #:                      TYPE NAME          SIZE       IDENTIFIER
           0:                 Apple_HFS Mirror1       *639.8 GB  disk3

        -> diskutil appleraid list
        AppleRAID sets (1 found)
        ===============================================================================
        Name:         Macintosh HD
        Unique ID:    1953F864-B474-4EB6-8E69-41834EBD0247
        Type:         Mirror
        Status:       Online
        Size:         639.8 GB (639791038464 Bytes)
        Rebuild:      manual
        Device Node:  disk3
        -------------------------------------------------------------------------------
        #  Device Node   UUID                                   Status
        -------------------------------------------------------------------------------
        0  disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
        1  disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
        ===============================================================================

    Thanks in advance.

    Read the article

  • Disabling Keyboard Wakeup for Ubuntu 10.04 on Acer 1810TZ

    - by sybreon
    My Acer Aspire 1810TZ laptop suspends fine but wakes up on any slight key press. I would like to disable this behaviour. I read that it involves disabling something in /proc/acpi/wakeup, but SLPB does not seem to be listed at all.

        root@1810TZ:/etc# cat /proc/acpi/wakeup
        Device  S-state   Status   Sysfs node
        UHC0      S3    disabled  pci:0000:00:1d.0
        UHC1      S3    disabled  pci:0000:00:1d.1
        UHC2      S3    disabled  pci:0000:00:1d.2
        UHCR      S3    disabled
        EHC1      S3    disabled  pci:0000:00:1d.7
        UHC3      S3    disabled  pci:0000:00:1a.0
        UHC4      S3    disabled
        UHC5      S3    disabled
        EHC2      S3    disabled  pci:0000:00:1a.7
        EXP1      S4    disabled  pci:0000:00:1c.0
        PXSX      S4    disabled  pci:0000:01:00.0
        EXP2      S4    disabled
        PXSX      S4    disabled
        EXP3      S4    disabled
        PXSX      S4    disabled
        EXP4      S4    disabled  pci:0000:00:1c.3
        PXSX      S4    disabled  pci:0000:02:00.0
        EXP5      S4    disabled
        PXSX      S4    disabled
        EXP6      S4    disabled
        PXSX      S4    disabled

    However, the relevant bits seem to be detected in dmesg:

        [    0.357628] ACPI: AC Adapter [ACAD] (on-line)
        [    0.357749] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
        [    0.357754] ACPI: Power Button [PWRB]
        [    0.357817] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input1
        [    0.359319] ACPI: Lid Switch [LID0]
        [    0.359390] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
        [    0.359394] ACPI: Sleep Button [SLPB]
        [    0.359475] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
        [    0.359479] ACPI: Power Button [PWRF]

    Not quite sure what to do next.
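
    A hedged sketch (not from the post) of the sysfs route that is usually tried when /proc/acpi/wakeup has no matching entry: find the keyboard's parent device and turn off its wakeup capability. The i8042/serio path below is typical for an internal PS/2-style keyboard but is an assumption for this machine:

        # check whether the keyboard controller is allowed to wake the machine
        cat /sys/devices/platform/i8042/serio0/power/wakeup

        # disable it (persist via /etc/rc.local or a pm-utils hook if it helps)
        echo disabled > /sys/devices/platform/i8042/serio0/power/wakeup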

    Read the article

  • install latest gcc as a non-privileged user

    - by voth
    I want to compile a program on a cluster (as a non-privileged user) which requires gcc 4.6, but the cluster only has gcc 4.1.2. I don't want to ask the administrator to update gcc, because 1) he is busy and would do it only after several days, and 2) he probably wouldn't update it anyway, since other users may need the older gcc version (gcc is not backward compatible). I tried to compile gcc from source, which is more difficult than it sounds, since it requires several other packages to be installed (GMP, MPFR, MPC, ...), and when I did it, after several hours I got a message like

        checking for __gmpz_init in -lgmp... no
        configure: error: libgmp not found or uses a different ABI (including static vs shared).

    at which point I got stuck. My question is: what is the easiest way to install the latest version of gcc as a non-privileged user? (Something like apt-get install XXXXX, with an option to not install as root, for example.) The setup of the cluster is the following:

        CentOS release 5.4 (Final)
        Rocks release 5.3 (Rolled Tacos)

    If there are no other options than compiling from source, do you have any ideas how to handle the above error?
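
    A hedged sketch (not from the question) of a from-source build into $HOME that sidesteps the libgmp ABI problem by letting the gcc build compile GMP/MPFR/MPC in-tree; the version number and install prefix are illustrative, and it assumes the source tree ships contrib/download_prerequisites:

        tar xjf gcc-4.6.3.tar.bz2 && cd gcc-4.6.3
        ./contrib/download_prerequisites          # fetches gmp, mpfr and mpc into the source tree
        mkdir ../gcc-build && cd ../gcc-build
        ../gcc-4.6.3/configure --prefix=$HOME/local/gcc-4.6 --enable-languages=c,c++
        make -j4 && make install
        export PATH=$HOME/local/gcc-4.6/bin:$PATH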

    Read the article

  • Setting up WHM nameservers

    - by Miskone
    I am new to server administration and I've just got a dedicated root server from Hetzner. First, in Hetzner's Robot I set up the DNS entries. Registered nameservers:

        ns1.raybear.com  88.198.32.57
        ns2.raybear.com  88.198.32.57

    Under DNS entries I have buzz-buzz.me pointing to 88.198.32.57 (my server's IP address), and on my WHM I have this DNS zone for buzz-buzz.me:

        ; cPanel first:11.42.1.17 (update_time):1402062640 Cpanel::ZoneFile::VERSION:1.3 hostname:hosting.raybear.com latest:11.42.1.17
        ; Zone file for buzz-buzz.me
        $TTL 14400
        buzz-buzz.me.   86400   IN  SOA    ns1.raybear.com. miskone.gmail.com. (
                                           2014060605  ; Serial Number
                                           86400       ; refresh
                                           7200        ; retry
                                           3600000     ; expire
                                           86400 )
        buzz-buzz.me.   86400   IN  NS     ns1.raybear.com.
        buzz-buzz.me.   86400   IN  NS     ns2.raybear.com.
        buzz-buzz.me.   14400   IN  A      88.198.32.57
        localhost       14400   IN  A      127.0.0.1
        buzz-buzz.me.   14400   IN  MX 0   buzz-buzz.me.
        mail            14400   IN  CNAME  buzz-buzz.me.
        www             14400   IN  CNAME  buzz-buzz.me.
        ftp             14400   IN  CNAME  buzz-buzz.me.
        agent           14400   IN  A      88.198.32.57
        src             14400   IN  A      88.198.32.57
        platform        14400   IN  A      88.198.32.57

    But I still have problems accessing buzz-buzz.me, agent.buzz-buzz.me and platform.buzz-buzz.me. I also have a problem receiving mail on my Google account: I can send but not receive emails. How do I solve this? As I said, I am completely new here and I need urgent help. :(
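
    A hedged troubleshooting sketch (not from the question): before changing the zone, check from another machine what the outside world actually resolves for the NS, A and MX records, and compare it with what the server itself serves:

        dig NS buzz-buzz.me +short
        dig A  agent.buzz-buzz.me +short
        dig MX buzz-buzz.me +short
        # ask the server's own nameserver directly for comparison
        dig @88.198.32.57 A buzz-buzz.me +short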

    Read the article

  • Installing XAMPP on a system that already has MySQL

    - by Charith
    I'm rather new to PHP and XAMPP. I have a computer with MySQL Server and MySQL Workbench installed, as I was working with Java and NetBeans. Now I want to use my computer for developing PHP and other web stuff too. I installed XAMPP successfully, but when I try to access phpMyAdmin it gives me an error saying the MySQL server rejected its connection. I tried stopping my current MySQL service and installing it again; however, XAMPP has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please instruct me how to configure XAMPP to use my existing MySQL server for everything and ignore the one installed with it? I don't want two MySQL services running on my system and clashing in the future. I'll be glad if anyone can explain what is best to do when you're developing Java, PHP, C and all the rest on the same machine. P.S.: I have been given a password for my existing MySQL server (user = root), as we usually do when installing MySQL alone.
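
    A hedged sketch (not from the question) of the phpMyAdmin settings in config.inc.php that point it at an existing MySQL server; the host, port and password values are illustrative:

        <?php
        // phpMyAdmin: connect to the already-installed MySQL server
        $cfg['Servers'][$i]['host'] = '127.0.0.1';
        $cfg['Servers'][$i]['port'] = '3306';
        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = 'your_mysql_root_password';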

    Read the article

  • Nginx configuration question

    - by Pockata
    Hey guys, I'm trying to make the autoindex feature run only for my IP address with this code:

        server {
            ...
            autoindex off;
            ...
            if ($remote_addr ~ ..*.*) {
                autoindex on;
            }
            ...
        }

    But it doesn't work; it gives me a 403. Can someone help me? By the way, I'm using Debian Lenny and Nginx 0.6. EDIT: Here's my full configuration:

        server {
            listen 80;
            server_name site.com;
            server_name_in_redirect off;
            client_max_body_size 4M;
            server_tokens off;
            # log_subrequest on;
            autoindex off;
            # expires max;
            error_page 500 502 503 504 /var/www/nginx-default/50x.html;
            # error_page 404 /404.html;
            set $myhome /bla/bla;
            set $myroot $myhome/public;
            set $mysubd $myhome/subdomains;
            log_format new_log '$remote_addr - $remote_user [$time_local] $request '
                               '"$status" "$http_referer" '
                               '"$http_user_agent" "$http_x_forwarded_for"';
            # Star nginx :@
            access_log /bla/bla/logs/access.log new_log;
            error_log /bla/bla/logs/error.log;
            if ($remote_addr ~ 94.156.58.138) {
                autoindex on;
            }
            # Subdomains
            if ($host ~* (.*)\.site\.org$) {
                set $myroot $mysubd/$1;
            }
            # Static files
            # location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            #     access_log off;
            #     expires 30d;
            # }
            location / {
                root $myroot;
                index index.php index.html index.htm;
            }
            # PHP
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $myroot$fastcgi_script_name;
                include fastcgi_params;
            }
            # .Htaccess
            location ~ /\.ht {
                deny all;
            }
        }

    I forgot to mention that when I add the code to remove static files from my access log, the static files cannot be accessed. I don't know if it's relevant. :)

    Read the article

  • Best shortcut in Total Commander

    - by life-warrior
    So, what's your favourite TC shortcut or shortcut combination? Which ones do you use, and for what purpose? Among my most often used:

        Ctrl-Left (or Ctrl-Right) - open the archive or folder under the cursor in the opposite tab.
        Ctrl-Shift-Enter, Alt-F8, Ctrl-X - copy the full file path to the clipboard.
        Shift-F6, Shift-End (if needed), Ctrl-C - copy only the file name without the path.
        Select files, Ctrl-M - multi-rename, for example to remove "DVDrip" from file names.
        Ctrl-\ - go to the root directory.
        Ctrl-D - go to the directory with the highlighted letter specified. For example, name a downloads directory "&Downloads" in favourites, and the letter after the ampersand will be highlighted.
        Alt-F7, feed to listbox, Ctrl-A, Mark (menu) - Save selection to file - creates a file listing all files and directories inside the current one, with full paths.
        Ctrl-[3-6] - sort files by name (3), extension (4), date (5), size (6). For example: sort by name when you need movies and soundtracks with the same name but different extensions grouped together; sort by extension when you need to find EXEs in the Windows directory; sort by date when you need to find the latest file downloaded to your dir; sort by size when you need to delete the largest files to free space.

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems that this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init, I was still getting the following error:

        The chef binary (either chef-solo or chef-client) was not found

    A quick Google of this error suggested three probable causes: Chef had failed to install; it had installed, but the directory was not in the $PATH environment variable; or it had installed and was in the $PATH but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed, the path variable for the user, sudo and root all included /usr/local/bin, and permissions were all fine. I managed to solve the problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this should resolve the issue, and it is a bit of a problem if I have to SSH into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
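
    A hedged sketch (not from the post) of a cloud-init user-data script that installs the gem the same way the manual fix did, so the instance comes up with chef-solo already resolvable; the gem flags are illustrative and assume Ruby/RubyGems are present on the AMI:

        #!/bin/bash
        # user-data: install Chef via RubyGems as root, then confirm the binaries resolve
        gem install chef --no-ri --no-rdoc
        which chef-solo chef-client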

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: HAProxy => Nginx => Unicorn. How can I forward the real IP address from HAProxy, to Nginx, to Unicorn? Currently it is always only 127.0.0.1. I read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648). How will this impact us? HAProxy config:

        # haproxy config
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            option httpclose
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        # Rails Backend
        backend deployer-production
            reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
            balance roundrobin
            server deployer-production localhost:9000 check

    Nginx config:

        upstream unicorn-production {
            server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
        }

        server {
            listen 9000 default;
            server_name manager.ordify.localhost;
            root /home/deployer/apps/ordify-backend-production/current/public;
            access_log /var/log/nginx/ordify-backend-production_access.log;
            rewrite_log on;

            try_files $uri/index.html $uri @unicorn;

            location @unicorn {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_redirect off;
                proxy_pass http://unicorn-production;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
            }

            error_page 500 502 503 504 /500.html;
            client_max_body_size 4G;
            keepalive_timeout 10;
        }
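
    A hedged sketch (not from the thread) of the two pieces that are commonly missing in this chain: have HAProxy add X-Forwarded-For, and have Nginx trust that header from the local HAProxy (via the realip module) while still passing it upstream. Directive placement is illustrative:

        # haproxy: add the client IP as an X-Forwarded-For header on every request
        defaults
            mode http
            option forwardfor

        # nginx (server or http block): treat the header from 127.0.0.1 as the real client address
        set_real_ip_from 127.0.0.1;
        real_ip_header X-Forwarded-For;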

    Read the article

  • Turn off gzip for a location in Nginx

    - by Nyxynyx
    How can gzip be turned off for a particular location and all its sub-directories? My main site is at http://mydomain.com and I want to turn gzip off for both http://mydomain.com/foo and http://mydomain.com/foo/bar. gzip is turned on in nginx.conf. I tried turning off gzip as shown below, but the Response Headers in Chrome's dev tools still show Content-Encoding: gzip. How should gzip/output buffering be disabled properly? Attempt:

        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            root /var/www/mydomain/public;
            index index.php index.html;

            location / {
                gzip on;
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_read_timeout 300;
            }

            location /foo/ {
                gzip off;
                try_files $uri $uri/ /index.php?$args;
            }
        }
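
    A hedged verification sketch (not from the question): check the Content-Encoding header directly with curl for the /foo paths. One thing worth checking, as an assumption rather than a confirmed cause: a request rewritten to /index.php is ultimately served by the "location ~ \.php$" block, where the server-level gzip setting still applies, so "gzip off;" may be needed there as well.

        # ask for gzip explicitly and inspect the response headers
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://mydomain.com/foo/
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://mydomain.com/foo/bar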

    Read the article

  • SATA Driver for Acer Aspire One D257

    - by Robert Niestroj
    I have an Acer Aspire One D257. The hard disk in this netbook is defective, so I bought a new one. Now I want to reinstall Windows 7. I'm using an external DVD drive plugged into USB. The Windows 7 DVD starts, the Win7 setup starts, and when it comes to the hard drive options it says that no drive was detected and that I should try searching for drivers. It shows me this window (screenshot from the web). Now I can't find the right drivers for this netbook to continue with the installation. The laptop has the newest BIOS (1.15) and is reset to factory default settings, except that I enabled the boot menu prompt with F12. From the Acer support website I downloaded the SATA AHCI driver and the chipset driver, and I unpacked both to a USB flash drive in separate folders. When I select the SATA AHCI driver, it does not find any drivers. When I uncheck the checkbox "Hide drivers that are not compatible with hardware on this computer", it shows one driver: Acer HWID (path_to\1.inf). When I continue with this driver I get an error message that says something like: "No new devices found. Check if the driver files are on the installation disk." When I point it at the chipset driver, it sees a lot more drivers. When I uncheck the checkbox "Hide drivers that are not compatible with hardware on this computer", it shows these drivers:

        Intel N10 Family DMI Bridge
        Intel N10/ICH7 Family PCI Express Root Port
        Intel N10/ICH7 SMBUS Controller
        Intel N10/ICH7 Family USB Universal Host Controller
        Intel N10/ICH7 Family USB2 Enhanced Host Controller
        Intel N10/ICH7 Family Interface LPC Controller

    When I uncheck this checkbox I get a lot more drivers, including some SATA drivers, but they also do not work; I get the same error message as before. Can someone help me find a driver that should work, or am I doing something else wrong?

    Read the article

  • Delete ARP cache on Mac OS when moving from one Wifi network to the other

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wi-Fi network to the other. Here is how it happens: I am at my friend's place and I connect to his Wi-Fi. His Wi-Fi router's IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop, and connect to the Wi-Fi network at my place. Different ESSID, but the Wi-Fi router address is the same, 192.168.0.1. At this point I can't get to anything on the internet. To debug, I try to see if I can ping the router (192.168.0.1); I can't, and I get "no route to host". Meanwhile, AirPort tells me I'm connected to Wi-Fi. I look at the ARP cache and I see a permanent entry for 192.168.0.1:

        ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet]

    This permanent bit looks problematic. I go ahead and delete the ARP cache entry and all is fine with the world, until I go back to my friend's place, where the same situation plays out. Now my question is: why the hell is this happening? If there is no way around it, can I run a script on Wi-Fi connect/disconnect to clear out the ARP cache? I'm using Mac OS X:

        $ uname -a
        Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
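
    A hedged sketch (not from the post) of clearing the stale entry by hand; wiring it to run automatically on network changes (for example from a LaunchAgent) is left as an assumption:

        # delete the stale entry for the router, or flush the whole ARP cache
        sudo arp -d 192.168.0.1
        sudo arp -d -a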

    Read the article

  • Setting up self signed cert and CA [plesk / linux]

    - by microchasm
    I'm about ready to give up and do a clean wipe of this machine and start over with ISPConfig or some other variant. I installed Plesk on this machine to help with some of the handiwork. It is the free version (single domain); I don't need it for much, but it's nice to use for setting up DBs, email, etc. Anyway, I would like to set the machine up as a CA (which I can add to users' trusted root stores to alleviate those warnings). It seems like Plesk does all it can to obfuscate where things are. Despite trying to find the conf files and the crt/pem/key files, I am (5 hours later) now left with a machine that won't even get to the SSL page. The browser sits there until a 'connection reset' error comes up. In error_log, I get messages saying the CN doesn't match the server name, which it does. ssl_error_log:

        [Thu May 13 16:02:14 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
        [Thu May 13 16:12:19 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)

    Not very helpful. If anyone has any experience and/or recommendations (including other software), I'd be much obliged. NB: RHEL 5; 1 domain, 3 subdomains; everything local only. Thanks.
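
    The warning suggests the CA certificate itself was installed as the server certificate. A hedged sketch (not from the post) of keeping the two separate with plain openssl; filenames and lifetimes are illustrative:

        # 1. create a self-signed CA (this is what gets imported into browsers)
        openssl genrsa -out ca.key 2048
        openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

        # 2. create a server key and CSR, then sign the CSR with the CA
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

        # the web server should be configured with server.key/server.crt, not ca.crt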

    Read the article

  • Hugepages not utilized by MySQL 5.0, CentOS 5

    - by TechZilla
    I've set up hugepages, but I'm not seeing any of them reserved. Have I missed a step, or is MySQL, for some particular reason, unable to utilize the hugepages? I have not created a hugetlbfs mount, although from what I've read, MySQL would not call pages in such a manner; if I'm wrong, please let me know, as that would be a trivial solution. Almost all my MySQL tables use InnoDB. NOTE: I created a hugetlbfs; no change, as expected. Is it possible that rebooting would rectify this situation? I would not want to go through that procedure, as this is high availability, but would do so if necessary. These are the configurations which I believe are relevant.

    /etc/sysctl.conf

        ...
        ## Huge Pages
        vm.nr_hugepages = 4096
        vm.hugetlb_shm_group = 27
        ## SHM
        kernel.shmmax = 34359738368
        kernel.shmall = 8589934592
        ...

    /etc/security/limits.conf

        ...
        mysql   soft   nofile    12888
        mysql   hard   nofile    51552
        @mysql  soft   memlock   unlimited
        @mysql  hard   memlock   unlimited

    /etc/my.cnf

        [mysqld]
        large-pages
        ...

    Output of a few checks:

        grep Huge /proc/meminfo
        HugePages_Total:  4096
        HugePages_Free:   4096
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

        id mysql
        uid=27(mysql) gid=27(mysql) groups=27(mysql) context=root:system_r:unconfined_t:SystemLow-SystemHigh

        tail -6 /var/log/mysqld.log
        InnoDB: HugeTLB: Warning: Failed to allocate 1342193664 bytes. errno 12
        InnoDB HugeTLB: Warning: Using conventional memory pool
        120808 15:49:25  InnoDB: Started; log sequence number 0 1729804158
        120808 15:49:25 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

    I would really appreciate any help; I'm completely out of ideas. If I missed any relevant configs or diagnostics, please comment and I'll add them to the question.
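
    errno 12 on the hugepage allocation usually points at the locked-memory (memlock) limit that was in effect when mysqld was started; limits.conf is applied by PAM at login, so a service started at boot may never see it. A hedged check (not from the post); where the override belongs depends on how the init script starts mysqld:

        # the limit a PAM login for the mysql user would get
        su - mysql -s /bin/bash -c 'ulimit -l'

        # if mysqld is started without PAM, raise the limit before mysqld_safe runs,
        # e.g. in the init script or an environment file it sources (assumption)
        ulimit -l unlimited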

    Read the article
