Search Results

Search found 10706 results on 429 pages for 'pkg config'.


  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an MD5 hash in the URL in case a file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However, my hit ratio is very poor (about 0.18). My config: sub vcl_recv { set req.backend=default; ### passing health to backend if (req.url ~ "^/health.html$") { return (pass); } remove req.http.If-None-Match; remove req.http.cookie; remove req.http.authenticate; if (req.request == "GET") { return (lookup); } } sub vcl_fetch { ### do not cache wrong codes if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } remove beresp.http.Etag; remove beresp.http.Last-Modified; } sub vcl_deliver { set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT"; } I have done some performance tuning: DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100" The main URL being missed is /health.html. Is that forward to the backend configured correctly? With health checking disabled, the hit ratio increases to 0.45. Now mostly "/crossdomain.xml" is missed (from many domains, as it is a wildcard). How can I avoid that? Should I also carry other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses URL + host/IP. Compression is used at the backend. What else can improve performance?
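    A hedged note, not from the original post: when the backend compresses and sends Vary: Accept-Encoding, every distinct Accept-Encoding string from clients creates its own cache object unless it is normalized in vcl_recv, which alone can depress the hit ratio. A minimal sketch in the same Varnish 2.x syntax used above:

        sub vcl_recv {
            # normalize Accept-Encoding so gzip/deflate variants don't fragment the cache
            if (req.http.Accept-Encoding) {
                if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } elsif (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    remove req.http.Accept-Encoding;
                }
            }
        }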

    Read the article

  • How do I connect over SSH without being asked for the password every time? I've already followed some answers here, but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3. 1) On the host, I've created an authorized_keys file inside the .ssh folder: touch authorized_keys 2) I've copied my public SSH key into the host's .ssh folder: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub 3) I've placed its contents inside the authorized_keys file: cat mykey.pub >> authorized_keys 4) Then I've removed the mykey.pub file: rm mykey.pub 5) In my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (note that it is without the .pub extension); 6) I've closed and reopened the terminal. When I first connected to this host, it was added to the *known_hosts* file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect with ssh [email protected] it requests a password! What am I missing here? UPDATE: I've done EVEN TWO MORE THINGS here: 7) Set your key to be the default identity - if it doesn't exist, create it: touch ~/.ssh/config and place inside the following line: IdentityFile ~/.ssh/yourkeyname *id_rsa is normally your default key. You should switch this to your key. This tells outgoing ssh connections to use this as the default identity.* 8) Add a bash process to your ssh-agent: ssh-agent bash ssh-add ~/.ssh/yourkeyname Lisinge's answer helped, but it's not definitive: if we restart the machine, we are prompted for the password again! How can we debug this? What can we do here? How can we check where this process is failing? UPDATE 2: If I use: ssh -v -i <keyfile> [email protected] I get, among other things: OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 Warning: Identity file yourkeyname not accessible: No such file or directory. What does this message refer to? Is the identity file not accessible on the local host, or on the remote host? Please advise.
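    A hedged checklist, not from the original post: sshd silently ignores authorized_keys when its permissions are too open, and on OS X a key's passphrase only survives reboots if it is stored in the keychain. Paths and the key name are illustrative:

        # on the remote host: tighten permissions, or sshd ignores the file
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the local Mac: pick the key per host and store its passphrase in the keychain
        cat >> ~/.ssh/config <<'EOF'
        Host myhost.com
            User user
            IdentityFile ~/.ssh/mykey
        EOF
        ssh-add -K ~/.ssh/mykey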

    Read the article

  • IIS 7.0 rewrite url problem

    - by Jouni Pekkola
    Hello, how can I set a redirect URL for a virtual directory in IIS 7.0? I have installed the latest URL Rewrite Module 2. Let me explain my problem with an example. I have a website on my IIS 7.0 server: www.mysite.com. I decided to create a virtual directory, sales, under my site, pointing to the website root directory. Now I need to create a redirect URL for the vdir. The vdir points to the same root directory as my site root does. The big idea is that I can type www.mysite.com/sales in the browser and be automatically redirected to www.mysite.com?productid=200. I tried to create the redirect with URL Rewrite on the vdir (not the website), but I always get this error message: cannot add duplicate collection entry of type 'rule' with unique key attribute 'name' set to "test". This happens when I select the virtual directory and try to add a rule. I can add rules at the website level, but those rules don't work; the URL www.mysite.com/sales still gives me an error. I know the key is unique; I checked it in web.config. This kind of feature was really easy to use in IIS 6.0: just select the vdir with your mouse and set its properties to "a redirection to a URL". Can someone please explain the right way to do this in IIS 7.0?
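    An illustrative sketch only (the rule name and target URL are assumptions): the duplicate-key error usually means a rule with the same name already exists in a parent web.config, so a single redirect rule defined once at the site level, with a unique name, may be all that is needed:

        <rewrite>
          <rules>
            <rule name="sales-to-product" stopProcessing="true">
              <match url="^sales/?$" />
              <action type="Redirect" url="http://www.mysite.com/?productid=200" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>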

    Read the article

  • Nginx fails upon proxying PUT requests

    - by PartlyCloudy
    Hi. I have an arbitrary web server that supports the full range of HTTP methods, including PUT for uploads. The server runs fine in all tests with different clients. I now wanted to put this server behind an nginx reverse proxy. However, each PUT request fails: the entity body is not forwarded to the backend web server. The header fields are sent, but not the body. I searched the nginx proxy documentation and found several hints that PUT might not be supported. But I also found people running svn / WebDAV stuff behind nginx, so it should work. Any ideas? Here is my config: server { listen 80; server_name my.domain.name; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8000; } } Client == HTTP PUT ==> Nginx == HTTP Proxy ==> Backend Server The error.log shows no entries concerning this behaviour. Thanks in advance!
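    One way to narrow down where the body is lost (not part of the original post; the upload path is hypothetical) is to compare a PUT sent straight to the backend with one sent through nginx and diff the verbose output:

        # direct to the backend
        curl -v -X PUT --data-binary @test.txt http://127.0.0.1:8000/upload/test.txt
        # through the nginx proxy
        curl -v -X PUT --data-binary @test.txt http://my.domain.name/upload/test.txt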

    Read the article

  • php5-fpm crashes with many visitors

    - by chillah
    I decided to switch from Litespeed to Nginx because I had read a lot about Nginx's low resource usage. I'm running a WordPress site with 500 users online. My www.conf in "/etc/php5/fpm/pool.d": pm.max_children = 200 pm.start_servers = 20 pm.min_spare_servers = 20 pm.max_spare_servers = 60 ;pm.process_idle_timeout = 10s; ;pm.max_requests = 1000 Since I changed that config to more requests and more children, the site takes longer until it shows me a blank white page. If I then restart the php5-fpm process, the site runs fine again. CPU usage is at 5% when php5-fpm has crashed, and about 20 to 90% while it is running! Here's my nginx.conf: user www-data; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 3048; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; I have already tried changing the settings and tried all variants, but nothing worked. The machine: dual core, 4 GB RAM
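    A hedged sizing sketch (the numbers are assumptions, not measurements): on a 4 GB machine, 200 children usually exhausts memory long before the CPU is busy, which matches the low-CPU hang described; capping children to what fits in RAM and recycling workers is a common remedy:

        pm = dynamic
        pm.max_children = 40        ; roughly available RAM / memory per PHP worker
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500       ; recycle workers to contain memory leaks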

    Read the article

  • SVG doesn't work on subdomain - some browsers try to download rather than display

    - by John Catterfeld
    I have a site with a 'development' subdomain, which displays my SVG file exactly as intended. However, when I copy it across to www, or any other subdomain (e.g. 'test'), some browsers try to open the file in an external editor, asking me to download the file rather than displaying it. For example: http://development.mysite.com/test.svg - works http://www.mysite.com/test.svg - doesn't work The SVG file: <?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg xmlns="http://www.w3.org/2000/svg" version="1.1"> <circle cx="100" cy="50" r="40" stroke="black" stroke-width="2" fill="red" /> </svg> This happens in Firefox, Chrome and Safari; however, IE9 and above display the file as intended. It is Windows hosting, and I have <mimeMap fileExtension=".svg" mimeType="image/svg+xml" /> in my web.config file. My hunch is that there must be some setting on the server which I need my hosting company to change. Can anyone suggest what might cause this issue?
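    A hedged guess at the server-side difference: if the www host inherits a conflicting or missing .svg mapping from a parent configuration, declaring it inside <staticContent> with a <remove> first avoids a clash and makes the mapping explicit for that site. Sketch only:

        <system.webServer>
          <staticContent>
            <remove fileExtension=".svg" />
            <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
          </staticContent>
        </system.webServer>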

    Read the article

  • Apache: How to enable Directory Index browsing at the Doc Root level?

    - by Brian Lacy
    I have several web development projects running on Fedora 13. I generally set up Apache to serve my larger projects as virtual hosts, but I've got several small projects cycling through that I don't really care to set up a VirtualHost for each. Instead I'd like them all under a subdirectory of the main VirtualHost entry. I just want Apache to serve me the directory index when I browse to the host name. For example, the hostname projects.mydomain.com refers to /var/www/projects, and that directory contains only subdirectories (no index file). Unfortunately, when I browse to the host directly I get: Forbidden You don't have permission to access / on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. But my virtual host entry in my Apache config looks like this: <VirtualHost *> ServerName projects.mydomain.com DocumentRoot /var/www/projects <Directory "/var/www/projects"> Options +FollowSymlinks +Indexes AllowOverride all </Directory> </VirtualHost> What am I missing here?
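    On Apache 2.2 (as shipped with Fedora 13), a Directory block without an explicit access rule falls back to whatever the global config says, which is often a deny; a hedged variant of the vhost that grants access while keeping indexes on:

        <VirtualHost *>
            ServerName projects.mydomain.com
            DocumentRoot /var/www/projects
            <Directory "/var/www/projects">
                Options +FollowSymLinks +Indexes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>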

    Read the article

  • Creating a Jenkins build farm in a hands-off manner?

    - by user183394
    My colleague and I have set up and run Jenkins on a KVM guest running Ubuntu 12.04 with good results for a while now. We are thinking about deploying a cluster of Jenkins CI hosts in a master/slave configuration, with the libvirt slave plugin to keep our hardware count low. Our environment is strictly Linux (CentOS, Scientific Linux, Fedora, and Ubuntu). Both of us are competent in setting up large clusters. We typically use tools like Cobbler plus a configuration management tool (Puppet, Chef, and the like) to set up a large number of machines (physical and/or virtual) hands-off (hundreds of nodes in less than an hour is typical). We would like to do the same for nodes running Jenkins, but the step-by-step guide doesn't give us any clues in this regard. I did see a multi-slave config plugin, but, being used to dealing with hundreds or more machines completely hands-off, clicking through the UI for many machines just doesn't feel right. Can someone point us to a reference that talks about how to set up a large cluster of Jenkins CI hosts in a more hands-off way?
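    One hands-off route, offered as a sketch under assumptions rather than anything from the guide: slave definitions are plain XML, so they can be templated and pushed through the Jenkins CLI from the same provisioning run that builds the hosts. The Jenkins URL and template file below are hypothetical:

        # slave-template.xml is a node definition exported from an existing slave
        for host in node01 node02 node03; do
          sed "s/__HOST__/$host/" slave-template.xml | \
            java -jar jenkins-cli.jar -s http://jenkins.example.com/ create-node "$host"
        done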

    Read the article

  • Can't connect to Server Manager from Windows 7

    - by SAdmin317
    I have a Windows 7 Pro 64-bit SP1 desktop that has the RSAT tools installed. I opened Server Manager and can't connect to the server (Server 2008 R2 Core). I followed the guide to enable everything on the server, and added a registry key to enable read-only access to Device Manager as well. On the Windows 7 PC I turned on WinRM, did the quick config, and added the server IP and name as trusted hosts. I still get an error when connecting: "Connecting to the remote server failed with the following error message: The WinRM client cannot process the requests. If the authentication scheme is different from Kerberos, or if the client computer is not joined to a domain, then HTTPS transport must be used or the destination machine must be added to the TrustedHosts configuration setting...." I also added the name of the server to the Windows 7 hosts file, and pinging the server name resolves to the server's IP. I also opened up the firewall for "Remote Volume Management". Both machines are in the same workgroup, using the same Administrator account with the same password. Any help appreciated.
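    A hedged verification sequence from the Windows 7 side, run from an elevated command prompt (the host name is illustrative); this error most often means TrustedHosts never actually took effect on the client, or Kerberos is being attempted in a workgroup:

        winrm quickconfig
        winrm set winrm/config/client @{TrustedHosts="server2008core"}
        winrm identify -r:server2008core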

    Read the article

  • Isolating Apache virtualhosts from the rest of the system

    - by JesperB
    I am setting up a web server that will host a number of different web sites as Apache VirtualHosts, and each of these will be able to run scripts (primarily PHP, possibly others). My question is how to isolate each of these VirtualHosts from each other and from the rest of the system. I don't want e.g. website X to read the configuration of website Y or any of the server's "private" files. At the moment I have set up the VirtualHosts with FastCGI, PHP and suEXEC as described here (http://x10hosting.com/forums/vps-tutorials/148894-debian-apache-2-2-fastcgi-php-5-suexec-easy-way.html), but suEXEC only prevents users from editing/executing files other than their own - the users can still read sensitive information such as config files. I have thought about removing the UNIX global read permission for all files on the server, as this would fix the above problem, but I'm not sure if I can safely do this without disrupting the server's function. I also looked into using chroot, but it seems that this can only be done on a per-server basis, and not per virtual host. I'm looking for any suggestions that will isolate my VirtualHosts from the rest of the system. PS: I'm running Ubuntu 12.04 server.
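    A hedged sketch of the usual least-privilege layout on top of suEXEC/FastCGI (user names and paths are illustrative): give every site its own system user, drop world access on each docroot, and fence PHP with open_basedir so a script cannot wander outside its own tree:

        # one unix user per vhost, no world access to the tree
        chown -R sitex:sitex /var/www/sitex
        chmod -R o-rwx /var/www/sitex

        # in that site's PHP configuration (the php.ini used by its FastCGI wrapper):
        #   open_basedir = /var/www/sitex:/tmp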

    Read the article

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com the deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without being processed. I do want to deny access to all files on the website for all IPs but mine. How should I do that? Here's the config I have: server { listen 80; server_name example.com; root /var/www/example; location / { index index.html index.php; ## Allow a static html file to be shown first try_files $uri $uri/ @handler; ## If missing pass the URI to front handler expires 30d; ## Assume all files are cachable allow my.public.ip; deny all; } location @handler { ## Common front handler rewrite / /index.php; } location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler rewrite ^(.*.php)/ $1 last; } location ~ .php$ { ## Execute PHP scripts if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss expires off; ## Do not cache dynamic content fastcgi_pass 127.0.0.1:9001; fastcgi_param HTTPS $fastcgi_https; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; ## See /etc/nginx/fastcgi_params } }
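    A hedged reading of the symptom: allow/deny placed inside location / do not apply to the separate PHP locations, so requests handled there bypass the restriction. Declaring the rules at the server level makes every location inherit them (the IP placeholder is kept from the original config):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # applies to all locations below, including the PHP handler
            allow my.public.ip;
            deny all;

            # existing location blocks unchanged
        }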

    Read the article

  • Reverse Proxy issues IIS on Windows Server 2012

    - by ahwm
    I've tried searching, but nothing seems to be working. I have a feeling it might be due to our custom Rewrite module. Here is the excerpt from the web.config that sets it up: <modules runAllManagedModulesForAllRequests="true"> <add name="UrlRewriteModule" type="EShop.UrlRewriteModule"/> </modules> EShop.UrlRewriteModule is a custom class in App_Code which handles incoming requests. I have set up the rewrite rules but it doesn't seem to want to work. I'm inclined to think that our rewrite class is interfering earlier than the proxy rules and saying that the page doesn't exist. Here's what we're trying to accomplish: We are working on a new site for a client, but they have a forum that they're not likely to want to move. I set up a new subdomain to point to the new server while the site is being completed (before we go live) and want the reverse proxy to forward test.domain.com/forum to www.domain.com/forum. After the site goes live, we'll need to forward using an IP address instead. I've set up a reverse proxy successfully with nginx, but we didn't want to set up another server if we didn't need to. Ideas?
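    A hedged sketch of the IIS-native approach (host names as in the question, the rule name is an assumption): with the Application Request Routing module installed and proxying enabled, a plain rewrite rule forwards the forum; the custom EShop module would also need to ignore /forum requests, otherwise it answers before the proxy rule runs:

        <rewrite>
          <rules>
            <rule name="forum-proxy" stopProcessing="true">
              <match url="^forum/(.*)" />
              <action type="Rewrite" url="http://www.domain.com/forum/{R:1}" />
            </rule>
          </rules>
        </rewrite>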

    Read the article

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/ I have the following rules defined in my web.config: <rule name="wordpress" patternSyntax="Wildcard"> <match url="*" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Rewrite" url="index.php" /> </rule> <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true"> <match url="*" /> <conditions> <add input="{HTTP_HOST}" pattern="example.com" /> </conditions> <action type="Redirect" url="http://www.example.com/blog/{R:0}" /> </rule> In addition, I tried adding the following rule for removing trailing slashes: <rule name="Remove trailing slash" stopProcessing="true"> <match url="(.*)/$" /> <conditions> <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" /> </conditions> <action type="Redirect" redirectType="Permanent" url="{R:1}" /> </rule> It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
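    Two hedged observations, not from the original post: rules run in order, so the slash-removal rule has no effect if it sits after the catch-all wordpress rewrite, and WordPress itself re-adds slashes via its canonical redirect unless the permalink structure ends without one. A variant that should run first and leave wp-admin alone:

        <rule name="Remove trailing slash" stopProcessing="true">
          <match url="(.*)/$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_URI}" pattern="^/wp-admin" negate="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>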

    Read the article

  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make CentOS listen on ports 80 and 8080: Listen 80 Listen 8080 I set up phpMyAdmin on my CentOS 6.4 recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help, please? config.inc.php $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'http'; /* Server parameters */ $cfg['Servers'][$i]['host'] = 'localhost'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = false; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysql'; $cfg['Servers'][$i]['AllowNoPassword'] = false; phpmyadmin.conf <Directory /var/www/html/phpmyadmin/> order allow,deny allow from all </Directory> Furthermore, I can access the web pages stored on the CentOS machine from my other computer without problems. Using wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box): 23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0 23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0 I had already disabled the iptables service on the CentOS box.

    Read the article

  • 403 Forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine and PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd; it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available; httpd.conf is empty, but I have apache2.conf: NameVirtualHost 12.12.12.12. <VirtualHost 12.12.12.12> ServerAdmin webmaster@localhost DocumentRoot /var/www2/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www2/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /var/www2/cgi-bin/ <Directory "/var/www2/cgi-bin/"> AllowOverride Options Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch AddHandler cgi-script cgi pl Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined ServerSignature On Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost>
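    A hedged note on the password loop: auth directives in .htaccess are not honoured unless AllowOverride permits AuthConfig, and a wrong or unreadable AuthUserFile path produces exactly the endless login prompt described. Putting the protection straight into the vhost sidesteps both issues (the htpasswd path is an assumption):

        # htpasswd -c /etc/apache2/.htpasswd someuser
        <Directory /var/www2/>
            AuthType Basic
            AuthName "Restricted"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>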

    Read the article

  • All FireFTP passwords gone after auto-update

    - by GitaarLAB
    For the last six months (since the Firefox madness started and they keep taking control of my PC) I've been terrified to touch Firefox. The problem, however, is that I've been using it in my business (since once upon a time it was a trustworthy application with useful extensions like FireFTP), and that installation (and its plugins) holds four years of information. So Firefox continually deletes my important data by messing up (or blocking, or worse: auto-updating) my plug-ins, even crashing my computer as a result. Today Firefox killed FireFTP by (again) auto-updating FireFTP without my permission, and I had done my best to disable that nonsense in about:config. Result: none of the (over 100) FireFTP accounts can be logged on to; they suddenly all ask for a password. I do not have the time to find all of the passwords and reconfigure FireFTP again. How can I undo the mess Firefox created once again? That is, where are the passwords, and how do I downgrade? As a side question, how can I make Firefox behave again? I'm the boss of my computer, not them! How can I once and for all take back control and completely kill every kind of auto-update feature?

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar, and seemingly every system dialog have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around, I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matte white. This last time (just this morning), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files (as suggested in many places online) fails to achieve this? EDIT: It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
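    One hedged possibility (the path assumes a stock Ubuntu 10.10 install): the grey Windows-95 look usually means gnome-settings-daemon, the process that applies the GTK theme, has crashed or failed to start in the session; restarting it is a quick test that doesn't touch any config files:

        # check whether the theme daemon is still alive, then relaunch it
        pgrep -l gnome-settings-daemon
        /usr/lib/gnome-settings-daemon/gnome-settings-daemon &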

    Read the article

  • Nginx terminating SSL for WordPress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is upstream admin_nossl { server 192.168.100.36:80; } server { listen 192.168.71.178:443; server_name host.domain.com; ssl on; ssl_certificate /etc/nginx/wild.domain.com.crt; ssl_certificate_key /etc/nginx/wild.domain.com.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; location / { proxy_read_timeout 2000; proxy_next_upstream error; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_max_temp_file_size 0; proxy_pass http://admin_nossl; break; It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
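    A hedged guess at the switch-back: WordPress generates links from its configured http:// site URL, so after SSL termination it has to be told the original request was secure. Neither line is in the original config; the header name follows common convention:

        # in the nginx location block:
        proxy_set_header X-Forwarded-Proto https;

        # in wp-config.php, so WordPress treats proxied requests as HTTPS:
        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }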

    Read the article

  • nginx: redirect requests that do not come through the load balancer

    - by dawez
    I have nginx on SERVER1 acting as a load balancer between SERVER1 and SERVER2. On SERVER1 I have the upstreams for the load balancing defined as: upstream de.server.com { # similar upstreams defined also for other languages # SELF SERVER1 server 127.0.0.1:8082 weight=3 max_fails=3 fail_timeout=2; # other SERVER2 server otherserverip:8082 max_fails=3 fail_timeout=2; } The load balancing config on SERVER1 is this one: server { listen 80; server_name ~^(?<LANG>de|es|fr)\.server\.com; location / { proxy_pass http://$LANG.server.com; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # trying to pass a variable in the header to SERVER2 proxy_set_header Is-From-Load-Balancer 1; } } Then on SERVER2 I have: server { listen 8082; server_name localhost; root /var/www/server.com/public; # test output values add_header testloadbalancer $http_is_from_load_balancer; add_header testloadbalancer2 not_load_bal; ## other stuff here to process the request } I can see that the "testloadbalancer" response header is set to 1 when the request comes through the load balancer; it is not present on direct access to SERVER2:8082. I would like to bounce back to SERVER1 all direct requests that are sent to SERVER2, but keep the ones coming from the load balancer. So this should forbid direct access to SERVER2:8082 and redirect it to SERVER1:80.
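    Since the custom header already reaches SERVER2 (nginx exposes Is-From-Load-Balancer as $http_is_from_load_balancer), a hedged sketch of the bounce on SERVER2; using one of the balanced hostnames as the redirect target is an assumption:

        server {
            listen 8082;
            server_name localhost;
            root /var/www/server.com/public;

            # anything that did not come through the balancer goes back to it
            if ($http_is_from_load_balancer != "1") {
                return 301 http://de.server.com$request_uri;
            }

            # existing processing unchanged
        }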

    Read the article

  • CentOS 6.5 x64 + Ajenti + nginx + php-fpm + mysql setup unable to load a PHP page

    - by Francis
    I'm using Ajenti to set up a PHP web server. I can load the PHP info page, but when I try to load the PHP application copied from another server, I get a blank page, with this entry appearing in the access log. I have no clue what is wrong or how to troubleshoot further; please advise. x.x.x.x - - [28/May/2014:10:08:37 -0400] "GET /index.php HTTP/1.1" 200 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0" 2.6.32-431.1.2.0.1.el6.x86_64 cat /etc/redhat-release CentOS release 6.5 (Final) The vhost config: AUTOMATICALLY GENERATED - DO NOT EDIT! server { listen *:80; server_name test.com www.test.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /www; index index.html index.htm index.php; location ~ \.php$ { alias /www; fastcgi_index index.php; include fcgi.conf; fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } }

    Read the article

  • nginx- Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL. I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/ I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config. server { listen 80; server_name domain.com mirror.domain.com; root /rails_apps/master/public; passenger_enabled on; # Redirect from www to non-www if ($host = 'domain.com' ) { rewrite ^/(.*)$ http://www.domain.com/$1 permanent; } location /assets/ { expires 1y; rewrite ^/assets/(.*)$ /assets/$http_host/$1 break; } # / -> index.html if (-f $document_root/cache/$host$uri/index.html) { rewrite (.*) /cache/$host$1/index.html break; } # /about -> /about.html if (-f $document_root/cache/$host$uri.html) { rewrite (.*) /cache/$host$1.html break; } # other files if (-f $document_root/cache/$host$uri) { rewrite (.*) /cache/$host$1 break; } } How would I modify this to add the trailing slash? I would assume there has to be a check for the slash so that you don't end up with domain.com/some-random-article//
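    A commonly used sketch (added here, not part of the original config): redirect only extensionless URLs that don't already end in a slash, so assets and files with extensions are left alone; it would sit in the server block before the cache rewrites:

        # domain.com/some-random-article  ->  domain.com/some-random-article/
        rewrite ^([^.]*[^/])$ $1/ permanent;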

    Read the article

  • Why Would one VLAN have no Communication on one Switch?

    - by Webs
    So the problem is that we have a device or host that needs to communicate on a specific VLAN. This VLAN is not new; it is running all throughout our environment and works fine. But the VLAN was only recently configured on the switch in question, a Cisco 3750. The DHCP server is handing out addresses on that VLAN with no problem. I have verified the cable between the host and the switch and tried multiple hosts, but none of them can communicate or get an address. I plugged my laptop into an empty port which had a different VLAN assigned and immediately got a DHCP address. When I changed that port to the VLAN I'm having issues with, I got the same problem: the laptop just sits there and tries to DHCP an address, but nothing happens. I double-checked the cores and their Layer 3 VLAN config and it's fine too. Plus, I figured the issue couldn't be with them because the VLAN works fine everywhere else it exists. So the only other thing I can think of is the switch, but the VLAN exists on the switch and seems to be configured correctly, and the trunks appear to be configured just fine as well. Anyone have any ideas? I'm lost on this one.
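    A hedged checklist on the 3750 itself (the interface and VLAN number are placeholders): confirm the VLAN really exists in the VLAN database, that the access port sits in it and is up, and that the uplink trunk carries and forwards it:

        show vlan brief                        ! does the VLAN exist and list the port?
        show interfaces gi1/0/10 switchport    ! access VLAN and operational mode
        show interfaces trunk                  ! is the VLAN allowed/active on the uplink?
        show spanning-tree vlan 20             ! is it forwarding or blocked?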

    Read the article

  • SBS server with 2 NICs and 2 connections to the internet from different providers not working as it...

    - by erik-van-gorp
    We have the following configuration: an SBS 2003 server in a domain (mydomain.com) with 2 network cards, each connected to a different network (provider), with different gateways - one for web and one for mail and clients. (We do this because the bandwidth we get from our providers is too small to handle all the mail (+ spam) traffic and web services, so we took 2 providers.) DNS is as follows: www.mydomain.com 1.2.3.4 mail.mydomain.com 5.6.7.8 NIC 1 (192.168.1.3) is connected to the internet through a firewall at 192.168.1.1, having WAN address 1.2.3.4. NIC 2 (10.0.0.3) is connected to the internet through a firewall at 10.0.0.1, having WAN address 5.6.7.8. Both NICs have their default gateway set to their corresponding routers, and the metrics are set equal. (I know this isn't a supported config, but it works more or less.) In this configuration I can use RDP on both WAN addresses, and telnet to port 25 works on both as well. The issue is that for a few weeks now we have been getting regular disconnections and website hiccups (timeouts), several per hour. If we set one router to a higher metric, that route no longer works. In short, I want mail to route through NIC 2 and the web through NIC 1. Any better configuration (without installing a second mail server)?
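    A hedged alternative to equal-metric dual default gateways (the smarthost address is a placeholder): keep a single default gateway on NIC 1 and pin only the outbound-mail destination to the gateway on NIC 2 with a persistent host route:

        route -p add 198.51.100.25 mask 255.255.255.255 10.0.0.1 metric 1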

    Read the article

  • 403 Error when accessing vhost directive

    - by Ortix92
    I'm having some trouble setting up my web server (CentOS 5.8). It's a brand new server and I'm trying to set a vhost to the following dir: /home/exo/public_html However, whenever I restart httpd I get the following warning: Starting httpd: Warning: DocumentRoot [/home/exo/public_html] does not exist Yes, the directory does exist. So whenever I visit the domain exo-l.com it gives me a 403 error. This is my config file (I put this inside my httpd.conf because the files in conf.d were not included for some reason - or at least my newly created vhost conf file - but that has 0 priority for now): <VirtualHost *:80> DocumentRoot /home/exo/public_html ServerName www.exo-l.com ServerAlias exo-l.com <Directory /home/exo/public_html> Order allow,deny Allow from all </Directory> </VirtualHost> I'm completely clueless because this should work as far as I know. httpd is being run as apache:apache. I tried chowning the public_html directory (also recursively) to exo:apache, apache:apache and root:root with no success. chmod 777 doesn't do anything either. A tail from the log: [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied [Sat Oct 13 15:10:04 2012] [error] [client 82.***.***.61] (13)Permission denied: access to / denied I also found something about SELinux and that disabling it might help, but do I really want to do that?
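    On CentOS a docroot under /home commonly trips both SELinux labels and directory traverse permissions even when ownership looks right; a hedged test-and-fix sequence:

        # does the 403 disappear in permissive mode? (temporary test only)
        setenforce 0
        # if yes, label the tree and allow home-dir serving instead of leaving SELinux off
        chcon -R -t httpd_sys_content_t /home/exo/public_html
        setsebool -P httpd_enable_homedirs on
        # apache must also be able to traverse every parent directory
        chmod o+x /home /home/exo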

    Read the article

  • SBS DC DNS entries going missing?

    - by Chris W
    I've been looking at a problem on a friend's SBS (2003) server where the client PCs aren't able to connect to the server, with a variety of errors reported. Checking the server itself, the only indicator of an issue is an error 5782: Dynamic registration or deregistration of one or more DNS records failed with the following error: No DNS servers configured for the local system. Running dcdiag reports that there are no DNS records registered for the DC, so I fixed the problem by running netdiag /fix, after which dcdiag comes back clean and clients are OK again. It happened a few weeks ago as well and the same fix solved it. What are the possible causes of the DC's DNS entries going missing? Is this a config option that needs tweaking, or could it be solved by something simple like scheduling the SBS server to reboot periodically? The only change they can think of that was made near the time the problem first occurred is that RRAS was started up to allow a VPN connection from a home user. NB: The server is set up with a pair of NICs in a team, so the server has a single virtual NIC providing both LAN/WAN connections. An external hardware firewall is in use rather than the Windows firewall.
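    A hedged check for when it recurs (commands available on SBS 2003 with the support tools): confirm the server's NIC still points at itself for DNS, especially after RRAS changes the interface bindings, then force re-registration of the DC records:

        ipconfig /all
        ipconfig /registerdns
        dcdiag /test:dns /v
        net stop netlogon && net start netlogon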

    Read the article
