Search Results

Search found 6441 results on 258 pages for 'mod proxy fcgi'.

Page 68/258 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Link to pages on site without .html extension appearing in browser?

    - by Anime163
    I've modified my .htaccess file to allow access to html files without having to include the extension on the end, for example: www.mysite.com/document directs to www.mysite.com/document.html However, when I want to link to pages within my site using something like <a href="page.html"></a> I still get the .html appearing in the URL. So am I allowed to exclude the extension and leave a link as <a href="page"></a> so that the extension doesn't appear in the browser? Or is there a better way to do it?
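    For reference, a minimal sketch of the kind of rule being described (assuming the .htaccess sits in the document root): if no real file or directory matches the request but "<path>.html" exists, that file is served for the extension-less URL.

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME}.html -f
        RewriteRule ^(.+)$ $1.html [L]

    With a rule like that in place, linking as <a href="page"></a> works fine and the .html extension never shows up in the address bar.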

    Read the article

  • Issue with permanent redirect implementation

    - by Argoron
    I have a tricky problem related to 301 redirections I badly need help with. I tried to implement these via .htaccess, but ran into trouble. The start of my .htaccess looks like this: SetEnv PHP_VER 5 Options +FollowSymlinks RewriteEngine on # Redirect non-www to www RewriteCond %{HTTP_HOST} !^(www\.|$) [NC] RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L] #--- GENERAL --- RewriteRule ^index\.html$ index.php [L] ... When I try to put a permanent redirect to index.php by adding R=301 in the square brackets, I get a 404, and I have no idea where the error comes from.
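    One common gotcha worth checking (an assumption here, since the rest of the rule set isn't shown): with [R=301] in a per-directory .htaccess, a bare substitution like index.php can be expanded against the wrong path prefix before being turned into an external redirect, and the redirect then lands on a URL that 404s. Spelling the target out as a root-relative path or a full URL removes that ambiguity; a hedged sketch, with the host name as a placeholder:

        # root-relative target
        RewriteRule ^index\.html$ /index.php [R=301,L]
        # or fully qualified
        # RewriteRule ^index\.html$ http://www.example.com/index.php [R=301,L]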

    Read the article

  • Enable 'mod_rewrite' Using .htaccess File On cPanel Shared Hosting Server

    - by zulhfreelancer
    I'm using cPanel to host my website. I need to enable 'mod_rewrite' on this shared-hosting cPanel account to run my script. I've Googled for solutions high and low but haven't had any luck yet. The tutorials I found only work well on a VPS; some of them said that only the hosting provider can enable it, while others said it can easily be done by editing the .htaccess file. My question: if I want to edit the .htaccess file, what should I include in it? What 'rules' and 'conditions' should be included?
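    For reference: mod_rewrite itself can only be loaded by the server administrator - it is an Apache module, not something a .htaccess can install. What a .htaccess on shared hosting can do, provided the module is loaded and AllowOverride permits it, is switch the rewrite engine on and declare rules. A minimal sketch, where the index.php front controller and its q parameter are placeholders for whatever the script expects:

        <IfModule mod_rewrite.c>
          RewriteEngine On
          RewriteBase /
          # example only: send non-existent paths to a front controller
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>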

    Read the article

  • Transparent PHP script execution using mod_rewrite

    - by tori3852
    I am looking for a solution to the following problem: I need every HTTP request (the method is irrelevant) to the Apache HTTP server to be served only after a specific PHP script has been executed. This is needed because I have to gather some information about the requests, etc. As far as I understand, this could be achieved using the mod_rewrite module. So far I have done this (in a .htaccess file): RewriteEngine on RewriteRule ^(.*)$ script.php [C] script.php is executed, but after that I still need the original request to be served. Thanks - any help is appreciated.
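    A common pattern for this (a hedged sketch, and not the only option - PHP's auto_prepend_file setting is another way to run code before every PHP request) is to hand the original URI to the logging script and let that script include or stream the requested resource once it has done its bookkeeping. The orig parameter below is a placeholder:

        RewriteEngine On
        # don't rewrite the interceptor itself, or requests would loop
        RewriteCond %{REQUEST_URI} !^/script\.php
        RewriteRule ^(.*)$ /script.php?orig=$1 [L,QSA]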

    Read the article

  • Help with URL Rewrite

    - by bodesam
    This is the first time I'm doing this, and I have been doing some research on it. I have a page that selects some info from a database and displays it with a link to a second page that uses the result to query the database, something like this: $sel=mysql_query("select id, title from thetable "); while($row=mysql_fetch_array($sel)) { $id=$row['id']; $title=$row['title']; echo "<a href='more.php?id=$id'>$title</a>"; } The issue is: on the more.php page, instead of more.php?id=5 showing in the address bar, I want something like more/title. Secondly, as is the case on most sites, I want the link on the referring page to show this friendly URL on mouse hover, not more.php?id=5. I also notice that on most sites words like 'a', 'and', 'the' etc. are usually removed from the URL title (even if they were there originally); moreover, how does one handle the situation where more than one record has the same title? How does one go about achieving this URL rewrite with .htaccess or whatever method is used. Thanks.
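    A common approach (a sketch under assumptions, since the exact schema isn't shown): store a unique "slug" for each record - the title lowercased, stop words such as 'a', 'and', 'the' stripped, spaces replaced with hyphens, and the id or a counter appended when two titles collide - echo the links as href='more/$slug', and map that path back to more.php in .htaccess:

        RewriteEngine On
        # /more/some-article-title  ->  more.php?slug=some-article-title (slug column assumed)
        RewriteRule ^more/([a-z0-9-]+)/?$ more.php?slug=$1 [L,QSA]

    Because the href itself is now the friendly URL, that is also what the browser shows on mouse hover.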

    Read the article

  • RewriteRule for URLs with spaces

    - by Robert Cailliau
    My site's pages are in multiple languages, whereby each language version shares its media (images) with the other language versions. I place all versions and the media in a single directory with the same name. E.g. pages mypage-en.html, mypage-fr.html etc. will sit in directory mypage. The directory path suffices to reference a page: http://....../mypage/ is good enough; there is no need for http://....../mypage/mypage-en.html A rewrite with RewriteRule ^(.*)/([a-zA-Z0-9]+)/?$ /$1/$2/$2-en.html lets me use the shorter form. But what if the name mypage contains spaces (which some do)? I want http://....../my page/ to lead to http://....../my page/my page.html Using RewriteRule ^(.*)/([a-zA-Z0-9|\s]+)/?$ /$1/$2/$2-en.html did not work. Any hints welcome. (Please do not ask me why I want to do this, nor tell me I should not use spaces in file names.)
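    One alternative sometimes suggested (a hedged sketch - it side-steps the question of how \s and | behave inside the bracket expression by not using them at all): capture the last path segment with a negated character class. A space, or its %20-encoded form once decoded, is then matched like any other character, and excluding the dot keeps requests for real .html files from matching again and looping:

        RewriteRule ^(.*)/([^/.]+)/?$ /$1/$2/$2-en.html [L]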

    Read the article

  • Simple mod_rewrite Question

    - by user5358
    Hello, I want everything that looks like this: /1/2/3/4/5/[...] to redirect to this: /index.php?u=/1/2/3/4/5/[...] unless the requested string is a specific file. So anything that doesn't have a ".", I want to redirect to "index.php?u=[...]". I'll then parse the URI segments in PHP to determine what the user is requesting. I've been looking around for how to do this, but I have only a very rough understanding of regular expressions and have been unable to find an example of how to do it. Thanks!
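    A minimal sketch of that rule (assuming the .htaccess sits in the document root next to index.php): the condition skips any request whose path contains a dot, so real files keep working, and everything else is rewritten internally to index.php with the original path in u:

        RewriteEngine On
        # paths containing a dot (style.css, index.php, ...) are left alone
        RewriteCond %{REQUEST_URI} !\.
        RewriteRule ^(.*)$ index.php?u=/$1 [L,QSA]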

    Read the article

  • .htaccess two different rules but only one at a time

    - by dragon112
    I'm rather new to the whole .htaccess thing and I'm using the following right now to get 'pretty URLs': <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?path=$1 [NS,L] </IfModule> Now I found my website a bit slow and decided to start gzipping my CSS files through a PHP script I found somewhere on the web. For this to work I need to rewrite the URL to open the correct PHP file. That would look something like this: RewriteRule ^(.*).css$ /csszip.php?file=$1.css [L] But I only want the first to happen when the second doesn't, and vice versa. In other words I'd like something like this: <IfModule mod_rewrite.c> RewriteEngine On if request doesn't contain .css do RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?path=$1 [NS,L] else do RewriteRule ^(.*).css$ /csszip.php?file=$1.css [L] </IfModule> Can anyone help me with the proper code, or point me to a place where I can find a way to use some kind of conditional statement in .htaccess files? Thanks in advance! :)
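    For what it's worth, mod_rewrite has no if/else, but rule order plus the [L] flag gives the same effect: a request that matches a rule ending in [L] stops being processed by the rules below it. A hedged sketch combining the two rules from the question (csszip.php and its file parameter exactly as given there):

        <IfModule mod_rewrite.c>
        RewriteEngine On

        # CSS first: matching requests stop here because of [L]
        RewriteRule ^(.*)\.css$ /csszip.php?file=$1.css [L]

        # everything else that isn't a real file or directory goes to the front controller
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?path=$1 [NS,L]
        </IfModule>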

    Read the article

  • ssl between balancer members?

    - by jemminger
    I have apache running on one machine as a load balancer: <VirtualHost *:443> ServerName ssl.example.com DocumentRoot /home/example/public SSLEngine on SSLCertificateFile /etc/pki/tls/certs/example.crt SSLCertificateKeyFile /etc/pki/tls/private/example.key <Proxy balancer://myappcluster> BalancerMember http://app1.example.com:12345 route=app1 BalancerMember http://app2.example.com:12345 route=app2 </Proxy> ProxyPass / balancer://myappcluster/ stickysession=_myapp_session ProxyPassReverse / balancer://myappcluster/ </VirtualHost> Note that the balancer takes requests under SSL port 443, but then communicates to the balancer members on a non-ssl port. Is it possible to have the forwarding to the balancer members be under SSL too? If so, is this the best/recommended way? If so, do I have to have another SSL cert for each balancer member? Does the SSLProxyEngine directive have anything to do with this?
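    The hops to the members can indeed be encrypted: address the balancer members with https:// URLs and enable SSLProxyEngine in the same virtual host (that directive is exactly what controls SSL on outgoing proxy connections). Each backend then presents a certificate of its own; how strictly the proxy validates it can be tuned with SSLProxyVerify and related directives. A hedged sketch of the relevant part:

        SSLProxyEngine on
        <Proxy balancer://myappcluster>
            BalancerMember https://app1.example.com:12345 route=app1
            BalancerMember https://app2.example.com:12345 route=app2
        </Proxy>
        ProxyPass / balancer://myappcluster/ stickysession=_myapp_session
        ProxyPassReverse / balancer://myappcluster/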

    Read the article

  • nginx : backend https, proxy_pass shows ip

    - by Vulpo
    I am using nginx as a reverse proxy listening at port 80 (http). I am using proxy_pass to forward requests to backend http and https servers. Everything works fine for my http server, but when I try to reach the https server through the nginx reverse proxy, the IP of the https server is shown in the client's web browser. I want the URI of the nginx server to be shown instead of the https backend server's IP (once again, this works fine with the http server but not for the https server). See this post on the forum. Here is my configuration file: server { listen 80; server_name domain1.com; access_log off; root /var/www; if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; } location / { proxy_pass http://ipOfHttpServer:port/; } } server { listen 80; server_name domain2.com; access_log off; root /var/www; if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; } location / { proxy_pass http://ipOfHttpsServer:port/; proxy_set_header X_FORWARDED_PROTO https; #proxy_set_header Host $http_host; } } When I try the "proxy_set_header Host $http_host" directive and "proxy_set_header Host $host", the web page can't be reached (page not found). But when I comment it out, the IP of the https server is shown in the browser (which is bad). Does anyone have an idea? My other config files are: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; user www-data; worker_processes 2; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; server_names_hash_bucket_size 64; sendfile off; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_comp_level 5; gzip_http_version 1.0; gzip_min_length 0; gzip_types text/plain text/html text/css image/x-icon application/x-javascript; gzip_vary on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Thanks for your help!

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.

    Read the article

  • Detecting dead proxies

    - by Afnan
    Is it possible to detect which proxies are active and which are dead? Using C# and a combo box containing a list of proxies with port numbers, is there any way to take every proxy one by one and determine whether it is dead or active? Microsoft.Win32.RegistryKey registry = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true); registry.SetValue("ProxyEnable", 1); registry.SetValue("ProxyServer", comboBox1.Text);

    Read the article

  • How to handle requests from IIS to Apache?

    - by omoto
    Hi all! I need to create something like a proxy between IIS and Apache. I would like to set up the host headers on IIS, because it's a Windows 2003 Server, but I have some applications that should be hosted under Apache. In this case I think I should set up something like a proxy... Any ideas?

    Read the article

  • Setting custom SQL in django admin

    - by eugene y
    I'm trying to set up a proxy model in the Django admin. It will represent a subset of the original model. The code from models.py: class MyManager(models.Manager): def get_query_set(self): return super(MyManager, self).get_query_set().filter(some_column='value') class MyModel(OrigModel): objects = MyManager() class Meta: proxy = True Now instead of filter() I need to use a complex SELECT statement with JOINs. What's the proper way to inject it wholly into the custom manager?

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and to get the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                 Value
        /WcfSampleService/PersonService.svc/rest/fetch/3    "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"              /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, average response time of 975 milliseconds with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request, where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.

    Read the article

  • Lighttpd server is stopped

    - by tomaszs
    I have a Lighttpd server plus mod_fastcgi. Today I got an Internal Server Error 500. I've checked my error log and it goes like this: 2010-04-22 22:59:14: (server.c.1464) server stopped by UID = 0 PID = 3332 2010-04-22 22:59:15: (mod_fastcgi.c.1768) connect failed: No such file or directory on unix:/tmp/php.socket-5 2010-04-22 22:59:15: (mod_fastcgi.c.2956) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 1 2010-04-22 22:59:15: (mod_fastcgi.c.2709) child died somehow, waitpid failed: 10 2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 0 PID = 3332 2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385 2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385 2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385 2010-04-22 22:59:15: (server.c.1464) server stopped by UID = 48 PID = 1385 What can I do to find out the cause of this?

    Read the article

  • FastCGI for C++

    - by Gordon
    I've found only two FastCGI libraries for C++. There's the "official" one, and fastcgi++. How is either one better than the other? Do any others exist?

    Read the article

  • Ruby: having callbacks on 'attr' objects

    - by JP
    Essentially I'm wondering how to place callbacks on objects in Ruby, so that when an object is changed in any way I can automatically trigger other changes: class MyClass attr_reader :proxy def proxy=(string_proxy = "") begin @proxy = URI.parse("http://"+((string_proxy.empty?) ? ENV['HTTP_PROXY'] : string_proxy)) @http = Net::HTTP::Proxy.new(@proxy.host,@proxy.port) rescue @http = Net::HTTP end end end m = MyClass.new m.proxy = "myproxy.com:8080" p m.proxy # => <URI: @host="myproxy.com" @port=8080> # However changing m.proxy will not change the @http variable, as proxy= is not being called. # Desired functionality: m.proxy = nil # Now @http.class is Net::HTTP, not Net::HTTP::Proxy

    Read the article

  • most widely used python web app deployment style

    - by mete
    I wonder which option is the most stable (leaving performance aside) and the most widely used (I assume the widely used one is the most stable): apache - mod_wsgi, apache - mod_fcgid, apache - mod_proxy_ajp, or apache - mod_proxy_http, for a project that will serve REST services with small JSON-formatted input and output messages and web pages, at up to 100 req/s. Please also comment if you think nginx etc. is more suitable than Apache. Thanks.
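    To make the comparison concrete, a hedged sketch of what the mod_proxy_http variant typically looks like: Apache simply forwards requests to whatever HTTP application server runs the Python app (gunicorn, uWSGI in HTTP mode, etc.) on a local port. The server name and port below are placeholders:

        <VirtualHost *:80>
            ServerName api.example.com

            # hand everything to the Python app server listening on 127.0.0.1:8000
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>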

    Read the article

  • How to change suexec root directory from "/var/www" to "/home"?

    - by Oudin
    Hi, I've installed suexec on Ubuntu 12.04 using: apt-get install apache2 apache2-suexec libapache2-mod-fcgid php5-cgi However, when I run the following command: sudo /usr/lib/apache2/suexec -V I get the following info: -D AP_DOC_ROOT="/var/www" -D AP_GID_MIN=100 -D AP_HTTPD_USER="www-data" -D AP_LOG_EXEC="/var/log/apache2/suexec.log" -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin" -D AP_UID_MIN=100 -D AP_USERDIR_SUFFIX="public_html" I'm using "/home/user/public_html" to serve users' content on the web, not "/var/www". How can I change the root directory to "/home"?

    Read the article

  • Tor : Stuck at Connecting to Relay Directory

    - by Ghassan
    I have never worked with Tor before. The company where I work used to allow us access to any site we wished. Nonetheless, as of the beginning of this month, they installed a proxy server to filter which sites can be accessed and which ones can't. The filter isn't only on URLs but on IPs as well; even hex-encoded IPs won't work. So after some research I decided to use Tor. The first day I installed it, everything went smoothly and I was accessing any website I wished. Just today, everything stopped. When I try to start Vidalia, it gets stuck at "Connecting to Relay Directory". I work on the Windows 7 platform. Please help me out! Thanks in advance.

    Read the article

  • inetd / xinetd not working under cygwin

    - by Zimmy-DUB-Zongy-Zong-DUBBY
    I am trying to use xinetd (or inetd) with netcat to act as a TCP proxy. This setup works on Linux without issue. Under Cygwin, either as a service or from a Cygwin command line, (x)inetd fails to launch netcat, with the error "no such file or directory". I have tried specifying /usr/bin/nc, /usr/bin/nc.exe, /cygdrive/d/cygwin/usr/bin/nc.exe, d:\cygwin\bin\nc.exe, and a TON of other combinations of forward slashes, backslashes, Windows paths and Cygwin paths. No matter what, I get errno 2, no such file or dir. Any ideas? I need this working ASAP. Edit: I thought it might have to do with it being in d:\cygwin (lame hardcoding?), but I tested it on a machine with Cygwin on C:\, and the problem exists there too.

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >