Search Results

Search found 5793 results on 232 pages for 'requests'.


  • IIS cannot access itself

    - by dave
    We are on a corporate network that uses ISA, and I am having issues trying to keep requests from going through ISA. I have IIS7 on my local Windows 7 machine, which hosts websites and a service layer. The websites access the service layer using an xxxx.servicelayer.local address that is set up in my HOSTS file to point to 127.0.0.1. I have the Windows Firewall client, which I have disabled. I have tried both adding this address to IE's proxy exceptions so that it does not go through ISA, and disabling the proxy section altogether. When the website (which is actually IIS making a request to itself) tries to access the service layer, I receive an ISA error that proxy authentication has failed. Considering that everything I can see to configure is set not to go through the proxy (ISA), I cannot see how this request is actually going through the proxy and producing this error. Is there something within Windows 7 that forces the proxy setting, some sort of caching or similar?
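
    One avenue worth checking: requests that IIS makes to itself run under a service account and use the machine-wide WinHTTP proxy settings, not the per-user IE settings, so an exception added in IE never applies to them. A hedged sketch (the ISA host/port shown is a placeholder):

        netsh winhttp show proxy
        rem route local service-layer names direct, everything else via ISA
        netsh winhttp set proxy proxy-server="isa.corp.local:8080" bypass-list="*.servicelayer.local;<local>"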

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    I'm having an issue with an Apache front-end server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case: Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]
        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
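
    A note on ordering: Apache merges <Location> sections in the order they appear, with later sections overriding earlier ones, so the catch-all <Location /> above can override the /static/ exclusion. A minimal sketch of the commonly suggested fix, keeping the same paths - declare the exclusion after the catch-all:

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
        </Location>
        # declared last so it wins the merge for /static/
        <Location /static/>
            ProxyPass !
        </Location>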

    Read the article

  • Does an SMTP request contain host header information (or just the IP of the targeted SMTP server)?

    - by Olaf
    We are using an external commercial SMTP server for our newsletters (sending them through .NET components). They offer two SMTP hostnames - smtp.critsend.com and fast.critsend.com - the second reserved for sending single emails, the first for bulk. Using nslookup shows that both resolve to the same 4 IP addresses (fast.critsend.com being an alias). Question: (how) is it possible for the SMTP relay to distinguish between the different names? Is there something in the headers that can be compared to host headers in the HTTP protocol (I didn't find any intelligible information for a non-sysadmin)? The reason I'm asking is that we would like to use one of the IPs in our newsletter script (which works) rather than a name (in order to save DNS requests), and we are wondering about potential problems.
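
    For what it's worth, SMTP has no analogue of the HTTP Host header: the name you give your mailer only matters for the DNS lookup, and the only hostname the protocol itself carries is the one the client announces about itself. A hedged transcript sketch (the IP and banner are placeholders):

        $ telnet 203.0.113.25 25          # connecting by raw IP
        220 mx.critsend.com ESMTP         # hypothetical banner
        EHLO mail.example.org             # identifies the *client*, not the server dialed

    So the relay cannot tell smtp.critsend.com from fast.critsend.com at the protocol level; the caveat with hardcoding one of the four IPs is that you silently opt out of any DNS-based failover or rebalancing the provider does.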

    Read the article

  • Configuring Nginx for Wordpress and Rails

    - by Michael Buckbee
    I'm trying to set up a single website (domain) that contains both a front-end Wordpress installation and a single-directory Ruby on Rails application. I can get either one to work successfully on its own, but I can't sort out a configuration that lets them coexist. The following is my best attempt, but it results in all Rails requests being picked up by the try_files block and redirected to "/".

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            try_files $uri $uri/ /index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            passenger_enabled on;
            passenger_base_uri /rails;
        }

    An example request to the Rails app would be http://www.flickscan.com/rails/movies/upc/025192395925
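
    One hedged approach (exact placement of the Passenger directives varies by Passenger version) is to give the Rails prefix its own location block so the Wordpress fallback never sees it:

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            passenger_base_uri /rails;

            location /rails {
                passenger_enabled on;              # Passenger owns this prefix
            }

            location / {
                try_files $uri $uri/ /index.php;   # Wordpress fallback, scoped away from /rails
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }
        }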

    Read the article

  • Caching Reverse-Proxy ISP Host for a Low-Bandwidth Server

    - by Casey
    I am building a webcam w/ HTTP server that will be running from a low-bandwidth connection. The content on the site will be changing every 5 to 10 minutes. Instead of serving files directly from this connection, are there hosting companies that can act as a reverse proxy for my site? That way, if nobody is using the site, the local internet connection remains idle, and if I receive 1000 hits all at the same time, only one HTTP GET is required, and the hosting company (on a fat pipe) serves the other 999 requests. This doesn't sound like a very common usage model, but I feel like this would be the optimal solution to my situation.
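
    This is exactly what a caching reverse proxy with request coalescing does; in nginx terms it is proxy_cache plus proxy_cache_lock. A hedged sketch of the proxy end (hostnames are placeholders):

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcam:10m;

        server {
            listen 80;
            server_name cam.example.com;                # the public name
            location / {
                proxy_pass http://home.example.net;     # the low-bandwidth origin
                proxy_cache webcam;
                proxy_cache_valid 200 5m;               # content changes every 5-10 minutes
                proxy_cache_lock on;                    # 1000 concurrent misses -> one origin GET
            }
        }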

    Read the article

  • Disable PXE programmatically in Parallels?

    - by Stefan Lasiewski
    I'm running Parallels 4.0 on Mac OS X 10.5.8. I'm trying to create a bunch of virtual machines from the command line, using the prlctl tool, like so:

        $ prlctl create test1 -o linux -d centos
        $ prlctl set test1 --device-del cdrom0
        $ prlctl start test1

    Now, each time I start a new VM, the VM spends time waiting for a PXE boot. I'd like to turn this off. Can I disable PXE requests using Parallels or a Parallels command-line tool? Or can I set the boot order of a VM from the command line?
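
    If your prlctl build supports it (worth confirming against prlctl set --help, since flags vary across Parallels versions), a boot-order setting that omits the network device should skip the PXE wait - a hedged sketch:

        $ prlctl set test1 --device-bootorder "hdd0"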

    Read the article

  • Why does mod_security require an ACCEPT HTTP header field?

    - by ripper234
    After some debugging, I found that the core ruleset of mod_security blocks requests that don't have the (optional!) Accept header field. This is what I find in the logs:

        ModSecurity: Warning. Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]
        ModSecurity: Access denied with code 400 (phase 2). Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/optional_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]

    Why is this header required? I understand that "most" clients send it, but why is its absence considered a security threat?
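
    If you decide the rule's trade-off isn't worth it, the CRS id from the log can be disabled with the standard ModSecurity directive, placed in a file that loads after the CRS - a minimal sketch:

        <IfModule security2_module>
            # drop the "Request Missing an Accept Header" check (CRS rule 960015)
            SecRuleRemoveById 960015
        </IfModule>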

    Read the article

  • Google Chrome suspicious connections

    - by Poni
    I'm using Chrome on Windows, and with TCPView (from the Sysinternals freeware suite) I see that chrome.exe establishes connections to these IPs:

        173.194.37.104
        209.85.146.138

    Using http://www.ipaddresslocation.org/ I checked these IPs and see they're related to Google. Now, to clarify, these are the exact things I do: I open Chrome, with the default page set to BLANK (i.e. no homepage whatsoever). Then I go to my website, which has a blank page, so no "other" HTTP requests are made. Right from this point there is a persistent connection, usually to 173.194.37.104. What are these? Very suspicious.
    Edit #1:
    - I'm in 'incognito' mode right from the start, launching Chrome with a shortcut that uses the '-incognito' switch.
    - I've turned off all phishing protections and other "advanced" features in order to reduce Chrome's network activity.

    Read the article

  • Keeping a Rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than subsequent requests. I am actually querying the machine with wget from a different host now to keep it "alive", but I wonder what frequency is necessary. Does anyone know the time period after which they send a VM to "sleep"? I guess it would be some minutes.
    EDIT: There is no caching involved on the PHP site. It just recently moved from another vhost, and there was never such latency on the first request.
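
    Absent a documented idle timeout, a cron job is the usual blunt instrument; a sketch assuming five-minute intervals are frequent enough (the URL is a placeholder):

        # m h dom mon dow   command
        */5 * * * * wget -q -O /dev/null http://your-vm.example.com/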

    Read the article

  • apache redirect to https for basic auth

    - by shreddd
    I have a tricky variation on an old problem. I have an Apache-based site that should generally be accessed via http/port 80. However, for certain protected areas that require authentication (designated by .htaccess), I want to be able to redirect the user to https/port 443. The key here is that I want this to always happen - i.e. I don't want to have to rewrite each .htaccess file with a redirect. I only want to enforce this for basic authentication, and the protected areas are scattered all over the site. Is it possible to somehow redirect all basic authentication requests to the SSL host?
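
    If the protected areas can be enumerated by path, one hedged approach is a server-wide rewrite that fires before authentication (the path names here are placeholders for the real protected directories):

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^/(private|admin)(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    The limitation is that this keys off paths, not off the auth configuration itself, so scattered .htaccess-protected areas would each need their prefix listed.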

    Read the article

  • Help with Apache rewriteengine rules

    - by Vinay
    Hello - I am trying to write a simple rewrite rule using the RewriteEngine in Apache. I want to redirect all traffic destined for a website unless the traffic originates from a specific IP address and the URI contains two specific strings.

        RewriteEngine On
        RewriteLog /var/log/apache2/rewrite_kudithipudi.log
        RewriteLogLevel 1
        RewriteCond %{REMOTE_ADDR} ^199\.27\.130\.105
        RewriteCond %{REQUEST_URI} !/StringOne [NC, OR]
        RewriteCond %{REQUEST_URI} !/StringTwo [NC]
        RewriteRule ^/(.*) http://www.google.com [R=302,L]

    I put these statements in my virtual host configuration, but the RewriteEngine seems to redirect all requests, whether they match the conditions or not. Am I missing something? Thank you. Vinay.
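
    Two things stand out, hedged since the intent has to be inferred: bracketed flag lists must not contain spaces ([NC, OR] is a configuration error; it must be [NC,OR]), and "redirect unless A and B and C" translates, by De Morgan, into OR-ed negated conditions. A sketch:

        RewriteEngine On
        # redirect unless the client is 199.27.130.105 AND the URI contains both strings
        RewriteCond %{REMOTE_ADDR} !^199\.27\.130\.105 [OR]
        RewriteCond %{REQUEST_URI} !/StringOne [NC,OR]
        RewriteCond %{REQUEST_URI} !/StringTwo [NC]
        RewriteRule ^/(.*) http://www.google.com [R=302,L]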

    Read the article

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. My web application, which previously worked great on Apache 2.2 and PHP 5.3, now gets these messages in Firefox saying the "connection was reset" (see screenshot). I am connecting to the Linux machine over the local LAN. I'm assuming it might be something to do with the new version of Apache or PHP, or the new LAMP stack I downloaded from BitNami. It seems to happen every 5-10 requests, and sending a POST request from a page makes it more likely to trigger. Is it timing out the script or something? These are just basic dynamic pages I'm loading, and they worked perfectly on Apache 2.2 and PHP 5.3. Here are my httpd.conf and php.ini if they hold any clues. Any ideas? Any help much appreciated.

    Read the article

  • Declaring multiple ports for the same VirtualHosts

    - by user65567
    I'd like to declare multiple ports for the same VirtualHost:

        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    How can I declare a new port ('Listen', ServerName, ...) for 'domain.localhost'? If I add the following code, Apache also answers (too much) for all other subdomains of 'domain.localhost' (subdomain1.domain.localhost, subdomain2.domain.localhost, ...):

        <VirtualHost *:80>
            ServerName pjtmain.localhost:80
            DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
            RackEnv development
            <Directory "/Users/Toto85/Sites/pjtmain/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
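
    Name-based matching only selects among vhosts defined for the same address:port; with a single *:80 vhost, Apache hands it every port-80 request regardless of hostname. A hedged fix is to declare a catch-all default first (names and paths are placeholders; Apache 2.2 also needs NameVirtualHost *:80):

        # the first *:80 vhost becomes the default for unmatched names,
        # so subdomain1.domain.localhost etc. no longer fall through to pjtmain
        <VirtualHost *:80>
            ServerName default.localhost
            DocumentRoot "/Users/Toto85/Sites/default"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName pjtmain.localhost
            DocumentRoot "/Users/Toto85/Sites/pjtmain/public"
            RackEnv development
        </VirtualHost>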

    Read the article

  • A Linux DHCP server that will listen on an non-broadcast (tap) interface?

    - by TomOnTime
    Are there any Linux DHCP servers that will listen on what Cisco calls an "unnumbered" interface, or what others might call an "NBMA" (non-broadcast) interface? I have a Linux system that connects to a number of others using GRE tunnels. The machines on the other end send DHCP requests to this machine; I can see them with tcpdump. However, ISC DHCP 3.0.3 refuses to listen on the interface because it is non-broadcast. The interface I'd like DHCP to listen on is:

        tap2      Link encap:Ethernet  HWaddr removed-for-privacy
                  inet6 addr: removed-for-privacy/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:518 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:510 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:196242 (191.6 KiB)  TX bytes:52425 (51.1 KiB)
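
    Besides the broadcast flag, ISC dhcpd also refuses any interface that has no IPv4 address and no matching subnet declaration - and the tap2 output above shows only an inet6 address. A hedged sketch, with the addressing purely an assumption:

        # give the tap an IPv4 address dhcpd can bind to
        ip addr add 10.99.0.1/24 dev tap2

        # dhcpd.conf: a subnet matching that address
        subnet 10.99.0.0 netmask 255.255.255.0 {
            range 10.99.0.100 10.99.0.200;
            option routers 10.99.0.1;
        }

    then start the daemon against that interface explicitly: dhcpd tap2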

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script, following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to access Route 53's DNS. By this I mean they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using security groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so security groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.
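
    It may help to separate two roles: Route 53 is authoritative hosting for zones you own, not a recursive resolver, so instances should keep the Amazon-supplied resolver for general lookups, and Route 53 only answers queries about your own domain. A hedged check from an instance (the domain and delegation-set server are placeholders):

        dig yourdomain.com NS                        # via the default resolver
        dig @ns-123.awsdns-45.com yourdomain.com A   # ask the zone's delegated server directly

    If the second query answers but the first does not, the delegation at the registrar is the usual suspect rather than the instance's resolver or security groups.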

    Read the article

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this:

        http://myhost/a_very_very_long_url_around_300_chars_long

    (i.e. a single, very long segment around 300 chars long). The problem is, I'm getting a 400 Bad Request response from HTTP.SYS (it doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g.:

        2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL -

    I tried setting UrlSegmentMaxLength in the registry, and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried this on another Win2k3 x64 server and it also failed. Any hints?
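
    For reference, a hedged sketch of the x86-box fix, in case the value on the x64 servers landed somewhere else or was never re-read - the key is read when the HTTP driver starts, so a reboot (or restarting the HTTP service and IIS) is needed to apply it:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512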

    Read the article

  • Server 2012 R2 DNS Conditional forwarding not working reliably, possible caching issue?

    - by Matt
    I have a bit of a home lab setup with a domain controller that is acting as the DNS server for my network. For everything, it's working fine and forwards external DNS requests to my ISP. The household recently wanted to get Netflix going and it seemed a DNS option was better than a VPN to get around the region locking, so I signed up for unblock-us.com Since I have a Windows DNS server I thought I'd be clever and make use of conditional forwarders and added the Netflix domain to the list. Initially this worked well and all devices on the network could now access Netflix, however after about an hour going to the Netflix site would result in a page cannot be found. Doing an nslookup of Netflix.com from my PC resulted in it not returning any IP addresses. As a test, I deleted the Netflix domain from the DNS servers cache and things started working again - devices could get to the site again however the same thing happens again after around half an hour to an hour. Have I missed something here that's causing it to stop working?
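
    When an answer works and then goes stale, flushing the caches one at a time helps localize which layer is at fault - hedged commands for the 2012 R2 DC (the forwarder IP is a placeholder for whatever unblock-us assigns):

        Clear-DnsServerCache                    # PowerShell; or: dnscmd /clearcache
        ipconfig /flushdns                      # on a test client
        nslookup netflix.com                    # via the DC
        nslookup netflix.com 203.0.113.53       # straight at the conditional forwarder

    If the direct query keeps working after the DC's answer dies, comparing the record TTLs from both answers would be the next step.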

    Read the article

  • SSL to SSL Redirects in IIS - Possible?

    - by Eric
    We have a situation where we would like to redirect https://service1.domain.com to https://service2.domain.com. I know this is very simple with HTTP endpoints, but I'm not too sure about HTTPS. We have some legacy Windows application web service clients that will not be updating their software version soon, and we cannot update their web references to https://service2.domain.com. Is there any way to leave these web service clients pointing at https://service1.domain.com, but have their requests forwarded to (and responded to by) https://service2.domain.com? The old server is running IIS 6.0. The new server is running IIS 7.0; we could probably upgrade it to 7.5 if needed, but I'm not certain. We could also probably make a seamless transition of the old web service to a new server using public DNS, but we cannot change the DNS name of "service1.domain.com". Thanks, ServerFault!
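
    Since the clients cannot be repointed, a reverse proxy keeps service1's name alive; on the IIS 7 box this is typically done with the ARR and URL Rewrite add-on modules (both must be installed and ARR's proxy mode enabled). A hedged web.config sketch for the service1 site:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="forward to service2" stopProcessing="true">
                <match url="(.*)" />
                <action type="Rewrite" url="https://service2.domain.com/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Note that TLS still terminates at service1, so its certificate has to stay valid for as long as the legacy clients point at it.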

    Read the article

  • RackSpace Cloud Strips $_SESSION if URL Has Certain File Extensions

    - by macinjosh
    The Situation
    I am creating a video training site for a client on the Rackspace Cloud, using the traditional LAMP stack (Rackspace's cloud has both Windows and LAMP stacks). The videos and other media files I'm serving on this site need to be protected, as my client charges money for access to them. There is no DRM or funny business like that; essentially we store the files outside of the web root and use PHP to authenticate users before they are able to access the files, by using mod_rewrite to run the request through PHP. So let's say the user requests a file at this URL:

        http://www.example.com/uploads/preview_image/29.jpg

    I am using mod_rewrite to rewrite that URL to:

        http://www.example.com/files.php?path=%2Fuploads%2Fpreview_image%2F29.jpg

    Here is a simplified version of the files.php script:

        <?php
        // Sets up the environment and sets $logged_in.
        // This part requires $_SESSION.
        require_once('../../includes/user_config.php');

        if (!$logged_in) {
            // Redirect non-authenticated users
            header('Location: login.php');
        }

        // This user is authenticated, continue
        $content_type = "image/jpeg";

        // getAbsolutePathForRequestedResource() takes a query parameter
        // called path and uses DB lookups and some string manipulation
        // to get an absolute path. This part doesn't have any bearing
        // on the problem at hand.
        $file_path = getAbsolutePathForRequestedResource($_GET['path']);

        // At this point $file_path looks something like this:
        // "/path/to/a/place/outside/the/webroot"
        if (file_exists($file_path) && !is_dir($file_path)) {
            header("Content-Type: $content_type");
            header('Content-Length: ' . filesize($file_path));
            echo file_get_contents($file_path);
        } else {
            header('HTTP/1.0 404 Not Found');
            header('Status: 404 Not Found');
            echo '404 Not Found';
        }
        exit();
        ?>

    The Problem
    Let me start by saying this works perfectly for me. On local test machines it works like a charm. However, once deployed to the cloud it stops working. After some debugging, it turns out that if a request to the cloud has certain file extensions, like .JPG, .PNG, or .SWF (i.e. extensions of typically static media files), the request is routed to a cache system called Varnish. The end result of this routing is that by the time this whole process makes it to my PHP script, the session is not present. If I change the extension in the URL to .PHP, or if I even add a query parameter, Varnish is bypassed and the PHP script can get the session. No problem, right? I'll just add a meaningless query parameter to my requests! Here is the rub: the media files I am serving through this system are being requested by compiled SWF files that I have zero control over. They are generated by third-party software, and I have no hope of adding to or changing the URLs that they request. Are there any other options I have on this?
    Update: I should note that I have verified this behavior with Rackspace support, and they have said there is nothing they can do about it.
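
    One hedged avenue, resting on the assumption that the SWFs request media relative to the page that embeds them: carry an expiring token in a path segment instead of relying on $_SESSION, so the URL stays static-looking but still authenticates. A sketch of the rewrite and the check (the token scheme and helper are hypothetical):

        # embed pages under /t/<token>/..., so relative SWF requests inherit the prefix
        RewriteRule ^t/([a-f0-9]{40})/(.*)$ files.php?token=$1&path=$2 [QSA,L]

        // in files.php: validate the token instead of $_SESSION
        $token = $_GET['token'];            // issued at login, stored server-side
        if (!tokenIsValid($token)) {        // hypothetical DB lookup with expiry
            header('HTTP/1.0 403 Forbidden');
            exit();
        }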

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network using a VPN who need to be able to view this remote site. I don't think this works because the packets want to come in and go out over the same (ext) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middleman for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX, and the remote website is dishing out pages over a non-standard port. Can I use squid or something similar to proxy just one site?
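
    Squid can do this in accelerator (reverse proxy) mode, fronting a single origin on a non-standard port - a hedged squid.conf sketch for a box on the local network (names, IPs and ports are placeholders):

        http_port 3128 accel defaultsite=remote.example.com
        cache_peer 198.51.100.10 parent 8090 0 no-query originserver name=remote
        acl remote_site dstdomain remote.example.com
        http_access allow remote_site
        cache_peer_access remote allow remote_site

    VPN users would then reach the site through the internal box (e.g. by pointing remote.example.com at it in internal DNS), and the remote site sees only the PIX's outside IP.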

    Read the article

  • DHCP for Multiple Subnets

    - by TheD
    So this is the current setup - essentially I would like to get my DHCP server serving DHCP requests for two separate subnets. A Netgear DG834G acting as a modem, connected to a SonicWall Pro 2040:

        X0 - LAN  - 192.168.1.0/24
        X1 - WAN  - <WAN-IP>
        X2 - WLAN - 192.168.10.0/24

    At the moment, I have a 2008 R2 server with DHCP installed, with an IP address on the 192.168.1.0/24 range, handling DHCP fine for this subnet. The SonicWall is configured correctly - anything connected to the WLAN has Full Allow to anything in the LAN, and vice versa - but it will not lease an IP from my server. I've also added another IP address to the server, so the physical NIC now has two IPs, 192.168.1.2 and 192.168.10.2, with a DHCP scope configured for each. Still no luck! Any ideas? Thanks!
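
    The underlying issue is that DHCP discovers are broadcasts and never cross the SonicWall by themselves; a second IP on the server's NIC doesn't help because WLAN discovers still arrive only on the 192.168.1.0 wire unless something relays them. The usual fix is two parts: enable DHCP relay (IP Helper) on the SonicWall's X2 interface pointing at 192.168.1.2, and define a scope for the WLAN range on the server - hedged, with the range an assumption:

        netsh dhcp server add scope 192.168.10.0 255.255.255.0 "WLAN"
        netsh dhcp server scope 192.168.10.0 add iprange 192.168.10.100 192.168.10.200

    The relay rewrites the broadcast as unicast and stamps it with the X2 subnet, which is how the server knows to answer from the 192.168.10.0 scope.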

    Read the article

  • Kill Leaking Connections on SQL Server 2005

    - by Thierry Brunet
    We have a legacy ASP application that leaks SQL connections somewhere. In Activity Monitor, I can see a bunch of idle processes with Last Batch times over an hour old. When I look at the T-SQL command batch, these are always FETCH API_CURSORXXX, which from my understanding is caused by improperly closed ASP ADO Recordsets. While we try to pinpoint the offending code, is there a way for me to monitor which requests open which cursors? I'm assuming Profiler, but I'm not sure what exactly I should be monitoring. I can see a bunch of calls to sp_cursoropen, but I don't see the API_CURSORXXX name anywhere. Second, would anyone be able to suggest a script we could run to kill these processes, based on the Last Batch time being older than 10 minutes and the Last Batch command being FETCH API_CURSORXXX? For various reasons, we unfortunately don't have any SQL Server DBAs.
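
    On the kill-script side, a hedged sketch for SQL Server 2005, joining sysprocesses to the batch text - run the inner SELECT on its own first to confirm it only catches the leaked cursors (the LIKE pattern is an assumption about how the batch text begins):

        DECLARE @spid smallint, @cmd nvarchar(20);

        DECLARE leakers CURSOR LOCAL FAST_FORWARD FOR
            SELECT p.spid
            FROM master.dbo.sysprocesses AS p
            CROSS APPLY sys.dm_exec_sql_text(p.sql_handle) AS t
            WHERE p.last_batch < DATEADD(minute, -10, GETDATE())
              AND t.text LIKE N'FETCH API_CURSOR%';

        OPEN leakers;
        FETCH NEXT FROM leakers INTO @spid;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @cmd = N'KILL ' + CAST(@spid AS nvarchar(6));
            EXEC (@cmd);   -- KILL does not accept a variable directly
            FETCH NEXT FROM leakers INTO @spid;
        END
        CLOSE leakers;
        DEALLOCATE leakers;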

    Read the article

  • PHP template caching design

    - by Thomas
    Hello to all, I want to include caching in my app design, starting with template caching. The design I have used so far is very modular: I have created an ORM implementation for all my tables, and each table is represented by a corresponding class. All requests are handled by one controller, which routes them to the appropriate webmethod functions. I am using a template class for handling UI parts. My plan is to implement a separate Cache class for handling caching, with the flexibility to store either in files, APC or memcache. Right now I am testing with file caching.
    Some thoughts
    Should I put the logic that checks for cached versions in the Template class, or in the webmethods which handle the incoming requests and which eventually call the Template class? In the first case, things are pretty simple, as I will not have to change anything more than passing the Template class an extra argument (whether to load from cache or not). In the second case, however, I am thinking of checking for a cached version immediately in the webmethod and, if found, returning it. This saves all the processing done until the logic reaches the template (first scenario). Both scenarios, however, rely on an accurate mechanism for invalidating caches, which brings us to:
    Invalidating caches
    As I see it (and you can add your input freely), a cached template file becomes invalid if:
    a. the expiration set is reached
    b. the template file itself is updated (i.e. by the developer adding a new line)
    c. the webmethod that handles the request changes (i.e. the developer adds/deletes something in the code)
    d. content coming from the DB and ending up in the template file is modified
    I am thinking of storing a JSON-encoded array inside the cached file: the first value will be the expiration timestamp of the cache; the second will be the modification time of the PHP file with the code handling the request (to cope with option c above); the third will be the content itself.
    The validation process I am considering, according to the above scenarios, is:
    a. if the expiration stored in the cached file is reached, delete the cache file
    b. if the cached file's mod time is smaller than the template skeleton file's mod time, delete the cached file
    c. if the mod time of the PHP file is greater than the one stored in the cache, delete the cached file
    d. this is tricky. In the ORM implementation I have added event handlers (which fire when adding, updating or deleting objects). I could delete the cache file every time an object that provides content to the template is modified. The problem is how to keep track of which cached files correspond to each schema object. Take this example: a user has a short-profile page and a full profile page (two templates). These templates can be cached. Now, every time the user modifies his profile, the event handler needs to know which templates or cached files correspond to the User, so that those files can be deleted. I could store the mapping in the DB, but I am looking for a better approach.
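
    A minimal sketch of the proposed cache-file layout and checks a-c (d still needs the object-to-template map discussed above); function and variable names are illustrative:

        <?php
        // cache file holds JSON: [expiry_ts, handler_mtime, content]
        function cache_fetch($cacheFile, $templateFile, $handlerFile) {
            if (!is_file($cacheFile)) {
                return false;
            }
            list($expires, $handlerMtime, $content) =
                json_decode(file_get_contents($cacheFile), true);

            $stale = time() > $expires                               // a. expired
                || filemtime($cacheFile) < filemtime($templateFile)  // b. template edited
                || filemtime($handlerFile) > $handlerMtime;          // c. handler edited

            if ($stale) {
                unlink($cacheFile);
                return false;
            }
            return $content;
        }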

    Read the article

  • How to cache streaming video and silverlight with squid windows reverse proxy

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network, using Squid's BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned - requests are forwarded and the application is working - but it doesn't seem to cache anything (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated. Any info you need to help me will be provided at once. Thanks in advance!
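
    Two Squid defaults are worth ruling out, hedged since the version in use isn't stated: without a cache_dir directive Squid caches in memory only (which would match "no space used on the drive"), and objects over the default maximum_object_size of 4 MB - most video - are never cached at all. A sketch for squid.conf (the path and sizes are placeholders):

        cache_dir ufs c:/squid/var/cache 20480 16 256   # 20 GB on-disk cache
        maximum_object_size 512 MB
        cache_mem 256 MB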

    Read the article

  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a webserver on my laptop that should be reachable from outside the network. I'm sitting behind a proxy server that passes outgoing packets to the matching server, but when it comes to incoming messages it doesn't route them correctly to my PC. (It seems packets only get passed if some PC within the student flat is already connected to the sending server.) In the past I had a small virtual private server that relayed incoming website requests over a reverse shell to my PC, which then returned the website content so the visitor could see my website. Sadly, I don't have that server anymore... Do you have any idea that might solve my problem? Greetings, Benedikt
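
    The VPS trick can be rebuilt with a plain SSH reverse tunnel to any machine you can shell into (the hostname is a placeholder):

        # expose local port 80 as port 8080 on the public host
        ssh -N -R 8080:localhost:80 user@public-host.example.com

    For the forwarded port to be reachable from outside the public host itself, its sshd_config needs GatewayPorts set to yes (or clientspecified).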

    Read the article
