Search Results

Search found 50527 results on 2022 pages for 'http expires'.


  • Linux QoS (Skype / BitTorrent / SIP / HTTP priority)

    - by Andre
    We are configuring a Linux box that will act as an internet gateway for an office of 30-50 computers. We are using iptables/HTB for traffic shaping. Is there a way to match traffic at the L7 level? It's easy to identify traffic by TCP/UDP ports (like SIP and HTTP). But what if we are dealing with Skype and BitTorrent? It was a surprise to me that there is no powerful, mature solution for tasks like this. I found only the l7-filter patch (http://l7-filter.clearfoundation.com/) for the Linux kernel, but it seems to be no longer supported. Moreover, it can't be compiled against modern Linux kernels. The only option I found was to use a Cisco router. Are there other ways to identify and shape Skype and BitTorrent traffic?
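
    The port-based half of this is straightforward with HTB plus fwmark filters. Below is a minimal sketch of that part only; eth0 and the 10mbit uplink are placeholders, and Skype/BitTorrent detection would still need a DPI layer (l7-filter or similar) on top of it:

        #!/bin/sh
        DEV=eth0   # assumed WAN interface

        # Root HTB qdisc; unclassified traffic falls into 1:30
        tc qdisc add dev $DEV root handle 1: htb default 30
        tc class add dev $DEV parent 1:  classid 1:1  htb rate 10mbit
        tc class add dev $DEV parent 1:1 classid 1:10 htb rate 3mbit ceil 10mbit prio 0   # SIP
        tc class add dev $DEV parent 1:1 classid 1:20 htb rate 5mbit ceil 10mbit prio 1   # HTTP
        tc class add dev $DEV parent 1:1 classid 1:30 htb rate 2mbit ceil 10mbit prio 2   # bulk/other

        # Mark traffic by port, then map the marks to HTB classes
        iptables -t mangle -A POSTROUTING -o $DEV -p udp --dport 5060 -j MARK --set-mark 10
        iptables -t mangle -A POSTROUTING -o $DEV -p tcp --dport 80   -j MARK --set-mark 20
        tc filter add dev $DEV parent 1: protocol ip handle 10 fw flowid 1:10
        tc filter add dev $DEV parent 1: protocol ip handle 20 fw flowid 1:20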

    Read the article

  • HTTP 301 redirection issue

    - by Guilhem Soulas
    I'm a little bit lost with a redirection. I want mysite.com, www.mysite.com and www.mysite.co.uk to redirect to mysite.co.uk. In Apache, I wrote this for mysite.co.uk in order to redirect www to the root domain: RewriteEngine on RewriteCond %{HTTP_HOST} ^www RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301] And for mysite.com, I wrote this redirect to mysite.co.uk: ServerName www.mysite.com RewriteEngine on RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301] This way I can make the redirection work properly from www.mysite.com to mysite.co.uk, but it doesn't also work for mysite.com to mysite.co.uk (without the www). Could someone tell me how to make all my redirections work in all cases?
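
    One likely culprit: the second virtual host only declares ServerName www.mysite.com, so requests for the bare mysite.com never reach it. A hedged sketch of one way to cover all three old names (the host names come from the question; the vhost layout is assumed):

        <VirtualHost *:80>
            ServerName mysite.com
            ServerAlias www.mysite.com www.mysite.co.uk
            RewriteEngine on
            RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301]
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.co.uk
            RewriteEngine on
            # Catch anything that is not exactly the canonical host
            RewriteCond %{HTTP_HOST} !^mysite\.co\.uk$ [NC]
            RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301]
        </VirtualHost>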

    Read the article

  • Apache .shared folder

    - by Kevin
    There are already a bunch of rules in my Apache configuration. What I want to add is the following. There are some shared folders (.shared): /var/www/.shared/ and /var/www/.include/.shared/ and /var/www/.include/(.*)/.shared/. Now when someone visits http://domain.com/test.png, Apache should first execute the existing rules and then (when the file or folder was not found) look in those .shared folders. So suppose I've got this filesystem: /var/www/.shared/dog.png /var/www/.shared/test.gif and /var/www/domain.com/dog.png. Now when someone visits http://domain.com/test.gif, it must load the test.gif from the .shared folder. And when someone visits http://domain.com/dog.png, it must load the dog.png from the domain.com folder (because the existing Apache rules are executed first).
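
    A rough mod_rewrite sketch of that fallback, appended after the existing rules. The paths are the ones from the question; whether a filesystem-path substitution is allowed like this depends on the context the rules live in (server/vhost vs. .htaccess), so treat it as a starting point rather than a drop-in:

        RewriteEngine on
        # If the file doesn't exist under the docroot but does under .shared, serve it from there
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond /var/www/.shared%{REQUEST_URI} -f
        RewriteRule ^/?(.*)$ /var/www/.shared/$1 [L]

        # Same fallback for the .include/.shared tree
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond /var/www/.include/.shared%{REQUEST_URI} -f
        RewriteRule ^/?(.*)$ /var/www/.include/.shared/$1 [L]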

    Read the article

  • Why do I see different TCP behaviour between IIS and FTP server applications on Windows 2003?

    - by rupello
    I am comparing Wireshark traces of a 10MB file download from: the FileZilla FTP server and IIS (using HTTP) on the same Windows 2003 server. The FTP download performs faster, and the trace shows the server behaving as expected, sending more data to the client with every ACK received: Link to full-size image The HTTP server trace shows a more bursty pattern. The timing of the send bursts is sometimes unrelated to any ACKs received from the client (circled in red): Link to full-size image Anyone have a suggestion as to why IIS traffic is behaving like this? Update: We have tried modifying the http.sys registry settings (setting MaxBytesPerSend to 256k and MaxBufferedSendBytes to 64k as recommended). Changing MaxBytesPerSend does seem to improve performance by increasing the amount of in-flight data, but we still see the same bursty pattern.
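
    For reference, a sketch of how those http.sys values are set. The value names come from the update above; the numbers are the 256k/64k figures expressed in bytes, and the HTTP service has to be restarted before they take effect:

        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxBytesPerSend /t REG_DWORD /d 262144
        reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxBufferedSendBytes /t REG_DWORD /d 65536
        net stop http /y && net start w3svc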

    Read the article

  • How to open HTTP on a Linux server

    - by wtfcoder
    I am a Windows (IIS) software engineer, but recently I've been thrown into a Linux server admin role until we can find someone to fill the position. I am not ashamed to admit I have no idea what I am doing. Currently the problem I am trying to solve is that the server is only responding to HTTPS requests. However, we need it to respond to standard HTTP requests as well. We don't really have anything that needs to stay secure on its way to the requester. I am running Red Hat Linux, administered via bash. If anyone could tell me how to enable HTTP requests I would really appreciate it! Thanks. Please make sure your response is fairly step-by-step, as I have minimal command line experience :/
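
    A rough outline of the usual two suspects, assuming Apache (httpd) on Red Hat; the paths and service names here are the stock ones and may differ on your box:

        # 1. Make sure Apache is listening on port 80, not just 443
        grep -n "^Listen" /etc/httpd/conf/httpd.conf      # expect a "Listen 80" line
        service httpd restart                             # after adding one if it was missing

        # 2. Open port 80 in the firewall and persist the rule
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save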

    Read the article

  • [JavaScript] Linux Ajax (MooTools Request.JSON) header error

    - by VDVLeon
    Hi all, I use the following code to get some JSON data: var request = new Request.JSON( { 'url': sourceURI, 'onSuccess': onPageData } ); request.get(); Request.JSON is a class from MooTools (a JavaScript library). But on Linux (Ubuntu, on Firefox 3.5 and Chrome) the request always fails. So I tried to display the HTTP request Ajax is sending (I used netcat to display it). The request is like this: OPTIONS /the+url HTTP/1.1 Host: example.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3 Referer: http://example.com/ref... Access-Control-Request-Method: GET Origin: http://example.com Access-Control-Request-Headers: X-Request, X-Requested-With, Accept Accept: */* Accept-Encoding: gzip,deflate Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The request line (first line) is not how it should be: OPTIONS /the+url HTTP/1.1 It should be: GET /the+url HTTP/1.1 Does anybody know what causes this and how to fix it?
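
    What the dump shows is a CORS preflight: the request is cross-origin and MooTools adds custom headers (X-Request, X-Requested-With), so the browser first sends OPTIONS and only issues the GET if the server approves it. A hedged sketch of approving the preflight on the server side (Apache with mod_headers; the origin and path are the example.com placeholders from the question):

        <Location /the+url>
            Header set Access-Control-Allow-Origin "http://example.com"
            Header set Access-Control-Allow-Methods "GET, OPTIONS"
            Header set Access-Control-Allow-Headers "X-Request, X-Requested-With, Accept"
        </Location>

    Alternatively, serving the JSON from the same origin as the page avoids the preflight entirely.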

    Read the article

  • Apache proxy to Lighttpd: changing $_SERVER['HTTP_HOST'] in php

    - by watain
    I have a WordPress blog running on lighttpd-1.4.19, listening on www00:81. On the same host, apache-2.2.11 listens on port 80 and proxies http://blog.mydomain.org:80 to http://blog.mydomain.org:81. The Apache virtual host looks as follows: <VirtualHost *:80> ServerName blog.mydomain.org ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://blog.mydomain.org:81/ ProxyPassReverse / http://blog.mydomain.org:81/ </VirtualHost> Using debug.log-request-handling = "enable" I get the following log entry when I browse http://blog.mydomain.org:80 (notice the Host headers): 2010-05-10 08:47:14: (request.c.294) fd: 6 request-len: 853 GET / HTTP/1.1 Host: blog.mydomain.org:81 [...] 2010-05-10 08:47:15: (request.c.294) fd: 8 request-len: 754 GET /wp-content/uploads/2010/01/image.gif?w=280 HTTP/1.1 Host: www00:81 My problem: as far as I know, the PHP environment variable $_SERVER['HTTP_HOST'] is set to that Host header variable. Unfortunately, WordPress uses that variable to create URLs to pictures on the blog. Those URLs won't be accessible from behind a firewall, of course. How can I force the host header to be blog.mydomain.org instead of blog.mydomain.org:81 or www00:81? I already added server.name = "blog.mydomain.org" to my lighttpd.conf, but this didn't work. Any suggestions are appreciated, thank you.
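
    Apache rewrites the Host header to match the ProxyPass target unless told otherwise; mod_proxy's ProxyPreserveHost directive forwards the client's original Host instead, which should make $_SERVER['HTTP_HOST'] come out as blog.mydomain.org. A sketch of the adjusted vhost:

        <VirtualHost *:80>
            ServerName blog.mydomain.org
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://blog.mydomain.org:81/
            ProxyPassReverse / http://blog.mydomain.org:81/
        </VirtualHost>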

    Read the article

  • What can I do about Hack Attempts

    - by Matt
    I have an ASP.net website hosted using the Ultidev Web Server Pro. Every day I get a steady stream of errors generated by my application where page requests were requested and denied. This is obviously someone/something trying to find any exploits on my website. Here is an example log: 28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpmyadmin/index.php 28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpMyAdmin/index.php 28/08/2012 11:37:12 - File not Found:http://MyWebServer/phpMyAdmin-2/index.php 28/08/2012 11:37:12 - File not Found:http://MyWebServer/php-my-admin/index.php 28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.3/index.php 28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.6/index.php 28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.1/index.php 28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.4/index.php 28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc1/index.php 28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc2/index.php 28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5/index.php 28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-pl1/index.php 28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc1/index.php 28/08/2012 11:37:17 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc2/index.php 28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6/index.php 28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7/index.php 28/08/2012 11:37:19 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7-pl1/index.php 28/08/2012 13:52:07 - File not Found:http://MyWebServer/admin/pma/translators.html Is this normal? Is there anything I can do to protect myself against this?
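
    These scans are background noise on any public web server; since this host serves no PHP at all, one option is to short-circuit the probes before they reach the application. A hypothetical ASP.NET IHttpModule along those lines (the class name and patterns are illustrative, and this assumes the UltiDev pipeline honors registered modules):

        using System;
        using System.Web;

        public class ProbeFilterModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    string path = app.Request.Path.ToLowerInvariant();
                    // No PHP is hosted here, so any *.php request is a scanner probe
                    if (path.EndsWith(".php") || path.Contains("phpmyadmin"))
                    {
                        app.Response.StatusCode = 404;
                        app.CompleteRequest();   // skip the rest of the pipeline
                    }
                };
            }

            public void Dispose() { }
        }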

    Read the article

  • Remove HTTP headers from a raw response

    - by Ed
    Let's say we make a request to a URL and get back the raw response, like this: HTTP/1.1 200 OK Date: Wed, 28 Apr 2010 14:39:13 GMT Expires: -1 Cache-Control: private, max-age=0 Content-Type: text/html; charset=ISO-8859-1 Set-Cookie: PREF=ID=e2bca72563dfffcc:TM=1272465553:LM=1272465553:S=ZN2zv8oxlFPT1BJG; expires=Fri, 27-Apr-2012 14:39:13 GMT; path=/; domain=.google.co.uk Server: gws X-XSS-Protection: 1; mode=block Connection: close <!doctype html><html><head>...</head><body>...</body></html> What would be the best way to remove the HTTP headers from the response in C#? With regexes? Parsing it into some kind of HTTPResponse object and using only the body? EDIT: I'm using SOCKS to make the request, that's why I get the raw response.
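
    Since the response above is not chunked (Connection: close, no Transfer-Encoding header), the body is simply everything after the first blank line. A minimal C# sketch under that assumption:

        // Headers and body are separated by the first CRLF CRLF sequence
        static string StripHeaders(string rawResponse)
        {
            int separator = rawResponse.IndexOf("\r\n\r\n", StringComparison.Ordinal);
            return separator >= 0 ? rawResponse.Substring(separator + 4) : rawResponse;
        }

    For responses that use chunked transfer encoding you would also have to de-chunk the body, at which point a real HTTP parser beats regexes.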

    Read the article

  • ASP.NET AppendHeader not working in ASP.NET MVC

    - by Chao
    I'm having problems getting AppendHeader to work properly if I am also using an authorize filter. I'm using an actionfilter for my AJAX actions that applies Expires, Last-Modified, Cache-Control and Pragma (though while testing I have tried including it in the action method itself with no change in results). If I don't have an authorize filter the headers work fine. Once I add the filter the headers I tried to add get stripped. The headers I want to add Response.AppendHeader("Expires", "Sun, 19 Nov 1978 05:00:00 GMT"); Response.AppendHeader("Last-Modified", String.Format("{0:r}", DateTime.Now)); Response.AppendHeader("Cache-Control", "no-store, no-cache, must-revalidate"); Response.AppendHeader("Cache-Control", "post-check=0, pre-check=0"); Response.AppendHeader("Pragma", "no-cache"); An example of the headers from a correct page: Server ASP.NET Development Server/9.0.0.0 Date Mon, 14 Jun 2010 17:22:24 GMT X-AspNet-Version 2.0.50727 X-AspNetMvc-Version 2.0 Pragma no-cache Expires Sun, 19 Nov 1978 05:00:00 GMT Last-Modified Mon, 14 Jun 2010 18:22:24 GMT Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Content-Type text/html; charset=utf-8 Content-Length 352 Connection Close And from an incorrect page: Server ASP.NET Development Server/9.0.0.0 Date Mon, 14 Jun 2010 17:27:34 GMT X-AspNet-Version 2.0.50727 X-AspNetMvc-Version 2.0 Pragma no-cache, no-cache Cache-Control private, s-maxage=0 Content-Type text/html; charset=utf-8 Content-Length 4937 Connection Close
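
    One hedged workaround: go through the Response.Cache policy object instead of raw AppendHeader calls, since ASP.NET's cache policy (which filters such as Authorize can touch) tends to overwrite manually appended caching headers when the response is finalized. A sketch with roughly equivalent settings:

        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetExpires(new DateTime(1978, 11, 19, 5, 0, 0, DateTimeKind.Utc));
        Response.Cache.SetLastModified(DateTime.Now);
        Response.Cache.SetNoStore();
        Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);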

    Read the article

  • ASP.NET MVC2 and Browser Caching

    - by Dan
    Hi, I have a web application that fetches a lot of content via Ajax. For example, when a user edits some data, the browser will send the changes using an Ajax POST and then do an Ajax GET to fetch fresh content and replace an existing div on the page with that content. This was working just fine with MVC1, but in MVC2 I would get inconsistent results. I've found that MVC1 by default included an Expires item in the response headers set to the current time, but in MVC2 the Expires header is missing. This is a problem with some browsers (IE8) actually using the cached version of the Ajax GET instead of the fresh version. To deal with the problem I created a simple ActionFilterAttribute that sets the response cache to NoCache (see below), which works, but it seems kind of silly to decorate every controller with this attribute. Is there a global way to set this for every controller? Is this a bug in MVC2, and should it really be setting Expires on every ActionResult/view/page? Don't most MVC programs deal with data entry, where stale data is a very bad thing? Thanks Dan public class ResponseNoCachingAttribute : ActionFilterAttribute { public override void OnResultExecuted(ResultExecutedContext filterContext) { base.OnResultExecuted(filterContext); filterContext.HttpContext.Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache); } }
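
    MVC2 predates the GlobalFilters collection that MVC3 introduced for exactly this, so one common pattern (sketched below, class names assumed) is to hang the attribute on a shared base controller:

        [ResponseNoCaching]
        public abstract class BaseController : Controller { }

        // Every controller deriving from BaseController now inherits the filter
        public class ProductsController : BaseController
        {
            public ActionResult Index() { return View(); }
        }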

    Read the article

  • setting cookies

    - by aharon
    Okay, so I'm trying to set cookies using Ruby. I'm in a Rack environment. response[name]=value will add an HTTP header into the HTTP headers hash Rack keeps. I know that it works. The following method doesn't work: def set_cookie(opts={}) args = { :name => nil, :value => nil, :expires => Time.now+314, :path => '/', :domain => Cambium.uri #contains the IP address of the dev server this is running on }.merge(opts) raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? or args[:value].nil? response['Set-Cookie']="#{args[:name]}=#{args[:value]}; expires=#{args[:expires].clone.gmtime.strftime("%a, %d-%b-%Y %H:%M:%S GMT")}; path=#{args[:path]}; domain=#{args[:domain]}" end Why not? And how can I solve it? Thanks.
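
    A couple of things worth checking: most browsers reject cookie domains that are bare IP (or IP:port) values, and assigning response['Set-Cookie'] directly overwrites any earlier cookie instead of appending. A hedged sketch using Rack's own helper, which handles the joining the way Rack expects (this assumes response here is the Rack header hash, and a Rack 1.x-era API):

        require 'rack/utils'

        # Appends to any existing Set-Cookie value instead of clobbering it
        Rack::Utils.set_cookie_header!(response, args[:name],
          :value   => args[:value],
          :expires => args[:expires],
          :path    => args[:path],
          :domain  => args[:domain])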

    Read the article

  • Java - How to find the redirected url of a url?

    - by Yatendra Goel
    I am accessing web pages through Java as follows: URLConnection con = url.openConnection(); But in some cases, a URL redirects to another URL. So I want to know the URL to which the previous URL redirected. Below are the header fields that I got as a response: null-->[HTTP/1.1 200 OK] Cache-control-->[public,max-age=3600] last-modified-->[Sat, 17 Apr 2010 13:45:35 GMT] Transfer-Encoding-->[chunked] Date-->[Sat, 17 Apr 2010 13:45:35 GMT] Vary-->[Accept-Encoding] Expires-->[Sat, 17 Apr 2010 14:45:35 GMT] Set-Cookie-->[cl_def_hp=copenhagen; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT, cl_def_lang=en; domain=.craigslist.org; path=/; expires=Sun, 17 Apr 2011 13:45:35 GMT] Connection-->[close] Content-Type-->[text/html; charset=iso-8859-1;] Server-->[Apache] So at present, I am constructing the redirected URL from the value of the Set-Cookie header field. In the above case, the redirected URL is copenhagen.craigslist.org Is there any standard way to determine the URL to which a particular URL is going to redirect? I know that when a URL redirects to another URL, the server sends an intermediate response containing a header field with the URL it is redirecting to, but I am not receiving that intermediate response through the url.openConnection(); method.
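
    For a genuine HTTP redirect (a 3xx status plus a Location header), the standard approach is to disable automatic following and read Location yourself, as in this minimal sketch. Note that the dump above shows a 200, so in that particular case the server is steering future requests with a cookie rather than redirecting, and there is no Location header to read:

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setInstanceFollowRedirects(false);       // keep the 3xx visible
        int status = con.getResponseCode();
        if (status >= 300 && status < 400) {
            String target = con.getHeaderField("Location");
            System.out.println("Redirects to: " + target);
        }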

    Read the article

  • Why do I get java.net.SocketException: Connection reset

    - by Jammy
    I need to send some requests to the server side and get a response. Sometimes, when I call a specific method that runs the following common code, I get an error on the line addToCookieJar(connection); (see the stack trace below). Any idea how this happens? URL url = new URL(providerURL); HttpURLConnection connection = (HttpURLConnection)url.openConnection(); connection.setRequestMethod("POST"); connection.setDoInput(true); connection.setDoOutput(true); connection.setUseCaches(false); connection.setRequestProperty("Content-Type", "application/octet-stream"); // We understand gzip encoding connection.addRequestProperty("Accept-Encoding", "gzip"); if (cookie != null && cookieHandler != null) { connection.setRequestProperty("Cookie", cookie); } if (cookieHandler == null) { addFromCookieJar(connection); } // Send the request ObjectOutputStream oos = new ObjectOutputStream(connection.getOutputStream()); oos.writeObject(remote.getName()); oos.writeObject(m.getName()); // method name oos.writeObject(m.getParameterTypes()); // formal parameters oos.writeObject(args); // actual parameters oos.flush(); oos.close(); if (cookieHandler == null) { cookieJar.put(new URI(providerURL), connection.getHeaderFields()); } Exception: java.lang.reflect.UndeclaredThrowableException at $Proxy0.updateDocument(Unknown Source) at com.agst.ui.gantt.GanttPanel.doUpdateDocument(GanttPanel.java:1931) at com.agst.ui.gantt.GanttPanel.save(GanttPanel.java:1419) at com.agst.ui.gantt.GanttPanel$4.run(GanttPanel.java:1673) at java.lang.Thread.run(Unknown Source) Caused by: java.net.SocketException: Connection reset at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source) at java.lang.reflect.Constructor.newInstance(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection$6.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.net.www.protocol.http.HttpURLConnection.getChainedException(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source) at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:196) at com.agst.rmi.RemoteCallHandler.invoke(RemoteCallHandler.java:142) ... 5 more Caused by: java.net.SocketException: Connection reset at java.net.SocketInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source) at sun.net.www.http.HttpClient.parseHTTP(Unknown Source) at sun.net.www.http.HttpClient.parseHTTP(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source) at sun.net.www.protocol.http.HttpURLConnection.getHeaderFields(Unknown Source) at com.agst.rmi.RemoteCallHandler.addToCookieJar(RemoteCallHandler.java:529) at com.agst.rmi.RemoteCallHandler.call(RemoteCallHandler.java:192) ... 6 more
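
    The trace shows the reset surfacing while the reply headers are read, which often points at a pooled keep-alive connection the server has already dropped. One hedged, low-effort experiment is to disable the JDK's connection reuse and see whether the resets stop, at the cost of a fresh socket per call:

        // Set once at startup, before any HttpURLConnection is opened
        System.setProperty("http.keepAlive", "false");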

    Read the article

  • Simple 301 redirect; how do I get it to drop the query string?

    - by Throlkim
    Hopefully there's a simple solution to this. I'm setting up a redirect for old search listings, but I don't want any of the path or query string to carry over. The current rule is: redirect 301 /blog http://newdomain.com But this would redirect the following: http://www.olddomain.com/blog/2009/11/article-name-etc To this: http://newdomain.com/2009/11/article-name-etc When I just want: http://newdomain.com Can this be done with a simple 301 redirect, or will I have to write a mod_rewrite rule?
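
    mod_alias's Redirect always appends the remainder of the path, so a mod_rewrite rule is the usual answer here. A short sketch: the pattern swallows everything after /blog, and the trailing "?" on the target discards any query string as well:

        RewriteEngine on
        RewriteRule ^/?blog http://newdomain.com/? [R=301,L]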

    Read the article

  • httpd.conf setup to simplify using 'localhost:81'

    - by Will
    I'm installing a portable WampServer within my Dropbox folder so I can access it anywhere. I have this achieved and accessible using http://localhost:81 I want to access it by using a different address (dropping the :81 port number) such as http://myothersite. I'm fairly certain I need to add a VirtualHost directive somewhere within this, but I am not experienced with Apache! This is the current Apache httpd.conf file: ServerRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/bin/apache/apache2.2.17" Listen 81 ServerAdmin admin@localhost ServerName localhost:81 DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> <Directory "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"> Options Indexes FollowSymLinks AllowOverride all # onlineoffline tag - don't remove Order Deny,Allow Deny from all Allow from 127.0.0.1 </Directory> <IfModule dir_module> DirectoryIndex index.php index.php3 index.html index.htm </IfModule> <FilesMatch "^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/apache_error.log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "C:/Users/will/Dropbox/Wampee-2.1-beta-2/logs/access.log" common #CustomLog "logs/access.log" combined </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "cgi-bin/" </IfModule> <IfModule cgid_module> #Scriptsock logs/cgisock </IfModule> <Directory "cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .php3 </IfModule> # Server-pool management (MPM specific) #Include conf/extra/httpd-mpm.conf # Multi-language error messages #Include conf/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include conf/extra/httpd-autoindex.conf # Language settings #Include conf/extra/httpd-languages.conf # User home directories #Include conf/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include conf/extra/httpd-info.conf # Virtual hosts #Include conf/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include conf/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include conf/extra/httpd-dav.conf # Various default settings #Include conf/extra/httpd-default.conf # Secure (SSL/TLS) connections #Include conf/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/alias/*" Include "C:/Users/will/Dropbox/Wampee-2.1-beta-2/MyWebApps/etc/alias/*"
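
    A rough sketch of the name-based virtual host this needs. Apache 2.2 also wants a NameVirtualHost line, and the made-up name myothersite has to resolve locally, e.g. via a hosts-file entry:

        # In httpd.conf (port 80 assumed free; keep Listen 81 for the existing setup)
        Listen 80
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName myothersite
            DocumentRoot "C:/Users/will/Dropbox/Wampee-2.1-beta-2/www/"
        </VirtualHost>

        # In C:\Windows\System32\drivers\etc\hosts
        127.0.0.1  myothersite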

    Read the article

  • IIS6: all websites display another site when using HTTPS

    - by Lisa
    I have the following websites set up in IIS 6: site1.com site2.com site3.com Accessing site1 is via the address https://site1.com. Accessing site2 and site3 should be through HTTP only. When I try to access https://site2.com it displays the website of https://site1.com. How can I stop this? I want either an error or a redirect to the HTTP site. Any help would be great.
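
    This happens because IIS 6 binds SSL per IP address: whichever site owns IP:443 (site1 here) answers every HTTPS request to that IP. One hedged fix is to scope site1's SSL binding with a secure host header so https://site2.com no longer matches it; this sketch assumes site1 has site ID 1 in the metabase:

        rem Run from C:\Inetpub\AdminScripts
        cscript adsutil.vbs set /w3svc/1/SecureBindings ":443:site1.com"

    The alternative is to give site1 its own dedicated IP address for port 443.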

    Read the article

  • ErrorDocument not working when accessing .htaccess

    - by oxguy3
    I've been setting up ErrorDocuments for a website I'm working on, and generally they've been working. However, after I set the 403 ErrorDocument, I noticed that it didn't work when I tried to access the .htaccess file itself. When I access a different forbidden file, the ErrorDocument appears just fine. How can I make the ErrorDocument work for the .htaccess file? If you didn't follow my explanation, here are links to show you what I mean: ErrorDocument works fine: http://keycraft.haydencity.net/.ftpquota ErrorDocument doesn't work: http://keycraft.haydencity.net/.htaccess
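
    Access to .ht* files is denied by the stock <Files "^\.ht"> block in the main server configuration, so one thing worth trying (unverified, and assuming you can edit the server config) is declaring the 403 handler at that level rather than only inside .htaccess; the /errors/403.html path below is a placeholder:

        <VirtualHost *:80>
            ServerName keycraft.haydencity.net
            # /errors/403.html is a placeholder path for your error page
            ErrorDocument 403 /errors/403.html
        </VirtualHost>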

    Read the article

  • OWA, Outlook Anywhere, RPCPing Inconsistencies

    - by pk.
    I'm troubleshooting an Outlook Anywhere issue with a new Exchange 2010 server. The server in question, MS2010, is behind a SonicWALL NSA 2400 device and works wonderfully except for Outlook Anywhere. Outlook Anywhere works internally and I've verified (through Ctrl-Right Click --> Connection Status) that I'm able to connect to MS2010 over HTTPS. When trying to connect to the server using HTTPS from outside the firewall, I'm unable to do so. A Wireshark trace shows 30 or so successful HTTPS packet transmissions, and then it fails with 3 straight transmissions to a destination port of 135. I have no idea why my computer is attempting to access anything on port 135 since I've setup my profile to use HTTPS on both slow and fast connections. I'm 99% certain that the firewall is configured correctly. I run Outlook Web Access (also HTTPS) on the same server and there are no issues with access. EDIT: My Autodiscover settings are correct (as far as I can tell). My server passes the Outlook Anywhere and Autodiscover tests at https://www.testexchangeconnectivity.com/. I've been using the RPCPing utility to troubleshoot and have come across the following results: Internally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v2.12. Copyright (C) Microsoft Corporation, 2002 OS Version is: 6.1, Service Pack 1 RPCPinging proxy server mail.mydomain.com with Echo Request Packet Sending ping to server Response from server received: 200 Pinging successfully completed in 93 ms Externally- >rpcping -t ncacn_http -s mail.mydomain.com -o RpcProxy=mail.mydomain.com -P "pk,mydomain,*" -I "pk,mydomain,*" -H 1 -u 10 -a connect -F 3 -v 3 -E -R none RPCPing v6.0. Copyright (C) Microsoft Corporation, 2002-2006 Enter password for RPC/HTTP proxy: RPCPing set Activity ID: {fc8411ba-2987-4175-b37b-801dc69d5ff9} RPCPinging proxy server mail.mydomain.com with Echo Request Packet Setting autologon policy to high WinHttpSetCredentials for target server called Error 87 : The parameter is incorrect. returned in WinHttpSetCredentials Ping failed What should I be checking in order to troubleshoot my Outlook Anywhere issues? I'm using Windows 7 SP1 for internal and external access. 
EDIT: Autodiscover.xml content <?xml version="1.0"?> <Autodiscover xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/exchange/autodiscover/responseschema/2006"> <Response xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a"> <User> <DisplayName>John Doe</DisplayName> <LegacyDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=pk</LegacyDN> <DeploymentId>d35170cc-f3a7-42c5-9427-1f554a469126</DeploymentId> </User> <Account> <AccountType>email</AccountType> <Action>settings</Action> <Protocol> <Type>EXCH</Type> <Server>MS2010.MYDOMAIN.local</Server> <ServerDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010</ServerDN> <ServerVersion>738180DA</ServerVersion> <MdbDN>/o=MYDOMAIN/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=MS2010/cn=Microsoft Private MDB</MdbDN> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> <OOFUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</OOFUrl> <OABUrl>http://MS2010.MYDOMAIN.local/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://MS2010.MYDOMAIN.local/EWS/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <PublicFolderServer>MS2007.MYDOMAIN.local</PublicFolderServer> <AD>dc1.MYDOMAIN.local</AD> <EwsUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</EwsUrl> <EcpUrl>https://MS2010.MYDOMAIN.local/ecp/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>EXPR</Type> <Server>mail.mycompany.com</Server> <ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> <OOFUrl>https://mail.mycompany.com/ews/exchange.asmx</OOFUrl> <OABUrl>https://mail.mycompany.com/OAB/2c34c9f5-5521-4c8c-b684-538df815052a/</OABUrl> <UMUrl>https://mail.mycompany.com/ews/UM2007Legacy.asmx</UMUrl> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <SSL>On</SSL> <AuthPackage>Basic</AuthPackage> <CertPrincipalName>msstd:mail.mycompany.com</CertPrincipalName> <EwsUrl>https://mail.mycompany.com/ews/exchange.asmx</EwsUrl> <EcpUrl>https://mail.mycompany.com/owa/</EcpUrl> <EcpUrl-um>?p=customize/voicemail.aspx&amp;exsvurl=1</EcpUrl-um> <EcpUrl-aggr>?p=personalsettings/EmailSubscriptions.slab&amp;exsvurl=1</EcpUrl-aggr> <EcpUrl-mt>PersonalSettings/DeliveryReport.aspx?exsvurl=1&amp;IsOWA=&lt;IsOWA&gt;&amp;MsgID=&lt;MsgID&gt;&amp;Mbx=&lt;Mbx&gt;</EcpUrl-mt> <EcpUrl-ret>?p=organize/retentionpolicytags.slab&amp;exsvurl=1</EcpUrl-ret> <EcpUrl-sms>?p=sms/textmessaging.slab&amp;exsvurl=1</EcpUrl-sms> </Protocol> <Protocol> <Type>WEB</Type> <Port>0</Port> <DirectoryPort>0</DirectoryPort> <ReferralPort>0</ReferralPort> <Internal> <OWAUrl AuthenticationMethod="Basic, Fba">https://MS2010.MYDOMAIN.local/owa/</OWAUrl> <Protocol> <Type>EXCH</Type> <ASUrl>https://MS2010.MYDOMAIN.local/EWS/Exchange.asmx</ASUrl> </Protocol> </Internal> <External> <OWAUrl AuthenticationMethod="Fba">https://mail.mycompany.com/owa/</OWAUrl> <Protocol> <Type>EXPR</Type> 
<ASUrl>https://mail.mycompany.com/ews/exchange.asmx</ASUrl> </Protocol> </External> </Protocol> </Account> </Response> </Autodiscover>

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and error pages are being cached. I need to force a refresh of the content. Long version: I've set up a very minimal squid instance running on a gateway which shouldn't cache ANYTHING but needs to be used solely as a domain-based web filter. I'm using another application which redirects un-authenticated users to the proxy, which then uses the deny_info option to redirect any non-whitelisted request to the login page. After the user has authenticated, the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) they are unauthenticated, so they get redirected via the firewall: iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135 to the proxy; at this point squid redirects the user to the login page using a 302 (I've also tried 307, and I've also made sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then when the user logs into the system they get a firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects to NOT be cached by the browser? Perhaps this is a problem with the browsers and not squid, but I'm not sure how to get around it. Full squid config below. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl localnet src 192.168.182.0/23 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl https port 443 acl http port 80 acl CONNECT method CONNECT # # Disable Cache # cache deny all via off negative_ttl 0 seconds refresh_all_ims on #error_default_language en # Allow manager access only from localhost http_access allow manager localhost http_access deny manager # Deny access to anything other then http http_access deny !http # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !https visible_hostname gate.ovatn.net # Disable memory pooling memory_pools off # Never use neigh cache objects for cgi-bin scripts hierarchy_stoplist cgi-bin ? # # URL rewrite Test Settings # #acl whitelist dstdomain "/etc/squid/domains-pre.lst" #url_rewrite_program /usr/lib/squid/redirector #url_rewrite_access allow !whitelist #url_rewrite_children 5 startup=0 idle=1 concurrency=0 #http_access allow all # # Deny Info Error Test # acl whitelist dstdomain "/etc/squid/domains-pre.lst" deny_info http://login.domain.com/ whitelist #deny_info ERR_ACCESS_DENIED whitelist http_access deny !whitelist http_access allow whitelist http_port 39135 transparent ## Debug Values access_log /var/log/squid/access-pre.log cache_log /var/log/squid/cache-pre.log # Production Values #access_log /dev/null #cache_log /dev/null # Set PID file pid_filename /var/run/gatekeeper-pre.pid SOLUTION: I believe I might have found a solution to this. After days and days trying to figure it out, only through a random stumble I found: client_persistent_connections off server_persistent_connections off This did the trick. So it wasn't so much caching as it was a single persistent connection messing things up. W000T!

    Read the article

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing. Firefox, Chrome, Curl all fail to successfully connect to an HTTP address. However, existing connections, e.g. to mail in Outlook (Exchange Server and also IMAP server), continue to work. Also, the internet is on, which I can confirm both from my machine (other ports / connections) as well as from any other computer connected to the same network. Additionally, it appears to be HTTP-specific, not simply a port issue, as HTTP over port 8443 (Tortoise SVN if you must know - running over HTTP, not over SVN) also fails. I am using Windows Vista SP2 (build 6002). It seems to "creep up" in that after running the computer for a few hours it will fail. (I've found no way to systematically reproduce the problem.) Additionally, it seems more prone to happen on days when the internet connection is flaky already (not sure why the internet is flaky, it just is; lots of failed browsing requests and I have to retry/reload often). What I have tried (when the problem arises) - none have yielded any resolution: Resetting the network connection (dis-connect, re-connect) Disable/re-enable the network adapter Double-checked the IP settings Double-checked the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion Daniel and antenore.) Checked the routing tables (ip4 only as ipv6 is beyond my understanding) Resetting all involved hardware (routers and modems) Close and reopen browsers Looked for malware interference: Run HijackThis Looked for suspicious processes using SysInternals procexp. Looked for explorer hijacks, lsa provider interference, winsock provider interference using SysInternals Autoruns. Run a complete anti-virus scan. Reviewed the output of netstat -onab to see if there were stuck ports open or unusual processes running somewhere The only thing that works is a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?

    Read the article

  • Nginx 500 Internal Server error on subdirectory

    - by juyoung518
    I'm getting a 500 Internal Server error only on subdirectories. For example, if my website is example.com, example.com/index.php works, but example.com/phpbb/index.php doesn't. It just turns up a blank PHP page; the HTTP header shows HTTP error 500 Internal Server error. If I enter example.com/phpbb/index.php/somedirectory, the index.php of my root directory shows up. This is all very strange. I have tried searching, etc., but nothing worked; I tried re-installing nginx, but that didn't fix it. I'm sure I got the DNS configured right. My nginx config, /sites-available/example.com: server { server_name www.example.com; return 301 https://example.com$request_uri; } server { listen 443; listen 80; #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /var/www/example.com/public_html; index index.html index.php index.htm; ssl on; ssl_certificate /etc/nginx/ssl/cert.pem; ssl_certificate_key /etc/nginx/ssl/ssl.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ssl_stapling on; resolver 8.8.8.8; add_header Strict-Transport-Security max-age=63072000; # Make site accessible from http://localhost/ server_name example.com; location ~* \.(jpg|jpeg|png|gif|ico|css|js|bmp)$ { expires 365d; add_header Cache-Control public; } if ($scheme = http) { return 301 https://example.com$request_uri; } location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.php; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } if ($http_user_agent ~ (musobot|screenshot|AhrefsBot|picsearch|Gender|HostTracker|Java/1.7.0_51|Java) ) { return 403; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } location /phpMyAdmin { rewrite ^/* /phpmyadmin last; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: fastcgi_pass 127.0.0.1:9000; # With php5-fpm: #fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } }
    nginx.conf: user www-data; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## Block spammers and other unwanted visitors ## include /etc/nginx/blockips.conf; fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m; ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 100; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log off; error_log /var/log/nginx/error.log; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS; ssl_prefer_server_ciphers on; ## # File Cache Settings ## open_file_cache max=5000 inactive=5m; open_file_cache_valid 2m; open_file_cache_min_uses 1; open_file_cache_errors on; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/x-js text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*;
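
    A hedged sketch of a PHP location that also covers /phpbb/... paths with trailing PATH_INFO instead of letting try_files fall back to the root index.php (this follows the pattern from the nginx docs; check the error_log for the actual cause of the 500 first):

        location ~ [^/]\.php(/|$) {
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            # Refuse requests whose script component doesn't exist on disk
            if (!-f $document_root$fastcgi_script_name) { return 404; }
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }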

    Read the article

  • serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting to download the same partial content for a very long time; sometimes more than an hour. For example: "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)" [...] I also get a lot, but not as much, of these: "-" 400 0 "-" "-" 400 0 "-" The IP addresses are always from clients that start downloading shortly after that request; usually they have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to limit the huge amount of log entries from equivalent IPs at least somewhat. There was a performance issue when I saw the same behaviour on the previous server, which used Apache. When Apache could not effectively handle more connections from the increasing number of clients, that server was DDoSed. I installed nginx on a better server and then moved the site. There was no bandwidth issue with already-connected clients, and I don't know if the already-connected clients were using more than one connection at a time. Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I heard people say their mobile bandwidth use was much higher than they could account for. I'm thinking this type of client behaviour can account for that. And it costs us more bandwidth too. Which up-to-date alternatives exist out there that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late 90s. :-)

    Read the article

  • Untraceable malicious browser calls

    - by MaximusOMaximus
    I installed Fiddler 4 Beta to do some HTTP tracing. I found a lot of calls being made to sites like Facebook, CollegeHumor and a bunch of other sites I've never visited. I could not trace what/who is initiating these calls, as I do not see any corresponding Windows processes. No one else is connected to my network. I use both Google Chrome and IE10 on a Windows 7 box. Please help me trace and remove these malicious HTTP calls.

    Read the article

  • Force HTTPS with AWS Elastic load balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS on my Elastic Load Balancer. I tried using Apache mod_rewrite: RewriteEngine On RewriteCond %{HTTP:X-Forwarded-Proto} !https RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R] Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?
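
    A few things worth checking: the rules have to live in a context that actually processes the request (the port-80 virtual host, or an .htaccess with AllowOverride enabled), and the ELB only injects X-Forwarded-Proto when the listener protocol is HTTP/HTTPS rather than plain TCP. A sketch of the vhost variant, keeping the /status health-check exclusion and upgrading the redirect to a permanent one (the host and path assumptions are as in the question):

        <VirtualHost *:80>
            RewriteEngine On
            RewriteCond %{HTTP:X-Forwarded-Proto} =http
            RewriteCond %{REQUEST_URI} !^/status
            RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>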

    Read the article
