Search Results

Search found 330 results on 14 pages for 'msie'.


  • Regexp that matches user-agents of end-user browsers but NOT crawlers with >90% accuracy

    - by knorv
    I'm trying to construct a regexp that will evaluate to true for User-Agents of "browsers navigated by humans", but false for bots. Needless to say the matching will not be exact, but if it gets things right in, say, 90% of cases, that is more than good enough. My approach so far is to target the User-Agent strings of the five major desktop browsers (MSIE, Firefox, Chrome, Safari, Opera). Specifically, I want the regexp NOT to match if the user-agent is a bot (Googlebot, msnbot, etc.). Currently I'm using the following regexp, which appears to achieve the desired precision:

        ^(Mozilla.*(Gecko|KHTML|MSIE|Presto|Trident)|Opera).*$

    I've observed a small number of false negatives, which are mostly mobile browsers. The exceptions all match:

        (BlackBerry|HTC|LG|MOT|Nokia|NOKIAN|PLAYSTATION|PSP|SAMSUNG|SonyEricsson)

    My question is: given the desired accuracy level, how would you improve the regexp? Can you think of any major false positives or false negatives for the given regexp? Please note that the question is specifically about regexp-based User-Agent matching. There are a bunch of other approaches to solving this problem, but those are out of scope for this question.
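
    A quick way to sanity-check a pattern like this is to run it against a handful of known User-Agent strings. A minimal JavaScript sketch, assuming the two patterns above (the sample strings are illustrative, not a real test set):

        // Combine the browser pattern with the mobile-browser exception list
        function looksLikeHumanBrowser(ua) {
            var browserRe = /^(Mozilla.*(Gecko|KHTML|MSIE|Presto|Trident)|Opera).*$/;
            var mobileRe = /(BlackBerry|HTC|LG|MOT|Nokia|NOKIAN|PLAYSTATION|PSP|SAMSUNG|SonyEricsson)/;
            return browserRe.test(ua) || mobileRe.test(ua);
        }

        looksLikeHumanBrowser("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)"); // true
        looksLikeHumanBrowser("Googlebot/2.1 (+http://www.google.com/bot.html)");                 // false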


  • jQuery: preventDefault() not working on input/click events?

    - by Jason
    I want to disable the default context menu when a user right-clicks on an input field so that I can show a custom context menu. Generally speaking, it's pretty easy to disable the right-click menu by doing something like:

        $([whatever]).bind("click", function(e) {
            e.preventDefault();
        });

    And in fact, I can do this on just about every element EXCEPT for input fields in FF - anyone know why, or could you point me towards some documentation? Here is the relevant code I am working with. Thanks, guys.

    HTML:

        <script type="text/javascript">
            var r = new RightClickTool();
        </script>
        <div id="main">
            <input type="text" class="listen rightClick" value="0" />
        </div>

    JS:

        function RightClickTool() {
            var _this = this;
            var _items = ".rightClick";

            $(document).ready(function() {
                _this.init();
            });

            this.init = function() {
                _this.setListeners();
            }

            this.setListeners = function() {
                $(_items).click(function(e) {
                    var webKit = !$.browser.msie && e.button == 0;
                    var ie = $.browser.msie && e.button == 1;
                    if (webKit || ie) {
                        // Left mouse...do something()
                    } else if (e.button == 2) {
                        e.preventDefault();
                        // Right mouse...do something else();
                    }
                });
            }
        } // Ends Class
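
    One detail worth noting: in Firefox a right-click generally doesn't fire click at all - the native menu is driven by the separate contextmenu event, and cancelling that event is the usual way to suppress it. A small sketch using the selector from the question:

        // Cancel the native menu on the contextmenu event rather than click
        $(".rightClick").bind("contextmenu", function(e) {
            e.preventDefault(); // suppress the browser's context menu
            // show the custom context menu here instead
        });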


  • Detect IE version in Javascript

    - by Chad Decker
    I want to bounce users of our web site to an error page if they're using a version of Internet Explorer prior to v9. It's just not worth our time and money to support IE pre-v9. Users of all other non-IE browsers are fine and shouldn't be bounced. Here's the proposed code:

        if (navigator.appName.indexOf("Internet Explorer") != -1) {
            // yeah, he's using IE
            var badBrowser = (
                navigator.appVersion.indexOf("MSIE 9") == -1 && // v9 is ok
                navigator.appVersion.indexOf("MSIE 1") == -1    // v10, 11, 12, etc. is fine too
            );
            if (badBrowser) {
                // navigate to error page
            }
        }

    Will this code do the trick? To head off a few comments that will probably be coming my way: [1] Yes, I know that users can forge their user-agent string. I'm not concerned. [2] Yes, I know that programming pros prefer sniffing out feature support instead of browser type, but I don't feel that approach makes sense in this case. I already know that all (relevant) non-IE browsers support the features that I need, and that all pre-v9 IE browsers don't. Checking feature by feature throughout the site would be a waste. [3] Yes, I know that someone trying to access the site using IE v1 (or >= 20) wouldn't get 'badBrowser' set to true and the warning page wouldn't be displayed properly. That's a risk we're willing to take. [4] Yes, I know that Microsoft has "conditional comments" that can be used for precise browser version detection. IE no longer supports conditional comments as of IE 10, rendering this approach absolutely useless. Any other obvious issues to be aware of? Thanks.
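
    A slightly more robust variant is to parse the version number out of the user-agent instead of checking fixed substrings, so the v1/v20 edge cases disappear. A hedged sketch - it relies on the "MSIE n" token that IE 10 and earlier include in the user-agent string:

        // Returns the numeric MSIE version, or null for non-IE browsers
        function getIEVersion() {
            var match = navigator.userAgent.match(/MSIE (\d+)/);
            return match ? parseInt(match[1], 10) : null;
        }

        var ieVersion = getIEVersion();
        if (ieVersion !== null && ieVersion < 9) {
            // navigate to error page
        }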


  • Focus behavior in Applet-Javascript interaction

    - by Dan
    I have a web page with an applet that opens a popup window and also makes Javascript calls. When that Javascript call results in a focus() call on an HTML input, that causes the browser window to push itself in front of the applet window - but only on certain browsers, namely MSIE. On Firefox the applet window remains on top. How can I keep that behavior consistent in MSIE? Note that using the old Microsoft VM for Java also achieves the desired (applet window in front) result.

    HTML code:

        <html>
        <head>
        <script type="text/javascript">
            function focusMe() {
                document.getElementById('mytext').focus();
            }
        </script>
        </head>
        <body>
            <applet id="myapplet" mayscript code="Popup.class"></applet>
            <form>
                <input type="text" id="mytext">
                <input type="button" onclick="document.getElementById('myapplet').showPopup()" value="click">
            </form>
        </body>
        </html>

    Java code:

        public class Popup extends Applet {
            Frame frame;

            public void start() {
                frame = new Frame("Test Frame");
                frame.setLayout(new BorderLayout());
                Button button = new Button("Push Me");
                frame.add("Center", button);
                button.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        frame.setVisible(false);
                    }
                });
                frame.pack();
            }

            public void showPopup() {
                frame.setVisible(true);
                JSObject.getWindow(this).eval("focusMe()");
            }
        }


  • Rails Browser Detection Methods

    - by alvincrespo
    Hey Everyone, I was wondering what methods are standard within the industry to do browser detection in Rails. Is there a gem, library or sample code somewhere that can help determine the browser and apply a class or id to the body element of the (X)HTML? Thanks, I'm just wondering what everyone uses and whether there is an accepted method of doing this. I know that we can get the user agent and parse that string, but I'm not sure if that is an acceptable way to do browser detection. Also, I'm not trying to debate feature detection here; I've read multiple answers for that on StackOverflow. All I'm asking for is what you guys have done.

    [UPDATE] Thanks to faunzy on GitHub, I've come to understand a bit about checking the user agent in Rails, but I'm still not sure if this is the best way to go about it in Rails 3. Here is what I've got so far:

        def users_browser
          user_agent = request.env['HTTP_USER_AGENT'].downcase
          @users_browser ||= begin
            if user_agent.index('msie') && !user_agent.index('opera') && !user_agent.index('webtv')
              'ie' + user_agent[user_agent.index('msie') + 5].chr
            elsif user_agent.index('gecko/')
              'gecko'
            elsif user_agent.index('opera')
              'opera'
            elsif user_agent.index('konqueror')
              'konqueror'
            elsif user_agent.index('ipod')
              'ipod'
            elsif user_agent.index('ipad')
              'ipad'
            elsif user_agent.index('iphone')
              'iphone'
            elsif user_agent.index('chrome/')
              'chrome'
            elsif user_agent.index('applewebkit/')
              'safari'
            elsif user_agent.index('googlebot/')
              'googlebot'
            elsif user_agent.index('msnbot')
              'msnbot'
            elsif user_agent.index('yahoo! slurp')
              'yahoobot'
            # Everything thinks it's mozilla, so this goes last
            elsif user_agent.index('mozilla/')
              'gecko'
            else
              'unknown'
            end
          end
          return @users_browser
        end
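
    For the narrower goal of hanging a class off the body element, the same kind of check can also be done client-side instead of in Rails. A rough JavaScript equivalent of the method above - a sketch only, with a deliberately shortened token list and an illustrative class prefix, and it assumes the script runs after the body exists:

        // Tag <body> with a coarse browser class for CSS hooks
        (function() {
            var ua = navigator.userAgent.toLowerCase();
            var browser = 'unknown';
            if (ua.indexOf('msie') !== -1 && ua.indexOf('opera') === -1) {
                browser = 'ie';
            } else if (ua.indexOf('opera') !== -1) {
                browser = 'opera';
            } else if (ua.indexOf('chrome/') !== -1) {
                browser = 'chrome';
            } else if (ua.indexOf('applewebkit/') !== -1) {
                browser = 'safari';
            } else if (ua.indexOf('gecko/') !== -1) {
                browser = 'gecko';
            }
            document.body.className += ' browser-' + browser;
        })();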


  • IIS 6 404 error calling on localhost

    - by Paleta
    I'm kinda desperate here. I have a Windows 2003 Server, and when I try to call http://localhost I get a 404 error. I have localhost configured to c:\inetpub\wwwroot\, and whatever file I create there, it always shows a 404. This is what I see in the log file:

        2010-07-22 14:54:06 W3SVC1 127.0.0.1 GET / - 80 - 127.0.0.1 Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.2;+SV1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729) 404 0 3

    I tried restarting IIS with no luck, and eventvwr shows no errors whatsoever. Any light will be appreciated.


  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web-server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The problem is that changes made to the file will not get served; the old (i.e. February of last year) version keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:
        - renamed the file
        - deleted the file
        - restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\), and this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will:
        - get the current file (file there)
        - get a 404 (file renamed)
        - get a 404 (file deleted)

    But if i change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    (note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js), then i do get the proper file (or the 404 i expect if the file has been renamed or deleted). But any subsequent changes to the file will not show up until i change the case of the file i'm asking for.

    Checking the IIS logs indicates that the (internet) requests are all coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, i conclude that there is a proxy:
        - the requests serviced from this proxy are not logged in the IIS logs
        - the requests for new files are logged, and from the client IP, not a proxy IP
        - the proxy is case-sensitive

    This does not sound like something Microsoft, or IIS, would do:
        - a transparent proxy
        - case-sensitive
        - unlogged
        - surviving restarts of IIS
        - surviving in a cache for hours

    i can't believe that our customer's IIS is doing these things. i'm assuming there is some other transparent proxy in front of IIS. Or does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)


  • How to Protect Apache server from this attack

    - by 501496270
    Is there a .htaccess solution against this attack?

        188.165.198.65 - - [17/Apr/2010:15:46:49 -0500] "GET /blog/2009/04/12/shopping-cart/?cart=../../../../../../../../../../../../../../../../etc/passwd%00 HTTP/1.1" 200 28114 "" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1"

    My WordPress .htaccess is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /blog/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog/index.php [L]
        </IfModule>
        # END WordPress


  • Remotely from Chrome or IE page loads in ~60 seconds, from Firefox or IE on local machine - instantly.

    - by Janis Veinbergs
    The problem: if I access SharePoint from Windows 7 with IE8 or Chrome 5, I must wait for like a minute to get a response. If I use another Windows 7 machine with IE8, it's just the same - wait a MINUTE. If I use Firefox 3.6 on a W7 machine, the page opens up instantly. Now switch to the IE rendering engine in Firefox, and you will have to wait just as with IE. I tried IE8 on XP SP3 - the page opens up instantly. I tried IE8 on Windows Server 2003 SP2 (the machine SharePoint is hosted on) - the page opens up instantly.

    IIS6 logs: I made requests almost instantly from all 3 browsers, and this is what shows up in the IIS logs (first 2 entries for each browser).

    Chrome (IIS saw the first Chrome request when I hit Enter in the browser, but I had to wait long for things to move on):

        2010-06-01 05:46:04 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 2 2148074254
        2010-06-01 05:47:07 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US)+AppleWebKit/533.4+(KHTML,+like+Gecko)+Chrome/5.0.375.55+Safari/533.4 401 1 0
        ... etc...

    Firefox (all instantly):

        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 2 2148074254
        2010-06-01 05:46:06 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+lv;+rv:1.9.2.3)+Gecko/20100401+Firefox/3.6.3 401 1 0
        ... etc...

    IE (I hit Enter at 05:46:06, but these are the first entries in the IIS logs):

        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        2010-06-01 05:47:08 W3SVC1794621940 192.168.0.9 GET /sapulces - 80 - 192.168.0.186 Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+6.1;+Trident/4.0;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+Tablet+PC+2.0;+.NET+CLR+1.1.4322;+.NET4.0C;+.NET4.0E) 401 1 0
        ... etc...

    Nothing to see in the event logs.

    The question: a similar question has been asked, but there is no response there, and in my case I'm accessing the page without SSL and it happens even on GET requests. Where do I look? Where would the problem be? Browser? OS? I don't even know what to think about.

    Just a note about Chrome's process isolation: I found it sad that while I was waiting that minute with Chrome, I could not use any other tab (I could switch, but I could not, for example, scroll or use any controls).


  • Elastic Beanstalk and IIS logs

    - by user195744
    I have an ELB app, and when logging into an instance and looking at the IIS logs I see something like the following:

        2013-10-18 17:14:25 10.240.27.2 GET /FSViewer/Img.aspx trcid=451847431&vhtid=391833142 80 - 10.210.107.159 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.1;+WOW64;+Trident/6.0) 200 0 0 140

    The 10.240.27.2 address is always repeated - is that the load balancer? So how do I find out the IP addresses that are actually hitting my server?


  • Internet Explorer cannot display page from apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up somehow in different questions, but I still can't find the solution. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows "Internet Explorer cannot display the webpage". I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use http. Here is my config for the site with SSL enabled (etc/sites-available/default-ssl is NOT linked):

        <Virtualhost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca
            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80                   is a NameVirtualHost
                 default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                 port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
                 port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
                 port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution from the question "Internet explorer cannot display the page, other browsers can, possibly htaccess / server error", and I understand the answer to the question "How to setup Apache NameVirtualHost on SSL?".

    In fact, I only have one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting / restarting / testing) Internet Explorer will not show the page. As I interpret the apache2ctl -S output, I already have only one SSL host, and this should respond to the initial SSL handshake, shouldn't it? What is wrong in this setup? Thank you so much. Philipp


  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation: I run a small cluster - a dedicated box for MySQL and a dedicated PHP-FPM/nginx box, and nginx talks to PHP-FPM via a socket. As far as I can tell, the problem does not lie in PHP-FPM; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting, then slowly degrades to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, to spawn 20 at start, and to allow a max of 60. I'm not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, and I really don't want to do that. My nginx.conf configuration is below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }


  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of Nginx. I'm using Devise for my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers:

        GET http://www.company.com/custom_layouts/108 HTTP/1.1
        Accept: */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.company.com

    returning:

        HTTP/1.1 401 Unauthorized
        Content-Type: /; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 401
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
        WWW-Authenticate: Basic realm="Application"
        Cache-Control: no-cache
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly
        X-Runtime: 0.011918
        Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

        31
        You need to sign in or sign up before continuing.
        0

    When the exact same URL is typed into the address bar, it does this:

        GET http://www.company.com/custom_layouts/108 HTTP/1.1
        Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.company.com

    returning:

        HTTP/1.1 302 Found
        Content-Type: text/html; charset=utf-8
        Transfer-Encoding: chunked
        Connection: keep-alive
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
        Location: http://www.company.com/users/sign_in
        Cache-Control: no-cache
        X-UA-Compatible: IE=Edge,chrome=1
        Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly
        X-Runtime: 0.010798
        Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

        6f
        <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html>
        0

    I expect them to return the same thing regardless.


  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain; on a different primary domain (same server), everything works OK. For example, after logging out, the site is directing to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login. I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts:

    BROKEN SUBDOMAIN:

        <VirtualHost *:80>
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            Redirect permanent / https://www.sub.domain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            RailsEnv production
            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key
            # Set header to identify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            DocumentRoot /home/usr/www/www.sub.domain.com/current/public/
            <Directory "/home/usr/www/www.sub.domain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    WORKING PRIMARY DOMAIN:

        <VirtualHost *:80>
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            Redirect permanent / https://www.diffdomain.com
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.diffdomain.com
            ServerAlias diffdomain.com
            ServerAlias *.diffdomain.com
            RailsEnv production
            # SSL Engine Switch
            SSLEngine on
            # SSL Cipher Suite:
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            # Server Certificate
            SSLCertificateFile /path/to/server.crt
            # Server Private Key
            SSLCertificateKeyFile /path/to/server.key
            # Set header to identify https requests for Mongrel
            RequestHeader set X_FORWARDED_PROTO "https"
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            DocumentRoot /home/usr/www/www.diffdomain.com/current/public/
            <Directory "/home/usr/www/www.diffdomain.com/current/public">
                AllowOverride all
                Allow from all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    Please let me know if there's anything else I could provide that would help determine what's wrong here.

    UPDATE: I tried adding a trailing slash to the redirect command, but still no luck.


  • Mysterious visitor to hidden PHP page

    - by B. VB.
    On my website, I have a "hidden" page that displays a list of the most recent visitors. There exist no links at all to this single PHP page and, theoretically, only I know of its existence. I check it many times per day to see what new hits I have. However, about once a week I get a hit from a 208.80.194.* address on this supposedly hidden page (it records hits to itself). The strange thing is this: the mysterious person/bot does not visit any other page on my site - not the public PHP pages, only this hidden page that prints the visitors. It's always a single hit, and the HTTP_REFERER is blank. The other data is always some variation of:

        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; YPC 3.2.0; FunWebProducts; .NET CLR 1.1.4322; SpamBlockerUtility 4.8.4; yplus 5.1.04b)

    ... but sometimes MSIE 6.0 instead of 7, and various other plug-ins. The browser is different every time, as are the lowest-order bits of the address. And it's just that: one hit per week or so, to that one page; absolutely no other pages are touched by this mysterious visitor.

    Doing a whois on that IP address showed it's from the New York area, from the "Websense" ISP. The lowest-order 8 bits of the address are always different, but it's always from 208.80.194.*/8. From most of the computers that I access my website from, doing a traceroute to my server does not show a router anywhere along the way with an IP of 208.80.*, so I might think that rules out any kind of HTTP sniffing.

    I have NO idea how or why this is happening. Does anyone have any clue, or has anyone seen something as strange as this before? It seems completely benign, but unexplainable and a little creepy. Thanks in advance!


  • Repeated calls with random Javascript appended to the URL

    - by cjk
    I keep getting calls to my server where there is random Javascript appended to the end of lots of the calls, e.g.:

        /UI/Includes/JavaScript/).length)&&e.error(
        /UI/Includes/JavaScript/,C,!1),a.addEventListener(
        /UI/Includes/JavaScript/),l=b.createDocumentFragment(),m=b.documentElement,n=m.firstChild,o=b.createElement(
        /UI/Includes/JavaScript/&&a.getAttributeNode(
        /UI/Includes/JavaScript/&&a.firstChild.getAttribute(
        /UI/Includes/JavaScript/).replace(bd,
        /UI/Includes/JavaScript/)),a.getElementsByTagName(

    The user agent is always this:

        Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+2.0.50727)

    I have jQuery, Modernizr and other JS, and originally thought that some browser was messing up its JS calls; however, this particular IP address hasn't requested any images, so I'm wondering if it is some kind of attack. Is this a common occurrence?


  • Multi Column Block Too Narrow in Chrome

    - by aksarben
    My Web site displays song lyrics in a multi-column format, using CSS3. Both Firefox and MSIE 10+ display the multi-column text perfectly, but Chrome does not. This sample page shows the problem: http://www.hymntime.com/tch/test/html5/html5-multicolumn-test.htm

    The page uses a media selector, so your Chrome window must be at least 1280 pixels wide to see the effect. In fact, if you make the Chrome window less than 1280 pixels wide, you'll see the lyrics block change to a single column of the same overall width. In other words, when Chrome shifts from 1-column to 2-column mode (due to the wider browser window), the lyrics block remains the same width, causing the text to be squeezed together.

    Has anyone else seen this behavior, or know a solution? Is this a Chrome bug, or am I doing something wrong? I posted this question on a Chrome forum a while back, but got no reply.
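
    One way to narrow this down is to check which side of the 1280px media-query boundary Chrome thinks it is on, and what column style it actually computed. A small diagnostic sketch - "#lyrics" is a hypothetical selector standing in for the real lyrics container:

        // Log the media-query state and the computed column properties
        var wide = window.matchMedia("(min-width: 1280px)");
        console.log("wide layout active:", wide.matches);

        var block = document.querySelector("#lyrics");
        if (block) {
            var cs = window.getComputedStyle(block);
            // older Chrome exposes the prefixed property only
            console.log("column-count:", cs.columnCount || cs.webkitColumnCount);
            console.log("block width:", cs.width);
        }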


  • Crossbrowser issue - navigation-menu [closed]

    - by aztekk
    I'm having issues with cross-browser compatibility on the navigation menu for my site. The issue is that it's not working as expected in MSIE; it bugs out on mouseover. The site runs WordPress and the theme is called GreenChilli. It's a free theme from MyThemeShop, and they don't seem to be very active in resolving free-theme issues on their forum. Can someone have a look and see if this is an easy fix, or if I maybe have to abandon this theme for something else? The site is: http://lamslagen.com


  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright, here is the situation: I used to have my CodeIgniter site at Bluehost, where I did not have root access; I have since moved that site to Rackspace. I have not changed any of the PHP code, yet there has been some unexpected behavior.

    Unexpected behavior:

    http://mysite.com/robots.txt - both old and new resolve to the robots file.

    http://mysite.com/robots.txt/ - the old Bluehost setup resolves to my CodeIgniter 404 error page. The Rackspace config resolves to:

        Not Found
        The requested URL /robots.txt/ was not found on this server.

    This instance leads me to believe that there could be a problem with my mod_rewrite rules, or a lack thereof. The first one produces the error correctly through PHP, while it seems the second scenario lets the server handle the error.

    The next instance of this problem is even more troubling:

        http://mysite.com/search/term/9 x 1-1%2F2 white/

    The new site results in:

        Bad Request
        Your browser sent a request that this server could not understand.

    The old site results in the actual page being loaded and the search term being unencoded. I have to assume this has something to do with the fact that when I went to the new server, I went from a root-level .htaccess file to an httpd.conf file plus the virtual server files default and default-ssl. Here they are:

    Default file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                RewriteEngine On
                RewriteBase /
                # force no www. (also does the IP thing)
                RewriteCond %{HTTPS} !=on
                RewriteCond %{HTTP_HOST} !^mysite\.com [NC]
                RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    Default-ssl file:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName mysite.com
            DocumentRoot /var/www
            <Directory />
                Options +FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www>
                Options -Indexes +FollowSymLinks -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                RewriteEngine On
                RewriteBase /
                RewriteCond %{SERVER_PORT} !^443
                RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L]
                # index.php remove any index.php parts
                RewriteCond %{THE_REQUEST} /index\.(php|html)
                RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]
                # codeigniter direct
                RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
                RewriteRule ^.*$ index.php [L]
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on
            # Use our self-signed certificate by default
            SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key
            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10
            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>
            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            # Translate the client X.509 into a Basic Authorisation. This means that
            # the standard Auth/DBMAuth methods can be used for access control. The
            # user name is the `one line' version of the client's X.509 certificate.
            # Note that no password is obtained from the user. Every entry in the user
            # file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            # This exports two additional environment variables: SSL_CLIENT_CERT and
            # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            # server (always existing) and the client (only existing when client
            # authentication is used). This can be used to import the certificates
            # into CGI scripts.
            # o StdEnvVars:
            # This exports the standard SSL/TLS related `SSL_*' environment variables.
            # Per default this exportation is switched off for performance reasons,
            # because the extraction step is an expensive operation and is usually
            # useless for serving static content. So one usually enables the
            # exportation for CGI and SSI requests only.
            # o StrictRequire:
            # This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            # under a "Satisfy any" situation, i.e. when it applies access is denied
            # and no other module can change it.
            # o OptRenegotiate:
            # This enables optimized SSL connection renegotiation handling when SSL
            # directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            # This forces an unclean shutdown when the connection is closed, i.e. no
            # SSL close notify alert is send or allowed to received. This violates
            # the SSL/TLS standard but is needed for some brain-dead browsers. Use
            # this when you receive I/O errors because of the standard approach where
            # mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            # This forces an accurate shutdown when the connection is closed, i.e. a
            # SSL close notify alert is send and mod_ssl waits for the close notify
            # alert of the client. This is 100% SSL/TLS standard compliant, but in
            # practice often causes hanging connections with brain-dead browsers. Use
            # this only for browsers where you know that their SSL implementation
            # works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

    httpd.conf file: just a lot of stuff from HTML5 Boilerplate; I will post it if need be.

    Old .htaccess file:

        <IfModule mod_rewrite.c>
            # index.php remove any index.php parts
            RewriteCond %{THE_REQUEST} /index\.(php|html)
            RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L]

            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)/$ /$1 [r=301,L]

            # codeigniter direct
            RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico)
            RewriteRule ^(.*)$ /index.php/$1 [L]
        </IfModule>

    Any help would be hugely appreciated!!


  • jquery prevent paste to iframe designmode from msword

    - by naugtur
    Hi. I've seen some questions on catching the paste event, and this looks helpful. But I want to prevent the paste from happening when the pasted content is not plain text but comes from MS Word or another WYSIWYG editor. Anybody got any experience with that? I suppose I should catch the event and look for some specific tags in the clipboard. Is there any known content that MSIE adds every time?
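
    For the detection step, Word-flavoured HTML pasted through IE typically carries Microsoft Office markup such as mso- style properties, Mso* class names, or o:/w: namespaced tags, so one hedged approach is to let the paste land and then inspect what arrived for those fingerprints. A sketch - "#editor" is a hypothetical contentEditable element standing in for the designMode iframe document, and the marker list is heuristic, not exhaustive:

        // Heuristic check: mso- styles, Mso* classes, or <o:...>/<w:...> tags
        function looksLikeWordMarkup(html) {
            return /class="?Mso|style=["'][^"']*mso-|<\/?[ow]:/i.test(html);
        }

        $("#editor").bind("paste", function() {
            var el = this;
            // Inspect the content after the browser has inserted it
            setTimeout(function() {
                if (looksLikeWordMarkup(el.innerHTML)) {
                    // reject or strip the pasted content here
                }
            }, 0);
        });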


  • VS2010. Dropdownlist Autopostback works in IDE, not when deployed

    - by George
    I have a VS2010 RC ASP.NET web page; when a user changes the selection on an auto-postback dropdown, it refreshes a small grid and a few labels in various places on the page. I know wrapping a whole page in a big UpdatePanel control will cause horror from many of you, but that's what I did. I really didn't want a full page refresh, I didn't know how to update a table on the client side using Javascript, and I figured it would be a big change. Suggestions for avoiding this are welcomed, but my main desire is to understand the error I am getting. When I do the auto postbacks in the IDE, everything works fine, but if I deploy the code (IIS 5.5 on XP), the first auto postback works but the second one gives me this error. Ajax is one big nasty black box to me. Can someone help, please?

    Webpage error details:

        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; InfoPath.1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MS-RTC LM 8; MS-RTC EA 2; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
        Timestamp: Sun, 28 Mar 2010 17:23:23 UTC

        Message: Sys.WebForms.PageRequestManagerServerErrorException: Object reference not set to an instance of an object.
        Line: 796
        Char: 13
        Code: 0
        URI: http://localhost/BESI/ScriptResource.axd?d=3HKc1zGdeSk2WM7LpI9tTpMQUN7bCfQaPKi6MHy3P9dace9kFGR5G-jymRLHm0uxZ0SqWlVSWl9vAWK5JiPemjSRfdtUq34Dd5fQ3FoIbiyQ-hcum21C-j06-c0YF7hE0&t=5f011aa5

    (the same Message/Line/Char/Code/URI block appears twice more in the error details)


  • Error using httlib's HTTPSConnection with PKCS#12 certificate

    - by Remi Despres-Smyth
    Hello. I'm trying to use httplib's HTTPSConnection for client validation, using a PKCS#12 certificate. I know the certificate is good, as I can connect to the server using it in MSIE and Firefox. Here's my connect function (the certificate includes the private key). I've pared it down to just the basics:

        def connect(self, cert_file, host, usrname, passwd):
            self.cert_file = cert_file
            self.host = host
            self.conn = httplib.HTTPSConnection(host=self.host,
                                                port=self.port,
                                                key_file=cert_file,
                                                cert_file=cert_file)
            self.conn.putrequest('GET', 'pathnet/,DanaInfo=200.222.1.1+')
            self.conn.endheaders()
            retCreateCon = self.conn.getresponse()
            if is_verbose:
                print "Create HTTPS connection, " + retCreateCon.read()

    (Note: no comments on the hard-coded path, please - I'm trying to get this to work first; I'll make it pretty afterwards. The hard-coded path is correct, as I connect to it in MSIE and Firefox. I changed the IP address for the post.)

    When I try to run this using a PKCS#12 certificate (a .pfx file), I get back what appears to be an OpenSSL error. Here is the entire error traceback:

        File "Usinghttplib_Test.py", line 175, in
          t.connect(cert_file=opts["-keys"], host=host_name, usrname=opts["-username"], passwd=opts["-password"])
        File "c:\python26\lib\httplib.py", line 904, in endheaders
          self._send_output()
        File "c:\python26\lib\httplib.py", line 776, in _send_output
          self.send(msg)
        File "c:\python26\lib\httplib.py", line 735, in send
          self.connect()
        File "c:\python26\lib\httplib.py", line 1112, in connect
          self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
        File "c:\python26\lib\ssl.py", line 350, in wrap_socket
          suppress_ragged_eofs=suppress_ragged_eofs)
        File "c:\python26\lib\ssl.py", line 113, in __init__
          cert_reqs, ssl_version, ca_certs)
        ssl.SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

    Notice that the OpenSSL error (the last entry in the list) mentions "PEM lib", which I found odd, since I'm not trying to use a PEM certificate. For kicks, I converted the PKCS#12 cert to a PEM cert and ran the same code using that. In that case I received no error; I was prompted to enter the PEM pass phrase, and the code did attempt to reach the server. (I received the response "The service is not available. Please try again later.", but I believe that would be because the server does not accept the PEM cert. I can't connect to the server in Firefox using the PEM cert either.)

    Is httplib's HTTPSConnection supposed to support PKCS#12 certificates? (That is, .pfx files.) If so, why does it look like OpenSSL is trying to load it inside the PEM lib? Am I doing this all wrong? Any advice is welcome.

    EDIT: The certificate file contains both the certificate and the private key, which is why I'm providing the same file name for both the HTTPSConnection's key_file and cert_file parameters.


  • HTTP request, strange socket behavior

    - by hoodoos
    I experience strange behavior when doing HTTP requests through sockets. Here is the request:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 10926
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    Later on there goes the body of my request, with the given content length. After that I receive something like:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    So everything seems to be fine here. I read all contents from the network stream and successfully receive the response. But the socket I'm polling on switches its modes like this:

        write ( i write headers and request here )
        read  ( after headers sent i begin to receive response )
        write ( STRANGE BEHAVIOUR HERE. WHY? here i send nothing really )
        read  ( here it switches to read back again )

    The last two steps can repeat several times. So I want to ask: what leads to the socket's mode changes? In this case it's not a big problem, but when I use gzip compression in my request (no idea how it's related) and ask the server to send a gzipped response to me like this:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 1076
        Accept-Encoding: gzip
        Content-Encoding: gzip
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    I receive a response like this:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Encoding: gzip
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 07:26:33 GMT

        2000
        ?

    I receive a chunk size and a GZIP header; it's all okay. And here's what happens with my poor little socket meanwhile:

        write ( i write headers and request here )
        read  ( after headers sent i begin to receive response )
        write ( STRANGE BEHAVIOUR HERE. And it finally sits here forever waiting for me to send something! But if i refer to HTTP i don't have to send anything more! )

    What can it be related to? What does it want me to send? Is it the remote web server's problem, or am I missing something?

    PS All actual service references and logins/passwords are replaced with fake ones :)


  • Protect Apache server

    - by Mike Arnold
    My server is attacked like this:

        188.165.198.65 ./../../../../../../../../../etc/passwd%00 HTTP/1.1" 200 28114 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1"

    How can I defend it with a .htaccess file?


  • BOM in a PHP page auto generated by Wordpress

    - by Paolo63
    I admin two different blogs. They are both WordPress 2.8.6 (so they have exactly the same source code, plugins apart), but they are located on two different hosting platforms (hostmonster.com and aruba.it). To explain my problem, I've dumped with SmartSniff a session with each one of the sites. Here is the dump from Hostmonster:

        GET /blog/paolo/ HTTP/1.1
        Host: www.e-venturi.com
        Accept-Encoding: identity
        Accept-Language: en-us
        Accept: text/html, text/plain, text/xml, image/gif, image/x-xbitmap, image/x-icon,image/jpeg, image/pjpeg, application/vnd.ms-powerpoint, application/vnd.ms-excel, application/msword, */*
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0;)

        HTTP/1.1 200 OK
        Date: Sat, 28 Nov 2009 23:47:38 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.11
        X-Pingback: http://www.e-venturi.com/blog/paolo/xmlrpc.php
        Vary: Accept-Encoding
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8

        a6
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

    and now from Aruba:

        GET /blog/ HTTP/1.1
        Host: www.cubanite.net
        Accept-Encoding: identity
        Accept-Language: en-us
        Accept: text/html, text/plain, text/xml, image/gif, image/x-xbitmap, image/x-icon,image/jpeg, image/pjpeg, application/vnd.ms-powerpoint, application/vnd.ms-excel, application/msword, */*
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0;)

        HTTP/1.1 200 OK
        Date: Sat, 28 Nov 2009 23:49:19 GMT
        Server: Apache/2.2
        X-Pingback: http://www.cubanite.net/blog/xmlrpc.php
        Vary: Accept-Encoding
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8

        100b
        ...<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

    (Note: a6 and 100b are the packet sizes reported by SmartSniff.)

    OK, the big difference is the three dots in front of the <!DOCTYPE in the Aruba dump. They are the UTF-8 BOM (0xEF 0xBB 0xBF). The PHP source being the same on both servers, why does it appear only on one server? The content is generated, so a post author can't deliberately insert a BOM, and I've verified that the template is BOM-free too. Naturally there are different PHP and Apache versions on the servers... what can I check or set to diagnose and resolve the problem? By the way, I don't want the BOM. Many thanks in advance.
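
    To pin down where the BOM enters, it can help to scan the PHP files directly, since a single file saved as "UTF-8 with BOM" is enough for PHP to emit the three bytes before any other output. A hedged Node.js sketch that checks files for the 0xEF 0xBB 0xBF signature ("./wp-content" is a placeholder path, and Buffer.alloc assumes a reasonably recent Node):

        // Scan .php files for a UTF-8 BOM at the very start of the file
        var fs = require('fs');
        var path = require('path');

        function hasBom(file) {
            var buf = Buffer.alloc(3);
            var fd = fs.openSync(file, 'r');
            fs.readSync(fd, buf, 0, 3, 0);
            fs.closeSync(fd);
            return buf[0] === 0xEF && buf[1] === 0xBB && buf[2] === 0xBF;
        }

        function walk(dir) {
            fs.readdirSync(dir).forEach(function(name) {
                var full = path.join(dir, name);
                if (fs.statSync(full).isDirectory()) {
                    walk(full);
                } else if (/\.php$/.test(name) && hasBom(full)) {
                    console.log('BOM found in ' + full);
                }
            });
        }

        walk('./wp-content');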

