Search Results

Search found 330 results on 14 pages for 'msie'.

Page 8/14 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab-delimited with the user-agent strings in double quotes. I need to parse each of these columns and, based on the answer to my other post, I used the Text::CSV module. 94410634 0 GET "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)" 1 The code is a simple one. #!/usr/bin/perl use strict; use warnings; use Text::CSV; my $csv = Text::CSV->new(sep_char => "\t"); while (<>) { if ($csv->parse($_)) { my @columns = $csv->fields(); print "@columns\n"; } else { my $err = $csv->error_input; print "Failed to parse line: $err"; } } But I get the "Failed to parse line:" error when I try it on this dataset. What am I doing wrong? I need to extract the 4th column containing the user-agent strings for further processing.

    Read the article

  • What are the security implications of making a clientaccesspolicy proxy workaround?

    - by Edward Tanguay
    I wanted to use a published GoogleDocs document as the datasource of a Silverlight application but ran into clientaccesspolicy issues. I read many articles like this and this about how difficult it is to get around the clientaccesspolicy issue. So I wrote this 15-line cURL script and put it on my PHP site and now I can get the text of any GoogleDocs document and any text from any URL into my Silverlight application: <?php $url = filter_input(INPUT_GET, 'url',FILTER_SANITIZE_STRING); $user_agent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)'; $ch = curl_init(); curl_setopt($ch, CURLOPT_COOKIEJAR, "/tmp/cookie"); curl_setopt($ch, CURLOPT_COOKIEFILE, "/tmp/cookie"); curl_setopt($ch, CURLOPT_URL, $url ); // set url to post to curl_setopt($ch, CURLOPT_FAILONERROR, 1); // Fail on errors curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); // allow redirects curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); // return into a variable curl_setopt($ch, CURLOPT_TIMEOUT, 15); curl_setopt($ch, CURLOPT_USERAGENT, $user_agent); curl_setopt($ch, CURLOPT_VERBOSE, 0); echo curl_exec($ch); ?> So it makes me wonder: Why is there so much discussion about whether or not URLs support clientaccesspolicy, since you just have to write a simple proxy script and get the information through it? Why aren't there services, e.g. like the URL shortening services, which supply this functionality? What are the security implications of having a script like this?

    Read the article

  • jquery png fix on hidden div and jquerybrowser question

    - by Jared
    Hello, I have some code that if Javascript is available, it will remove a GIF image and replace it with a PNG image. The PNG is display:none and the GIF is visible. Since IE6- browsers can't load PNG, I have loaded the jquery PNG fix. But it only seems to work if the image is already visible. The other issue is I am trying to get the jquery.browser function to apply to less than version 6, and I am not having much luck. <script type="text/javascript"> $(document).ready(function(){ $("#gif").hide(); jQuery.each(jQuery.browser, function(i, val) { if($.browser.msie && jQuery.browser.version <="6"){ $("#png").show(); $('.png').pngFix() }else{ $("#png").fadeIn("slow"); } }); }); </script> HTML <img class="png" id="png" src="images/main_elements/one-2-flush-it-campus-challenge.png" style="display:none;" /> <img id="gif" src="images/main_elements/one-2-flush-it-campus-challenge.gif"/>
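
    One probable catch in the snippet above (a hedged guess, not from the original post): jQuery.browser.version is a string such as "6.0", so the comparison <= "6" is done lexicographically and never matches, and the jQuery.each loop over jQuery.browser just repeats the same check. A minimal sketch with a numeric comparison, assuming the legacy $.browser API and the pngFix plugin are both loaded:

        $(document).ready(function () {
            $("#gif").hide();

            // Compare the version numerically; $.browser.version is a string
            // like "6.0", and the string comparison "6.0" <= "6" is false.
            if ($.browser.msie && parseInt($.browser.version, 10) <= 6) {
                $("#png").show();
                $(".png").pngFix();
            } else {
                $("#png").fadeIn("slow");
            }
        });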

    Read the article

  • Is "Server not found" error related to Activclient?

    - by Kent
    Users are getting sporadic "Server not found" errors after idling in the browser. We have an HTTPS web application (Apache/Tomcat) using NSS for authentication on the server. The error occurs when a user opens the application and later lets it sit idle/untouched for 15 minutes. When they try to access the application they can get a "Server not found" error. Users use CAC cards with ActivClient software and our web application uses the certificates for authentication and authorization. We have been able to recreate the problem but have been unable to diagnose it. In recreating the problem the server is getting a series of "Unable to find the certificate or key necessary for authentication" errors in the NSS log associated with the browser error. These errors don't occur until the user tries to access the idle application. When the application is idle for 15 minutes the PIN is not requested, yet the PIN Cache timeout in ActivClient is set at 15 minutes. All our server-side timeout parameters are set to hours, not minutes. IE 6 is our browser and NSS is using TLS. We have tried modifying "SetEnvIf User-Agent ".MSIE." ssl-unclean-shutdown" with no improvement. I understand that the PIN cache timeout and SSL session don't have a 1:1 relationship but the timing is suspicious. Can't find anything in the Windows error logs that indicates a problem (security logs are not accessible to us). Any suggestions as to how to identify the cause of the problem would be appreciated.

    Read the article

  • Problem pulling data from website in .NET and C#

    - by Cptcecil
    I have written a web scraping program to go to a list of pages and write all the html to a file. The problem is that when I pull a block of text some of the characters get written as '?'. How do I pull those characters into my text file? Here is my code: string baseUri = String.Format("http://www.rogersmushrooms.com/gallery/loadimage.asp?did={0}&blockName={1}", id.ToString(), name.Trim()); // our third request is for the actual webpage after the login. HttpWebRequest request = (HttpWebRequest)WebRequest.Create(baseUri); request.Method = "GET"; request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)"; //get the response object, so that we may get the session cookie. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); StreamReader reader = new StreamReader(response.GetResponseStream()); // and read the response string page = reader.ReadToEnd(); StreamWriter SW; string filename = string.Format("{0}.txt", id.ToString()); SW = File.AppendText("C:\\Share\\" + filename); SW.Write(page); reader.Close(); response.Close();

    Read the article

  • jscript1.js error in webforms when applying routing

    - by Sean N
    Hello, I have a project using WebForms. I applied routing on one of the pages. My route is structured like this: http://localhost:3576/Request/Admin/Rejected/ = http://localhost:3576/Request/{role}/{action}/{id} Everything works great but I have a JavaScript error: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.2; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Timestamp: Fri, 7 May 2010 13:35:54 UTC Message: Syntax error Line: 3 Char: 1 Code: 0 URI: http://localhost:3576/Request/Admin/Rejected/JScript1.js I think it's trying to route to the file where I stored my JavaScript functions. Any suggestions?

    Read the article

  • Firebug error causing code to fail in Sitecore when using IE8 to view the Content Editor

    - by iamdudley
    Hi, I have a Sitecore 6 CMS with a custom data provider to create child items on the fly based on items added to a field in the parent item. This was working okay (about a week ago was the last time I was working on this project), but now I am getting errors in the web client which are originating in the FirebugLite html and JS files. Basically, I click on a content item, the FirebugLite js fails, and then my code in my custom data provider fails to run. I would have thought any FirebugLite scripts would be disabled or ignored when running under IE8 (isn't FirebugLite a Firefox addin?) When I remove the FirebugLite folder from ..\sitecore\shell\Controls\Lib\ my code runs fine and I don't get the clientside errors. I'm not really sure what my question is. I guess it is should FirebugLite affect IE8? What am I missing out on if I remove FirebugLite from the Sitecore directory tree? I'm running WindowsXP SP3, VS2008. The errors I get are the following: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Timestamp: Fri, 14 May 2010 06:42:04 UTC Message: Invalid argument. Line: 301 Char: 9 Code: 0 URI: http://xxxxxxx.com.au/sitecore/shell/controls/lib/FirebugLite/firebug.js Message: Object doesn't support this property or method Line: 21 Char: 1 Code: 0 URI: http://xxxxxxxx.com.au/sitecore/shell/controls/lib/FirebugLite/firebug.html Message: Invalid argument. Line: 301 Char: 9 Code: 0 URI: http://xxxxxxxx.com.au/sitecore/shell/controls/lib/FirebugLite/firebug.js Message: Object doesn't support this property or method Line: 21 Char: 1 Code: 0 URI: http://xxxxxxxx.com.au/sitecore/shell/controls/lib/FirebugLite/firebug.html Cheers, James.

    Read the article

  • how to get jquery.couch.app.js to work with IE8

    - by fuzzy lollipop
    I have tested this on Windows XP SP3 and Windows 7 Ultimate in IE7 and IE8 (in all compatibility modes) and it fails the same way on both. I am running the latest HEAD from the couchapp repository. This works fine on my OSX 10.6.3 development machine. I have tested with Chrome 4.1.249.1064 (45376) and Firefox 3.6 and they both work fine. As do Safari 4 and Firefox 3.6 on OSX 10.6.3. Here is the error message: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0) Timestamp: Wed, 28 Apr 2010 03:32:55 UTC Message: Object doesn't support this property or method Line: 159 Char: 7 Code: 0 URI: http://192.168.0.105:5984/test/_design/test/vendor/couchapp/jquery.couch.app.js and here is the "offending" bit of code, which works on Chrome, Firefox and Safari just fine. It says the failure is on the line with qs.forEach() from the file jquery.couch.app.js 157 var qs = document.location.search.replace(/^\?/,'').split('&'); 158 var q = {}; 159 qs.forEach(function(param) { 160 var ps = param.split('='); 161 var k = decodeURIComponent(ps[0]); 162 var v = decodeURIComponent(ps[1]); 163 if (["startkey", "endkey", "key"].indexOf(k) != -1) { 164 q[k] = JSON.parse(v); 165 } else { 166 q[k] = v; 167 } 168 });
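
    For context (not part of the original question): IE8 ships no Array.prototype.forEach, which is an ES5 addition, so the call on line 159 fails there while Chrome, Firefox and Safari accept it; the indexOf call on line 163 has the same problem. A minimal polyfill sketch that could be loaded before jquery.couch.app.js:

        // Rough ES5 polyfill sketch for old IE; load before any code that
        // calls forEach or indexOf on arrays.
        if (!Array.prototype.forEach) {
            Array.prototype.forEach = function (callback, thisArg) {
                for (var i = 0; i < this.length; i++) {
                    if (i in this) {
                        callback.call(thisArg, this[i], i, this);
                    }
                }
            };
        }
        if (!Array.prototype.indexOf) {
            Array.prototype.indexOf = function (item) {
                for (var i = 0; i < this.length; i++) {
                    if (this[i] === item) { return i; }
                }
                return -1;
            };
        }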

    Read the article

  • Javascript: "Dangling" Reference to DOM element?

    - by Channel72
    It seems that in Javascript, if you have a reference to some DOM element, and then modify the DOM by adding additional elements to document.body, your DOM reference becomes invalidated. Consider the following code: <html> <head> <script type = "text/javascript"> function work() { var foo = document.getElementById("foo"); alert(foo == document.getElementById("foo")); document.body.innerHTML += "<div>blah blah</div>"; alert(foo == document.getElementById("foo")); } </script> </head> <body> <div id = "foo" onclick='work()'>Foo</div> </body> </html> When you click on the DIV, this alerts "true", and then "false." So in other words, after modifying document.body, the reference to the DIV element is no longer valid. This behavior is the same on Firefox and MSIE. Some questions: Why does this occur? Is this behavior specified by the ECMAScript standard, or is this a browser-specific issue? Note: there's another question posted on stackoverflow that seems to be referring to the same issue, but neither the question nor the answers are very clear.
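
    The usual explanation (a hedge, not part of the original question): innerHTML += serializes document.body to a string, appends to it, and re-parses the whole thing, so every element inside the body is replaced by a fresh node and the old variable still points at the detached original; this is innerHTML behaviour defined by the DOM/HTML side of things rather than by the ECMAScript standard. A sketch of an append that does not rebuild the body, so the reference stays valid:

        function work() {
            var foo = document.getElementById("foo");

            // Appending a new node directly leaves the existing children of
            // <body> untouched, so the earlier reference is still the same node.
            var div = document.createElement("div");
            div.appendChild(document.createTextNode("blah blah"));
            document.body.appendChild(div);

            alert(foo === document.getElementById("foo")); // still true
        }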

    Read the article

  • Rails with passenger only runs in development

    - by Ignace
    Hey, I have a problem on one of our webservers. I'll try to explain it as clearly as possible, but I'm not 100% aware of all the configuration of the server. There are 2 sites running next to each other (blcc_preprod and blcc_prod), so in the 'sites-enabled' of Apache I have a file 'blcc' like this: <VirtualHost *:80> DocumentRoot /opt/dn/blcc/www RailsBaseURI /blcc_preprod RailsBaseURI /blcc_prod RailsEnv production </VirtualHost> My config/environments/production.log (from both) looks like this (I removed all comments, because it messes with the layout) config.cache_classes = true config.action_controller.consider_all_requests_local = false config.action_controller.perform_caching = true config.action_view.cache_template_loading = true config.log_level = :debug The weird thing is that my production log dates from months ago, so something is really wrong. Could someone help? If you need more info, just ask. Thanks! Edit: Error.log from Apache shows the normal output for an event to the server (the situation here is that the webserver plugs in to another business (java) server via a framework) Access.log is empty Content from other_vhosts_access.log after we surf to ip/blcc_preprod is the following: blcc.localdomain:80 192.168.21.194 - - [25/May/2010:08:33:04 +0200] "GET/ blcc_preprod HTTP/1.1" 500 594 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"

    Read the article

  • Combining the streams:Web application

    - by Surendra J
    This question deals mainly with streams in a web application in .NET. In my web application I will display the following: bottle.doc sheet.xls presentation.ppt stackof.jpg Button I will keep a checkbox for each one to select. Suppose a user selected the four files and clicked the button, which I kept underneath. Then I instantiate classes for each type of file, which I wrote already, and they convert the files into pdf and return them. My problem is that the classes are able to read the data from a URL and convert it into pdf. But I don't know how to return the streams and merge them. string url = @"url"; //Prepare the web page we will be asking for HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url); request.Method = "GET"; request.ContentType = "application/mspowerpoint"; request.UserAgent = "Mozilla/4.0+(compatible;+MSIE+5.01;+Windows+NT+5.0"; //Execute the request HttpWebResponse response = (HttpWebResponse)request.GetResponse(); //We will read data via the response stream Stream resStream = response.GetResponseStream(); //Write content into the MemoryStream BinaryReader resReader = new BinaryReader(resStream); MemoryStream PresentaionStream = new MemoryStream(resReader.ReadBytes((int)response.ContentLength)); //convert the presentation stream into pdf and save it to local disk. But I would like to return the stream again. How can I achieve this? Any ideas are welcome.

    Read the article

  • how to get entire document in scrapy using hxs.select

    - by Chris Smith
    I've been at this for 12hrs and I'm hoping someone can give me a leg up. Here is my code; all I want is to get the anchor and URL of every link on a page as it crawls along. from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor from scrapy.selector import HtmlXPathSelector from scrapy.utils.url import urljoin_rfc from scrapy.utils.response import get_base_url from urlparse import urljoin #from scrapy.item import Item from tutorial.items import DmozItem class HopitaloneSpider(CrawlSpider): name = 'dmoz' allowed_domains = ['domain.co.uk'] start_urls = [ 'http://www.domain.co.uk' ] rules = ( #Rule(SgmlLinkExtractor(allow='>example\.org', )), Rule(SgmlLinkExtractor(allow=('\w+$', )), callback='parse_item', follow=True), ) user_agent = 'Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))' def parse_item(self, response): #self.log('Hi, this is an item page! %s' % response.url) hxs = HtmlXPathSelector(response) #print response.url sites = hxs.select('//html') #item = DmozItem() items = [] for site in sites: item = DmozItem() item['title'] = site.select('a/text()').extract() item['link'] = site.select('a/@href').extract() items.append(item) return items What am I doing wrong? My eyes hurt now.

    Read the article

  • Javascript errors on internet explorer with jQuery but working fine on Firefox

    - by user994319
    A quick question I'm hoping someone can help me out with. On Firefox our jQuery slider is working perfectly; however, when viewing with Internet Explorer there are some JavaScript errors occurring. The website is http://foscam-uk.com/index.php. Hoping there is a possible solution to this. Kind Regards and Thank You! Errors: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.3) Timestamp: Wed, 6 Jun 2012 22:36:43 UTC Message: Object doesn't support this property or method Line: 5653 Char: 9 Code: 0 URI: http://foscam-uk.com/js/prototype/prototype.js Message: Object doesn't support this property or method Line: 5988 Char: 5 Code: 0 URI: http://foscam-uk.com/js/prototype/prototype.js Message: Object doesn't support this property or method Line: 2 Char: 5 Code: 0 URI: http://foscam-uk.com/skin/frontend/default/theme316/js/scripts.js Message: Object doesn't support this property or method Line: 5736 Char: 7 Code: 0 URI: http://foscam-uk.com/js/prototype/prototype.js Message: Object doesn't support this property or method Line: 5988 Char: 5 Code: 0 URI: http://foscam-uk.com/js/prototype/prototype.js Message: Object doesn't support this property or method Line: 73 Char: 11 Code: 0 URI: http://foscam-uk.com/index.php
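
    The trace alone does not pin down a cause, but errors inside prototype.js on a store that also loads a jQuery slider are frequently a Prototype/jQuery clash over the $ symbol (this is a guess, not a diagnosis of foscam-uk.com). A sketch of the usual noConflict setup, with a placeholder selector:

        // Load jQuery after prototype.js, then hand $ back to Prototype and
        // use an alias for the jQuery-based code such as the slider.
        var $j = jQuery.noConflict();
        $j(function () {
            $j("#slider").show(); // "#slider" is a hypothetical selector
        });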

    Read the article

  • Decoding the IE9 user agent

    - by Portman
    I installed IE9 in a Windows 7 virtual machine, and was surprised to see this user agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; BOIE9;ENUSMSNIP) In particular, the last two keys BOIE9 and ENUSMSNIP look very spammy. I'm used to seeing toolbars and add-ins register themselves at the end of the user agent like that, but this is on a virgin install of Windows 7 with no other software. They're defined in the registry here: HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\PostPlatform That key has a value of IEAK, which is apparently the Internet Explorer Administration Kit, which according to Microsoft sends a custom user agent string. But why? I'm guessing that BOIE9 stands for "Bing on IE9". It's the only active add-on. As for ENUSMSNIP, I'm at a loss. My guesses are: ENUS = Locale, which for me is EN-US ("US English") MS = Microsoft NIP = ??? I tried changing my locale to EN-GB, but the user agent didn't update, nor did the registry. So it appears it's only at the time of install that it matters (if I'm even right about ENUS). Does anyone know what these two user agent keys represent? Or, care to share what your IE9 user agent is, and maybe we can piece it together ourselves?

    Read the article

  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.FM - its purpose is to grab the most recent tracks for my user. Here it is: private static $base_url = 'http://ws.audioscrobbler.com/2.0/'; public static function getTopTracks($options = array()) { $options = array_merge(array( 'user' => 'bachya', 'period' => NULL, 'api_key' => 'xxxxx...', // obfuscated, obviously ), $options); $options['method'] = 'user.getTopTracks'; // Initialize cURL request and set parameters $ch = curl_init(); curl_setopt_array($ch, array( CURLOPT_URL => self::$base_url, CURLOPT_POST => TRUE, CURLOPT_POSTFIELDS => $options, CURLOPT_RETURNTRANSFER => TRUE, CURLOPT_TIMEOUT => 30, CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)' )); $results = curl_exec($ch); return $results; } This returns "Empty reply from server". I know that some have suggested that this error comes from some fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.FM service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!

    Read the article

  • Change IE user agent

    - by Ahmed
    I'm using WatiN to automate Internet Explorer, and so far it's been great. However, I would really like to be able to change the user agent of IE so the server thinks it's actually Firefox or some other browser. A Firefox user agent string looks something like: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 With the following code: RegistryKey ieKey = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent"); ieKey.SetValue("", "Mozilla/5.0"); ieKey.SetValue("Compatible", "Windows"); ieKey.SetValue("Version", "U"); ieKey.SetValue("Platform", "Windows NT 5.1; en-US"); ieKey.DeleteSubKeyTree("Post Platform"); I have been able to change the IE user agent string from Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; AskTbMP3R7/5.9.1.14019) to Mozilla/4.0 (Windows; U; Windows NT 6.1; Trident/4.0; en-US; rv:1.9.2.13) Now, the question: how do I delete the Trident/4.0 part and add the "Gecko/20101203 Firefox/3.6.13" part after the parentheses? I would really like to do this programmatically in C#, without using any IE add-ons. Thanks in advance.

    Read the article

  • How do I enable mod_deflate for PHP files?

    - by DM.
    I have a Liquid Web VPS account, I've made sure that mod_deflate is installed and running/active. I used to gzip my css and js files via PHP, as well as my PHP files themselves... However, I'm now trying to do this via mod_deflate, and it seems to work fine for all files except for PHP files. (Txt files work fine, css, js, static HTML files, just nothing that is generated via a PHP file.) How do I fix this? (I used the "Compress all content" option under "Optimize Website" in cPanel, which creates an .htaccess file in the home directory (not public_html, one level higher than that) with exactly the same text as the "compress everything except images" example on http://httpd.apache.org/docs/2.0/mod/mod_deflate.html) .htaccess file: <IfModule mod_deflate.c> SetOutputFilter DEFLATE <IfModule mod_setenvif.c> # Netscape 4.x has some problems... BrowserMatch ^Mozilla/4 gzip-only-text/html # Netscape 4.06-4.08 have some more problems BrowserMatch ^Mozilla/4\.0[678] no-gzip # MSIE masquerades as Netscape, but it is fine # BrowserMatch \bMSIE !no-gzip !gzip-only-text/html # NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48 # the above regex won't work. You can use the following # workaround to get the desired effect: BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html # Don't compress images SetEnvIfNoCase Request_URI .(?:gif|jpe?g|png)$ no-gzip dont-vary </IfModule> <IfModule mod_headers.c> # Make sure proxies don't deliver the wrong content Header append Vary User-Agent env=!dont-vary </IfModule> </IfModule>

    Read the article

  • Open Select using Javascript/jQuery?

    - by Newbie
    Hello! Is there a way to open a select box using JavaScript (and jQuery)? <select style="width:150px;"> <option value="1">1</option> <option value="2">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc arcu nunc, rhoncus ac dignissim at, rhoncus ac tellus.</option> <option value="3">3</option> </select> I have to open my select because of an IE bug. All versions of IE (6,7,8) cut my options. As far as I know, there is no CSS bugfix for this. At the moment I try to do the following: var original_width = 0; var selected_val = false; if (jQuery.browser.msie) { $('select').click(function(){ if (selected_val == false){ if(original_width == 0) original_width = $(this).width(); $(this).css({ 'position' : 'absolute', 'width' : 'auto' }); }else{ $(this).css({ 'position' : 'relative', 'width' : original_width }); selected_val = false; } }); $('select').blur(function(){ $(this).css({ 'position' : 'relative', 'width' : original_width }); }); $('select').blur(function(){ $(this).css({ 'position' : 'relative', 'width' : original_width }); }); $('select').change(function(){ $(this).css({ 'position' : 'relative', 'width' : original_width }); }); $('select option').click(function(){ $(this).css({ 'position' : 'relative', 'width' : original_width }); selected_val = true; }); } But clicking on my select the first time will change the width of the select, and I have to click again to open it.
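
    A hedged variation on the snippet above (not from the original post): the first click misses because the click handler only widens the select after IE has already decided how wide to draw the dropdown; a commonly suggested workaround is to widen on mousedown instead and restore on blur/change, so the list opens at full width on the first try:

        if ($.browser.msie && parseInt($.browser.version, 10) < 8) {
            $("select").each(function () {
                var original_width = $(this).width();
                $(this).mousedown(function () {
                    // Widen before the dropdown is rendered.
                    $(this).css({ position: "absolute", width: "auto" });
                }).bind("blur change", function () {
                    $(this).css({ position: "relative", width: original_width });
                });
            });
        }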

    Read the article

  • cURL gets Internal Server Error when posting to aspx page

    - by Mihai
    I have a big problem. I have some applications made on a Unix-based system, and I use PHP with cURL to post an XML question to an IIS server with ASP.NET. Every time I ask the server something I get an error: HTTP/1.1 500 Internal Server Error Date: Tue, 04 May 2010 07:36:08 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 Cache-Control: private Content-Type: text/html; charset=utf-8 Content-Length: 3032 But if I ask the same question on another server, almost identical to this one (BOTH configured by me), I get results as expected and the headers: HTTP/1.1 200 OK Date: Tue, 04 May 2010 07:39:37 GMT Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET X-AspNet-Version: 2.0.50727 Cache-Control: private Content-Type: text/html; charset=utf-8 Content-Length: 9169 I tried everything, searched hundreds of forums, but I don't find anything. In IIS logs I only get: 2010-05-04 07:36:08 W3SVC1657587027 80.xx.xx.xx POST /XML_SERV/XmlAPI.aspx - 80 - 80.xx.xx.xx Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1 500 0 0 Any ideas where to look or what is going on? I forgot to mention! If I use XML request software and ask the same question, it works.

    Read the article

  • Lights off effect and jquery placement on wordpress

    - by Alexander Santiago
    I'm trying to implement a lights on/off effect on single posts of my WordPress theme. I know that I have to put this code in my CSS, which I did already: #the_lights{ background-color:#000; height:1px; width:1px; position:absolute; top:0; left:0; display:none; } #standout{ padding:5px; background-color:white; position:relative; z-index:1000; } Now this is the code that I'm having trouble with: function getHeight() { if ($.browser.msie) { var $temp = $("").css("position", "absolute") .css("left", "-10000px") .append($("body").html()); $("body").append($temp); var h = $temp.height(); $temp.remove(); return h; } return $("body").height(); } $(document).ready(function () { $("#the_lights").fadeTo(1, 0); $("#turnoff").click(function () { $("#the_lights").css("width", "100%"); $("#the_lights").css("height", getHeight() + "px"); $("#the_lights").css({‘display’: ‘block’ }); $("#the_lights").fadeTo("slow", 1); }); $("#soft").click(function () { $("#the_lights").css("width", "100%"); $("#the_lights").css("height", getHeight() + "px"); $("#the_lights").css("display", "block"); $("#the_lights").fadeTo("slow", 0.8); }); $("#turnon").click(function () { $("#the_lights").css("width", "1px"); $("#the_lights").css("height", "1px"); $("#the_lights").css("display", "block"); $("#the_lights").fadeTo("slow", 0); }); }); I think it's jQuery. Where do I place it and how do I call its function? Been stuck on this thing for 6 hours now and any help would be greatly appreciated...
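
    Two things stand out in the snippet (hedged guesses, not from the original post): the typographic quotes in css({‘display’: ‘block’ }) are a JavaScript syntax error, and the handlers are plain jQuery, so they belong in a script loaded after jQuery, for example a lights.js enqueued from the theme's functions.php. A trimmed, self-contained sketch of the on/off handlers with plain quotes (the IDs follow the question; the file name and enqueue approach are illustrative):

        // lights.js -- hypothetical file, loaded after jQuery in the theme footer
        jQuery(function ($) {
            $("#turnoff").click(function () {
                $("#the_lights")
                    .css({ width: "100%", height: $(document).height() + "px", display: "block" })
                    .fadeTo("slow", 1);
            });
            $("#turnon").click(function () {
                $("#the_lights").fadeTo("slow", 0, function () {
                    $(this).css({ width: "1px", height: "1px" });
                });
            });
        });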

    Read the article

  • PHP Curl - returned html all messed up

    - by yoda
    I'm trying to fetch some contents about articles in a website via Curl, which I'm doing as follows : $url = 'http://lisboacity.olx.pt/oportunidade-pastor-alemao-7-meses-com-lop-iid-432402267'; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322'); curl_setopt($ch, CURLOPT_FAILONERROR, true); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); curl_setopt($ch, CURLOPT_AUTOREFERER, true); curl_setopt($ch, CURLOPT_TIMEOUT, 10); $data = curl_exec($ch); curl_close($ch); echo $data; However, as you can see, the result is this : result For indexation purposes, I'm leaving a small paste of the output here : ‹ÜW_oÛ6N€~F{hD’å?±“ØÜÄm‚&qZ;-Ö¢0h‰–™R¢JR¶Óapcèð}*ö²§)%Vœ&YÚ·†eêî~¿;òx<6×ö{{ƒ_N»èp|„NÏžî!ËvÝו=×Ýìg‚ªSòÐ@àXREyŒ™ëvO,dM”Jv\w6›9³ŠÃEè^º±ªË8—Ä TµW›ú•~À#" #mj“)¶¬=++{p‘ùÙ¨e)2Wmù,$Q­Ã~Ïn4jÛ¶g!÷.¨#‡)‹p‰26«>Í–M·.ƒTŽ8ö©ºp8›;‰r-¤°ÕJÂÆ£¢Š‘v/áB¥1 p@ÖN±T\ #Ñ'Žê("’H ŽÐQïÔ…#ƒ:10•(à £¨Ï%¼D]øá??ñ¦›d‘Å8"-+ Ò4Ñ3_ç:åÓÏÁ†ð’\[‘8]ÿÑëÎà zÕ;AOûý½ƒÞÓþA÷ðxíÑê£Uã»ôvS_pB“M ’aÙq€AŠX"øNa¦bx’;hŸÊäoCÃ0þjB3C@ Rå"™0Ãz€cž&ü{æäjúô '&äö'¤åUªõZ½î5êÀd2Ñø=„µ,Ç<†bÛìž3èGöØj±Ð{9Ø; ýÞÉ«,Æ]©.‘îO!Åb~–Á2 !°'uåÊj_Êÿ„œ=†žç;Æ$"Ó-3–­ I've also tried to load the url contents with PHP's DomDocument class with the same result. What could be causing this? Thanks in advance!

    Read the article

  • cURL + HTTP_POST, keep getting 500 error. Any ideas?

    - by mysqllearner
    Okay, I want to make an HTTP POST using cURL to an SSL site. I already imported the certificate to my server. This is my code: $url = "https://www.xxx.xxx"; $post = "";# all data that going to send $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, $post); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0'); $exe = curl_exec($ch); $getInfo = curl_getinfo($ch); if ($exe === false) { $output = "Error in sending"; if (curl_error($ch)){ $output .= "\n". curl_error($ch); } } else if($geInfo['http_code'] != 777){ $output = "No data returned. Error: " . $geInfo['http_code']; if (curl_error($ch)){ $output .= "\n". curl_error($ch); } } curl_close($c); echo $output; It keeps returning "500". Based on w3schools, 500 means Internal Server Error. Is my server having a problem? How do I solve/troubleshoot this?

    Read the article

  • Why this C# Regular Expression crashes my program?

    - by robert_d
    using System; using System.IO; using System.Net; using System.Text.RegularExpressions; namespace Working { class Program4 { static string errorurl = "http://www.realtor.ca/propertyDetails.aspx?propertyId=8692663"; static void Main(string[] args) { string s; s = getWebpageContent(errorurl); s = removeNewLineCharacters(s); getFields(s); Console.WriteLine("End"); } public static void getFields(string html) { Match m; string fsRE = @"ismeasurement.*?>.*?(\d+).*?sqft"; m = Regex.Match(html, fsRE, RegexOptions.IgnoreCase); } private static string removeNewLineCharacters(string str) { string[] charsToRemove = new string[] { "\n", "\r" }; foreach (string c in charsToRemove) { str = str.Replace(c, ""); } return str; } static string getWebpageContent(string url) { WebClient client = new WebClient(); client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)"); Stream data = client.OpenRead(url); StreamReader reader = new StreamReader(data); string s = reader.ReadToEnd(); data.Close(); reader.Close(); return s; } } } This program hangs. It runs correctly when I remove RegexOptions.IgnoreCase option or when I remove call to removeNewLineCharacters() function. Could someone tell me what is going on, please?

    Read the article

  • Cross domain ajax POST ie7 with jquery

    - by DickieBoy
    I've been having trouble with this script; I've managed to get it working in IE8, and it works fine on Chrome. initilize: function(){ $('#my_form').submit(function(){ if ($.browser.msie && window.XDomainRequest) { var data = $('#my_form').serialize(); xdr=new XDomainRequest(); function after_xhr_load() { response = $.parseJSON(xdr.responseText); if(response.number =="incorrect format"){ $('#errors').html('error'); } else { $('#errors').html('worked'); } } xdr.onload = after_xhr_load; xdr.open("POST",$('#my_form').attr('action')+".json"); xdr.send(data); } else { $.ajax({ type: "POST", url: $('#my_form').attr('action')+".json", data: $('#my_form').serialize(), dataType: "json", complete: function(data) { if(data.statusText =="OK"){ $('#errors').html('error'); } if(data.statusText =="Created"){ response = $.parseJSON(data.responseText); $('#errors').html('Here is your code:' +response.code); } } }); } return false; }); } I understand that IE7 does not have the XDomainRequest() object. How can I replicate this in IE7? Thanks in advance.
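
    For the IE7 case (a sketch of common fallbacks, not a fix verified against this form): IE7 has neither XDomainRequest nor CORS-capable XMLHttpRequest, so a cross-domain POST generally has to go through either JSONP (GET only, and only if the remote API supports a callback) or a same-origin proxy that forwards the request server-side. One way to branch, with the /proxy endpoint being hypothetical:

        var data = $("#my_form").serialize();
        var action = $("#my_form").attr("action") + ".json";

        if (window.XDomainRequest) {
            // IE8/IE9
            var xdr = new XDomainRequest();
            xdr.onload = function () { /* parse xdr.responseText here */ };
            xdr.open("POST", action);
            xdr.send(data);
        } else if ("withCredentials" in new XMLHttpRequest()) {
            // Browsers whose XHR supports CORS
            $.ajax({ type: "POST", url: action, data: data, dataType: "json" });
        } else {
            // IE7 and older: post to a same-origin proxy (hypothetical endpoint)
            $.ajax({ type: "POST", url: "/proxy?target=" + encodeURIComponent(action), data: data });
        }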

    Read the article

  • javascript: Object doesn't support this property or method

    - by Kaushik
    In my jsp page, I have in the tag, the following code: <script type="text/javascript" src="<%=request.getContextPath()%>/static/js/common/common.js"></script> <script type="text/javascript"> // Function for Suppressing the JS Error function silentErrorHandler() {return true;} window.onerror=silentErrorHandler; </script> If there is some JavaScript executing on the JSP page after this, then I guess silentErrorHandler() will have no effect, i.e. the error will still show on the page. Is this correct? Because the error is showing and I am not sure why. The second part of the question is this: The error is: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; AskTbFXTV5/5.9.1.14019) Timestamp: Fri, 7 Jan 2011 21:26:23 UTC Message: Object doesn't support this property or method Line: 613 Char: 1 Code: 0 URI: http://localhost:9080/Claris/static/js/common/common.js And finally, line 613 states document.captureEvents(Event.MOUSEUP); There is an error on IE8. Runs fine on Mozilla and IE7. Any suggestions will be very helpful.
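
    Two hedged notes, not from the original question: document.captureEvents is a legacy Netscape API that IE8 does not implement, so the simplest change in common.js is to guard the call; and because common.js is loaded by the first script tag, any error it throws while loading happens before window.onerror is assigned in the second script block, which would explain why the handler does not suppress it. A guard sketch:

        // Guard the legacy call so browsers without captureEvents simply skip it.
        if (document.captureEvents && window.Event && Event.MOUSEUP) {
            document.captureEvents(Event.MOUSEUP);
        }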

    Read the article
