Search Results

Search found 904 results on 37 pages for 'ie6'.

Page 34/37

  • Chrome Extension - Cross-Origin XMLHttpRequest - Returning HTML/JSON

    - by Tyler
    Hi everyone, I hope you can help me :) I've created a Chrome extension (my first one) and I'm having some difficulty auto-populating a <select> with the <option> elements that are being returned. The default_popup page is index.htm, and I have two <select> (listbox) elements. When a user first clicks the extension, it performs an XMLHttpRequest to a PHP script and gets a list of names from a MySQL database. It returns (onLoad) the list in the form of: <option>blah</option>. When the user selects an option from the first listbox, it performs another XMLHttpRequest and auto-populates the second listbox. When the user then selects an option from the second listbox, it will eventually populate a few details further down the page.

    I've been testing by just running the index.htm file and seeing if the code works correctly, which it does. However, when viewing it from the extension, it doesn't work: the onLoad doesn't fill in the first listbox, and selecting an option (one that I typed into the box for testing purposes) from the first listbox doesn't populate the second listbox. I thought maybe it was a permissions error, so I tried adding the domain to the manifest.json file, but I appear to be getting an error in manifest.json after doing so.

    In my default_popup (index.htm) file I have this script for my XMLHttpRequest:

        <script type="text/javascript">
        function getClient(str, type) {
            if (str == "") {
                document.getElementById(type).innerHTML = "";
                return;
            }
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById(type).innerHTML = xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET", "http://(domain removed)/Extension/getInfo.php?q=" + type + "&c=" + str, true, "user", "pass");
            xmlhttp.send();
        }
        </script>

    This is what my manifest.json file looks like:

        {
          "name": "Client Center Lite",
          "version": "1.0",
          "description": "blah",
          "browser_action": {
            "default_icon": "images/icon_19.png",
            "default_popup": "index.htm",
            "default_title": "Client Center Lite"
          },
          "icons": {
            "128": "images/icon_128.png"
          }
          "permissions": {
            "http://(domain removed)/"
          },
        }

    Am I doing this correctly? The point of the extension is to be able to quickly view client details. The extension will only be given to employees locally in a .crx file, and not distributed online. The domain I am accessing through PHP/MySQL is accessible from the web, but I'm currently using localhost in my mysql_connect string. Do I need to return the <option> elements encoded as JSON? If so, I'm completely clueless as to how to do that.
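
    A note on the manifest: as posted it isn't valid JSON, which would explain the error. There's a missing comma after the "icons" block, "permissions" must be an array rather than an object, and the trailing comma after it is illegal. A corrected sketch (keeping the placeholder domain as-is):

        {
          "name": "Client Center Lite",
          "version": "1.0",
          "description": "blah",
          "browser_action": {
            "default_icon": "images/icon_19.png",
            "default_popup": "index.htm",
            "default_title": "Client Center Lite"
          },
          "icons": {
            "128": "images/icon_128.png"
          },
          "permissions": [
            "http://(domain removed)/"
          ]
        }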

    Read the article

  • jqmodal IE (7 or 8) flashes black before modal loaded

    - by brad
    This is killing me. In both IE7 and IE8, using jqModal, the screen flashes black before the modal content is loaded. I've set up a test app to show you what's happening. I've taken jqModal EXACTLY from the site, no changes whatsoever, and there's no external CSS that could be affecting my app. It works perfectly in every other browser (including IE6). http://jqmtest.heroku.com/

    The first two links are ajax calls; the third is straight-up inline HTML. (I originally thought it was the ajax that was affecting it, but that doesn't seem to be the case. I then thought it was slow-loading ajax, hence the two different ajax links.) What's crazy is that the jqModal site itself works perfectly in IE, no flashing of black, but I can't see what I'm doing wrong. The code is straightforward.

    HTML:

        <body>
          <div id="ajaxModal" class="jqmWindow"></div>

          <div id="inlineModal" class="jqmWindow">
            <div style="height:300px;position:relative;">
              <p>Here's some inline content</p>
              <a href="#" onclick='$("#inlineModal").jqmHide();return false;'
                 style="position:absolute;bottom:10px;right:10px">Close</a>
            </div>
          </div>

          <div style="width:600px;height:400px;margin:auto;background:#eee;">
            <p><a href="/ajax/short" class="jqModal">Short loading modal</a></p>
            <br />
            <p><a href="/ajax/long" class="jqModal">Longer loading modal</a></p>
            <br />
            <p><a href="#" class="jqInline">inline modal</a></p>
          </div>
        </body>

    Javascript:

        <script type="text/javascript">
          $(function(){
            $("#ajaxModal").jqm({ajax:'@href', modal:true});
            $("#inlineModal").jqm({modal:true, trigger:'.jqInline'});
          });
        </script>

    The CSS is exactly the same as the one downloaded from jqModal's site, so I'll omit it, but you can see it on my app. Has anyone experienced this? I don't get how theirs works and mine doesn't.

    Read the article

  • 3-row layout, expanding middle, min-height:100% so footer is at bottom when there is minimal content

    - by David Lawson
    How would I change this to make the middle div expand vertically to fill the white space?

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title></title>
        <style type="text/css">
        body,td,th {
          font-family: Tahoma, Geneva, Verdana, sans-serif;
        }
        html,body {
          margin:0;
          padding:0;
          height:100%; /* needed for container min-height */
        }
        #container {
          position:relative; /* needed for footer positioning */
          margin:0 auto; /* center, not in IE5 */
          width:100%;
          height:auto !important; /* real browsers */
          height:100%; /* IE6: treated as min-height */
          min-height:100%; /* real browsers */
        }
        #header {
          height: 150px;
          border-bottom: 2px solid #ff8800;
          position: relative;
          background-color: #c97c3e;
        }
        #middle {
          padding-right: 90px;
          padding-left: 90px;
          padding-top: 35px;
          padding-bottom: 43px;
          background-color: #0F9;
        }
        #footer {
          border-top: 2px solid #ff8800;
          background-color: #ffd376;
          position:absolute;
          width:100%;
          bottom:0; /* stick to bottom */
        }
        </style>
        </head>
        <body>
        <div id="container">
          <div id="header"> Header </div>
          <div id="middle"> Middle </div>
          <div id="footer"> Footer </div>
        </div>
        </body>
        </html>
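
    One possible approach, sketched on the assumption that the header and footer heights are fixed: take #middle out of the flow and pin it between them with paired top/bottom offsets. (IE6 doesn't honour top/bottom pairing, so it would need its own fallback.)

        #middle {
          position: absolute;
          top: 152px;     /* header height plus its bottom border */
          bottom: 47px;   /* assumed rendered footer height; adjust to match */
          left: 0;
          right: 0;
          overflow: auto; /* scroll inside the pane if content overflows */
          background-color: #0F9;
        }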

    Read the article

  • CSS selectors : should I make my CSS easier to read or optimise the speed

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check if there was some improvement I could make so that the site loads faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means that I need to put more CSS classes in my HTML, and it makes my .css file harder to read. I usually tend to write my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    which makes it easy to read: I know this will affect the main content, that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as

        .priceTag { ... }

    which removes all context information about the style. And if I want to use differently formatted price tags (for example, one in a list on the sidebar and one in a paragraph), I need to use something like

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. And it means I can't put common style in an unqualified .priceTag { ... }, so I need to replicate the style in both CSS rules, making it harder to make changes. (Although for that I could use multiple class selectors, but IE6 doesn't support them.)

    I believe making code harder to read for the sake of speed has never really been considered a very good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Windows Media Player issues two requests for the audio on a web page

    - by Ron Harlev
    I'm using Windows Media Player in a web page. I have version 11 installed, so that is the version I'm testing with right now. The player is embedded on the page with this HTML:

        <OBJECT id='MS_mediaPlayer' width="400" height="45"
          classid='CLSID:6BF52A52-394A-11D3-B153-00C04F79FAA6'
          codebase='http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701'
          standby='Loading Microsoft Windows Media Player components...'
          type='application/x-oleobject'>
          <param name='autoStart' value="false">
          <param name='uiMode' value="invisible">
          <param name='loop' value="false">
        </OBJECT>

    I'm calling in JavaScript:

        MS_mediaPlayer.URL = "SomeAudioFile.mp3"
        MS_mediaPlayer.controls.play();

    When I look at Fiddler I can see that the player actually downloads "SomeAudioFile.mp3" twice. Is there some setting I have wrong? I tried setting "autoPlay" to true and avoiding the call to "play()": same result, two downloads.

    UPDATE: The first request's user-agent is "Windows-Media-Player/11.0.5721.5268". The second has "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)". It looks like the browser is running the same request a second time. No idea why. Any ideas?

    UPDATE (4/1/10): Still no solution. I debugged the JS thoroughly and there is only one call to MediaPlayer.URL='.....' to set the audio file. Nothing else triggers the media player to load the file, and there is no other place referencing the audio file on the page. One other interesting fact: the double loading of the audio doesn't happen when I run the browser locally on my development web server, but other remote requests to the same web server generate the double audio loading. I believe I have eliminated any correlation with a specific IE version or media player version; this happens with IE6-8 and WMP 9-12.

    Read the article

  • Are jQuery's :first and :eq(0) selectors functionally equivalent?

    - by travis
    I'm not sure whether to use :first or :eq(0) in a selector. I'm pretty sure that they'll always return the same object, but is one speedier than the other? I'm sure someone here must have benchmarked these selectors before, and I'm not really sure of the best way to test if one is faster.

    Update: here's the bench I ran:

        /* start bench */
        for (var count = 0; count < 5; count++) {
            var i = 0, limit = 10000;
            var start, end;

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:eq(0)");
            }
            end = new Date();
            alert("div.RadEditor.Telerik:eq(0) : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $thisFrame.parents("div.RadEditor.Telerik:first");
            }
            end = new Date();
            alert("div.RadEditor.Telerik:first : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var radeditor = $thisFrame.parents("div.RadEditor.Telerik")[0];
            }
            end = new Date();
            alert("(div.RadEditor.Telerik)[0] : " + (end - start));

            start = new Date();
            for (i = 0; i < limit; i++) {
                var $radeditor = $($thisFrame.parents("div.RadEditor.Telerik")[0]);
            }
            end = new Date();
            alert("$((div.RadEditor.Telerik)[0]) : " + (end - start));
        }
        /* end bench */

    I assumed that the 3rd would be the fastest and the 4th would be the slowest, but here are the results I came up with:

        FF3:      :eq(0)   :first   [0]      $([0])
        trial1    5275     4360     4107     3910
        trial2    5175     5231     3916     4134
        trial3    5317     5589     4670     4350
        trial4    5754     4829     3988     4610
        trial5    4771     6019     4669     4803
        Average   5258.4   5205.6   4270     4361.4

        IE6:      :eq(0)   :first   [0]      $([0])
        trial1    13796    15733    12202    14014
        trial2    14186    13905    12749    11546
        trial3    12249    14281    13421    12109
        trial4    14984    15015    11718    13421
        trial5    16015    13187    11578    10984
        Average   14246    14424.2  12333.6  12414.8

    I was correct that just returning the first native DOM object is the fastest ([0]), but I can't believe that wrapping that object in the jQuery function was faster than both :first and :eq(0)! Unless I'm doing it wrong.
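
    For what it's worth, the repetition in the bench can be factored into a helper; a minimal sketch (assuming $thisFrame exists as in the original):

        // time `fn` run `limit` times and report the elapsed milliseconds
        function bench(label, fn, limit) {
            var start = new Date();
            for (var i = 0; i < limit; i++) fn();
            alert(label + " : " + (new Date() - start));
        }

        bench("div.RadEditor.Telerik:eq(0)", function () {
            $thisFrame.parents("div.RadEditor.Telerik:eq(0)");
        }, 10000);
        bench("div.RadEditor.Telerik:first", function () {
            $thisFrame.parents("div.RadEditor.Telerik:first");
        }, 10000);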

    Read the article

  • Problem with jQuery.ajax with 'delete' method in IE

    - by Max Williams
    I have a page where the user can edit various content using buttons and selects that trigger ajax calls. In particular, one action causes a url to be called remotely, with some data and a 'put' request, which (as I'm using a RESTful Rails backend) triggers my update action. I also have a delete button which calls the same url but with a 'delete' request. The 'update' ajax call works in all browsers, but the 'delete' one doesn't work in IE. I've got a vague memory of encountering something like this before... can anyone shed any light? Here are my ajax calls:

        //update action - works in all browsers
        jQuery.ajax({
            async: true,
            data: data,
            dataType: 'script',
            type: 'put',
            url: "/quizzes/" + quizId + "/quiz_questions/" + quizQuestionId,
            success: function(msg) {
                initializeQuizQuestions();
                setPublishButtonStatus();
            }
        });

        //delete action - fails in ie
        function deleteQuizQuestion(quizQuestionId, quizId) {
            //send ajax call to back end to change the difficulty of the quiz question
            //back end will then refresh the relevant parts of the page (progress bars, flashes, quiz status)
            jQuery.ajax({
                async: true,
                dataType: 'script',
                type: 'delete',
                url: "/quizzes/" + quizId + "/quiz_questions/" + quizQuestionId,
                success: function(msg) {
                    alert("success");
                    initializeQuizQuestions();
                    setSelectStatus(quizQuestionId, true);
                    jQuery("tr[id*='quiz_question_" + quizQuestionId + "']").removeClass('selected');
                },
                error: function(msg) {
                    alert("error:" + msg);
                }
            });
        }

    I put the alerts in success and error in the delete ajax just to see what happens, and the 'error' part of the ajax call is triggered WITH NO CALL BEING MADE TO THE BACK END (I know this from watching my back end server logs). So, it fails before it even makes the call. I can't work out why - the 'msg' I get back from the error block is blank. Any ideas, anyone? Is this a known problem? I've tested it in IE6 and IE8 and it doesn't work in either. Thanks - max

    EDIT - the solution - thanks to Nick Craver for pointing me in the right direction. Rails (and maybe other frameworks?) has a subterfuge for the unsupported PUT and DELETE requests: a POST request with the parameter "_method" (note the underscore) set to 'put' or 'delete' will be treated as if the actual request type were that string. So, in my case, I made this change - note the 'data' option:

        jQuery.ajax({
            async: true,
            data: { "_method": "delete" },
            dataType: 'script',
            type: 'post',
            url: "/quizzes/" + quizId + "/quiz_questions/" + quizQuestionId,
            success: function(msg) {
                alert("success");
                initializeQuizQuestions();
                setSelectStatus(quizQuestionId, true);
                jQuery("tr[id*='quiz_question_" + quizQuestionId + "']").removeClass('selected');
            },
            error: function(msg) {
                alert("error:" + msg);
            }
        });

    Rails will now treat this as if it were a delete request, preserving the REST system. The reason my PUT example worked is just that in this particular case IE was happy to send a PUT request, but it officially does not support them, so it's best to do this for PUT requests as well as DELETE requests.
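
    The fix generalizes; a hedged sketch of a small wrapper (the names here are illustrative, not part of jQuery):

        // route any PUT/DELETE through POST with Rails' _method parameter
        function restAjax(method, url, data, options) {
            data = data || {};
            data._method = method;   // Rails treats the POST as this verb
            return jQuery.ajax(jQuery.extend({
                type: 'post',
                url: url,
                data: data
            }, options));
        }

        // usage, mirroring the delete call above:
        // restAjax('delete', '/quizzes/' + quizId + '/quiz_questions/' + quizQuestionId,
        //          {}, { dataType: 'script' });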

    Read the article

  • extra vertical space within <li> in IE7

    - by powerboy
    The test case is below, or you can view it in jsbin: http://jsbin.com/uxagi.

        <!DOCTYPE html>
        <html>
        <head>
        <style type="text/css">
          body {margin: 20px; }
          #main {border: 1px solid red;}
          img {float: left; height: 100px; padding: 0 10px 10px 0;}
          ul {margin: 0; padding: 0; list-style-type: none;}
        </style>
        </head>
        <body>
        <div id="main">
          <ul>
            <li>
              <img src="http://upload.wikimedia.org/wikipedia/en/thumb/0/07/CranebyLinson1894.jpg/100px-CranebyLinson1894.jpg" />
              <div class="content">"The Open Boat" is a short story by American author Stephen Crane. First published in 1897, it was based on Crane's experience of having survived a shipwreck off the coast of Florida earlier that year while traveling to Cuba to work as a newspaper correspondent. Crane was stranded at sea for thirty hours when his ship, the SS Commodore, sank after hitting a sandbar. He and three other men were forced to navigate their way to shore in a small boat; one of the men, an oiler named Billie Higgins, drowned. Crane subsequently adapted his report into narrative form, and the short story "The Open Boat" was published in Scribner's Magazine. The story is told from the point of view of an anonymous correspondent, Crane's fictional doppelgänger, and the action closely resembles the author's experiences after the shipwreck. A volume titled The Open Boat and Other Tales of Adventure was published in the United States in 1898. Praised for its innovation by contemporary critics, the story is considered an exemplary work of literary Naturalism. One of the most frequently discussed works in Crane's canon, it is notable for its use of imagery, irony, symbolism, and exploration of themes including survival, solidarity, and the conflict between man and nature. H. G. Wells considered "The Open Boat" to be "beyond all question, the crown of all [Crane's] work".</div>
            </li>
          </ul>
        </div>
        </body>
        </html>

    Note that in standards-compliant browsers and IE8 there is no vertical space between the red border and the text, but there is vertical space in IE7 (I haven't tested IE6).

    Read the article

  • set equal height on multiple divs

    - by Greenie
    I need to set equal heights on a series of divs inside another wrapper div. The problem is that I don't want the same height on all of them. The page kind of has 3 columns, and the floating divs can be 1, 2 or 3 columns wide. The divs float left, so the following example will give me three rows of divs in my wrapper. How can I set equal heights on the divs that are in the same row? In my example I want nr 1 and 2 to have one equal height, and 3, 4 and 5 another equal height. I can't know beforehand how many divs there are or how wide or high they are.

    Edit: They can be, for instance, 300, 600 or 900 px wide, and the page width is 900px.

        <div id="wrapper">
          <div class="one-wide">nr1</div>
          <div class="two-wide">nr2</div>
          <div class="one-wide">nr3</div>
          <div class="one-wide">nr4</div>
          <div class="one-wide">nr5</div>
          <div class="three-wide">nr6</div>
        </div>

    I'm thinking I somehow need to figure out when the added width of the divs reaches the full page width and set equal heights on those, then do the same on the next divs, but I can't wrap my head around it. Currently I'm just using this to set the height on the children of the wrapper:

        $.fn.equalHeights = function(px) {
            $(this).each(function(){
                var currentTallest = 0;
                $(this).children().each(function(i){
                    if ($(this).height() > currentTallest) {
                        currentTallest = $(this).height();
                    }
                });
                // for ie6, set height since min-height isn't supported
                if ($.browser.msie && $.browser.version == 6.0) {
                    $(this).children().css({'height': currentTallest});
                }
                $(this).children('div').css({'min-height': currentTallest});
            });
            return this;
        };
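
    One way to build on that plugin, sketched under the assumption that each row of floats fills the wrapper's width exactly: accumulate the children's outer widths and equalize each batch as soon as it fills a row.

        $.fn.equalRowHeights = function() {
            var rowWidth = $(this).width();          // e.g. 900px
            var row = [], total = 0;
            $(this).children('div').each(function() {
                row.push(this);
                total += $(this).outerWidth(true);   // width incl. padding/border/margin
                if (total >= rowWidth) {             // row is full: equalize this batch
                    var tallest = 0;
                    $(row).each(function() {
                        tallest = Math.max(tallest, $(this).height());
                    });
                    $(row).css('min-height', tallest + 'px');
                    row = [];
                    total = 0;
                }
            });
            return this;
        };

        // usage: $('#wrapper').equalRowHeights();

    The same IE6 fallback used in the original plugin (setting height instead of min-height) could be applied to each batch.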

    Read the article

  • Ajax to read updated values from XML

    - by punit
    I am creating a file upload progress bar. I have a CGI script which copies the data, and there I increment the progress bar value by ONE after a certain number of iterations. I store the incremented value in an XML file (I also tried using a plain text file). On the other side I have ajax reading the incremented value from the XML and, depending on that, it updates the DIV element. However, although the ajax reads all the incremented values, it seems to process them only after the CGI has finished execution. That is, the progress bar starts moving once the file copying and other stuff in the CGI is completed. My code is:

    AJAX:

        function polling_start() { // GETS CALLED WHEN USER HITS FILE UPLOAD BUTTON
            intervalID = window.setInterval(send_request, 100);
        }

        window.onload = function() {
            request = initXMLHttpClient();
            request.overrideMimeType('text/xml');
            progress = document.getElementById('progress');
        }

        function initXMLHttpClient() {
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            return xmlhttp;
        }

        function send_request() {
            request.open("GET", "progress_bar.xml", true);
            request.onreadystatechange = request_handler;
            request.send();
        }

        function request_handler() {
            if (request.readyState == 4 && request.status == 200) {
                var level = request.responseXML.getElementsByTagName('PROGRESS')[0].firstChild;
                progress.style.width = progress.innerHTML = level.nodeValue + '%';
                progress.style.backgroundColor = "green";
            }
        }

    On the server side:

        char xmlDat1[] = "<DOCUMENT><PROGRESS>";
        char xmlDat2[] = "</PROGRESS></DOCUMENT>";
        fptr = fopen("progress_bar.xml", "w");
        /* ......... OTHER STUFF ............. */
        if (i == inc && j <= 100) {
            fprintf(fptr, "%s\n", "\n\n\n]");
            //fprintf(fptr, "%s\n", "");
            fprintf(fptr, "%s", xmlDat1);
            // fprintf(fptr, "%d", j);
            fprintf(fptr, "%s", xmlDat2);
            fseek(fptr, 0, SEEK_SET);
            /*fprintf(fptr, "%d", j);
            fseek(fptr, 0, SEEK_SET);*/
            i = 0;
            //sleep(1);
            j++;
        }

    (I also tried to write to a .txt file, but got the same response.) Any quick response would be appreciated.
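
    Two things worth checking, offered as hedged guesses rather than a confirmed fix: the fprintf that writes the number j looks commented out in the snippet above, so the PROGRESS element may be empty; and stdio buffers output, so without a flush the polling request may never see a fresh value until the CGI exits. Something like:

        fprintf(fptr, "%s%d%s", xmlDat1, j, xmlDat2);
        fflush(fptr);                /* push the buffered XML out so the poller can read it */
        fseek(fptr, 0, SEEK_SET);    /* rewind so the next update overwrites in place */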

    Read the article

  • HTTP caching confusion

    - by Keith
    I'm not sure whether this is a server issue, or whether I'm failing to understand how HTTP caching really works. I have an ASP.NET MVC application running on IIS7. There's a lot of static content as part of the site, including lots of CSS, Javascript and image files. For these files I want the browser to cache them for at least a day - our .css, .js, .gif and .png files rarely change. My web.config goes like this:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
          </staticContent>
        </system.webServer>

    The problem I'm getting is that the browser (tested in Chrome, IE8 and FX) doesn't seem to be caching the files as I'd expect. I've got the default settings ("check for newer pages automatically" in IE). On first visit the content downloads as expected:

        HTTP/1.1 200 OK
        Cache-Control: max-age=86400
        Content-Type: image/gif
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:29:16 GMT
        Content-Length: 918

        <content>

    I think that the Cache-Control: max-age=86400 should tell the browser not to request the page again for a day. OK, so now the page is reloaded and the browser requests the image again. This time it gets an empty response with these headers:

        HTTP/1.1 304 Not Modified
        Cache-Control: max-age=86400
        Last-Modified: Fri, 07 Aug 2009 09:55:15 GMT
        Accept-Ranges: bytes
        ETag: "3efeb2294517ca1:0"
        Server: Microsoft-IIS/7.0
        X-Powered-By: ASP.NET
        Date: Mon, 07 Jun 2010 14:30:32 GMT

    So it looks like the browser has sent the ETag back (as a unique id for the resource), and the server has come back with a 304 Not Modified - telling the browser that it can use the previously downloaded file. It seems to me that this would be correct for many caching situations, but here I don't want the extra round trip. I don't care if the image gets out of date when the file on the server changes. There are a lot of these files (even with sprite maps and the like), and many of our clients have very slow networks. Each round trip to ping for that 304 status takes about a 10th to a 5th of a second. Many clients also have IE6, which only allows 2 HTTP connections at a time. The net result is that our application appears very slow for these clients, with every page taking an extra couple of seconds to check that the static content hasn't changed. What response header am I missing that would cause the browser to aggressively cache the files? How would I set this in a .NET web.config for IIS7? Am I misunderstanding how HTTP caching works in the first place?

    Read the article

  • Browser detection Plugin?

    - by chobo2
    Hi. I have a website that I made and I am planning to redo it. The current version of the site used a jQuery callout plugin that did not fully work in IE6. This got me thinking about browser detection. At first I was just going to put the supported browsers on the home page, but then today on Digg I saw a post about some jQuery plugins and WordPress, and in the article there was a plugin for detecting IE. So I started to look around for some browser detection plugins.

    I found a few of them, but they were over the top, like sevenup. It's nice, but it makes a huge popup and tells the user to update. That one is still better than another I found where they basically forced the user to update or they could not continue on the site. Then I found a jQuery plugin that is pretty nice, since it looks at the major browsers and does detection on them too, except for Chrome, which I noticed triggers the outdated-browser warning with this plugin.

    So I started to look at the jQuery documentation to see if they had browser detection for Chrome, and this is when I saw that $.browser is deprecated and they now recommend $.support. Now I am just confused: $.support seems good, and I read many posts on this site saying you should use it. But it does not cover things like .png detection that might have been useful to me because of that plugin (although I probably won't be using that plugin anymore, since I think the author just gave up on it). Plus I don't know if this is something I am looking for at this time. I'm guessing that with $.support you detect something that is not supported and then do some alternative thing for that browser?

    What I'm really looking for is something to tell the user: "Hey look, I tested this browser with these versions of Firefox (3.5+), IE (8+), Opera (9.5+), Chrome and Safari. If you're not using these versions, you may not be seeing the site how it was intended." Of course I would try to have something shorter than that, but that's the gist. I am also assuming that the site will work in future versions of these browsers.

    I still check that my site works and is half decent in IE6, but I won't spend hours fixing stuff that might be off in older browsers like IE6. I won't test my site in older versions of other browsers like Firefox, since I would think users have the sense to update, so there's no point testing Firefox 2.0 or whatever. So is there a plugin that fits this description? Or can $.support do what I want? Thanks
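
    For the "hey, update your browser" banner use case, a full plugin may be overkill; a minimal hand-rolled sketch (assuming a hidden #browser-warning element exists on the page, and accepting that userAgent parsing is fragile):

        // show a note to visibly old IE versions; extend with similar checks as needed
        var match = navigator.userAgent.match(/MSIE (\d+)/);
        if (match && parseInt(match[1], 10) < 8) {
            $('#browser-warning').show();
        }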

    Read the article

  • IE CSS bug: table border showing div with visibility: hidden, position: absolute

    - by Alessandro Vernet
    The issue

    I have a <div> on a page which is initially hidden with visibility: hidden; position: absolute. The issue is that if a <div> hidden this way contains a table which uses border-collapse: collapse and has a border set on its cells, that border still shows "through" the hidden <div> on IE. Try this for yourself by running the code below on IE6 or IE7. You should get a white page, but instead you will see the table borders.

    Possible workaround

    Since this is happening on IE and not on other browsers, I assume that this is an IE bug. One workaround is to add the following code, which will override the border:

        .hide table tr td {
            border: none;
        }

    I am wondering:

      • Is this a known IE bug?
      • Is there a more elegant solution/workaround?

    The code

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
        <head>
          <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
          <style type="text/css">
            /* Style for tables */
            .table tr td { border: 1px solid gray; }
            .table { border-collapse: collapse; }
            /* Class used to hide a section */
            .hide { visibility: hidden; position: absolute; }
          </style>
        </head>
        <body>
          <div class="hide">
            <table class="table">
              <tr>
                <td>Gaga</td>
              </tr>
            </table>
          </div>
        </body>
        </html>
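
    One alternative worth noting, hedged on nothing else needing the element's layout box: since the container is already position: absolute (so it takes no space either way), swapping visibility: hidden for display: none removes the element, borders and all, from rendering in IE:

        /* instead of visibility: hidden; position: absolute */
        .hide { display: none; }

    The trade-off is that display: none elements are not measurable (no offsetWidth/offsetHeight), which matters if scripts need the hidden section's dimensions before showing it.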

    Read the article

  • Internet Explorer 8 + Deflate

    - by Andreas Bonini
    I have a very weird problem. I really do hope someone has an answer, because I wouldn't know where else to ask. I am writing a CGI application in C++ which is executed by Apache and outputs HTML code. I am compressing the HTML output myself - from within my C++ application - since my web host doesn't support mod_deflate for some reason. I tested this with Firefox 2, Firefox 3, Opera 9, Opera 10, Google Chrome, Safari, IE6, IE7, IE8, even wget. It works with ANYTHING except IE8. IE8 just says "Internet Explorer cannot display the webpage", with no information whatsoever. I know it's because of the compression only because it works if I disable it. Do you know what I'm doing wrong? I use zlib to compress it, and the exact code is:

        /* Compress it */
        int compressed_output_size = content.length() + (content.length() * 0.2) + 16;
        char *compressed_output = (char *)Alloc(compressed_output_size);
        int compressed_output_length;
        Compress(compressed_output, compressed_output_size, (void *)content.c_str(),
                 content.length(), &compressed_output_length);

        /* Send the compressed header */
        cout << "Content-Encoding: deflate\r\n";
        cout << boost::format("Content-Length: %d\r\n") % compressed_output_length;
        cgiHeaderContentType("text/html");
        cout.write(compressed_output, compressed_output_length);

        static void Compress(void *to, size_t to_size, void *from, size_t from_size,
                             int *final_size)
        {
            int ret;
            z_stream stream;
            stream.zalloc = Z_NULL;
            stream.zfree = Z_NULL;
            stream.opaque = Z_NULL;

            if ((ret = deflateInit(&stream, CompressionSpeed)) != Z_OK)
                COMPRESSION_ERROR("deflateInit() failed: %d", ret);

            stream.next_out = (Bytef *)to;
            stream.avail_out = (uInt)to_size;
            stream.next_in = (Bytef *)from;
            stream.avail_in = (uInt)from_size;

            if ((ret = deflate(&stream, Z_NO_FLUSH)) != Z_OK)
                COMPRESSION_ERROR("deflate() failed: %d", ret);

            if (stream.avail_in != 0)
                COMPRESSION_ERROR("stream.avail_in is not 0 (it's %d)", stream.avail_in);

            if ((ret = deflate(&stream, Z_FINISH)) != Z_STREAM_END)
                COMPRESSION_ERROR("deflate() failed: %d", ret);

            if ((ret = deflateEnd(&stream)) != Z_OK)
                COMPRESSION_ERROR("deflateEnd() failed: %d", ret);

            if (final_size)
                *final_size = stream.total_out;

            return;
        }
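
    One hedged guess worth ruling out: deflateInit() produces a zlib-wrapped stream (RFC 1950), while IE has historically expected a raw DEFLATE stream (RFC 1951) for "Content-Encoding: deflate"; other browsers tolerate both. deflateInit2() with negative windowBits emits the raw stream (windowBits of 15 + 16 would emit gzip instead, paired with "Content-Encoding: gzip"):

        /* raw deflate: no zlib header or checksum; -15 = raw with a 32K window */
        if ((ret = deflateInit2(&stream, CompressionSpeed, Z_DEFLATED,
                                -15, 8, Z_DEFAULT_STRATEGY)) != Z_OK)
            COMPRESSION_ERROR("deflateInit2() failed: %d", ret);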

    Read the article

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail:

        The most common memory leaks for web applications involve circular references
        between the JavaScript script engine and the browsers' C++ objects' implementing
        the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM
        infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure).

    It lists two examples of circular reference patterns:

      • DOM element → event handler → closure scope → DOM
      • DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it:

      • The DOM element references the event handler.
      • The event handler references the DOM (either directly or indirectly).

    In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

      • Which browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.
      • Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak? (A common avoidance pattern is sketched after this list.)

            // Create an expando that references its own element.
            var elem = document.getElementById('foo');
            elem.myself = elem;

            // Create an event handler that references its own element.
            var elem = document.getElementById('foo');
            elem.onclick = function() {
                elem.style.display = 'none';
            };

      • If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
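
    A common pattern for breaking the closure cycle, sketched for illustration (it addresses the handler case, not expandos):

        var elem = document.getElementById('foo');
        elem.onclick = function() {
            this.style.display = 'none';   // `this` is the element; no closure over elem needed
        };
        elem = null;                       // drop the script-side reference once wiring is done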

    Read the article

  • CSS Z-Index with Gradient Background

    - by Jona
    I'm making a small webpage where I would like the top banner, with some text, to remain on top, as such:

    HTML:

        <div id="topBanner">
          <h1>Some Text</h1>
        </div>

    CSS:

        #topBanner {
          position: fixed;
          background-color: #CCCCCC;
          width: 100%;
          height: 200px;
          top: 0;
          left: 0;
          z-index: 900;
          background: -moz-linear-gradient(top, rgba(204,204,204,0.65) 0%, rgba(204,204,204,0.44) 32%, rgba(204,204,204,0.12) 82%, rgba(204,204,204,0) 100%); /* FF3.6+ */
          background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(204,204,204,0.65)), color-stop(32%,rgba(204,204,204,0.44)), color-stop(82%,rgba(204,204,204,0.12)), color-stop(100%,rgba(204,204,204,0))); /* Chrome,Safari4+ */
          background: -webkit-linear-gradient(top, rgba(204,204,204,0.65) 0%,rgba(204,204,204,0.44) 32%,rgba(204,204,204,0.12) 82%,rgba(204,204,204,0) 100%); /* Chrome10+,Safari5.1+ */
          background: -o-linear-gradient(top, rgba(204,204,204,0.65) 0%,rgba(204,204,204,0.44) 32%,rgba(204,204,204,0.12) 82%,rgba(204,204,204,0) 100%); /* Opera 11.10+ */
          background: -ms-linear-gradient(top, rgba(204,204,204,0.65) 0%,rgba(204,204,204,0.44) 32%,rgba(204,204,204,0.12) 82%,rgba(204,204,204,0) 100%); /* IE10+ */
          background: linear-gradient(to bottom, rgba(204,204,204,0.65) 0%,rgba(204,204,204,0.44) 32%,rgba(204,204,204,0.12) 82%,rgba(204,204,204,0) 100%); /* W3C */
          filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#a6cccccc', endColorstr='#00cccccc',GradientType=0 ); /* IE6-9 */
        }

        /* WebPage Header */
        h1 {
          font-size: 3em;
          color: blue;
          text-shadow: #CCCCCC 2px 2px 2px, #000 0 -1px 2px;
          position: absolute;
          width: 570px;
          left: 50%;
          right: 50%;
          line-height: 20px;
          margin-left: -285px;
          z-index: 999;
        }

    The z-index works fine, except that because I'm using a gradient, any time I scroll down the elements behind the banner are still visible, albeit somewhat transparent. Is there any way to make them totally invisible? i.e., what I'm trying to do is make it as though the banner is a solid color, even though it's a gradient. Thanks in advance for any help!

    Read the article

  • Why does the parent page get refreshed when I click the link to open a thickbox-styled form?

    - by user333205
    Hi, all: I'm using Thickbox 3.1 to show a signup form. The form content comes from a jQuery ajax post, and the jQuery lib is version 1.4.2. I placed a "signup" link in a div area which is part of my other, larger pages, and the whole content of that div area is ajax-posted from my server. To make Thickbox work in this arrangement, I have modified the Thickbox code a little, like this:

        //add thickbox to href & area elements that have a class of .thickbox
        function tb_init(domChunk){
          $(domChunk).live('click', function(){
            var t = this.title || this.name || null;
            var a = this.href || this.alt;
            var g = this.rel || false;
            tb_show(t,a,g);
            this.blur();
            return false;
          });
        }

    This modification is the only change against the original version. Because the "signup" link is placed in ajaxed content, I use live instead of binding the click event directly. When I tested on my PC, the Thickbox worked well: I could see the signup form quickly, without the content of the parent page (here, the other larger pages) getting refreshed. But after transferring my site files to the VHost, when I click the "signup" link, the signup form is presented very slowly. The larger pages get refreshed visibly, because the browser (IE6) keeps reloading images from the server incessantly. These images are set as background images in CSS files. I think that's because of the slow network connection. But why does the parent page get refreshed? And why does the browser reload those images again? Haven't those images already been stored on the local computer's disk? Is there a way to stop that reloading? The signup form sometimes can't be displayed at all because of the slow network connection.

    To verify the question, you can access http://www.juliantec.info/track-the-source.html and click the second link in the left grey area; that is the "signup" link mentioned above. Thanks!
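
    If the repeated fetches are IE6's CSS background-image cache bug (a hedged guess, though the symptom matches), the classic workaround is to enable the cache from script before the ajax content arrives:

        try {
            // IE6 otherwise re-requests CSS background images on each repaint
            document.execCommand("BackgroundImageCache", false, true);
        } catch (e) {
            // other browsers don't implement this command; ignore
        }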

    Read the article

  • Using JavaScript to parse an XML file

    - by Chris Clouten
    I am new to Stack Overflow and coding in general. I am trying to take an XML file and render it in the browser using JavaScript. I have looked around at some sample code for how to do this and came up with the following:

        <!DOCTYPE html>
        <html>
        <body>
        <script>
        if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
            xmlhttp = new XMLHttpRequest();
        } else { // code for IE6, IE5
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        }
        xmlhttp.open("GET", "social.xml", false);
        xmlhttp.send();
        xmlDoc = xmlhttp.responseXML;

        document.write("<table border='1'>");
        var x = xmlDoc.getElementsByTagName("CD");
        for (i = 0; i < x.length; i++) {
            document.write("<tr><td>");
            document.write(x[i].getElementsByTagName("c_id")[0].childNodes[0].nodeValue);
            document.write("</td><td>");
            document.write(x[i].getElementsByTagName("facebook_id")[0].childNodes[0].nodeValue);
            document.write("</td></tr>");
        }
        document.write("</table>");
        </script>
        </body>
        </html>

    Anyway, when I run this on my local server, none of the data that I am trying to display in the table appears. My .html file and .xml file are in the same folder, so I believe I have the correct file path. I could just be making a rookie mistake here, but I can't for the life of me figure out why a table listing the c_id and facebook_id values is not being created. I looked around for answers and haven't been able to find any. Any help would be greatly appreciated. Thanks!
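
    One thing worth checking first, offered as a hedged guess: if the server serves social.xml without an XML content type, responseXML can come back null, and every getElementsByTagName call then silently finds nothing. Forcing the MIME type (supported by the native XMLHttpRequest, not the old ActiveX fallback) and testing for null narrows it down:

        if (xmlhttp.overrideMimeType) {
            xmlhttp.overrideMimeType('text/xml');   // parse the body as XML regardless of header
        }
        xmlhttp.open("GET", "social.xml", false);
        xmlhttp.send();
        if (!xmlhttp.responseXML) {
            alert("social.xml did not parse as XML - check the Content-Type header and the markup");
        }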

    Read the article

  • MooTools request fails

    - by acoder
    Hi everyone, I am trying to achieve the following task using MooTools.

    Description: I have three buttons - two buttons outside myDiv and one button inside myDiv. A click on any of these buttons initiates an AJAX request (passing the button variable to "button.php") and updates the myDiv content based on the response text. So, after the update, myDiv shows the Button3 link plus a message showing which button has been clicked.

    The problem: Everything seems to work fine, but after several clicks it happens that myDiv shows the loader.gif image and stops. After this, if I wait a few moments, the browser sometimes stops working (gets blocked). I noticed this problem with IE6. Does anyone know what this problem means and how it can be avoided?

    index.html:

        <html>
        <head>
        <script type="text/javascript" src="mootools/mootools-1.2.4-core-nc.js"></script>
        <script type="text/javascript" src="mootools/mootools-1.2.4.4-more.js"></script>
        <script type="text/javascript">
        window.addEvent('domready', function() {
            $("myPage").addEvent("click:relay(a)", function(e) {
                e.stop();
                var myRequest = new Request({
                    method: 'post',
                    url: 'button.php',
                    data: {
                        button: this.get('id'),
                        test: 'test'
                    },
                    onRequest: function() {
                        $('myDiv').innerHTML = '<img src="images/loader.gif" />';
                    },
                    onComplete: function(response) {
                        $('myDiv').innerHTML = response;
                    }
                });
                myRequest.send();
            });
        });
        </script>
        </head>
        <body>
        <div id="myPage">
          <a href="#" id="button1">Button1</a>
          <a href="#" id="button2">Button2</a>
          <div id="myDiv">
            <a href="#" id="button3">Button3</a>
          </div>
        </div>
        </body>
        </html>

    button.php:

        <a href="#" id="button3">Button3</a>
        <br><br>
        <?php
        echo 'You clicked ['.$_REQUEST['button'].']';
        ?>
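
    One hedged suggestion: the handler builds a brand-new Request on every click, so rapid clicks can pile up concurrent connections (IE6 allows only two per host). Reusing a single instance with MooTools' link option makes a fresh send() cancel the one still in flight; a sketch:

        window.addEvent('domready', function() {
            var myRequest = new Request({
                method: 'post',
                url: 'button.php',
                link: 'cancel',   // a new send() aborts any request still running
                onRequest: function() {
                    $('myDiv').set('html', '<img src="images/loader.gif" />');
                },
                onComplete: function(response) {
                    $('myDiv').set('html', response);
                }
            });

            $("myPage").addEvent("click:relay(a)", function(e) {
                e.stop();
                myRequest.send({ data: { button: this.get('id'), test: 'test' } });
            });
        });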

    Read the article

  • Providing updating suggestions list with javascript, php and ajax

    - by user1104854
    I'm trying to modify this example on making a live-updating list to integrate it with my API. So, instead of using GET on the page with the form, I'd like to send it to that page via a function call. Here's my form:

        // message.php

        //function to display the hint sent from gethint.php
        function message_hint($hint){
            echo $hint;
        }

        //displays the form for sending messages
        function send_message_form($to_user,$title,$message){
            include 'gethint.php';
            ?>
            <table>
            <form name="send_message" method="post">
            <td>Send A Message</td>
            <tr><td>To:</td><td><input type="text" size="50" name="to_user" id="to_user"
                value="<? echo $to_user; ?>" onkeyup="showHint(this.value)"></td></tr>
            <tr><td>Title:</td><td><input type="text" size="50" name="message_title"></td></tr>
            <tr><td>Message:</td><td><textarea rows="4" cols="50" name="message_details"></textarea></td></tr>
            <tr><td><input type="submit" name="submit_message"></td></tr>
            </table>
            </form>
            <?
        }

    Here's the head of message.php:

        <head>
        <script>
        function showHint(str){
            var to_user = document.getElementById("to_user").value //to_user is the id of the textbox
            if (str.length==0){
                to_user.innerHTML="";
                return;
            }
            if (window.XMLHttpRequest){ // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp=new XMLHttpRequest();
            }else{ // code for IE6, IE5
                xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange=function(){
                if (xmlhttp.readyState==4 && xmlhttp.status==200){
                    alert(to_user) //properly displays the name via alert box
                    to_user.innerHTML=xmlhttp.responseText;
                }
            }
            xmlhttp.open("GET","gethint.php?q="+to_user,true);
            xmlhttp.send();
        }
        </script>
        </head>

    The page gethint.php is exactly the same, aside from this at the bottom:

        //echo $response //this was the original output
        $message = new messages;
        $message->message_hint($response);
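
    A hedged observation on the script above: to_user holds the textbox's value (a string), so to_user.innerHTML does nothing useful, and the request sends that stale page-load value instead of str. A corrected sketch, assuming a separate element (given the hypothetical id hint_box here) holds the suggestions:

        function showHint(str) {
            var hintBox = document.getElementById("hint_box");    // hypothetical suggestions target
            if (str.length == 0) {
                hintBox.innerHTML = "";
                return;
            }
            var xmlhttp = window.XMLHttpRequest
                ? new XMLHttpRequest()                        // IE7+, Firefox, Chrome, Opera, Safari
                : new ActiveXObject("Microsoft.XMLHTTP");     // IE6, IE5
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    hintBox.innerHTML = xmlhttp.responseText;
                }
            };
            xmlhttp.open("GET", "gethint.php?q=" + encodeURIComponent(str), true);
            xmlhttp.send();
        }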

    Read the article

  • sub menu border calls onmouseout event

    - by insanepaul
    I've created a simple menu and submenu with tags (not allowed to use ul elements). To access the submenu, the user hovers their mouse over the menu item. I use the onmouseover and onmouseout events to either show or hide the submenu depending on which item is selected. A pipe (|) is used to separate each submenu item, and this is what is causing me problems. When a user hovers their mouse over the pipe character, the subMenu div fires the onmouseout event, which is not what I want. So I added padding around the pipe character and a negative margin so that there were no gaps between the pipe character and the other elements. This worked for all browsers including IE8. But in IE7 (I haven't tested IE6 yet), the submenu div fires the onmouseout event when I touch the top bit of either the left or right border of the pipe character's span element.

        <div id="subMenu" onmouseout="hideSubMenu()">
          <div id="opinionSubMenu" onmouseover="showOpinionSubMenu()">
            <a id="Blogs" href="HTMLNew.htm">BLOGS</a>
            <span class="SubMenuDelimiter">|</span>
            <a id="Comments" href="HTMLNew.htm">COMMENTS</a>
            <span class="SubMenuDelimiter">|</span>
            <a id="Views" href="HTMLNew.htm">VIEWS</a>
          </div>
          <div id="learningSubMenu" onmouseover="showLearningSubMenu()">
            <a id="Articles" href="HTMLNew.htm">ARTICLES</a>
            <span class="SubMenuDelimiter">|</span>
            <a id="CoursesCases" href="HTMLNew.htm">COURSES & CASES</a>
            <span class="SubMenuDelimiter">|</span>
            <a id="PracticeImpact" href="HTMLNew.htm">PRACTICE IMPACT</a>
          </div>
        </div>

    This is my CSS:

        #subMenu {
          padding: 10px 0px;
          background-color: #F58F2D;
          font-weight: normal;
          text-decoration: none;
          font-family: Lucida Sans Unicode;
          font-size: 14px;
          float: left;
          width: 100%;
          display: none;
        }

        #Blogs, #Comments, #Views, #Articles {
          padding: 10px 5px;
          background: none repeat scroll 0 0 transparent;
          color: #000000;
          font-weight: normal;
          text-decoration: none;
          border: solid 1px black;
        }

        #Blogs:hover, #Comments:hover, #Views:hover, #Articles:hover {
          color: #ffffff;
          text-decoration: none;
        }

        .SubMenuDelimiter {
          padding: 10px 5px;
          margin: 10px -5px;
        }
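
    An alternative that sidesteps the padding/margin juggling entirely, sketched on the assumption that hideSubMenu() is the existing function and the inline onmouseout attribute is removed: only hide when the pointer genuinely leaves #subMenu, by walking up from the element the mouse moved to (mouseleave semantics done by hand):

        document.getElementById('subMenu').onmouseout = function (e) {
            e = e || window.event;
            var target = e.relatedTarget || e.toElement;   // where the pointer went
            while (target && target !== this) {
                target = target.parentNode;                // still inside the menu?
            }
            if (target !== this) {
                hideSubMenu();                             // the pointer really left
            }
        };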

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors).

    First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data.

    The system I currently have is something like this:

      • All javascript files are compressed and loaded at the bottom of the page.
      • All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time.
      • Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required.

    Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple <script> tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.

    So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article

  • IE7 and 8 Hang Randomly on CSS Images

    - by BJ Safdie
    We have an ASP.NET 3.5 application that has been in production for over a year. Our last release was a couple of months ago. We use CSS for styling and application of background images to divs and such. The server is Windows 2003 with IIS.

    Suddenly, this week, we have had reports from some users that the page seems to hang while loading. The status bar shows the name of a background image used in the page's main area (assigned in CSS). At our office, some of us could recreate the problem, while others could not. IE6 and Firefox do not seem to be affected, only IE7/8. Running Fiddler on an affected machine and trying to see what was happening with the requests seemed to make the problem go away (while running through Fiddler it disappeared, but it returned when not). Hitting Refresh on a hung load often made the page load just fine.

    I checked the background image, and even replaced it with an archived copy. No joy. We re-deployed the app from our production source. No joy. We restarted IIS and eventually rebooted the whole server. There are no unusual entries in the event logs, the app logs or the IIS logs. Finally, I removed the image entirely and re-styled the page not to use a background image. That solved the problem, at least for now. However, we have reports of other images "hanging". The images are PNGs, but I have heard some rumors that sometimes a GIF hangs, though I have no screenshot to confirm.

    This just started happening "out of the blue". There have been no releases or updates applied to the server recently. We even checked updates on clients to see if a recent Windows Update might have caused this on the client, but there was nothing updated within the last couple of weeks. If you have any information about this problem, I would love to hear from you. I would also greatly appreciate any recommendations on additional diagnostics we can try.

    Read the article

  • Can't parse XML from AJAX response.

    - by Pavel
    Hi everyone. I'm having some problems with parsing the XML response from my ajax script. The XML looks like this:

        <IMAGE>
          <a href="address">
            <img width="300" height="300" src="image.png" class="image" alt="" title="LINKING"/>
          </a>
        </IMAGE>
        <LINK>
          www.address.com
        </LINK>
        <TITLE>
          This <i>is title</i>
        </TITLE>
        <EXCERPT>
          <p> And some excerpt </p>
        </EXCERPT>

    The code for the JS looks like this:

        function loadTab(id) {
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    xmlDoc = xmlhttp.responseXML;
                    var title = "";
                    var image = "";
                    x = xmlDoc.getElementsByTagName("TITLE");
                    for (i = 0; i < 1; i++) {
                        title = title + x[i].childNodes[0].nodeValue;
                    }
                    document.getElementById("ntt").innerHTML = title;
                    x1 = xmlDoc.getElementsByTagName("IMAGE");
                    for (j = 0; j < 1; j++) {
                        image = image + x1[j].childNodes[0].nodeValue;
                    }
                    document.getElementById("nttI").innerHTML = image;
                }
            }
            var url = 'http://www.factmag.com/staging/page/?id=' + id;
            xmlhttp.open("GET", url, true);
            xmlhttp.send();
        }

    When I'm parsing it, it pulls out the title but not the IMAGE tag contents. What am I doing wrong? Can someone please tell me? Thanks in advance!
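
    A hedged diagnosis: TITLE's first child is a text node, so nodeValue works there (at least for the leading text), but IMAGE's first child is the <a> element (or whitespace), whose nodeValue is null. That would explain why the title comes through and the image doesn't. Serializing the element's children instead yields the markup; a sketch:

        // return the serialized markup inside an XML node
        function innerXml(node) {
            var out = "", i, child;
            for (i = 0; i < node.childNodes.length; i++) {
                child = node.childNodes[i];
                out += (typeof child.xml !== "undefined")
                    ? child.xml                                       // IE (MSXML)
                    : new XMLSerializer().serializeToString(child);   // Firefox, Chrome, etc.
            }
            return out;
        }

        // then, instead of x1[j].childNodes[0].nodeValue:
        image = image + innerXml(x1[j]);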

    Read the article
