Search Results

Search found 13705 results on 549 pages for 'browser'.


  • VLC ActiveX plugin not playing video after updating to IE9

    - by Prashant Mehta
    I am using the VLC ActiveX plugin in IE9 to play live streaming video. It works perfectly in IE8, but after updating the browser from IE8 to IE9 it no longer plays the video file or the live stream. Here is my code:
      <object type="application/x-vlc-plugin" id="vlc" width="517" height="388" classid="clsid:9BE31822-FDAD-461B-AD51-BE1D1C159921">
        <param name="MRL" id="mrlVideo" value="" />
        <param name="volume" value="50" />
        <param name="autoplay" value="True" />
        <param name="loop" value="false" />
        <param name="fullscreen" value="false" />
        <param name="wmode" value="transparent" />
        <param name="toolbar" value="true" />
        <param name="windowless" value="true" />
      </object>
    And in JavaScript I am using this:
      var vlc = document.getElementById("vlc");
      var options = new Array(":rtsp-tcp");
      var urlVideofile = "hppt://IP:portnumber/";
      var id = vlc.playlist.add(urlVideofile, null, options);
      vlc.playlist.playItem(id);
    There is an attached image showing exactly what error appears. Any help is greatly appreciated. Thanks.

    Read the article

  • Images in IFRAME no longer showing after I log out of Flickr?

    - by contact-kram
    I have an iframe window which displays a user's Flickr images. I use the flickr.photos.search API to download the user's images from Flickr. This works great when the user is logged into Flickr. But when I explicitly log the user off from Flickr and the Yahoo network and then attempt to download the Flickr images, I get redirected to www.yahoo.com in a full browser window (not in my iframe). If I remember correctly, I did not have this issue when I was not using iframes and I was being redirected to the Yahoo login screen. Any suggestions? To elaborate, this URI - http://www.flickr.com/services/api/auth.howto.web.html - lists the step below: Create an auth handler. When users follow your login URL, they are directed to a page on flickr.com which asks them if they want to authorize your application. This page displays your application title and description along with the logo, if you uploaded one. When the user accepts the request, they are sent back to the Callback URL you defined in step 2. The URL will have a frob parameter added to it. For example, if your Callback URL was http://test.com/auth.php then the user might be redirected to http://test.com/auth.php?frob=185-837403740 (the frob value in this example is '185-837403740'). This does not happen when I am in my iframe window, but it does happen in my full browser window.

    Read the article

  • TortoiseSVN Error: "OPTIONS of 'https://...' could not connect to server (...)"

    - by Zack Peterson
    I'm trying to set up a new computer to synchronize with my SVN repository that's hosted with cvsdude.com. I get this error. Here's what I did (these steps have worked in the past):
    1. Downloaded and installed TortoiseSVN
    2. Created a new folder C:\aspwebsite
    3. Right-clicked, chose SVN Checkout...
    4. Entered the following information, clicked OK:
       URL of repository: https://<reponame>-svn.cvsdude.com/aspwebsite
       Checkout directory: C:\aspwebsite
       Checkout depth: Fully recursive
       Omit externals: Unchecked
       Revision: HEAD revision
    I got this TortoiseSVN error:
      OPTIONS of 'https://<reponame>-svn.cvsdude.com/aspwebsite': could not connect to server (https://<reponame>-svn.cvsdude.com)
    Rather than producing the error, TortoiseSVN should have asked for my username and password and then downloaded about 90MB. Why can't I check out from my Subversion repository?
    Kent Fredric wrote: "Either their security certificate has expired, or their hosting is broken/down. Contact CVSDude and ask them what's up. It could also be a timeout, because for me their site is exhaustively slow. It errors after only a couple of seconds." I don't think it's a timeout.
    Matt wrote: "Try visiting https://[redacted]-svn.cvsdude.com/aspwebsite and see what happens. If you can visit it in your browser, you ought to be able to get the files in your SVN client and we can work from there. If it fails, then there's your answer." I can access the site in a web browser.

    Read the article

  • Identical files from different servers. Why might IE 8 display them differently?

    - by jasongetsdown
    I'm working on a site that will go on my company's intranet. I developed it locally on my computer, checking it in different browsers and on colleagues' computers, and when it was done I handed it off to IT. They put identical copies on a staging server and on the production server. This is a site built only with HTML, JavaScript, and CSS - no server-side scripting. It also uses a DWF viewer plugin from Autodesk. It is a single standalone page (not part of a CMS) that allows users to load drawings into the viewer and then click to see info from a database of space info saved in a series of JS arrays (the space DB software spits out a JS file with all the info listed in array literals, creating a crap ton of global variables - ugh, but I digress). When I followed their links (using IE 8) the version on the staging server looked as expected, but the layout is hosed on the version from the production server. Specifically, it seems like a div that is supposed to flow to the right of a div that is float: left is displaying below the floated div at full width, as though it were clear: left (which it is not). It also has the wrong height. I downloaded the files from each and they are identical to my local version. Frustrated, I cleared my browser's cache, restarted my computer, and checked it on a colleague's computer who also has IE 8. All the same issue. Staging server good, production server bad. Finally I uninstalled IE 8 and looked at it in IE 6. Both versions looked fine. So, to recap: two different servers, no server-side scripting, identical files. One browser agrees they are identical, the other does not. What could cause this?
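
    One possibility worth checking (an assumption, since the question doesn't mention it): IE8 turns on Compatibility View by default for sites in the Local intranet zone, so if the production host falls into that zone while staging does not, production renders with the IE7 engine and floats can misbehave exactly as described, even with identical files. A small sketch of a way to pin the rendering mode from the page itself, placed near the top of <head>:
      <!-- Sketch: ask IE8 to use its own standards engine regardless of zone defaults -->
      <meta http-equiv="X-UA-Compatible" content="IE=8" />
    Comparing the Browser Mode shown in the IE8 developer tools (F12) on the two servers would confirm or rule this out.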

    Read the article

  • JavaScript to check if there is a Flash player installed and redirect to the needed page.

    - by themajiks
    Hi. I have this script. If it determines that there is a Flash player installed in the browser, it redirects the browser to a Flash website; if not, it opens a non-Flash website. The code is here:
      <SCRIPT LANGUAGE="JavaScript">
      <!--
      if ((navigator.appName == "Microsoft Internet Explorer" && navigator.appVersion.indexOf("Mac") == -1 && navigator.appVersion.indexOf("3.1") == -1) || (navigator.plugins && navigator.plugins["Shockwave Flash"]) || navigator.plugins["Shockwave Flash 2.0"]) {
        window.location = 'flash/index.html';
      } else {
        window.location = 'index.html';
      }
      -->
      </SCRIPT>
    What I want is to embed this code in the non-Flash index page. It should just check: if there is no Flash, simply stay with the current index file that has already been opened; if there is a Flash player, load the index file from within the Flash website. Currently, when index.html (non-Flash) is opened, it goes into a loop and keeps on checking for the Flash player. Can I modify the window.location = 'index.html'; statement not to load any file there, and just continue with the file already opened?
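
    A minimal sketch of one way to avoid the loop, assuming the script lives only in the non-Flash index.html and reusing the flash/index.html path from the question: redirect only when Flash is detected, and otherwise do nothing, so the page that is already open simply stays put.
      <script type="text/javascript">
        // Sketch: detect Flash and redirect only in the positive case.
        var hasFlash = false;
        try {
          // IE exposes Flash as an ActiveX control
          hasFlash = !!new ActiveXObject("ShockwaveFlash.ShockwaveFlash");
        } catch (e) {
          // Other browsers expose it as a plugin
          hasFlash = !!(navigator.plugins && navigator.plugins["Shockwave Flash"]);
        }
        if (hasFlash) {
          window.location = "flash/index.html"; // only leave the page when Flash exists
        }
        // No else branch: without Flash, the current non-Flash page is left alone.
      </script>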

    Read the article

  • How to make smooth transition from a WebBrowser control to an Image in Silverlight 4?

    - by Trex
    Hi, I have the following XAML on my page:
      <Grid x:Name="LayoutRoot">
        <Viewbox Stretch="Uniform">
          <Image x:Name="myImage" />
        </Viewbox>
        <WebBrowser x:Name="myBrowser" />
      </Grid>
    and then in the code-behind I'm switching the visibility between the image and the browser content:
      myBrowser.Visibility = Visibility.Collapsed;
      myImage.Source = new BitmapImage(new Uri(p));
      myImage.Visibility = Visibility.Visible;
    and
      myImage.Visibility = Visibility.Collapsed;
      myBrowser.Source = new Uri(myPath + p, UriKind.Absolute);
      myBrowser.Visibility = Visibility.Visible;
    This works fine, but what the client now wants is a smooth transition between when the image is shown and when the browser is shown. I tried several approaches but always ran into a dead end. Do you have any ideas? I tried setting two states using the VSM and then displaying a white rectangle on top as an overlay before the swap takes place, but that didn't work (I guess that's because nothing can be placed above the WebBrowser???). I tried setting the Visibility of the image control and the WebBrowser control using the VSM, but that didn't work either. I really don't know what else to try to solve this simple task. Any help is greatly appreciated. Jan

    Read the article

  • My Python program always brings down my internet connection after several hours of running; how do I debug and fix this problem?

    - by Shane
    I'm writing a Python script checking/monitoring several servers/websites (response time and similar stuff). It's a GUI program and I use a separate thread to check each server/website. The basic structure of each thread is an infinite while loop that requests the site at a random interval (15 to 30 seconds); once there are changes in the website/server, each thread starts a new thread to do a thorough check (requesting more pages and similar stuff). The problem is that my internet connection always gets blocked/jammed/messed up after several hours of running this script. The situation is: from my script's side I get "urlopen error timed out" each time it requests a page, and from my Firefox browser's side I cannot open any site. But the weird thing is, the moment I close my script my internet connection comes back on immediately, which means I can now surf any site through my browser. So it must be the script causing all the problems. I've checked the program carefully and even use del to delete any connection once it's used, but I still get the same problem. I only use urllib2, urllib, and mechanize to do network requests. Does anybody know why such a thing happens? How do I debug this problem? Is there a tool or something to check my network status once such a situation occurs? It's really been bugging me for a while... By the way, I'm behind a VPN; does that have something to do with this problem? Although I don't think so, because my network always comes back once the script is closed, and the VPN connection never drops (as far as I can tell) during the whole process.
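
    A small hardening/diagnostic sketch, assuming the checks go through urllib2 (the helper name and URL handling are made up for illustration): close every response explicitly and set a global socket timeout so a site that stops answering cannot leave connections piling up. Watching the number of ESTABLISHED sockets with netstat while the script runs is one way to confirm whether connections are leaking before the link goes down.
      import socket
      import urllib2

      socket.setdefaulttimeout(20)  # stop a dead site from hanging a checker thread forever

      def fetch(url):
          """Hypothetical helper: fetch a page and always release the socket."""
          response = None
          try:
              response = urllib2.urlopen(url)
              return response.read()
          finally:
              if response is not None:
                  response.close()  # explicitly free the underlying connection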

    Read the article

  • jQuery thumbnail popup on link mouseover

    - by Charles Marsh
    Hello All, I have this piece of code which simply swaps an image when a link is clicked; I also show some hidden HTML content which is positioned over the image.
      <script>
      if ($.browser.msie && parseInt($.browser.version) <= 6) {
        $('#contentone, #contenttwo, #contentthree, #contentfour, .linksBackground').hide();
      }
      $('#contentone, #contenttwo, #contentthree, #contentfour').hide();
      $("#linkone").click(function() {
        $('#contenttwo, #contentthree, #contentfour').hide("1500");
        $("#imageone").attr("src", "Resources/Images/TEMP-homeOne.jpg");
        $("#contentone").show("1500");
      });
      $("#linktwo").click(function() {
        $('#contentone, #contentthree, #contentfour').hide("1500");
        $("#imageone").attr("src", "Resources/Images/TEMP-homeTwo.jpg");
        $("#contenttwo").show("1500");
      });
      $("#linkthree").click(function() {
        $('#contentone, #contenttwo, #contentfour').hide("1500");
        $("#imageone").attr("src", "Resources/Images/TEMP-homeThree.jpg");
        $("#contentthree").show("1500");
      });
      $("#linkfour").click(function() {
        $('#contentone, #contenttwo, #contentthree').hide("1500");
        $("#imageone").attr("src", "Resources/Images/TEMP-homeFour.jpg");
        $("#contentfour").show("1500");
      });
      </script>
    Does anyone know how I can further modify this to show a small thumbnail image when the user rolls over the link? I just need a hint because I'm not sure where to turn... can I achieve it with mouseover?
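
    A minimal sketch of the mouseover idea (the #thumbPreview element and the convention of keeping each thumbnail path in the link's rel attribute are assumptions, not part of the original markup): show a small absolutely positioned image while the pointer is over a link, and hide it again on mouseout.
      <div id="thumbPreview" style="display:none; position:absolute;"><img src="" alt="preview" /></div>
      <script>
      // Sketch: each link carries its own thumbnail path in a rel attribute (an assumed convention)
      $(".img_selection a").hover(
        function (e) {
          $("#thumbPreview img").attr("src", $(this).attr("rel"));
          $("#thumbPreview").css({ top: e.pageY + 10, left: e.pageX + 10 }).show();
        },
        function () {
          $("#thumbPreview").hide();
        }
      );
      </script>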

    Read the article

  • Debug website on host from virtual machine

    - by Luchaguate
    I have a Windows 7 machine hosting a Windows 7 virtual machine. I am developing a web application using Visual Studio 2010 on my host machine. I want to run the application in debug mode and access my localhost server from a browser on the virtual machine. (The purpose of this is to be able to debug an application that uses Windows Authentication as different users without having to log off and on for each user on my host machine...) I am using a bridged connection for the virtual machine. I googled how to solve this problem and most of the threads I found said that if I was using a bridged connection, I could access the server on the host machine by just entering the IP address of my host machine into the URL in the browser of the virtual machine. I have tried some different URLs using the IP but none of them have worked. As an example, suppose I run my web application in Visual Studio on my host machine and its URL is http://localhost:62789/MyPage.aspx. Assume also that I ran ipconfig in a Command Prompt on my host machine and found that the IP address of my host machine is xxx.xxx.xxx.x. What URL should I enter on the virtual machine to access my web application? Thanks in advance.
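
    One possible explanation, offered as an assumption since the question doesn't say which web server is used: the Visual Studio development web server normally listens on 127.0.0.1 only, so the host's LAN IP is unreachable from the VM even over a bridged connection. A sketch of a workaround on the host, forwarding an externally visible port to the local one (port 62789 is taken from the example URL; run from an elevated Command Prompt and allow the port through Windows Firewall):
      netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=62789 connectaddress=127.0.0.1 connectport=62789
    With that in place, http://xxx.xxx.xxx.x:62789/MyPage.aspx from the VM's browser should reach the debugging session; hosting the site in IIS or IIS Express bound to all addresses is the more permanent alternative.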

    Read the article

  • Bing search API and Azure

    - by Gapton
    I am trying to programmatically perform a search on the Microsoft Bing search engine. Here is my understanding:
    There was a Bing Search API 2.0, which will be replaced soon (1st Aug 2012). The new API is known as Windows Azure Marketplace. You use different URLs for the two.
    In the old API (Bing Search API 2.0), you specify a key (Application ID) in the URL, and that key is used to authenticate the request. As long as you have the key as a parameter in the URL, you can obtain the results.
    In the new API (Windows Azure Marketplace), you do NOT include the key (Account Key) in the URL. Instead, you put in a query URL, and then the server asks for your credentials. When using a browser, there is a pop-up asking for an account name and password. The instruction was to leave the account name blank and insert your key in the password field.
    Okay, I have done all that and I can see JSON-formatted results of my search on my browser page. How do I do this programmatically in PHP? I tried searching for documentation and sample code in the Microsoft MSDN library, but I was either searching in the wrong place, or there are extremely limited resources there. Would anyone be able to tell me how to do the "enter the key in the password field in the pop-up" part in PHP please? Thanks a lot in advance.
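
    The browser pop-up described above is plain HTTP Basic authentication, so in PHP it comes down to sending the account key as the password. A sketch using cURL, with the service URL and the blank-username convention taken from the question as assumptions (some samples send the key as both username and password, so adjust if the service rejects the empty name):
      <?php
      // Sketch: query the Azure Marketplace Bing endpoint using the account key via HTTP Basic auth.
      $accountKey = 'YOUR_ACCOUNT_KEY';
      $url = 'https://api.datamarket.azure.com/Bing/Search/Web?Query=%27hello%20world%27&$format=json';

      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      // Blank user name, key as password - mirrors the pop-up behaviour described above.
      curl_setopt($ch, CURLOPT_USERPWD, ':' . $accountKey);
      $json = curl_exec($ch);
      curl_close($ch);

      $results = json_decode($json, true); // iterate over whatever structure the service returns
      ?>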

    Read the article

  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing with Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows:
    base.py - This will contain, for now, the base Selenium test case class for setting up a session.
      import unittest
      from selenium import webdriver

      # Base Selenium Test class from which all test cases inherit.
      class BaseSeleniumTest(unittest.TestCase):
          def setUp(self):
              self.browser = webdriver.Firefox()
          def tearDown(self):
              self.browser.close()
    main.py - I want this to be the overall test suite from which all the individual tests are run.
      import unittest
      import test_example

      if __name__ == "__main__":
          SeTestSuite = test_example.TitleSpelling()
          unittest.TextTestRunner(verbosity=2).run(SeTestSuite)
    test_example.py - An example test case; it might be nice to make these run on their own too.
      from base import BaseSeleniumTest

      # Test the spelling of the title
      class TitleSpelling(BaseSeleniumTest):
          def test_a(self):
              self.assertTrue(False)
          def test_b(self):
              self.assertTrue(True)
    The problem is that when I run main.py I get the following error:
      Traceback (most recent call last):
        File "H:\Python\testframework\main.py", line 5, in <module>
          SeTestSuite = test_example.TitleSpelling()
        File "C:\Python27\lib\unittest\case.py", line 191, in __init__
          (self.__class__, methodName))
      ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest
    I suspect this is due to the very special way in which unittest runs, and I must have missed a trick on how the docs expect me to structure my tests. Any pointers?
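
    A minimal sketch of how main.py might build the suite instead, assuming the layout above: unittest cannot instantiate TitleSpelling() directly because the class has no runTest method, but loading it through a TestLoader collects test_a and test_b automatically.
      import unittest
      import test_example

      if __name__ == "__main__":
          # Collect every test_* method from the TestCase class instead of instantiating it directly.
          loader = unittest.TestLoader()
          se_test_suite = loader.loadTestsFromTestCase(test_example.TitleSpelling)
          unittest.TextTestRunner(verbosity=2).run(se_test_suite)
    Adding a standard "if __name__ == '__main__': unittest.main()" block at the bottom of test_example.py would also let each test file run on its own, which the question mentions wanting.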

    Read the article

  • jQuery - How to animate click function without clicking (Slide-show like) - Is this possible?

    - by iamtheratio
    Hello, I have created a jQuery script (with help) that works great; however, I need it to automate/animate itself without using the click function. I just want to know: is this possible, and if so, how? Similar to a slide show. Basically the code allows me to click on an image and hide/show a DIV, while changing a list of elements such as class/id name. Here's my code:
    JQUERY:
      jQuery(document).ready(function(){
        //if this is not the first tab, hide it
        jQuery(".imgs:not(:first)").hide();
        //to fix u know who
        jQuery(".imgs:first").show();
        //when we click one of the tabs
        jQuery(".img_selection a").click(function(){
          //get the ID of the element we need to show
          stringref = jQuery(this).attr("href").split('#')[1];
          // adjust css on tabs to show active tab
          $('.img_selection a').removeAttr('id'); // remove all ids
          $(this).attr('id', this.className + "_on") // create class+_on as this id.
          //hide the tabs that doesn't match the ID
          jQuery('.imgs:not(#'+stringref+')').hide();
          //fix
          if (jQuery.browser.msie && jQuery.browser.version.substr(0,3) == "6.0") {
            jQuery('.imgs#' + stringref).show();
          } else //display our tab fading it in
            jQuery('.imgs#' + stringref).show();
          //stay with me
          return false;
        });
      });
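
    A small sketch of the slide-show idea, reusing the .img_selection markup from the question (the 5000 ms interval is an arbitrary choice): a timer fires the existing click handlers in order, so the tabs advance on their own without changing the code above.
      jQuery(document).ready(function () {
        var links = jQuery(".img_selection a");
        var current = 0;
        // Every 5 seconds, trigger the next tab's existing click handler.
        setInterval(function () {
          current = (current + 1) % links.length;
          links.eq(current).trigger("click");
        }, 5000);
      });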

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so...
      header("HTTP/1.1 301 Moved Permanently");
      header("Location: http://newsite.com");
    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:
      header("Location: http://newsite.com");
    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, it decides there's no reason to request the page again on future visits. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that will do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
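
    A sketch of the usual ordering, assuming the tracking can run before any output is sent (the table and column names are made up for illustration): write the analytics row first, then emit the redirect, and add cache headers to discourage the browser from remembering the 301 and skipping the server entirely.
      <?php
      // Hypothetical tracking insert - runs before any header is sent.
      $db = new mysqli('localhost', 'user', 'pass', 'shortener');
      $stmt = $db->prepare('INSERT INTO clicks (short_code, clicked_at) VALUES (?, NOW())');
      $stmt->bind_param('s', $shortCode);   // $shortCode assumed to be parsed from the request
      $stmt->execute();

      // Ask the browser not to cache the redirect response itself.
      header('Cache-Control: no-store, no-cache, must-revalidate');
      header('HTTP/1.1 301 Moved Permanently');
      header('Location: http://newsite.com');
      exit;
    If every visit absolutely must be counted, a 302 or 307 guarantees the browser comes back each time, at the cost of the permanent-redirect semantics.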

    Read the article

  • ESXi with software iSCSI

    - by jharley
    Has anyone had any luck using the swiSCSI driver on ESXi? Following the instructions from VMWare.com, I get to the point where the iSCSI HBA shows up, but no LUNs/targets are showing up. The iSCSI target is running on Solaris 10 update 5 and works fine with other initiators. The ESXi initiator (from the logs) sees the targets but just logs in and out of them every 2-5 seconds. We're using unauthenticated discovery, and over and over in /var/log/messages I see:
      iSCSI: bus 0 target 0 trying to establish session 0xb203f90 to portal 0, address 10.1.100.9 port 3260 group 1
      iSCSI: bus 0 target 0 establish session 0xb203f90 #4848 to port 0, address 10.1.100.9 port 3260 group 1, alias data/ESXi
      iSCSI: session 0xb203f90 dropping after receiving unexpected opcode 0x60
      iSCSI: session 0xb203f90 to data/ESXi dropped
      iSCSI: session 0xb203f90 to data/ESXi waiting 2 seconds before next login attempt
    The only other thing that seems out of whack is that my 'Recent Tasks' pane keeps filling with 'Browse Diagnostic Manager' events, and /var/log/vmware/hostd.log fills with messages like this up to two times per second:
      [2008-09-19 16:05:57.901 'TaskManager' 196621 info] Task Created: haTask-ha-host-vim.DiagnosticManager.browser-776
      [2008-09-19 16:05:57.094 'TaskManager' 196621 info] Task Completed: haTask-ha-host-vim.DiagnosticManager.browser-766
    Any help would be appreciated.

    Read the article

  • Progressive enhancement of anchor tags to avoid onclick="return false;"

    - by Chris Beck
    Unobtrusive JS suggests that we don't have any onclick attributes in our HTML templates. <a href="/controller/foo/1">View Foo 1</a> The most basic progressive enhancement is to convert that anchor tag to use XHR to retrieve a DOM fragment. So, I write JS to add an event listener for a."click", then make an XHR to a.href. Alas, the browser still wants to navigate to "/controller/foo". So, I write JS to dynamically inject a.onclick = "return false;". Still unobtrusive (I guess), but now I'm paying the cost of an extra event handler. To avoid the cost of 2 event listeners, I could shove my function call into the onclick attribute: <a href="/controller/foo/1" onclick="myXHRFcn(); return false;"> But that's grody for all sorts of reasons. To make it more interesting, I might have 100 anchor tags that I want to enhance. A better pattern is a single listener on a parent node that relies on event bubbling. That's cool, but how do I stop the browser from navigating to the URL without putting onclick="return false;" on every anchor node? Manipulating the href is not an option. It must stay intact so the context menu options to copy or open in a new page work as expected.
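
    A minimal sketch of the delegation pattern the question is circling (the container id and the helper function are assumptions for illustration): one listener on a parent node handles every descendant link, and calling preventDefault() on the event does the job of return false without touching the markup or the href.
      // Sketch: a single delegated listener; no onclick attributes, hrefs stay intact.
      document.getElementById('content').addEventListener('click', function (e) {
        var a = e.target;
        while (a && a.nodeName !== 'A') {   // walk up in case the click landed on a child of the link
          a = a.parentNode;
        }
        if (!a) { return; }
        e.preventDefault();                 // stop the normal navigation...
        loadFragmentViaXHR(a.href);         // ...and fetch the fragment instead (hypothetical helper)
      }, false);
    Because the href is untouched, middle-click, copy-link and open-in-new-tab keep working, which matches the constraint at the end of the question; older IE needs attachEvent and returnValue = false, or a library's delegation support, for the same effect.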

    Read the article

  • Need help with height 100% layout

    - by serg
    I am trying to build the following layout (a file-browser type of thing: the left block would contain a list of files, the right panel would contain the selected file's content):
      <------------- MENU ------------------->
      <--- LEFT ----><------- RIGHT --------->
      <--- LEFT ----><------- RIGHT --------->
      <--- LEFT ----><------- RIGHT --------->
      <--- LEFT ----><------- RIGHT --------->
      <--- LEFT ----><------- RIGHT --------->
      <--- LEFT ----><------- RIGHT --------->
    The page itself shouldn't have any scrollbars.
    The menu has 100% width and fixed height.
    The left block has fixed width and 100% height. If the content is taller, a scrollbar appears inside the left block only.
    The right block contains a <pre> element and takes the remaining width and 100% height. If the content is wider or taller, scrollbars appear inside the right block only.
    This is what I have so far: http://jsbin.com/uqegi4/3/edit
    The problem is that the fixed menu height messes up the 100% height calculation, and the scrollbars inside the left and right blocks appear at the wrong time. I don't care about cross-browser compatibility, only Chrome matters. Thanks.
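
    A sketch of one Chrome-friendly way to get this behaviour (the 50px menu height and 200px left width are placeholder numbers): pin the three blocks with absolute positioning so the columns get the viewport height minus the menu, and give each pane its own overflow.
      html, body { height: 100%; margin: 0; overflow: hidden; }
      #menu  { position: absolute; top: 0; left: 0; right: 0; height: 50px; }
      #left  { position: absolute; top: 50px; bottom: 0; left: 0; width: 200px; overflow-y: auto; }
      #right { position: absolute; top: 50px; bottom: 0; left: 200px; right: 0; overflow: auto; }
      #right pre { margin: 0; }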

    Read the article

  • Rails 2.3.14 setting expire_after for sessions is ignored

    - by Sergii Shablatovych
    I have the following config in my environment.rb:
      config.action_controller.session_store = :cookie_store
      config.action_controller.session = {
        :expire_after => 14.days,
        :domain => DOMAIN,
        :session_key => '_session',
        :secret => 'some_string'
      }
    Setting session_store to active_record_store or mem_cache_store didn't help. I've also tried just setting a cookie from a controller (with every expiry option I could find):
      cookies[:test] = { :value => 'test', :expires => 3600.to_i.from_now.utc }
    Either way, all sessions and cookies are deleted after closing the browser window - they last only for the browser session. I've tried almost all the variants I found on the Internet - no luck :( My config is: Ubuntu 10.04 LTS, Rails 2.3.14, Ruby Enterprise Edition 1.8.7, Phusion Passenger 3.0.11 and Nginx compiled by Phusion Passenger. I have a suspicion that Nginx is not allowing some headers to be set, but I didn't find any solution for that either. Any help appreciated! Thanks
    UPD: I've tried putting all the session config in config/initializers/session_store.rb - nothing changed. I have a feeling that it's not a Rails problem. Could it be a Phusion + Nginx problem? I don't even know how to check where the problem is.
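
    One low-level way to narrow this down (a debugging suggestion, not a confirmed fix): look at the raw Set-Cookie header the application actually sends, both straight from the app and through Nginx, to see whether the expires attribute is present or being stripped along the way.
      # A session cookie without an "expires=" attribute is a browser-session cookie, which matches the symptom.
      curl -s -D - -o /dev/null http://yourdomain.example/ | grep -i set-cookie
    If the attribute is there when hitting the app directly but missing through Nginx, the proxy is the suspect; if it is missing in both cases, the Rails session configuration is.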

    Read the article

  • Once an HTML document has a manifest (cache.manifest), how can you remove it?

    - by Michael F
    It seems that once you have a manifest entry, a la:
      <html manifest="cache.manifest">
    then that page (the master entry in the cache) will always be cached (at least by Safari) until the user does something to remove the cache, even if you later remove the manifest attribute from the html tag and update the manifest (by changing something within it), forcing the master entry to be reloaded along with everything else. In other words, if you have:
      index.html (with manifest defined)
      file1.js (referenced in manifest)
      file2.js (referenced in manifest)
      cache.manifest (lists the two js files)
    removing the manifest entry from index.html and modifying the manifest (so it gets expired by the browser and all content reloaded) will not stop this page from behaving as if it's still fully cached. If you view source on index.html you won't see the manifest listed anymore, but the browser will still request only the cache.manifest file, and unless that file's content is changed, no other changes to any files will be shown to the user. It seems like a pretty glaring bug, and it's present on iOS as well as Mac versions of Safari. Has anyone found a way of resetting the page and getting rid of the cache without requiring user intervention?
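
    For what it's worth, the application cache specification treats a missing manifest as the signal to discard the cache: if a returning browser requests cache.manifest and gets a 404 or 410, it marks the cache obsolete and deletes it. So one approach, assuming an Apache server (adjust for anything else), is to keep the manifest attribute removed and make the manifest URL itself return "gone" rather than serving an updated manifest:
      # .htaccess sketch: answer requests for the old manifest with 410 Gone so browsers drop the cached copy
      Redirect gone /cache.manifest
    Simply deleting cache.manifest so that it returns 404 has the same effect.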

    Read the article

  • Client Side Only Cookies

    - by Mike Jones
    I need something like a cookie, but I specifically don't want it going back to the server. I call it a "client side session cookie" but any reasonable mechanism would be great. Basically, I want to store some data encrypted on the server, and have the user type a password into the browser. The browser decrypts the data with the password (or creates and encrypts the data with the password) and the server stores only encrypted data. To keep the data secure on the server, the server should not store and should never receive the password. Ideally there should be a cookie session expiration to clean up. Of course I need it be available on multiple pages as the user walks through the web site. The best I can come up with is some sort of iframe mechanism to store the data in javascript variables, but that is ugly. Does anyone have any ideas how to implement something like this? FWIW, the platform is ASP.NET, but I don't suppose that matters. It needs to support a broad range of browsers, including mobile. In response to one answer below, let me clarify. My question is not how to achieve the crypto, that isn't a problem. The question is where to store the password so that it is persistent from page to page, but not beyond a session, and in such a way that the server doesn't see it.
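
    One mechanism that fits this description, sketched under the assumption that the browsers in scope support Web Storage: window.sessionStorage is scoped to the browser session and, unlike a cookie, is never transmitted to the server, so the password can live there between pages while the encrypted blob keeps coming from the server.
      // Sketch: keep the user's passphrase client-side only, for the lifetime of the browser session.
      function rememberPassphrase(passphrase) {
        sessionStorage.setItem('passphrase', passphrase);   // stored locally, never sent with requests
      }

      function getPassphrase() {
        return sessionStorage.getItem('passphrase');        // null once the session is over
      }
    For browsers without sessionStorage, the window.name property is the usual fallback; the iframe approach mentioned above is essentially a hand-rolled version of the same idea.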

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc. When you access the client directory via a web browser, I want it to use any requested files in the client directory if they exist. Otherwise, I want it to use the file in the web root. For example, let's say the client does not need their own index.html. Therefore, some code would determine that there is no index.html in /home/blah/www/clients/abc and will instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time, I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless. I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose as there is too much maintenance needed, and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory, or the web root. Web root is preferred. Thanks for any suggestions! :)

    Read the article

  • Is it still true that making cross-browser layouts for desktop browsers with table+css is easier than

    - by metal-gear-solid
    My one of web designer friend still making sites with table but he use css very nicely and I also use css nicely but with <div> and i face cross browser problem in layout more than my friend. and i given some reason to my friend about cons of <table>. read my whole discussion with friend? I - you site will be problematic with screen reader My friend - OK, but i never got any call from any client regarding this. I - you will devote more time to make any changes in layout, if changes comes from client My friend - I don't think so, but if it is then show me how can i save time with <div>? I - your sites will not work well with search engine. My friend - it's not true. I've made many site and no problem with any site or client regarding this I - layout is old way, non w3c and non standard way. My friend - what is old and what is new, Who is W3C i don't know, What is standard? Whatever i make works in all browsers, it's enough for me and my client will not pay for standard and W3C guidelines rules I - Your site will not work in mobile browsers My friend - No problem for me, my client don't care about mobile phone I - Your sites are not accessible? My Friend - What do u mean not accessible? Whatever i make works in all browsers. my any client never asked about accessibility I - You will not get more work in future, with table? My friend - OK, no problem when clients will not accept site with table then i will learn about div based layouts in future. My questions? Is it still true, to make cross browser layouts for desktop browsers using table+css is easier then div+css? What is the benefit for developer to use DIV+CSS layout in place of <table> layouts if client would not mind if i use ?

    Read the article

  • XPath evaluation error in Android

    - by R_Dhorawat
    i'm running one application in android browser which contain the following code.. [ if (typeof XPathResult != "undefined") { //use build in xpath support for Safari 3.0 //alert("xpathExpr"+xpathExpr); //alert("doc"+doc); var xmlDocument = doc; if (doc.nodeType != 9) { xmlDocument = doc.ownerDocument; } results = xmlDocument.evaluate(xpathExpr,doc, function(prefix) { return namespaces[prefix] || null;}, XPathResult.ANY_TYPE, null ); var thisResult; result = []; var len = 0; do { thisResult = results.iterateNext(); if (thisResult) { result[len] = thisResult; len++; } } while ( thisResult ); } else { try{ if (doc.selectNodes) { result = doc.selectNodes(xpathExpr); } }catch(ex){} } return result; ] but when i run this app in Firefox control come in if statement and everything works fine.. but in android browser it's giving error ... XPathResult undefined... this time control come to else statement and even here it's showing that selectNodes is undefind and. so the result come as null whereas in Firefox it's giving list of nodes.. realy need it to be done ... help needed.. thanks...

    Read the article

  • C# threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and this lag is experienced through the browser.
    Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and heard about ThreadPool. But I just found BackgroundWorker in this post. MSDN states: "The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution."
    Is BackgroundWorker the way to go when handling long-running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously? Are they handled like a pool?
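
    For context, BackgroundWorker is aimed at UI frameworks: its progress and completion events are marshalled back to a UI thread, which an ASP.NET request doesn't really have. A plainer pattern for "approve, then fire off the slow external query" is to hand the work to the thread pool; this is a sketch under the assumption that the query can be fired and forgotten after approval (the method names are made up for illustration).
      // Sketch: queue the slow external-database work so the request can return immediately.
      ThreadPool.QueueUserWorkItem(state =>
      {
          try
          {
              QueryExternalIncidentDatabase((int)state);   // hypothetical long-running call
          }
          catch (Exception ex)
          {
              LogError(ex);                                // nothing observes this thread, so log failures explicitly
          }
      }, incidentId);
    Multiple items queued this way are simply scheduled by the shared pool, which answers the "is it handled like a pool" question; the caveat is that work queued inside ASP.NET is not guaranteed to finish if the application pool recycles, so anything critical usually goes to a persistent queue or a separate service instead.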

    Read the article

  • CSS/Javascript: multiple columns

    - by Patrick
    Hi, I'm looking for a columnizer plugin (one that makes columns out of my small divs). It is very important that it has the following features:
    1) It has to be as light as possible (if it were CSS only that would be great, but I guess it would be difficult to make it work in IE then...)
    2) It has to be cross-browser (I don't need IE6... IE7 and IE8 compatibility is required).
    3) The divs must not be broken. In other terms, the nodes have to be moved to the next block, not split in two. The nodes are div elements; they might include other divs, images and text.
    4) The columns have to have a fixed width and fixed margin. This means that when I resize the browser and new columns are created (because the window becomes wider), the new columns have to rigidly keep the same width and distance between them. (margin: 20px) (width: 200px)
    It would be great to have some CSS... but I'm afraid I need a jQuery plugin because I need all 4 features to be supported. I found several plugins and CSS stylesheets with very good solutions, but I couldn't find a complete one. Thanks

    Read the article

  • DOM Traversal to Automate Keyboard Focus - Spatial Navigation

    - by Steve
    I'm going to start with a little background that will hopefully help my question make more sense. I am developing an application for a television. The concept is simple and basically works by overlaying a browser over the video plane of the TV. Now, being a TV, there is no mouse or additional pointing device. All interaction is done through a remote control. Therefore, the user needs to be able to visually tell which element they are currently focused upon. To indicate that an element is focused, I currently append a colored transparent image over the element. Now, when a user hits the arrow keys, I need to respond by focusing on the correct element according to the key pressed. So, if the down arrow is pressed I need to focus on the next focusable element in the DOM tree (which may be a child or sibling), and if they hit the up arrow, I need to respond with the previous element. This would essentially simulate spatial navigation within a browser. I am currently setting an attribute (focusable=true) on any DOM elements that should be able to receive focus. What I would like to do is determine the previous or next focusable element (i.e. attribute focusable=true) and apply focus to it. I was hoping to traverse the DOM tree to determine the next and previous focusable elements, but I am not sure how to perform this in jQuery, or in general. I was leaning towards trying to use the jQuery tree traversal methods like next(), prev(), etc. What approach would you take to solve this type of issue? Thanks
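
    A sketch of one way to do this with jQuery (the keyCode handling, the tv-focus class standing in for the transparent overlay image, and the flat document-order model are simplifying assumptions - true spatial navigation would compare element positions): collect every [focusable=true] element once, keep an index, and move it on arrow-key presses.
      // Sketch: flat, document-order traversal over elements marked focusable=true.
      var focusables = $('[focusable=true]');
      var current = 0;

      function moveFocus(step) {
        focusables.eq(current).removeClass('tv-focus');            // clear the current highlight
        current = (current + step + focusables.length) % focusables.length;
        focusables.eq(current).addClass('tv-focus');               // highlight the new element
      }

      $(document).keydown(function (e) {
        if (e.keyCode === 40) { moveFocus(1); }    // down arrow -> next focusable element
        if (e.keyCode === 38) { moveFocus(-1); }   // up arrow   -> previous focusable element
      });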

    Read the article
