Search Results

Search found 33151 results on 1327 pages for 'www browser'.

Page 547 of 1327

  • Change Action of Checkbox to Get Instead of Post

    - by Shiraz Bhaiji
    We have an ASP.NET page that uses a checkbox to toggle between a list view and a tree view. The problem is that this triggers a POST. When we then click one of the documents in the list and press Back in the browser, we get a page-expired error. Is it possible to change the action of the checkbox so that it triggers a GET with a parameter?
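    A minimal client-side sketch of one approach, assuming the checkbox is rendered with the hypothetical ID chkTreeView and AutoPostBack is turned off: handle the change event in JavaScript and navigate with a query-string parameter instead of posting back.

        // chkTreeView is a hypothetical client ID; disable AutoPostBack on the server control first.
        var checkbox = document.getElementById('chkTreeView');
        checkbox.addEventListener('change', function () {
            // Issue a GET with the view mode in the query string instead of a POST,
            // so the Back button returns to a normal, cacheable GET request.
            window.location.search = '?view=' + (checkbox.checked ? 'tree' : 'list');
        });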

    Read the article

  • Standalone XULRunner app not loading the Flash player plugin

    - by Rajeesh
    Hi, I have created a standalone XUL application, and I copied "NPSWF32.dll" (the Flash plugin DLL) to the "plugins" folder under the browser. When I launch the application, Flash content is not displayed. If I set MOZ_PLUGIN_PATH to the "plugins" directory before launching the application, everything works as expected. Could someone tell me what I need to do in order to load the Flash plugin automatically from the "plugins" folder? Thanks, Rajeesh

    Read the article

  • Web crawler that can interpret JavaScript

    - by user320662
    Hi, I want to write a web crawler that can interpret JavaScript. Basically, it's a program in Java or PHP that takes a URL as input and outputs the DOM tree, similar to the output in Firebug's HTML window. The best example is Kayak.com, where you cannot see the resulting DOM in the browser with 'view source' but can save the resulting HTML through Firebug.
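    On the Java side, a minimal sketch of one option is the HtmlUnit headless browser, which executes a page's JavaScript and exposes the resulting DOM (method names vary slightly between HtmlUnit versions, so treat this as an outline rather than a drop-in):

        import com.gargoylesoftware.htmlunit.WebClient;
        import com.gargoylesoftware.htmlunit.html.HtmlPage;

        public class JsAwareCrawler {
            public static void main(String[] args) throws Exception {
                WebClient webClient = new WebClient();            // JavaScript is enabled by default
                HtmlPage page = webClient.getPage("http://www.kayak.com/");
                webClient.waitForBackgroundJavaScript(10000);     // let asynchronous scripts finish (ms)
                System.out.println(page.asXml());                 // the DOM after script execution
                webClient.closeAllWindows();
            }
        }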

    Read the article

  • Sockets with Silverlight 4

    - by AngryHacker
    I need to implement a persistent socket connection from my in-browser Silverlight 4 app to a device on the network. I need the following:
    - Connect to it and keep a persistent connection
    - Send and Receive data
    - Get some type of event or notification (or detect it) when the connection drops
    Is this possible with Silverlight 4? If so, can someone point me to some examples? All I am finding are some attempts at it with Silverlight 2.
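    A rough C# sketch of the asynchronous Socket API that in-browser Silverlight 4 provides, assuming the device serves a clientaccesspolicy.xml on port 943 and listens on a port in the 4502-4534 range that Silverlight allows (the address and port below are hypothetical):

        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        var connectArgs = new SocketAsyncEventArgs
        {
            // Hypothetical device address and port
            RemoteEndPoint = new DnsEndPoint("192.168.1.50", 4502)
        };
        connectArgs.Completed += (s, e) =>
        {
            if (e.SocketError == SocketError.Success)
            {
                // Connected: start a ReceiveAsync loop here. A failed send/receive is how
                // a dropped connection is detected; there is no separate "disconnected" event.
            }
        };
        socket.ConnectAsync(connectArgs);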

    Read the article

  • Setting up 301 redirects: dynamic URLs to static URLs

    - by MS
    We are currently using a template-based website and are hoping to move to a site with static URLs. Our domain will stay the same. I understand that using 301 redirects in a .htaccess file is the preferred method, and the one that has the highest chance of preserving our Google rankings. I am still new at all this and am having a hard time figuring out the proper way to code it all. Over a hundred of our pages are indexed. They all have a similar URL but with different page IDs:

        http://www.realestate-bigbear.com/Nav.aspx/Page=%2fPageManager%2fDefault.aspx%2fPageID%3d2020765

    Some link out to provided content, e.g. /RealEstateNews/Default.aspx. Then there are many that flow from the main featured listings page, /ListNow/Default.aspx, down to the specific properties, where the PropertyID changes: /ListNow/Property.aspx?PropertyID=2048098. Would a simple set of rules work, like the following,

        redirect 301 /Nav.aspx/Page=%2fPageManager%2fDefault.aspx%2fPageID%3d2020765 www.realestate.bigbear.com/SearchBigBearMLS.htm

    or do I need to do something entirely different?
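    A sketch of the general shape of the rules, with hypothetical static targets. Note that the old Nav.aspx URLs embed encoded slashes (%2f), which Apache rejects by default, so those left-hand patterns need testing (AllowEncodedSlashes may come into play); the property pages, whose ID arrives as an ordinary query string, are the cleaner case:

        RewriteEngine On

        # /ListNow/Property.aspx?PropertyID=2048098 -> /property-2048098.htm (hypothetical target).
        # The query string cannot be matched inside the RewriteRule pattern itself,
        # so capture it with RewriteCond; the trailing "?" drops it from the new URL.
        RewriteCond %{QUERY_STRING} ^PropertyID=([0-9]+)$
        RewriteRule ^ListNow/Property\.aspx$ /property-%1.htm? [R=301,L]

        # Simple one-to-one moves can use plain Redirect directives (target page is hypothetical):
        Redirect 301 /RealEstateNews/Default.aspx http://www.realestate-bigbear.com/real-estate-news.htm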

    Read the article

  • Android Debugging InetAddress.isReachable

    - by badMonkey
    I am trying to figure out how to tell whether a particular IP address is reachable from my Android app during debugging (I haven't tried this on an actual device). From reading, it appears that InetAddress.isReachable should do this for me. Initially I thought that I could code something like:

        InetAddress address = InetAddress.getByAddress(new byte[] { (byte) 192, (byte) 168, (byte) 254, (byte) 10 });
        success = address.isReachable(3000);

    This returns false even though I am reasonably sure it is a reachable address. I found that if I changed this to 127, 0, 0, 1 it returned success. My next attempt was the same code, but I used the address I got from a ping of www.google.com (72.167.164.64 as of this writing). No success. So then I tried a further example:

        int timeout = 2000;
        InetAddress[] addresses = InetAddress.getAllByName("www.google.com");
        for (InetAddress address : addresses) {
            if (address.isReachable(timeout)) {
                success = true; // just set a break point here
            }
        }

    I am relatively new to Java and Android so I suspect I am missing something, but I can't find anything that would indicate what that is.
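    A sketch of a workaround worth knowing about: isReachable() relies on ICMP echo or a TCP echo on port 7, both of which are commonly blocked (particularly from the emulator), so a plain TCP connect to a port the target is known to expose is often a more reliable test. Port 80 below is illustrative.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Socket;

        public static boolean isHostReachable(String host, int port, int timeoutMs) {
            Socket socket = new Socket();
            try {
                // Succeeds only if the host answers on the given TCP port within the timeout.
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return true;
            } catch (IOException e) {
                return false;
            } finally {
                try { socket.close(); } catch (IOException ignored) { }
            }
        }

        // e.g. isHostReachable("192.168.254.10", 80, 3000);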

    Read the article

  • Just messed up a server misusing chown, how to execute it correctly?

    - by Jack Webb-Heller
    Hi! I'm moving from an old shared host to a dedicated server at MediaTemple. The server is running the Plesk control panel but, as far as I can tell, there's no way via the interface to do what I want to do. On the old shared host, running cPanel, I created a .zip archive of all the website's files. I downloaded this to my computer, then uploaded it with FTP to the new host account I'd set up. Finally, I logged in via SSH, navigated to the directory the zip was stored in (something like /var/www/vhosts/mysite.com/httpdocs/) and ran the unzip command on the file sitearchive.zip. This extracted everything just fine, and the site appeared to work. The problem: when I tried to edit a file through FTP, I got Error - 160: Permission Denied. When I Get Info for the file I'm trying to edit, it says the owner and group is swimwir1. I attempted to use chown at this point to change the owner - and yes, as you may be able to tell, I'm a little inexperienced in SSH ;) Luckily the server was new, because the command I ran - chown -R newuser / - appeared to mess a load of stuff up. The reason I used / on the end rather than /var/www/vhosts/mysite.com/httpdocs/ was that I'd already cd'd into there, so I presumed the / was relative to where I was working. This may be the case, I have no idea; either way, Plesk was no longer accessible, although Apache and things continued to work. I realised my mistake and, deciding it wasn't worth the hassle of 1) being an amateur and 2) trying to fix it, I just reprovisioned the server to start afresh. So - what do I do to change the owner of these files correctly? Thanks for helping out a confused beginner! Jack
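    A minimal sketch of the targeted command, under the assumption that swimwir1 is the FTP user that should own the files (taken from the question) and that psacln is the Plesk web-users group on this box (verify both before running). The key points are an explicit absolute path, or ".", never a bare /:

        # Recursively set owner and group, but only under the site's docroot
        chown -R swimwir1:psacln /var/www/vhosts/mysite.com/httpdocs/

        # Equivalent after cd'ing into httpdocs: "." means the current directory,
        # whereas "/" is always the filesystem root, no matter where you are.
        chown -R swimwir1:psacln .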

    Read the article

  • Showing an image in place of Flash for iPhones and iPads

    - by poindexter
    I want to detect user-agent on load and if the visitor is viewing on an iPhone or iPad I want to display this code:

        <?php get_header(); ?>
        <div class="flash">
            <img src="/wp-content/themes/iq-iphone/main-page-image.png"/>
        </div>

    If it's a regular visitor I want to display this code:

        <?php get_header(); ?>
        <div class="flash">
            <script type="text/javascript">
                AC_FL_RunContent( 'codebase','http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,19,0','width','924','height','316','src','<?php bloginfo('template_directory');?>/images/featurePanel','quality','high','pluginspage','http://www.macromedia.com/go/getflashplayer','movie','<?php bloginfo('template_directory');?>/images/featurePanel','wmode','transparent' ); //end AC code
            </script>
            <noscript>
                <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=7,0,19,0" width="924" height="316">
                    <param name="movie" value="<?php bloginfo('template_directory');?>/images/featurePanel.swf" />
                    <param name="quality" value="high" />
                    <param name="wmode" value="transparent" />
                    <embed src="<?php bloginfo('template_directory');?>/images/featurePanel.swf" quality="high" pluginspage="http://www.macromedia.com/go/getflashplayer" type="application/x-shockwave-flash" width="924" height="316"></embed>
                </object>
            </noscript>
        </div>

    Any ideas? Thanks!
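    A minimal sketch of the detection side in the same WordPress template (the Flash markup from above is elided with a comment): branch on the User-Agent header with a simple substring check. The header can be spoofed, but it covers normal iPhone/iPad visitors.

        <?php get_header(); ?>
        <div class="flash">
        <?php
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (stripos($ua, 'iPhone') !== false || stripos($ua, 'iPad') !== false) {
            // Static image for devices without Flash
            echo '<img src="/wp-content/themes/iq-iphone/main-page-image.png"/>';
        } else {
            // Regular visitors: output the AC_FL_RunContent / <object> block shown above
        }
        ?>
        </div>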

    Read the article

  • XML parsing using jQuery

    - by lmkk
    I have the following XML:

        <?xml version="1.0" encoding="utf-8"?>
        <Area xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <Scenes>
                <Scene Index="1" Name="Scene1" />
                <Scene Index="2" Name="Scene2" />
            </Scenes>
        </Area>

    which I am trying to parse with jQuery:

        <script>
        $(document).ready(function(){
            $.ajax({
                type: "GET",
                url: "list.xml",
                dataType: "xml",
                success: function(xml) {
                    $(xml).find('scenes').each(function(){
                        $(this).find('scene').each(function(){
                            var name = $(this).attr('name');
                            $('<div class="items" ></div>').html('<p>'+name+'</p>').appendTo('#page-wrap');
                        });
                    });
                }
            });
        });
        </script>

    Why is this not working? Help!! This is my first attempt at JavaScript/jQuery; it is based on an example I found, but I have so far been unable to adapt it to my usage. / Lars
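    A minimal sketch of the likely fix (untested against this exact file): XML is case-sensitive, so the selectors need to match 'Scenes' and 'Scene' exactly, and the attribute is 'Name', not 'name'.

        success: function(xml) {
            // Match the document's exact casing: <Scenes>, <Scene Name="...">
            $(xml).find('Scenes').each(function () {
                $(this).find('Scene').each(function () {
                    var name = $(this).attr('Name');
                    $('<div class="items"></div>').html('<p>' + name + '</p>').appendTo('#page-wrap');
                });
            });
        }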

    Read the article

  • HTML5 Web DB Security

    - by darrenc
    Hi all! I'm looking into an offline web app solution using HTML5. The functionality is everything I need, BUT the stored data can be queried directly in the browser and is therefore completely insecure! Is there any way to encrypt or hide the data so that it is secure? Thanks, D.
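    A sketch of one mitigation, assuming a client-side crypto library is available (encrypt/decrypt below are hypothetical stand-ins for it): store only ciphertext in the Web SQL database and derive the key from something the user supplies at runtime. Anything the page can decrypt a determined user can too, so this raises the bar rather than making the data truly secure.

        // encrypt()/decrypt() are hypothetical wrappers around a JS crypto library;
        // the key would typically be derived from a passphrase entered at runtime.
        var db = openDatabase('appdb', '1.0', 'Offline data', 2 * 1024 * 1024);

        function saveNote(id, text, key) {
            db.transaction(function (tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id, body)');
                // Only the encrypted form ever reaches disk.
                tx.executeSql('INSERT INTO notes (id, body) VALUES (?, ?)', [id, encrypt(text, key)]);
            });
        }

        function readNote(id, key, callback) {
            db.transaction(function (tx) {
                tx.executeSql('SELECT body FROM notes WHERE id = ?', [id], function (tx, results) {
                    callback(decrypt(results.rows.item(0).body, key));
                });
            });
        }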

    Read the article

  • Getting sound on .asf video on Mac OS X

    - by user248237
    When I try to open the following video: http://streaming1.hss-win.rpi.edu/ondemand/cogsci/issues_in_cs/issues_in_cogsci_02_04_2009.asf I can play it, but without any sound. I tried it both inside the browser (Firefox) and in QuickTime Player, but it still does not work. Any idea how to get this to work on Mac OS X? I'm using version 10.6. Thank you.

    Read the article

  • How to get an HTML anchor effect with jQuery

    - by frosty
    I have a click handler which calls event.preventDefault. There is a whole lot of logic that occurs within this function. At the end of this logic I would like the page to scroll up to the top, i.e. the same effect as an anchor:

        $('.vod-playlist-film a').bind("click", function(event) {
            // some logic
            // now I need the browser to go to the top of the page
            event.preventDefault();
        });
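    A minimal sketch of one way to do it, using the selector from the question: either jump instantly with window.scrollTo, or animate the document's scrollTop for a smooth scroll.

        $('.vod-playlist-film a').bind('click', function (event) {
            event.preventDefault();
            // ... existing logic ...

            // Instant jump, the same effect as following a "#top" anchor:
            window.scrollTo(0, 0);

            // Or, animated instead:
            // $('html, body').animate({ scrollTop: 0 }, 'fast');
        });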

    Read the article

  • Applet to Object tags

    - by Andy
    I'm trying to convert from an applet tag to an object tag so I can resolve z-index issues. The applet tag works; my conversion to object doesn't. Can anyone point me in the right direction? From:

        <applet name='previewersGraph' codebase="http://www.mydomain.info/sub/" archive="TMApplets.jar"
                code='info.tm.web.applet.PreviewerStatsGraphApplet' width='446' height='291'>
            <param name="background-color" value="#ffffff" />
            <param name="border-color" value="#8c8cad" />

    To:

        <OBJECT id="previewersGraph" name="previewersGraph" classid="clsid:CAFEEFAC-0014-0002-0000-ABCDEFFEDCBA"
                width="200" height="200" align="baseline"
                codebase="http://java.sun.com/products/plugin/autodl/jinstall-1_4_2-windows-i586.cab#Version=1,4,2,0">
            <PARAM name="code" value="info.tm.web.applet.PreviewerStatsGraphApplet">
            <PARAM name="codebase" value="http://www.mydomain.info/sub/">
            <PARAM name="type" value="application/x-java-applet;jpi-version=1.4.2">
            <PARAM name="archive" value="TMApplets.jar">
            <PARAM name="scriptable" value="true">
            No Java 2 SDK, Standard Edition v 1.4.2 support for APPLET!!
        </OBJECT>

    Read the article

  • How to isolate a single element from a scraped web page in R

    - by PaulHurleyuk
    Hello, I'm trying to do someone a favour, and it's a tad outside my comfort zone, so I'm stuck. I want to use R to scrape this page (http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html) and others, to get the goal scorers and times. So far, this is what I've got:

        require(RCurl)
        require(XML)
        theURL <- "http://www.fifa.com/worldcup/archive/germany2006/results/matches/match=97410001/report.html"
        webpage <- getURL(theURL, header=FALSE, verbose=TRUE)
        webpagecont <- readLines(tc <- textConnection(webpage)); close(tc)
        pagetree <- htmlTreeParse(webpagecont, error=function(...){}, useInternalNodes = TRUE)

    and the pagetree object now contains a pointer to my parsed HTML (I think). The part I want is:

        <div class="cont"><ul>
            <div class="bold medium">Goals scored</div>
            <li>Philipp LAHM (GER) 6', </li>
            <li>Paulo WANCHOPE (CRC) 12', </li>
            <li>Miroslav KLOSE (GER) 17', </li>
            <li>Miroslav KLOSE (GER) 61', </li>
            <li>Paulo WANCHOPE (CRC) 73', </li>
            <li>Torsten FRINGS (GER) 87'</li>
        </ul></div>

    but I'm now lost as to how to isolate them, and frankly xpathSApply and xpathApply confuse the beejeebies out of me! So, does anyone know how to formulate a command to pull out the elements contained within the <li> tags? Thanks, Paul.
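    A minimal sketch of the XPath step, run against the pagetree built above (the class name comes from the pasted snippet, so adjust it if the live page differs):

        # Each goal sits in an <li> under the <ul> inside <div class="cont">;
        # xpathSApply applies xmlValue to every node the expression matches.
        goals <- xpathSApply(pagetree, "//div[@class='cont']//ul/li", xmlValue)
        goals <- gsub("^\\s+|\\s+$", "", goals)   # trim stray whitespace
        goals
        # e.g. "Philipp LAHM (GER) 6'," "Paulo WANCHOPE (CRC) 12'," ...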

    Read the article

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work:

        A “covered work” means either the unmodified Program or a work based on the Program.

    and that it will therefore have to be distributed under the same license. Ok, so here comes the fun part: this is a short 'program' that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license? Ok, so let's take another example:

        <!doctype html>
        <html>
        <head>
            <title>my title</title>
            <script type="text/javascript" src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js"></script>
            <link rel="stylesheet" type="text/css" href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
            <script type="text/javascript">
                var myPanel = new Ext.Panel();
            </script>
        </head>
        <body>
        </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state - it will have to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve, from a source made public by someone else, to be usable. Now, if the answer to the above is no, how does me conveying this code in visible form differ from the 'invisible' form conveyed by my web server? As a side note, a very similar thing is done in Linux with many projects that depend on less permissive licenses - the user has to retrieve these on their own and make them available for the primary lib/executable. How is this not the same if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source? Just to make it clear, I'm pro FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer to this.

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
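    A sketch of one likely fix (url.dyndns.info stands in for the real DynDNS hostname): requests arriving under the DynDNS name match none of the ServerName values, so Apache falls back to the first vhost, and WAMP's default access rules only allow 127.0.0.1, hence the 403. Adding the external name as a ServerAlias and loosening the directory access for that docroot addresses both:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            # Hypothetical external hostname
            ServerAlias url.dyndns.info

            # WAMP's default configuration only permits local requests
            <Directory "c:/wamp/www/ladybug">
                Order Allow,Deny
                Allow from all
            </Directory>

            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>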

    Read the article

  • PHP: How to store XML data in an array?

    - by tommy
    Below is the XML I am working with (there are more items; this is the first set). How can I get these elements into an array? I have been trying with PHP's SimpleXML etc., but I just can't do it.

        <response xmlns:lf="http://api.lemonfree.com/ns/1.0">
            <lf:request_type>listing</lf:request_type>
            <lf:response_code>0</lf:response_code>
            <lf:result type="listing" count="10">
                <lf:item id="56832429">
                    <lf:attr name="title">Used 2005 Ford Mustang V6 Deluxe</lf:attr>
                    <lf:attr name="year">2005</lf:attr>
                    <lf:attr name="make">FORD</lf:attr>
                    <lf:attr name="model">MUSTANG</lf:attr>
                    <lf:attr name="vin">1ZVFT80N555169501</lf:attr>
                    <lf:attr name="price">12987</lf:attr>
                    <lf:attr name="mileage">42242</lf:attr>
                    <lf:attr name="auction">no</lf:attr>
                    <lf:attr name="city">Grand Rapids</lf:attr>
                    <lf:attr name="state">Michigan</lf:attr>
                    <lf:attr name="image">http://www.lemonfree.com/images/stock_images/thumbnails/2005_38_557_80.jpg</lf:attr>
                    <lf:attr name="link">http://www.lemonfree.com/56832429.html</lf:attr>
                </lf:item>
                <!-- more items -->
            </lf:result>
        </response>

    Thanks guys. EDIT: I want the first item's data in easy-to-access variables. I've been struggling for a couple of days to get SimpleXML to work as I am new to PHP, so I thought manipulating an array would be easier.
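    A minimal SimpleXML sketch, assuming $xmlString holds the response above: the lf: elements live in the http://api.lemonfree.com/ns/1.0 namespace, so each level is reached through children() with that namespace URI, while the plain name attribute can be read with ordinary array access.

        <?php
        $ns  = 'http://api.lemonfree.com/ns/1.0';
        $xml = simplexml_load_string($xmlString);

        $items = array();
        foreach ($xml->children($ns)->result->children($ns)->item as $item) {
            $row = array();
            foreach ($item->children($ns)->attr as $attr) {
                // e.g. $row['price'] = '12987'
                $row[(string) $attr['name']] = (string) $attr;
            }
            $items[] = $row;
        }

        echo $items[0]['title'];   // first item's data as plain array entries
        ?>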

    Read the article

  • Mod Rewrite Trouble

    - by Adrian
    I want to have a main section, which is found on services.php, and then some sub-sections like services_logodesign.php. I want to make the URLs look like this: when I click a button to open services_logodesign.php, the browser should show "services/logo design". I have several other sub-sections, so it would be nice to have one rule that works for the other pages as well.
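    A rough .htaccess sketch of one generic rule, assuming the file-naming convention services_<name>.php suggested by the example; note that a literal space in "logo design" would have to be encoded or replaced with a hyphen in the links you generate.

        RewriteEngine On

        # /services             -> services.php
        RewriteRule ^services/?$ services.php [L]

        # /services/logodesign  -> services_logodesign.php (and likewise for other sub-sections)
        RewriteRule ^services/([A-Za-z0-9_-]+)/?$ services_$1.php [L]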

    Read the article

  • Java resource closing

    - by Bob
    Hi, I'm writing an app that connects to a website and reads one line from it. I do it like this:

        try {
            URLConnection connection = new URL("www.example.com").openConnection();
            BufferedReader rd = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String response = rd.readLine();
            rd.close();
        } catch (Exception e) {
            // exception handling
        }

    Is it good? I mean, I close the BufferedReader in the last line, but I do not close the InputStreamReader. Should I create a standalone InputStreamReader from connection.getInputStream(), and a BufferedReader from the standalone InputStreamReader, then close both readers? I think it would be better to place the closing calls in the finally block, like this:

        InputStreamReader isr = null;
        BufferedReader br = null;
        try {
            URLConnection connection = new URL("www.example.com").openConnection();
            isr = new InputStreamReader(connection.getInputStream());
            br = new BufferedReader(isr);
            String response = br.readLine();
        } catch (Exception e) {
            // exception handling
        } finally {
            br.close();
            isr.close();
        }

    But it is ugly, because the closing methods can throw exceptions, so I have to handle or throw them. Which solution is better? Or what would be the best solution?
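    A minimal sketch of the usual pre-Java 7 pattern, matching the code above: closing the outermost BufferedReader also closes the wrapped InputStreamReader and the underlying stream, so one null-checked close in finally is enough (on Java 7 and later, try-with-resources removes this boilerplate entirely). Note the URL needs a protocol prefix to be valid.

        BufferedReader br = null;
        try {
            URLConnection connection = new URL("http://www.example.com").openConnection();
            br = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String response = br.readLine();
        } catch (IOException e) {
            // exception handling
        } finally {
            if (br != null) {
                try {
                    br.close();   // also closes the wrapped InputStreamReader and InputStream
                } catch (IOException ignored) {
                    // nothing useful to do if close() itself fails
                }
            }
        }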

    Read the article

  • Comet without AJAX

    - by Carl Whalley
    Suppose I only had the regular J2SE HTTP libraries but wanted to write a client for Comet, say on Android, but not limited to that, i.e. not using a WebView. Since there's no browser, I'm assuming you'd have to open the long-lived connections yourself... is this feasible?
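    It is feasible; a minimal long-polling sketch using only java.net, assuming a Comet endpoint that holds each request open until it has something to push (the URL is hypothetical): read whatever arrives, then immediately reconnect.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.SocketTimeoutException;
        import java.net.URL;

        public class LongPollClient {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://example.com/comet/events");   // hypothetical endpoint
                while (true) {
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setReadTimeout(60000);   // the server may hold the request open up to this long
                    try {
                        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                        String line;
                        while ((line = in.readLine()) != null) {
                            System.out.println("event: " + line);   // handle the pushed message
                        }
                        in.close();
                    } catch (SocketTimeoutException e) {
                        // nothing arrived within the timeout; just poll again
                    } finally {
                        conn.disconnect();
                    }
                }
            }
        }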

    Read the article

  • Sharing session variables between http and https versions

    - by tangurena
    I am trying to fix an ASP.NET site that a friend botched while converting from older technologies. To the user, the site appears to have public and secured sections. Behind the scenes, the public and private sites are separate web applications with separate app pools. The difficulty arises because the applications appear to share the same session ID (when going from the public to the secured pages, the session ID remains the same), yet none of the (InProc) session variables are getting passed from the public site to the private one. Basically, the workflow consists of the user checking a checkbox ("I agree" type of stuff) on the public site (let's call that page http://www.boring.gov/iAgree.aspx), then logging in on the secured site (let's call that page https://www.boring.gov/login.aspx). The commandments from the parent agency in DC are that the user may not bookmark the login page, the user has to click "I agree" every time they log in, and that the "I agree" stuff has to be on a separate page. What am I missing? How would you do it? Notes:
    1. This is getting hosted on a single Windows 2003 server.
    2. Yes, it is a government agency.
    3. I would have done things very differently if I had been doing the conversion, but I wasn't brought in until the poop hit the fan, and it is too late to redo things.
    4. Two previous SO threads that appear to be related, yet don't apply, are this and that.

    Read the article
