Search Results

Search found 34060 results on 1363 pages for 'webpage access'.

  • How to prevent Chrome from injecting content into a webpage

    - by Nazariy
    Recently I discovered that my application misbehaves in Google Chrome. On a page with a form, after the form is submitted, my application reloads the page with a simple redirect:

        header('Location: ' . $url);

    After that, the page renders incorrectly and the following content is injected into the DOM:

        <div id="sbi_camera_button" class="sbi_search" style="left: 0px; top: 0px; position: absolute; width: 29px; height: 27px; border: none; margin: 0px; padding: 0px; z-index: 2147483647; display: none; "></div>

    After a manual page refresh everything works as expected. I'm not sure what is causing this behavior, as I'm working in a closed local environment and the application works fine in Firefox. My application uses the following libraries (hosted locally): jQuery v1.7.1, jQuery UI 1.8.16, Bootstrap.js v2.1.1. Can someone suggest what could be causing this issue?
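
    An element like this, with an id the page never creates, is typically injected by a browser extension rather than by the page itself. As a hedged debugging aid (not a fix for any particular extension), a MutationObserver sketch like the following can log or remove foreign nodes as they appear; the id is taken from the question, and the removal list is an assumption you would adapt:

        // Watch the document for injected nodes and remove known offenders.
        const suspectIds = ['sbi_camera_button']; // id observed in the question
        const observer = new MutationObserver((mutations) => {
          for (const mutation of mutations) {
            for (const node of mutation.addedNodes) {
              if (node.nodeType === Node.ELEMENT_NODE && suspectIds.includes(node.id)) {
                console.warn('Removing injected element:', node.id);
                node.remove();
              }
            }
          }
        });
        observer.observe(document.documentElement, { childList: true, subtree: true });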

  • Example of downloading a webpage in C#

    - by Chris
    I am trying to understand some example code on this web page: http://www.csharp-station.com/HowTo/HttpWebFetch.aspx that downloads a file from the internet. The piece of code quoted below loops, getting chunks of data and appending them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk, and the loop runs until count is 0 (an empty chunk is read). My question is: isn't it possible for count to be 0 before the file has been completely downloaded? Say the network connection is interrupted: the stream may have no data to read on a pass of the loop, count would be 0, and the download would end prematurely. Or does resStream.Read block the program until it gets data? Is this the correct way to save a stream?

        int count = 0;
        do
        {
            // fill the buffer with data
            count = resStream.Read(buf, 0, buf.Length);

            // make sure we read some data
            if (count != 0)
            {
                // translate from bytes to ASCII text
                tempString = Encoding.ASCII.GetString(buf, 0, count);

                // continue building the string
                sb.Append(tempString);
            }
        } while (count > 0); // any more data to read?

  • Read the "Human friendly" text of a WebPage

    - by oidfrosty
    I need to read the final text a user sees on a page. For example, "&#x73;&#x74;&#x61;&#x63;&#x6B;&#x6F;&#x76;&#x65;&#x72;&#x66;&#x6C;&#x6F;&#x77;&#x2E;&#x63;&#x6F;&#x6D;" is displayed as "stackoverflow.com". The aim is to stop script and entity encoding from slipping past my filters; the filtering will be done with a content-filtering proxy. I was thinking about injecting a script into the page.
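
    For what it's worth, a minimal sketch of the "inject a script" idea: let the browser's own parser do the decoding by assigning the markup to a detached element and reading back its rendered text. This assumes the proxy can inject a script and that textContent (the post-decoding text) is what the filter needs to see:

        // Decode HTML entities the same way the browser renders them.
        // Note: only use this for decoding; never insert untrusted markup
        // into the live page this way.
        function decodeEntities(html) {
          const el = document.createElement('div');
          el.innerHTML = html;   // the browser decodes &#x68;&#x69; etc. here
          return el.textContent; // the text as a user would see it
        }

        console.log(decodeEntities('&#x68;&#x69;')); // "hi"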

  • How to redirect a user to a new webpage after a JavaScript alert/confirm box

    - by David Maldonado
    I have a client who wants an alert/confirm box to pop up when a user leaves the site; based on what they choose, the user either stays on the page or goes to a new page (ideally working in all browsers). I have been twiddling all day and have this piece of code, but it doesn't work too well.

        <script>
        window.onbeforeonload = function exitLeave(){
          var answer = confirm("You have not filled out your questionnaire yet")
          if (answer){
            window.location = "http://www.google.com/";
          }
          else{
            alert("Cancel it !")
          }
        }
        </script>

    Any help would be greatly appreciated.
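
    One detail worth flagging: the snippet assigns to window.onbeforeonload, while the standard handler is window.onbeforeunload. Even with the name fixed, browsers deliberately restrict what a page can do while unloading; the supported pattern is to return a string and let the browser show its own leave/stay prompt, and you cannot reliably redirect from inside the handler. A sketch of the standard usage:

        // Standard pattern: the browser shows its own confirmation dialog.
        window.onbeforeunload = function () {
          // Returning a string asks the browser to prompt before leaving;
          // modern browsers show a generic message rather than this text.
          return "You have not filled out your questionnaire yet";
        };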

  • UIWebView doesn't scroll to the bottom of the webpage loaded/created

    - by tom
    I have a UIWebView inside a normal UIViewController. The content of the UIWebView is programmatically/dynamically created in my program, and it can be very long (many table rows). Somehow, after loading, the page won't scroll more than about one and a half screens of content when I swipe. Because of that I can only see the first few rows of data, but not the many rows after them. Why is that?

  • Looking to scan documents directly for upload to a webpage

    - by Tom
    I was hoping to do this from a Flash plugin, the way Flash accesses the microphone or webcam, but it doesn't seem possible. Is this going to be possible using Java, or ActiveX, or some other strategy that I haven't looked at yet? The idea is to do it without a client install, or at least with something lightweight that is browser- and platform-independent (and possibly the moon on a stick as well ;-)).

  • Has anyone tried designing a webpage for the PSP?

    - by lock
    I'm trying to make a personal Bible for my PSP. (I tried googling, but the only Bible version I've seen on my skimming is the KJV, and I'm trying to make mine with 3 versions, namely the TNIV, NLT, and Amplified Bible.) So my only solution was to make one myself, and my approach was to save an HTML file on my memory stick and open it through the console's browser. My concerns are:

    1. How does the PSP browser handle CSS and JavaScript?
    2. Is there a doctype declaration specifically designed for the PSP browser?
    3. Can I use any local database to store my texts for easier querying, or do I have no choice but to rely on static text files?
    4. Is there anyone on SO who has experience developing a page for this console, and can he/she give me some tips and advice?

    Thanks much in advance for your responses.. :)

  • Getting a "summary" of a webpage

    - by MattiasK
    I have something of a hairy problem: I'd like to generate a couple of paragraphs of "description" for a given URL, normally the start of an article. The meta description field is one way to go, but it isn't always good or set properly. It's fair to say it's a bit problematic to accomplish this from the screen-scraped HTML. My general idea is that one could scan the HTML for the first "appropriate" segment, but it's hard to say what that is; perhaps something like the first paragraph containing a certain amount of text... Anyone have any good ideas? :) It doesn't have to be foolproof.
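
    A rough sketch of that heuristic as it might look in browser-side JavaScript: prefer the meta description, otherwise take the first paragraph over some length threshold. The 80-character threshold and the parse-the-fetched-HTML flow are assumptions; a real crawler would fetch server-side to avoid CORS restrictions:

        // Return a short description for a fetched HTML document.
        function describe(html) {
          const doc = new DOMParser().parseFromString(html, 'text/html');
          const meta = doc.querySelector('meta[name="description"]');
          if (meta && meta.content.trim().length > 0) {
            return meta.content.trim();
          }
          // Fall back to the first paragraph with a reasonable amount of text.
          for (const p of doc.querySelectorAll('p')) {
            const text = p.textContent.trim();
            if (text.length > 80) return text;
          }
          return ''; // nothing suitable found
        }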

  • Displaying images in a webpage without a src URL

    - by Babiker
    Recently I learned that I can display images in a web page without referencing an image URL, as follows:

        <img class="disclosure" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAkAAAAJCAYAAADgkQYQAAAAAXNSR0IArs4c6QAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9oIGRQbOY8MjgMAAABVSURBVBjTfc6xDcAwCETRM0rt5nbA+49j70DDAqSLsGXyJQqkVxxwNOeMiEA+waW1VuT/inrvG7wikht8UETy2ygVMjO4O8YYTf6AqrZyUwYlygAAXo+QLmeF4c4uAAAAAElFTkSuQmCC">

    I had another small BMP image that I wanted to display, so I opened it in vim, and the image source looked like:

    When I paste this code where it needs to be pasted, I only get "BM?". How do I convert/paste this code properly so it can be used as an image source?
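
    The "BM" here is the raw BMP file header: a data URI needs the bytes base64-encoded, not pasted as raw binary text from an editor. As a hedged sketch of one way to produce the string in the browser (assuming a file input exists on the page), FileReader.readAsDataURL emits a ready-made data: URL:

        // Convert a user-selected image file into a data URI.
        document.querySelector('input[type="file"]').addEventListener('change', (e) => {
          const reader = new FileReader();
          reader.onload = () => {
            const img = document.createElement('img');
            img.src = reader.result; // e.g. "data:image/bmp;base64,Qk0..."
            document.body.appendChild(img);
          };
          reader.readAsDataURL(e.target.files[0]);
        });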

  • Change an absolutely positioned webpage into a centered one

    - by Jonathan
    So I have this template design that is currently absolutely positioned, but I'm trying to make it centered in any widescreen browser. I've tried making the width auto on the left and right sides in my container, but it is still aligned with the left side.

    CSS:

        .JosephSettin_png { position: absolute; left: 0px; top: 0px; width: 216px; height: 40px; background: url("JosephSettin.png") no-repeat; }
        .home_png { position: absolute; left: 472px; top: 16px; width: 48px; height: 16px; }
        .discography_png { position: absolute; left: 528px; top: 16px; width: 80px; height: 24px; }
        .purchase_png { position: absolute; left: 608px; top: 16px; width: 88px; height: 24px; }
        .about_png { position: absolute; left: 696px; top: 16px; width: 48px; height: 24px; }
        .contact_png { position: absolute; left: 744px; top: 16px; width: 56px; height: 24px; }
        .main__pic_png { position: absolute; left: 0px; top: 56px; width: 264px; height: 264px; background: url("main_pic.png") no-repeat; }
        .footer__lines_png { position: absolute; left: 0px; top: 512px; width: 800px; height: 24px; background: url("footer_lines.png") no-repeat; }
        .info__heading_png { position: absolute; left: 32px; top: 360px; width: 216px; height: 32px; background: url("info_heading.png") no-repeat; }
        .info__pic3_png { position: absolute; left: 265px; top: 360px; width: 159px; height: 112px; background: url("info_pic3.png") no-repeat; }
        .info__pic2_png { position: absolute; left: 432px; top: 360px; width: 176px; height: 112px; background: url("info_pic2.png") no-repeat; }
        .info__pic1_png { position: absolute; left: 616px; top: 360px; width: 177px; height: 112px; background: url("info_pic1.png") no-repeat; }
        .info__pane_png { position: absolute; left: 0px; top: 345px; width: 800px; height: 144px; background: url("info_pane.png") no-repeat; }
        body { text-align: center; background-color: maroon; }
        #wrapper { width: 800px; margin-left: auto; margin-right: auto; text-align: left; }
        #a { text-decoration: none; color: white; font-weight: bold; }
        .style1 { font-weight: bold; color: #FFFFFF; }

    HTML:

        <body>
          <center>
            <div id="wrapper">
              <div class="JosephSettin_png"> </div>
              <div class="home_png"> <a href="home.html" style="color:yellow">Home</a></div>
              <div class="discography_png"> <a href="discography.html">Discography</a></div>
              <div class="purchase_png"><a href="store.html"><span class="style1">Store</span></a></div>
              <div class="about_png"><a href="about.html">About</a></div>
              <div class="contact_png"><a href="contact.html"><span class="style1"></span>Contact</a></div>
              <div class="ad_png"> </div>
              <div class="main__pic_png"> </div>
              <div class="welcome__header_png"> </div>
              <div class="welcome__text_png"> </div>
              <div class="footer__lines_png"> </div>
              <div class="footer__text_png"> </div>
              <div class="info__pane_png"></div>
              <div class="info__heading_png"> </div>
              <div class="info__text_png"> </div>
              <div class="info__pic3_png"> </div>
              <div class="info__pic2_png"> </div>
              <div class="info__pic1_png"> </div>
              <div class="info__pic3_png"> </div>
            </div>
          </center>
        </body>

    I know the container I create works if all my div classes aren't absolutely positioned. Do I have to change the position, or did I make another error?
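
    For context on why the centered wrapper has no effect here: absolutely positioned elements are placed relative to their nearest positioned ancestor, and #wrapper is not positioned, so every left:0px child anchors to the page itself. A minimal sketch of one common fix, keeping the absolute offsets but anchoring them inside the centered wrapper:

        /* Make the wrapper the positioning context for its absolute children. */
        #wrapper {
            position: relative;  /* children's left/top now measure from here */
            width: 800px;
            margin: 0 auto;      /* centers the wrapper in the viewport */
            text-align: left;
        }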

  • Best practice to hide/encrypt an email address in a webpage

    - by Sebi
    I couldn't find a similar question, so here it is: what's the best way to hide or encrypt an email link on a website, so that a crawler can't read it but the user can still click it? I don't want to confuse users by writing the address like "john (at) mail.com" or in similar ways (and I think that kind of link can be read by crawlers anyway?). I also tried things like this:

        <script>// <![CDATA[eval(unescape('%76%61%72%20%73%3D%27%61%6D%6C%69%6F%74%72%3A%62%61%40%65%64%61%6E%6F%6C%2E%69%27%3B%76%61%72%20%72%3D%27%27%3B%66%6F%72%28%76%61%72%20%69%3D%30%3B%69%3C%73%2E%6C%65%6E%67%74%68%3B%69%2B%2B%2C%69%2B%2B%29%7B%72%3D%72%2B%73%2E%73%75%62%73%74%72%69%6E%67%28%69%2B%31%2C%69%2B%32%29%2B%73%2E%73%75%62%73%74%72%69%6E%67%28%69%2C%69%2B%31%29%7D%64%6F%63%75%6D%65%6E%74%2E%77%72%69%74%65%28%27%3C%61%20%68%72%65%66%3D%22%27%2B%72%2B%27%22%3E%4F%62%65%72%70%61%72%6C%65%69%74%65%72%3C%2F%61%3E%27%29%3B'))]]></script>

    but I've heard this can also be read by crawlers, and it isn't really good practice. Are there any common approaches?
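
    A minimal sketch of one common approach (not the only one): keep the address out of the static HTML and assemble it on the client, so a crawler that doesn't run JavaScript sees nothing mailto-like. The address parts and the contact-link element id are placeholders, not from the question:

        // Build the mailto link at runtime from separate parts.
        const user = 'john';       // placeholder local part
        const domain = 'mail.com'; // placeholder domain
        const link = document.getElementById('contact-link'); // assumed <a> element
        link.href = 'mailto:' + user + '@' + domain;
        link.textContent = user + '@' + domain; // shown to users, absent from the HTML source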

  • Android: Having trouble getting HTML from a webpage

    - by Kyle
    Hi, I'm writing an Android application that is supposed to get the HTML from a PHP page and use the data parsed from that page. I've searched for this issue on here, and ended up using some code from an example another poster put up. Here is my code so far:

        HttpClient client = new DefaultHttpClient();
        HttpGet request = new HttpGet(url);
        try {
            Log.d("first", "first");
            HttpResponse response = client.execute(request);
            String html = "";
            Log.d("second", "second");
            InputStream in = response.getEntity().getContent();
            Log.d("third", "third");
            BufferedReader reader = new BufferedReader(new InputStreamReader(in));
            Log.d("fourth", "fourth");
            StringBuilder str = new StringBuilder();
            String line = null;
            Log.d("fifth", "fifth");
            while ((line = reader.readLine()) != null) {
                Log.d("request line", line);
            }
            in.close();
        } catch (ClientProtocolException e) {
        } catch (IOException e) {
            // TODO Auto-generated catch block
            Log.d("error", "error");
        }
        Log.d("end", "end");
        }

    Like I said before, the url is a PHP page. Whenever I run this code, it logs the "first" message, but then logs the "error" message and finally the "end" message. I've tried modifying the headers, but I've had no luck with it. Any help would be greatly appreciated, as I don't know what I'm doing wrong. Thanks!

  • Problem creating a webpage in an OOP pattern

    - by Starx
    I want to develop a website in an OOP pattern, but I am stuck on whether I need to inherit from multiple classes. For example, I have a main class "index". This class has several methods which need to be inherited from other classes, so I created separate classes for them, like class "banner", class "content", and class "footer". Not only that, but class "content" itself has several methods to be inherited from other classes, like class "gallery", class "news", etc. I found out that multiple inheritance is not allowed, and with an interface I cannot write code in its methods, so how can I solve this problem?

  • Using a dropdown on a static webpage as a DataSource in C#.net

    - by Matt
    I know this is a terrible way of doing things, but it's for an internal app where security is not an issue. Basically, an old group created a PHP page with a drop-down, and this drop-down is populated with entries from a DB. The DB owner is currently absent, and for the sake of time, I just need something that turns the entries in that drop-down (always at the same URL, with the same element ID on every load) into a List. Is there a quick, painless way to do this in .NET?

  • Short snippet summarizing a webpage?

    - by Legend
    Is there a clean way of grabbing the first few lines of a given link that summarize it? I have seen this done in some online bookmarking applications but have no clue how they were implemented. For instance, if I give this link, I should be able to get a summary which is roughly like:

        I'll admit it, I was intimidated by MapReduce. I'd tried to read explanations of it, but even the wonderful Joel Spolsky left me scratching my head. So I plowed ahead trying to build decent pipelines to process massive amounts of data

    Nothing complex at first sight, but grabbing these is the challenging part. Just the first few lines of the actual post should be fine. Should I just use the raw approach of grabbing the entire HTML and parsing the meta tags, or something fancy like that (which obviously and unfortunately is not generalizable to every link out there), or is there a smarter way to achieve this? Any suggestions?

    Update: I just found Instapaper does this, but am not sure if it is getting the information from RSS feeds or some other way.

  • How to properly align textboxes on a webpage with labels

    - by Grant
    Hi, this is a CSS/design question. I have three textboxes that I want to be center-aligned on a web page, with a label description to the right of each one. When I use something like text-align: center, the labels' different lengths throw off the alignment of the textboxes [see image link below]:

    http://www.mediafire.com/imageview.php?quickkey=qcyoajm2iuk

    Is there an easy way of keeping the textboxes aligned and having the labels off to the right without changing the textboxes? Thanks.
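
    A hedged sketch of one way to do this (the class names and row markup are assumptions, not from the question): center a fixed-width input column as a block, and position each label just past the input's right edge so label length never affects alignment:

        /* Each row: a fixed-width input followed by its label. */
        .form-row {
            width: 200px;        /* the input column's width */
            margin: 0 auto;      /* centers the inputs as a group */
            position: relative;  /* positioning context for the label */
        }
        .form-row input {
            width: 100%;
        }
        .form-row label {
            position: absolute;
            left: 100%;          /* starts just right of the input */
            top: 0;
            margin-left: 8px;
            white-space: nowrap; /* long labels extend right; alignment holds */
        }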

  • Sending an OK response over HTTP to a webpage request

    - by Prashant
    Hi, I am using an SMS gateway to make my application receive SMSs. For this, the SMS gateway sends a request to one of the pages in my application with the message as a query-string parameter, e.g. http://myapplication/SMSReceiver.aspx?Message=PaulaIsHome. Now, after my page gets invoked, I need to send an OK response to the SMS gateway so that it doesn't keep retrying to send the same message to my application again and again. I cannot figure out how to send the OK response. I am using ASP.NET and C#. Thanks

  • Ctrl-C does not copy text on a webpage

    - by aepheus
    I've come across this several times: Ctrl-C randomly does not copy. I think it's caused by JavaScript, or maybe some odd HTML syntax; I never spent the time to track down the cause. Does anyone know the typical/common causes of Ctrl-C not working (to copy) on a website? Speaking from a developer's standpoint: what do we developers end up doing that breaks Ctrl-C? Just to clarify, I'm not interested in preventing copying. I'm trying to do the opposite; occasionally I find I've done something that prevents Ctrl-C from copying text, and that is not very user-friendly on a text-heavy site.
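
    A hedged sketch of the usual suspects from the developer side: document-wide key or clipboard handlers that call preventDefault (often added for one widget but attached to the whole page), plus CSS user-select: none on a container, which blocks the selection copying needs. Either of the following will silently break Ctrl-C site-wide:

        // Culprit 1: a document-wide copy handler that cancels the default.
        document.addEventListener('copy', (e) => {
          e.preventDefault(); // nothing reaches the clipboard
        });

        // Culprit 2: a keydown handler that swallows Ctrl key combinations.
        document.addEventListener('keydown', (e) => {
          if (e.ctrlKey) e.preventDefault(); // Ctrl-C never triggers a copy
        });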
