Search Results

Search found 13705 results on 549 pages for 'browser'.


  • How do shared hosting, domain names and DNS work together?

    - by vtortola
    Hi, I have this little doubt but I couldn't find information about it, probably because I'm not searching for the right thing. When a browser asks for "www.mydomain.com", the DNS server returns an IP address and the browser goes there... but what happens then? That IP address could belong to a shared host that serves hundreds of web pages and domains, so how does it know where the request has to go? Is it something the web server does? Is it something I could implement in a web application? For example, I have a web application that contains accounts, and each account has a default web page. You can access that page by passing the account name, for example "www.mydomain.com/myaccount", but now I want to register "www.myaccount.com" and have it serve the "www.mydomain.com/myaccount" content. Is that possible? Kind regards.
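    The routing is done with the HTTP Host header: the browser sends the requested domain name in every request, and the web server (via name-based virtual hosts) or the application itself uses it to pick the right site. A minimal PHP sketch of doing the same thing inside a web application, assuming a hypothetical lookup table that maps registered custom domains to account names (the paths are placeholders):

        <?php
        // Hypothetical mapping of registered custom domains to account names.
        $domainToAccount = array(
            'www.myaccount.com' => 'myaccount',
        );

        // The browser tells us which domain it asked for in the Host header.
        $host = strtolower($_SERVER['HTTP_HOST']);

        if (isset($domainToAccount[$host])) {
            // Serve the account's default page without changing the visible URL.
            $account = $domainToAccount[$host];
            include '/var/www/mydomain/accounts/' . $account . '/index.php';
        } else {
            // Fall back to the normal "www.mydomain.com/myaccount" style routing.
            include '/var/www/mydomain/index.php';
        }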

    Read the article

  • Way around ASP.NET session being shared across multiple tab windows

    - by ace
    I'm storing a value in an ASP.NET session on the first page. On the next page, this session value is read. However, if multiple tabs are open and several page 1 to page 2 navigations are going on, the value stored in the session gets mixed up, since the session is shared between the browser tabs. I'm wondering what the options are around this: Query string: passing the value between the pages in the query string. I don't want to take this approach, since there can be multiple anchor tags on page 1 linking to page 2 and I cannot rewrite the URLs of each tag because they are dynamic. Cookies??? In-memory cookies are shared across browser tabs too, same as the session cookie, right? Any other option?

    Read the article

  • WatiN closes browsers for all projects that are building

    - by Scooter
    I'm having an issue running WatiN under CruiseControl.NET, where on a .ForceClose, WatiN closes all open browser instances. I have multiple projects running under CruiseControl, and it's not uncommon for some of those projects to be building and testing at the same time. On more than one occasion WatiN has closed the browser window for a different project, causing it to fail. In my local tests, creating my WatiN instance under a new process fixes the issue, but when I do that under CruiseControl I lose my IE object: "Object reference not set to an instance of an object." Running CC.NET as a service. The CC.NET server is Windows 2003 with IE6. Any thoughts?

    Read the article

  • Render an asynchronous report, wider than the screen, without extra scrollbars

    - by Dubs
    I have an asynchronous local SSRS 2005 report that is of variable height and width, but routinely is bigger than the screen. I want to render it full size so that some of the report renders off screen and the only scrollbars the user sees are the ones on the browser window. What is the best way to accomplish this? The only method that I've found that even comes remotely close to what I want is to set static width/height values that are much larger than the report will ever be. But, this is undesirable since it leaves so much extra whitespace in the browser window. Has anyone had success rendering asynchronous reports without the extra scrollbars?

    Read the article

  • How can I determine what text on a webpage will render the largest?

    - by TMG
    I'd like to write a function (ideally in PHP) where I can input a URL and get back the string of hypertext from that webpage that would render the largest in a browser (any standard browser is fine). Getting the webpage and tokenizing things with DOM is pretty straightforward, but what's the best way to calculate the final size of the rendered text tokens? How do you account for CSS that uses px, em, %, etc. for different font sizes? Has anyone done something like this before I go and reinvent the wheel? Thanks in advance.
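    Fully reproducing a browser's layout engine in PHP isn't practical, so the usual approximation is to walk the DOM, resolve each text node's effective font-size to pixels, and rank the candidates by that size (optionally weighted by text length). A rough sketch of the unit-resolution step only, assuming a 16px default and resolving em/% against the parent's computed size; keyword sizes and external stylesheets are simplified away:

        <?php
        // Convert a CSS font-size declaration to an approximate pixel value.
        // $parentPx is the computed size of the parent element (needed for em and %).
        function fontSizeToPx($value, $parentPx = 16.0) {
            $value = trim(strtolower($value));
            if (preg_match('/^([\d.]+)px$/', $value, $m)) {
                return (float) $m[1];
            }
            if (preg_match('/^([\d.]+)em$/', $value, $m)) {
                return (float) $m[1] * $parentPx;        // em is relative to the parent
            }
            if (preg_match('/^([\d.]+)%$/', $value, $m)) {
                return (float) $m[1] / 100 * $parentPx;  // % is relative to the parent
            }
            $keywords = array('small' => 13.0, 'medium' => 16.0, 'large' => 18.0);
            if (isset($keywords[$value])) {
                return $keywords[$value];
            }
            return $parentPx;  // unknown unit: assume the size is inherited
        }

        // Example: an element styled "font-size: 1.5em" inside a 16px body.
        echo fontSizeToPx('1.5em', 16.0);  // 24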

    Read the article

  • Is there a need to zero out DIVs' margin and padding?

    - by ssg
    I wonder if, in any browser, the div element comes with a preset margin/padding value other than zero. As far as I know, div and span come with zero padding and margin by default, which makes them a suitable canvas for style decoration. Even better, is there a definitive cross-browser standard for the default styles of all elements that we can make assumptions upon? For instance, FORM comes with top/bottom margins, and OL/UL come with padding-left. I occasionally see * { margin: 0; padding: 0; } and it just looks like a dirty hack without knowing the reasons or consequences. Does anyone have a better approach to this?

    Read the article

  • jQuery ajax data sent to controller is empty only in IE

    - by saman gholami
    This is my jQuery code:

        $.ajax({
            url: "/Ajax/GetConcertTime",
            type: "POST",
            cache: false,
            data: {
                concertID: concertID.replace("ct", ""),
                date: selectedDateValue
            },
            success: function (dataFromServer) {
                //some codes ...
            },
            error: function (a, b, c) {
                alert(c);
            }
        });

    And this is my controller code for catching the parameters:

        [HttpPost]
        public ActionResult GetConcertTime(string concertId, string date)
        {
            int cid = Convert.ToInt32(concertId);
            try
            {
                MelliConcertEntities db = new MelliConcertEntities();
                var lst = (from x in db.Showtimes
                           where x.Concert.ID == cid
                                 && x.ShowtimeDate.Equals(date)
                                 && x.IsActive == true
                           select x.ShowtimeTime).Distinct().ToList();
                JavaScriptSerializer js = new JavaScriptSerializer();
                return Content(js.Serialize(lst));
            }
            catch (Exception ex)
            {
                return Content(ex.Message);
            }
        }

    After debugging I know the parameters in the controller (concertId and date) are empty when I use the IE browser, but in other browsers it works properly. What should I do about this issue?

    Read the article

  • Detect what is selected (highlighted) or clicked within an element on a page?

    - by Fog Cook
    How would one go about detecting what has been selected on a page in a browser? Example: click, hold, select 3 words and 1 image on a page, release. Sub-question: how do you detect which letter someone clicked on? Without using a span injector that breaks everything up, or a WYSIWYG plugin. I'm hoping this isn't just a type of browser interaction you can't detect. There could be many uses, but my goal is a simple 'live' page editor, or at least a way to know what someone is clicking on/selecting, aside from just the id of an element.

    Read the article

  • PHP cURL POST: how to follow location

    - by One Stuck Pixel
    I am in a bit of a rut with a cURL issue. The POST works great, the data is POSTed just fine and received OK, but the URL of the posted page never appears in the browser after the cURL session is executed. For example, look at the following code:

        $ch = curl_init("http://localhost/eterniti/cart-step-1.php");
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, "error=1&em=$em&fname=$fname&lname=$lname"
            . "&email1=$email1&email2=$email2&code=$code&area=$area&number=$num"
            . "&mobile=$mobile&address1=$address1&address2=$address2&address3=$address3"
            . "&suburb=$suburb&postcode=$postcode&country=$country");
        curl_exec($ch);
        curl_close($ch);

    The POST works fine and I am taken to cart-step-1.php, where I can process the posted data. HOWEVER, the location in the URL address bar of the browser remains that of the script page, in this case proc_xxxxxx.php. Any ideas how to get the URL address to reflect the page I am actually POSTed to? Thanks a mill
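    cURL runs server-to-server, so the visitor's browser never navigates anywhere; the address bar only changes if the browser itself is sent to the target page. A minimal sketch of the usual alternative, assuming the submitted values can be handed over in the session (or re-read by cart-step-1.php itself) instead of being POSTed with cURL; proc_xxxxxx.php is the processing script named in the question:

        <?php
        // proc_xxxxxx.php
        session_start();

        // Keep the submitted values for the next page instead of POSTing them with cURL.
        $_SESSION['cart_step_1'] = $_POST;

        // Redirect the browser itself; the address bar will now show cart-step-1.php.
        header('Location: http://localhost/eterniti/cart-step-1.php');
        exit;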

    Read the article

  • How to work around a site forbidding me from scraping their images with PHP

    - by Petruza
    I'm scraping a site, searching for JPGs to download. Scraping the site's HTML pages works fine, but when I try getting the JPGs with cURL, copy(), fopen(), etc., I get a 403 Forbidden status. I know that's because the site owners don't want their images scraped, so I understand a good answer would be "just don't do it, because they don't want you to". OK, but let's say it's fine and I try to work around this: how could it be achieved? If I fetch the same URL with a browser, I can open the image perfectly, so it's not that my IP is banned or anything, and I'm testing the scraper one file at a time, so it's not blocking me because I make too many requests too often. From my understanding, it could be that the site is checking for cookies that confirm I'm using a browser and browsing their site before I download a JPG, or maybe PHP is using some user agent for the requests that the server can detect and filter out. Anyway, any ideas?
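    If it is header filtering, the guesses above are the usual suspects: a plain PHP/cURL request carries no browser User-Agent, no Referer and no cookies. A sketch of sending browser-like headers with cURL; the URLs, user-agent string and cookie-jar path are all placeholders:

        <?php
        $imageUrl = 'http://example.com/images/photo.jpg';  // placeholder image URL
        $pageUrl  = 'http://example.com/gallery.html';      // page that links to the image

        $ch = curl_init($imageUrl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        // Pretend to be a regular browser that arrived from the gallery page.
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0');
        curl_setopt($ch, CURLOPT_REFERER, $pageUrl);
        // Reuse any cookies the site set while the HTML pages were being scraped.
        curl_setopt($ch, CURLOPT_COOKIEJAR, '/tmp/scraper-cookies.txt');
        curl_setopt($ch, CURLOPT_COOKIEFILE, '/tmp/scraper-cookies.txt');

        $data   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status === 200) {
            file_put_contents('photo.jpg', $data);
        }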

    Read the article

  • Selenium selected option in dropdown not displayed correctly

    - by luckyfool
    I have the following issue with Selenium WebDriver. There are two dropdown menus on a page I am testing, "brand" and "items". The options of "items" depend on which brand you choose. I am trying to iterate through all possible choices and print brand-item pairs. I use two possible ways to pick an option from each dropdown menu. Using Select():

        def retryingSelectOption(name, n):
            result = False
            attempts = 0
            while attempts < 5:
                try:
                    element = Select(driver.find_element_by_name(name))
                    element.select_by_index(n)
                    print element.all_selected_options[0].text
                    result = True
                    break
                except StaleElementReferenceException:
                    pass
                attempts += 1
            return result

    And using .click():

        def retryingClickOption(name, n):
            result = False
            attempts = 0
            while attempts < 5:
                try:
                    driver.find_element_by_name(name).find_elements_by_tag_name("option")[n].click()
                    result = True
                    break
                except StaleElementReferenceException:
                    pass
                attempts += 1
            return result

    My problem is that at what seem to me to be random moments (sometimes it works, sometimes it does not), even though the above functions return True and printing out the selected option shows me the correct answer, the browser still displays the previous option. So basically Selenium tells me I have picked the right option, but the browser displays the previous one. No idea what is wrong.

    Read the article

  • ondevicemotion in Chrome desktop returns true

    - by Martin Klasson
    I am using a "shake" function from github - and it has a detection that is browser-based javascript. //feature detect this.hasDeviceMotion = 'ondevicemotion' in window; This though yields true even on Chrome on OS X. It feels strange, since I am not willing to shake my monitor on my desktop. Safari on OS X gives me "false" in return when testing. I have searched but not been able to find out why Chrome decided to take this path. It bugs me. Is there a better way to make this detection? Not all "mobile devices" has shake as well.. or does not let the browser have that capability, as it does not seem to work in windows phones.

    Read the article

  • Position DIV relative to containing DIV Without Moving Other Stuff

    - by yar
    [I'm not sure if this question has been asked, though I've looked around a bit.] I have a DIV inside a DIV. I would like the inner DIV to have a certain position inside the outer DIV. I'm having some success with position: absolute; top: 0px; right: 0px; but all other divs are getting moved around. I just want it to float on top of the other stuff (float didn't work, of course). Thanks! Edit: The outer div is relative, and I'd like the inner one to move with it when the browser is resized. Edit: Sorry, I've figured out the question (but not the answer): if I use right: 0px, the inner div stops moving relative to the outer div and starts moving relative to the browser window. Why would that be?

    Read the article

  • Send data to server with GET

    - by coderjoe
    I have made a site where all the high scores from my game are shown. I send the scores from my iPhone game to my site using a PHP script. The script works, because if I enter the link produced by my app in my browser the score is added. However, I want to send the scores from my app. The script uses the GET method to receive the name, scores, etcetera. This is what I have now:

        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
        NSError *e;
        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:&e];

    But it's not sending data to my server. The urlString is correct, because if I enter the link produced by my app in my browser the score is added. How can I solve this problem? Thanks in advance

    Read the article

  • Caching images with different query strings (S3 signed urls)

    - by Brendan Long
    I'm trying to figure out if I can get browsers to cache images with signed URLs. What I want is to generate a new signed URL for every request (same image, but with an updated signature), but have the browser not re-download it every time. So, assuming the cache-related headers are set correctly, and all of the URL is the same except for the query string, is there any way to make the browser cache it? The URLs would look something like: http://example.s3.amazonaws.com/magic.jpg?WSAccessKeyId=stuff&Signature=stuff&Expires=1276297463 http://example.s3.amazonaws.com/magic.jpg?WSAccessKeyId=stuff&Signature=stuff&Expires=1276297500 We plan to set the ETags to an md5sum, so will it at least figure out it's the same image at that point? My other option is to keep track of when I last gave out a URL, then start giving out new ones slightly before the old ones expire, but I'd prefer not to deal with session info.
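    A stateless variant of that last option is to round the expiry time to a fixed window when signing: every request during the window then produces a byte-for-byte identical URL, so the browser's cache can match it, and no per-user bookkeeping is needed. A rough PHP sketch with placeholder credentials; the string-to-sign follows the legacy S3 query-string scheme and should be double-checked against the S3 documentation or generated with an SDK:

        <?php
        $accessKey = 'AKIA_PLACEHOLDER';   // assumption: your AWS access key id
        $secretKey = 'SECRET_PLACEHOLDER'; // assumption: your AWS secret key
        $bucket    = 'example';
        $key       = 'magic.jpg';

        // Round the expiry to the end of the NEXT 1-hour window, so every request
        // in the current window yields the same URL and it is always 1-2 hours valid.
        $window  = 3600;
        $expires = (int) ((floor(time() / $window) + 2) * $window);

        // Legacy query-string signing (verify against the S3 docs before relying on it).
        $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$key}";
        $signature = urlencode(base64_encode(
            hash_hmac('sha1', $stringToSign, $secretKey, true)
        ));

        echo "http://{$bucket}.s3.amazonaws.com/{$key}"
           . "?AWSAccessKeyId={$accessKey}&Expires={$expires}&Signature={$signature}";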

    Read the article

  • Need help specifying an ending while condition

    - by johnthexiii
    I have written a Python script to download all of the xkcd comic images. The only problem is I can't tell it to stop when it gets to the last one... Here is what I have so far:

        import re, mechanize
        from urllib import urlretrieve
        from BeautifulSoup import BeautifulSoup as bs

        baseUrl = "http://xkcd.com/1/"    # Specify the first comic page
        br = mechanize.Browser()          # Create a browser
        response = br.open(baseUrl)       # Create an initial response
        x = 1                             # Assign an initial file name

        while (SomeCondition):
            soup = bs(response.get_data())                      # Create an instance of bs that contains the response data
            img = soup.findAll('img')[1]                        # Get the online file path of the image
            localFile = "C:\\Comics\\xkcd\\" + str(x) + ".jpg"  # Come up with a local file name
            urlretrieve(img["src"], localFile)                  # Download the image file
            response = br.follow_link(text = "Next >")          # Store the response of the next button
            x += 1                                              # Increase x by 1

        print "All xkcd comics downloaded"  # Let the user know the images have been downloaded

    Initially what I had was something like

        while br.follow_link(text = "Next >") != br.follow_link(text = ">|"):

    but by doing this I actually skip to the last page before the script has a chance to perform its intended purpose.

    Read the article

  • How to do REST with PUT and DELETE

    - by Svish
    It says about the type option of the jQuery.ajax() method that: "The type of request to make ("POST" or "GET"), default is "GET". Note: Other HTTP request methods, such as PUT and DELETE, can also be used here, but they are not supported by all browsers." So... does that mean that PUT and DELETE won't work if the browser does not support them, or just that PUT and DELETE cannot be done natively by the user in the browser? If I can't or shouldn't use those, what do people usually do instead? Send the method as a GET or POST parameter instead? Or?
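    A common fallback (popularised by Rails and used by several PHP frameworks) is exactly the parameter approach asked about: send a regular POST with an extra _method field and let the server treat it as the intended verb. A minimal PHP sketch of the server side of that convention; the _method name is just the usual convention, not a requirement:

        <?php
        // Work out the effective HTTP method, honouring a "_method" override
        // tunnelled through an ordinary POST for clients that cannot send PUT/DELETE.
        $method = $_SERVER['REQUEST_METHOD'];
        if ($method === 'POST' && isset($_POST['_method'])) {
            $override = strtoupper($_POST['_method']);
            if (in_array($override, array('PUT', 'DELETE'), true)) {
                $method = $override;
            }
        }

        switch ($method) {
            case 'PUT':
                // update the resource
                break;
            case 'DELETE':
                // remove the resource
                break;
            default:
                // handle GET/POST as usual
                break;
        }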

    Read the article

  • How can I convince IE to simply display application/json rather than offer to download it?

    - by Cheeso
    While debugging jQuery apps that use AJAX, I often need to see the JSON that is being returned by the service to the browser, so I'll drop the URL for the JSON data into the address bar. This is nice with ASP.NET because in the event of a coding error I can see the ASP.NET diagnostic in the browser. But when the server-side code works correctly and actually returns JSON, IE prompts me to download it, so I can't see the response. Can I get IE to NOT do that, in other words, to just display it as if it were plain text? I know I could do this if I set the Content-Type header to text/plain, but this is specifically in the context of an ASP.NET MVC app, which sets the response automagically when I use JsonResult on one of my action methods. Also, I kinda want to keep the appropriate content type and not change it just to support debugging efforts.

    Read the article

  • Deep Zoom is not displayed

    - by George2
    I am using VSTS 2008 + C# + .NET 3.5 + Windows Vista Enterprise x86. I have used the Silverlight Deep Zoom Composer tool to export my composed images to the Silverlight format. Everything previews fine after the successful-export message (I select browse from browser). But when I click the Test.html in the exported project to show the Deep Zoom effects in a browser, nothing is displayed. Here is my screen snapshot. Any ideas what is wrong? http://i41.tinypic.com/2dac561.jpg EDIT 1: To my surprise, there is no clientbin folder in my exported project. I have made two screen snapshots of my project folder generated by Deep Zoom Composer under the Exported Data folder, and of the content of the GeneratedImages folder under my project folder. Please refer to them here: http://i42.tinypic.com/346ncec.jpg http://i42.tinypic.com/15zqkn9.jpg Any ideas what is wrong? Thanks in advance, George

    Read the article

  • What do browsers use to auto-suggest values in web forms?

    - by nedlud
    If I come back to a web site after having filled in a form previously, the browser remembers my username (for example). I'm not talking about cookies remembering usernames and passwords, but the way a browser will suggest a value for a previously submitted field. What controls this behaviour? My issue at the moment is that I have login forms in several small apps all running under the one domain (e.g. www.example.com/app1/login/ and www.example.com/app2/login/). If I use my username for app1, then go over to app2 where I use a different username, it only ever auto-suggests my app1 username. How can I change this behaviour? Do browsers use the field's ID to help remember this stuff? If I change the ID of the fields in the login form, will they auto-suggest the correct values in future?

    Read the article

  • Is there any way to view PHP code (the actual code not the compiled result) from a client machine?

    - by Columbo
    This may be a really stupid question... I started worrying last night that there might be some way to view PHP files on a server via a browser or some other means on a client machine. My worry is that I have an include file that contains the database username and password. If there were a way to put the address of this file into a browser or some other system and see the code itself, it would be an issue for obvious reasons. Is this a legitimate concern? If so, how do people go about preventing it?
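    Normally the server executes .php files and only sends their output, so the source is never shipped to the client; the practical risk is a misconfiguration (PHP handler disabled, a stray backup like config.php.bak served as plain text, or a directory listing). The usual precaution is to keep the credentials file outside the document root and include it by filesystem path. A minimal sketch, with hypothetical paths:

        <?php
        // Assumed layout: the credentials live OUTSIDE the web root, e.g. in
        // /home/example/secrets/db-config.php containing something like:
        //     <?php return array('host' => 'localhost', 'user' => 'app', 'pass' => 'change-me');
        //
        // Because no URL maps to that path, a browser can never request it directly,
        // even if the server is misconfigured and stops parsing .php files.

        $config = require '/home/example/secrets/db-config.php';
        $db = new mysqli($config['host'], $config['user'], $config['pass'], 'mydb');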

    Read the article

  • PHP: Redirect to the same page, changing $_GET.

    - by Jonathan
    Hi, I have this PHP piece of code that gets $_GET['id'] (a number) and does some stuff with it. When it's finished I need to increase that number and redirect to the same page, but with the new number (also via $_GET['id']). I am doing something like this:

        $ID = $_GET['id'];
        // Some stuff here
        // and then:
        $newID = $ID++;
        header('Location: http://localhost/something/something.php?id='.$newID);
        exit;

    The problem here is that the browser stops me from doing it and I get this error from the browser (Firefox): "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete." Some help here please!
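    One thing worth checking (a guess from the snippet, not a certainty): $newID = $ID++ assigns the old value of $ID, because the post-increment happens after the assignment, so every redirect goes back to the same id and Firefox reports it as a loop that will never complete. A sketch of the corrected flow, with a hypothetical $maxID as the stopping condition:

        <?php
        $ID = (int) $_GET['id'];
        $maxID = 100;  // hypothetical upper bound so the redirect chain terminates

        // ... do some stuff with $ID here ...

        if ($ID < $maxID) {
            $newID = $ID + 1;  // note: $ID++ would have handed back the OLD value
            header('Location: http://localhost/something/something.php?id=' . $newID);
            exit;
        }
        echo 'Done.';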

    Read the article

  • Project hosting on Google Code. Files are cached?

    - by Frexuz
    I do not really understand how Google Code handles file versioning. I am building a jQuery plugin that anyone can access, like so: <script type="text/javascript" src="http://jquery-old-browser-warning.googlecode.com/files/jquery.browser-warning.js"></script> This script accesses other files in the same project (via ajax). The problem is that when I upload a new file, it just seems like there aren't any changes to it. Google recommends that new files should have new names, but then I would have to change the filenames that the script loads, and then I would have to change the script file as well, and that would break everybody's implementation (with the script tag above). Is there a way to force a file to change when uploading with the same filename? PS: If I go directly to the project page's file list, I do get the file with the updated content. But as I said, not when getting it through ajax.

    Read the article
