Search Results

Search found 13797 results on 552 pages for 'browser madness'.

Page 105/552

  • How to display programs, started by TSWA RemoteApp, inside a browser instead of directly on the desktop

    - by richardboon
    For those not familiar with Terminal Services Web Access and Resulting Internet Communication in Windows Server 2008, here's a brief overview: technet.microsoft.com/en-us/library/cc754502(WS.10).aspx The problem I am trying to solve can be seen in the picture at step 16, where the application is displayed directly on the desktop [see link below]: http://blogs.technet.com/askcore/archive/2008/07/22/publishing-the-hyper-v-management-interface-using-terminal-services.aspx I am in the process of setting up Terminal Services Web Access RemoteApp for our company. Users only want RemoteApp and need to see the remote program running within (contained inside) the browser. They don't want to see or access the whole desktop [as is the case with Remote Desktop, which can be displayed inside a browser].

    Read the article

  • OMG. Is Webmin safe? I can see file contents in the Chrome browser without logging in

    - by Arwana
    When I'm in Webmin's File Manager, I can double-click a file and see its contents in a new tab in Firefox at a specific URL. But when I remove ?rand=xxxx... after the file.php and paste the URL into the Chrome browser, I can still see the file contents. This is the URL I just pasted into the Chrome browser: http://xxx.xxx.xxx.xxx:10000/file/show.cgi/var/www/html/mysite.com/files/file.php Even after I log out of Webmin, or swap file.php for another file, I can still see the contents. OMG. Is Webmin safe? And how do I secure this?

    Read the article

  • Lotus Notes won't open a hyperlink/URI in a browser!

    - by Andy O
    I'm still new to Lotus, but I have LN 8.0.2 and have it set up to open hyperlinks "using the browser I have set as the default for this operating system", which in my case is Firefox 3.6.13. If I try to select "use the browser embedded in this client", it just forgets the setting after I click Save (i.e. if I go back into Preferences it has reset to the "default for this operating system" option). I'm on Windows XP SP3. Whenever I click a link in an email/memo, it does nothing except flash an underscore underneath it. It's so basic and so annoying that there must be something I'm missing. Any help please!

    Read the article

  • How do I get a PDF link in a Word document to open in the default browser?

    - by Tweek
    I'm trying to create a Word document with links to resources on the web. If I create a hyperlink to a regular HTML file, when I click on the link, it opens in my default browser (Google Chrome) as expected. However, if I click on a link to a PDF file on a website, it opens in Internet Explorer. Before it opens, I also get the following prompt:

        Microsoft Office
        Opening http://www.example.com/example.pdf
        Some files can contain viruses or otherwise be harmful to your computer.
        It is important to be certain that this file is from a trustworthy source.
        Would you like to open this file?
        [OK] [Cancel]

    I'm using Office 2010, but I'm asking for a user who is using Office 2007 and is experiencing the same issue. (His default browser is Firefox.) We're both on Windows 7.

    Read the article

  • Mozilla nonsense. Page changes size by itself

    - by Browser Madness
    I have never intentionally changed the font size in the latest Mozilla Firefox install on my Windows machine. For example, the Google site is now at 200% size, and I did nothing to make this happen. What's worse is that it does not change back but remembers this! Similarly, other sites are too small, and the browser remembers this per site. What is going on here? I mean, what nonsense! How can I undo this? And for extra points, who came up with this absurd behavior at Mozilla? Not making this up, folks. Firefox 15.0.1. It's not at all clear why it changes size or how to go back to the default size for these sites. Actually, it just happened again while editing this entry: the icon changed and then the font size was too small.

    Read the article

  • How to host my own cloud so that videos are viewable via desktop web browser?

    - by jake9115
    I want to host my own cloud storage solution, something like Dropbox but entirely dependent on my own central machine. This way things are more secure if set up correctly, and there are no artificial storage limitations or pay-walls. Something similar to ownCloud: http://owncloud.org/ There is one important feature I want to have: the ability to stream movies in a web browser from my personal cloud to anywhere in the world. In the past I tried this with a NAS: I mapped XBMC to the NAS via SFTP, and certain media types could stream in this manner. I've also used things like Plex. In this case, I am looking for a single solution for personal cloud storage and movie streaming from that cloud into a web browser. Does anyone know if this can be accomplished? Thanks for the suggestions!

    Read the article

  • Can I pass data via jQuery to an ASP.NET MVC controller action and have a view rendered in a new browser tab

    - by Simon Lomax
    Hi, can anyone advise whether it's possible to pass data via jQuery to an ASP.NET MVC controller action and have a view rendered in a new browser tab, based on the model data passed to the action method? My scenario is that I have a jqGrid populated with product info on a page. The user ticks the items in the grid that they would like a label produced for. After they've made their selection they click a button, and I would like (if possible) to render a view that contains a label for each selected item, and have that view open in a new browser tab. All the code that handles the selections and posts the relevant data back to the action method is working fine, and I know it's easy to use the jQuery $(selector).load() command to populate an element on the current page with the HTML returned from the action. But is it possible to have the result rendered in a new browser tab? If so, how would I go about it? Hope this makes sense.
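
    One common workaround, sketched below rather than taken from the question itself, is to skip the AJAX call for this particular action and submit a plain form with target="_blank", so the browser itself opens the rendered view in a new tab. The button id, grid id, field name and the /Labels/Print action URL are all hypothetical, and the grid is assumed to be a multiselect jqGrid:

        // Build a throwaway form, copy the selected row ids into it,
        // and post it to the (hypothetical) label action in a new tab.
        $('#printLabels').click(function () {
            var ids = $('#productGrid').jqGrid('getGridParam', 'selarrrow'); // selected rows (multiselect grid)
            var $form = $('<form>', {
                method: 'POST',
                action: '/Labels/Print',   // hypothetical controller action returning the label view
                target: '_blank'           // the returned view renders in a new tab
            });
            $.each(ids, function (i, id) {
                $form.append($('<input>', { type: 'hidden', name: 'ids', value: id }));
            });
            $form.appendTo('body');
            $form.submit();
            $form.remove();
        });

    On the server side, the repeated hidden "ids" fields would bind to an array parameter on the action in the usual MVC way.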

    Read the article

  • how to autocenter jquery ui dialog when resizing browser?

    - by Jorre
    When you use the jQuery UI dialog, everything works well except for one thing: when the browser is resized, the dialog just stays in its initial position, which can be really annoying. You can test it out on http://jqueryui.com/demos/dialog/ by clicking the "modal dialog" example and resizing your browser. I'd love to be able to have dialogs automatically re-center when the browser resizes. Can this be done in an efficient way for all the dialogs in my app? Thanks a lot!
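
    One possible approach, a rough sketch assuming a jQuery UI version where 'center' is an accepted value for the dialog position option, is to re-center every open dialog from a single window resize handler:

        // Re-center every visible jQuery UI dialog whenever the window is resized.
        $(window).resize(function () {
            $('.ui-dialog-content:visible').each(function () {
                $(this).dialog('option', 'position', 'center');
            });
        });

    Because it targets the generic .ui-dialog-content class, the handler covers all dialogs in the app without per-dialog wiring.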

    Read the article

  • How to change the default Help browser for VS2010?

    - by Scott Bilas
    Visual Studio 2010 changed the help system to run a little daemon and launch the system default web browser to view it. I'm using Firefox for my system browser but would like to use Chrome for VS help. Is there an option to change the Help browser that I'm not seeing in Tools|Options? If not, is there a workaround or registry setting to do this? As a backup I've been using H3Viewer but I'd like to be able to get context-sensitive F1 help from within the VS IDE.

    Read the article

  • How to automate testing of a browser-based app?

    - by mawg
    If it were a Windows program, I would use AutoIt to automate testing. Is there something similar for browser-based apps? Nothing too complex: it should just allow scripting (preferable, for me, to macro recording) to simulate human interaction with the browser, which means being able to identify fields of a form by name, inject text into some, simulate mouse clicks on others, etc., and then, after submitting a form, it should be able to read the text of certain named controls and check the status of others (checked, radio group index, read-only, etc.). While I do appreciate a full-featured product, I don't appreciate a steep learning curve, so something as simple as the scripting in AutoIt would be fine. I don't know if it makes a difference which browser is used, but I could live with MSIE 6 or higher (maybe 7 or higher at a push).
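
    For reference, a minimal sketch of what such a script can look like using the selenium-webdriver bindings for Node.js (just one option among several; the URL, field names and element id below are hypothetical):

        // npm install selenium-webdriver
        const { Builder, By, until } = require('selenium-webdriver');

        (async () => {
            const driver = await new Builder().forBrowser('firefox').build();
            try {
                await driver.get('http://example.com/login');                   // hypothetical page
                await driver.findElement(By.name('username')).sendKeys('bob');  // fill a field by name
                await driver.findElement(By.name('password')).sendKeys('secret');
                await driver.findElement(By.name('submit')).click();            // simulate a click
                await driver.wait(until.elementLocated(By.id('status')), 5000); // wait for the result
                const status = await driver.findElement(By.id('status')).getText();
                console.log('Status after submit:', status);                    // read back a control
            } finally {
                await driver.quit();
            }
        })();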

    Read the article

  • Flash: Correctly handling click-and-drag outside the browser?

    - by David Wolever
    What's the correct way to detect, from Flash, when someone has started a drag within the browser (e.g., on a MOUSE_DOWN event), dragged the mouse outside the browser window, released the button, and then moved the mouse back over the browser? (For example, imagine StackOverflow were a Flash application.) I've tried the "obvious" thing, checking event.buttonDown in the MOUSE_MOVE handler, but even though the mouse button is up, event.buttonDown is still true at step 2 above. So, is there any other way to check the "real" status of the mouse button? Or any other way to handle this situation?

    Read the article

  • Default controller name is removed on browser refresh (CodeIgniter PHP/Nginx issue?)

    - by tim peterson
    For all pages in my CodeIgniter app except the default controller, refreshing the browser leaves the URL unaffected, as one would expect. However, for my default controller, main.php, when I refresh the browser at "http://localhost/main" or "http://mysite.com/main", the main part is stripped off the URL, so the browser bar shows just "http://localhost" or "http://mysite.com". I'm totally lost on where to start with this, but was wondering if anyone has come across it before. Here's what I think could be the relevant part of my nginx.conf (if Nginx is the problem):

        if ($request_uri ~* ^(/main(/index)?|/index(.php)?)/?$) {
            rewrite ^(.*)$ / permanent;
        }

    Read the article

  • jQuery: What listener do I use to check for browser auto filling the password input field?

    - by Jannis
    Hi, I have a simple problem that I cannot seem to find a solution to. Basically, on this website here: http://dev.supply.net.nz/vendorapp/ (currently in development) I have some fancy label animations sliding things in and out on focus and blur. However, once the user has logged in once, the browser will most likely remember the password associated with the user's email address/login. (Which is good and should not be disabled.) But I run into issues triggering my label slide-out animation when the browser sets the value of the #password field automatically, as the event for this is neither focus nor blur. Does anyone know which listener to use to run my function when the browser auto-fills the user's password? Here is a quick screenshot of the issue:
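
    Browsers don't fire a standard event when they autofill a field, so one common workaround is to poll the field briefly after page load and trigger the animation manually when a value appears. A rough sketch, where slideLabelOut() is a hypothetical stand-in for whatever the blur handler normally does:

        // Poll #password for a few seconds after load; autofill happens without focus/blur.
        $(function () {
            var $pwd = $('#password');
            var checks = 0;
            var timer = setInterval(function () {
                if ($pwd.val() !== '') {
                    slideLabelOut($pwd);        // hypothetical: run the usual label slide-out
                    clearInterval(timer);
                } else if (++checks > 20) {     // give up after ~5 seconds
                    clearInterval(timer);
                }
            }, 250);
        });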

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such it's clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable: the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S

    To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :)

    Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nutshell, the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values.

        [BrowserTheory]
        [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")]
        public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider)
        {
            //Arrange
            _browser = seleniumProvider.GetBrowser();
            _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");

            //Assert
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);
            Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() helper methods. The (inevitable) problem arose when it came to executing these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up.
    Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately, the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices:

        new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");

    This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page.

    Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this:

        [Test]
        public void Purchase1UserLicenceNoSupport()
        {
            //Arrange
            ISelenium browser = BrowserHelpers.GetBrowser();
            var db = DbHelpers.GetWebsiteDBDataContext();
            browser.Start();
            browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

            //Assert
            Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed, the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test.
    With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in AssemblyInfo.cs using the following:

        [assembly: DegreeOfParallelism(3)]
        [assembly: Parallelizable(TestScope.All)]

    The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes to 55 minutes; that's an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that inefficiencies in the hub code, my test environment or the test runner code (or, most likely, some combination of all three) contribute to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RCs with the hub and up the DegreeOfParallelism.

    *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command-line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.

    Read the article

  • Only IE Browser gives org.springframework.web.multipart.MultipartException: The current request is not a multipart request

    - by Vicky
    I am trying to upload a file using the jQuery fileupload.js plugin and Spring. The upload code works fine in the Mozilla and Chrome browsers, but I get the following error only in the IE browser:

        org.springframework.web.multipart.MultipartException: The current request is not a multipart request
            at org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.assertIsMultipartRequest(RequestParamMethodArgumentResolver.java:183)
            at org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.resolveName(RequestParamMethodArgumentResolver.java:149)
            at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:82)
            at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:74)

    Read the article

  • WatiN LogonDialogHandlers not working correctly in Windows 7

    - by Clint
    I have recently updated to Windows 7, VS2010 and IE8. We have an automation suite running tests against IE using WatiN. These tests require the logon dialog handler in order to log different AD users into the IE browser. This works perfectly when using Windows XP and IE8, but on Windows 7 the Windows Security dialog box is no longer recognised; the dialog box is simply ignored. This is the method being used to start up the browser:

        public static Browser StartBrowser(string url, string username, string password)
        {
            Browser browser = new IE();
            WatiN.Core.DialogHandlers.LogonDialogHandler ldh =
                new WatiN.Core.DialogHandlers.LogonDialogHandler(username, password);
            browser.DialogWatcher.Add(ldh);
            browser.GoTo(url);
            return browser;
        }

    Any suggestions or help would be greatly appreciated.

    Read the article

  • How do I get a screenshot of a given website using C#

    - by Ender
    I'm writing a specialised crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as bitmap images; from there I plan to use LockBits in order to create a list of the five most used colours within each image. To my knowledge it's the easiest way to get the colours used within a web page, but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme?

    EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. Form.ShowDialog() was recommended, but that broke everything. Can anyone give me a hand with this code?

        public static void buildScreenshotFromURL(string url)
        {
            int width = 800;
            int height = 600;
            using (WebBrowser browser = new WebBrowser())
            {
                browser.Width = width;
                browser.Height = height;
                browser.ScrollBarsEnabled = true;
                // This will be called when the page finishes loading
                browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted);
                //browser.DocumentCompleted += OnDocumentCompleted;
                browser.Navigate(url);
                // This prevents the application from exiting until Application.Exit is called.
                // Application.Run() does not work as it cannot be called twice; Form.ShowDialog()
                // was recommended but still gives issues.
                Application.Run();
            }
        }

        public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            // Define size of thumbnail needed
            int thumbSize = 50;
            // Now that the page is loaded, save it to a bitmap
            WebBrowser browser = (WebBrowser)sender;
            using (Graphics graphics = browser.CreateGraphics())
            {
                using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics))
                {
                    Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
                    browser.DrawToBitmap(bitmap, bounds);
                    Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero));
                    thumbBitmap.Save("screenshot.png", ImageFormat.Png);
                    handleImage(thumbBitmap);
                }
            }
        }

    Read the article


  • how to prevent download popup when we click on jnlp link from browser because that jnlp is already installed

    - by Narasimha Rao
    I have a requirement to open a Swing application through a JNLP link in the browser. Once I click the JNLP link, my application is downloaded and installed on the local system. But if I go back to the browser and click the JNLP link again, it asks me to download it again. My problem is that if a user clicks the link again, it should not ask for a download, because the application is already installed on the local system. Please do the needful, very urgent. Regards, Narasimha

    Read the article

  • Browsers ignoring hosts file

    - by madkris
    Recently my browsers started to ignore my hosts file. I have Windows 7 installed. The hosts entry is:

        192.168.0.5 livesite.com

    I have tried:

        - Clearing the browser cache
        - Issuing "ipconfig /flushdns" from the command line
        - Issuing "ping livesite.com" from the command line (response was "Reply from 192.168.0.5: bytes=32 time=1ms TTL=128")
        - Restarting the unit
        - Backing up the original hosts file and making a new one
        - Checking lmhosts.sam (everything is commented out)
        - Connecting directly to the modem using a cable
        - Checking HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath
        - Trying it on another laptop with exactly the same specs as mine

    Then I tried:

        - Changing the entry to "127.0.0.1 livesite.com" (ping ok, browser ok)
        - Changing the entry to "192.168.0.5 livesite.com" (ping ok, browser ok but only for a second)
        - Issuing "ipconfig /flushdns" from the command line (ping ok, browser not ok)
        - Changing the entry to "127.0.0.1 livesite.com" (ping ok, browser ok)
        - Changing the entry to "192.168.0.5 livesite.com" (ping ok, browser not ok)
        - Issuing "ipconfig /flushdns" from the command line (ping ok, browser not ok)

    Any idea why it worked for a moment? Or better yet, anything I haven't tried or some error I may have overlooked?

    Read the article

