Search Results

Search found 15731 results on 630 pages for 'browser tabs'.


  • Some sites won't load on Ubuntu/Mint

    - by Or W
    I have a REALLY weird problem with either my network or my OS. Last week I suddenly had difficulty loading some websites or, even more oddly, some parts of different websites. For example, I could load gmail.com, log in and view the list of emails in my inbox, but when I clicked one of them it would just time out. Another example is http://www.ynet.co.il: I can view the home page, but opening any of the articles fails (times out). I've tried Chrome, Firefox and Opera, and all fail the same way. If I take the URL of a page I cannot load in the browser and wget it through the console, I get the file just fine. I formatted my machine (it used to run Ubuntu 13.04) and installed Linux Mint this time; it worked fine for a few days and now I'm having the same exact issues again. It's important to note that I have other machines connected to the router, either directly or via Wi-Fi, and they are all working fine (two Win7 machines and one Raspberry Pi). Another strange behavior: I can FTP or SSH to remote machines, but I cannot send files via FTP (it times out) even with passive mode on, and over SSH I can do just about anything except paste text into the remote machine; for example, if I nano a file on the remote machine and try to paste anything from my clipboard, it freezes.

    What I've tried so far:
    - Disabled IPv6 in the network admin tool (and disabled IPv6 in Firefox on the about:config page)
    - Changed the router port and the network cable
    - Bought a new standalone PCIe network adapter
    - Connected my Win7 laptop using the same cable and router port (sites that are not working on Mint work just fine on the Win7 machine)
    - Booted Mint from a live CD and got the same result
    - Tried changing the MTU (was 1500, tried 1492)

    Some observations:
    - When I clear my browser cache and go to facebook.com, for example, the home page loads but I fail to load any profile/group page. If I refresh the facebook.com home page a couple of times it stops loading until I clear the browser cache again. I changed the Chrome cache folder permissions to 0777, but that did not help.
    - When I run netstat -n I see A LOT of connections in the FIN_WAIT state (I'm guessing that's from the pages that time out); I have no idea what that means or whether it helps anyone figure out what's wrong.
    - The sites that fail to load are always the same; they don't vary, and they fail in exactly the same way in all three browsers I've tried.
    - When I Googled 'Ubuntu some sites not loading' I found a huge number of complaints just like mine, but none of them that I could find actually says what the problem is or how they fixed it.

    Technical stuff: netstat -n, ps aux, netstat -nr
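    Since changing the MTU was already on the list of things tried, here is a minimal diagnostic sketch for checking whether a path-MTU problem is involved; the interface name eth0 and the values used are assumptions:

        # Count connections stuck half-closed
        netstat -n | grep -c FIN_WAIT

        # Find the largest payload that passes without fragmentation;
        # 1472 = 1500 minus 28 bytes of IP/ICMP headers -- lower it until the ping succeeds
        ping -c 3 -M do -s 1472 www.ynet.co.il

        # Temporarily lower the interface MTU and retest the failing pages
        sudo ip link set dev eth0 mtu 1400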

  • Download a website that requires log-in with HTTrack Copier

    - by H.Moss
    Hi guys! I have been researching how to download the content of a site that requires a username and password. This is actually harder than I thought it would be. I tried using HTTrack Copier and followed the instructions below, but it's not working!

    Q: I cannot access several pages (access forbidden, or redirect to another location), but I can with my browser. What's going on?

    A: You may need cookies! Cookies are specific data (for example, your username or password) that are sent to your browser once you have logged in to certain sites, so that you only have to log in once. For example, after having entered your username on a website, you can view pages and articles, and the next time you go to this site you will not have to re-enter your username/password. To "merge" your personal cookies into an HTTrack project, just copy the cookies.txt file from your Netscape folder (or the cookies located in the Temporary Internet Files folder for IE) into your project folder (or even the HTTrack folder).
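    As a sketch of the cookie step from the quoted answer, on a Unix-like system; all paths, the project name and the target URL are assumptions:

        # Export the logged-in session's cookies to Netscape cookies.txt format first
        # (for example with a browser add-on), then drop the file into the project folder
        cp ~/Downloads/cookies.txt ~/websites/my-mirror/cookies.txt

        # Mirror the site; HTTrack picks up cookies.txt from the project directory
        httrack "http://www.example.com/" -O ~/websites/my-mirror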

  • Can you change the icon of a pinned IE 9 web application? And how do you do it?

    - by Rick Roth
    In IE 9 you have the ability to click and drag an open browser tab to the Windows 7 taskbar and pin the shortcut there. This creates a pseudo-application experience: the shortcut can have its own custom jump list and is not grouped with the other IE 9 browser tabs on the taskbar. Windows uses the "shortcut icon" or "favicon" defined in the HTML for the icon on the taskbar; if no shortcut icon is defined, the generic IE shortcut icon is used. If you have a bunch of these shortcuts pinned to the taskbar without different icons, it can be confusing which is which. Can you change the icon of a pinned IE 9 web application? And how do you do it?

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slow to generate, and it seems that Apache holds on until it has the complete page, compresses it, and then sends it to the browser. Big chunks of the page (the main important bits) are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
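    For reference, a quick way to check whether compressed output is being streamed or buffered whole; this is only a sketch, and the Debian-style a2enmod helper, the host name and the page path are assumptions:

        # Enable on-the-fly compression via mod_deflate (Debian/Ubuntu layout assumed)
        sudo a2enmod deflate
        sudo apache2ctl graceful

        # -N turns off curl's own buffering, --compressed asks for gzip;
        # a first-byte time well below the total time means data is being streamed
        curl -N --compressed -s -o /dev/null -w 'total: %{time_total}s, first byte: %{time_starttransfer}s\n' http://localhost/slow-page.php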

  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to let a Windows XP machine authenticate to Squid (on Linux) using Kerberos, without the need for an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I have only found tutorials about configuring AD with Squid, not an environment with only Linux servers. Thanks.

    Update: The Kerberos setup is done correctly; the proxy and the client can both get tickets. In the browser (FF/IE), I get:

    ERROR Cache Access Denied
    While trying to retrieve the URL: http://www.google.com/
    The following error was encountered:
    * Cache Access Denied.
    Sorry, you are not currently allowed to request: http://www.google.com/ from this cache until you have authenticated yourself.

    On the Kerberos side, I get:

    squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59).
    squid_kerb_auth: parseNegTokenInit failed with rc=101
    squid_kerb_auth: received type 1 NTLM token

    This message is strange, as I didn't configure NTLM. It looks like the browser is using the wrong authentication method.
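    A minimal command-line check that the Negotiate path itself works, leaving the browser out of the picture; the realm, proxy host and port are assumptions:

        # Obtain a ticket for the test user and confirm it is cached
        kinit user@EXAMPLE.COM
        klist

        # Ask curl to do SPNEGO/Negotiate against the proxy;
        # if this fetches the page, the Squid/Kerberos side is working and the
        # browser side (falling back to NTLM) is the likely culprit
        curl -v -x http://proxy.example.com:3128 --proxy-negotiate -U : http://www.google.com/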

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04 instead of a single master server. I cloned the master server and recreated it as two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html, which talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config dumps out a load of config information. When I try to connect with the admin bind credentials:

    ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

    # extended LDIF
    #
    # LDAPv3
    # base <> (default) with scope subtree
    # filter: (objectclass=*)
    # requesting: ALL
    #

    # search result
    search: 2
    result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
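    For what it's worth, on Ubuntu's packaged slapd the cn=config database is normally only readable by the local root user over the ldapi:/// socket using SASL EXTERNAL, not by the ordinary database admin DN, so a sketch of a query that should show it (run on the server itself):

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config dn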

  • Why is my Google Chrome home page changing spontaneously?

    - by Bob
    Periodically, my Google Chrome home page (which I originally set myself) changes spontaneously: on opening the browser it instead displays the page of small rectangular thumbnails of web pages I have visited before. In addition, the bookmarks bar and my home page icon have disappeared. I am not redirected to any other website, as might happen if a virus or malware were involved. I can change all of these settings back to my originals (nothing prevents me from resetting them) and can continue to browse with no difficulty, but this has happened several times in the past month. I had assumed this was an idiosyncrasy of how Chrome (mis)behaves, but I worry whether a virus or malware might be involved. I run antivirus and antimalware software with periodic complete scans; occasionally these find Trojans and other malware, which are removed, but this browser behavior keeps recurring. I would like to prevent the surprises. Thank you.

  • Importing Delicious export into Firefox 3

    - by Jordan Reiter
    The HTML export file from Delicious creates a file that, while importable into Firefox, loses all of the tags, which pretty much misses the point of Delicious. Every method I've found for doing the import doesn't work:

    - The online Delicious-to-Firefox 3 converter keeps throwing a server error; the "better" version is down.
    - I tried the trick of syncing Delicious bookmarks to Flock and then restoring that file into Firefox. Although the bookmarks are in Firefox, they don't show up anywhere. I know they're in Firefox because when I create a backup file they're in the JSON file, but they refuse to show in the browser anywhere (clicking on any tag shows an empty list).

    Basically, I'm looking for someone who has successfully imported their Delicious bookmarks, with tags, into Firefox 3.6.13 (Mac).

  • SSH Proxy (SOCKS) through remote computer - TCP & DNS

    - by Moz Morris
    My problem: I need DNS to be resolved through my remote machine. I have a machine REMOTE that I can access from LOCAL via SERVER. REMOTE can access a host TARGET_HOST, which is set up in REMOTE's hosts file like so:

    123.123.123.123 TARGET_HOST

    I want to be able to access TARGET_HOST from LOCAL (in the browser and in my application). I have set up a 'proxy' like so:

    LOCAL to SERVER: ssh -L 4567:LOCAL:4568 user@SERVER
    SERVER to REMOTE: ssh -D 4568 user@REMOTE

    LOCAL's network config is set up to use a proxy on localhost through port 4567. So everything is great and I can see TARGET_HOST in my browser. The problem is that DNS doesn't resolve from LOCAL, and therefore some code in my application fails. Can anyone help me? Can anyone suggest a better method?
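    As a sketch of one simpler arrangement that also pushes DNS lookups to the far end; the host names come from the question, the local port 1080 is an assumption, and -J needs a reasonably recent OpenSSH:

        # One hop from LOCAL: jump through SERVER and open a SOCKS5 proxy that ends on REMOTE
        ssh -J user@SERVER -D 1080 -N user@REMOTE

        # socks5h:// (note the h) makes the proxy end, i.e. REMOTE, resolve the name,
        # so REMOTE's hosts file entry for TARGET_HOST is honoured
        curl --proxy socks5h://127.0.0.1:1080 http://TARGET_HOST/

    In Firefox the equivalent switch is the network.proxy.socks_remote_dns preference; in general the application has to ask the SOCKS5 proxy to resolve names for the remote hosts file to be consulted.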

  • Seeking webcam with own IP address

    - by Mawg
    I am not sure if this is the correct place to post, but I couldn't find anywhere better. I am looking for a webcam/video camera, and this site seems more addressed to still images, but here goes. I am looking for a small, rugged webcam that can be hand-held or perhaps head-mounted (failing that, I can settle for a tripod mount, but I need portability). It must have its own IP address so that anyone can view its video stream from a web browser. I would prefer two-way voice communication too. Encryption of the data is nice, as is compression (H.264), and maybe also remote control, like zooming from the browser. Does anyone have any suggestions, even suggestions of somewhere else to look? Thanks very much in advance.

  • PHP on IIS7 not showing pages

    - by Jeff
    I have a PHP website on a Windows 7 machine that cannot be viewed in any browser: IE, Chrome or Firefox. When navigating to the root of the website (the default index.php), the browser reports that it cannot find the address: not a 404 error from the web server, but as if it cannot resolve the name. Other PHP websites in the same default web application work perfectly. I've aligned all the folder permissions and everything else, but this has got me stumped. I even went as far as creating a new folder and throwing in a test phpinfo() page, and it worked; then I copied this website's content into the new folder and it cannot find the index.php page. I've checked every setting I know of and can't seem to find what I'm missing. Has anyone else encountered this issue? Do you remember the fix for it?

  • I can't work locally unless connected to the internet - how to fix?

    - by Rodney
    Hi. In Firefox, when I am disconnected from the net, I want to work locally against my local IIS server (Windows XP, Firefox 3.5.10). I do NOT have Work Offline checked, but Firefox says that it cannot find my site (i.e. the message Firefox shows if you try to access an online site while offline). This applies to any localhost URL. I tried 127.0.0.1 and checked my hosts file; that does not work either. If I check Work Offline, Firefox shows the message that the site cannot be reached because Work Offline is checked, and unchecking it does not help. Then I load up Safari, copy and paste the URL into that browser, and it connects to my development localhost site. It is not just browser caching, as I can log in etc. So Firefox will not let me develop locally unless I am connected to the internet, which is a problem. Suggestions please?

  • What is AddType application/x-httpd-php-source

    - by egor
    I have Apache 2.0, PHP 5.2.4, and this directive in httpd.conf:

    AddType application/x-httpd-php-source .php .php3 .php4 .php5 .php6

    The AddType directive maps the given filename extensions onto the specified content type; that is its only meaning. So why does this line switch off the PHP handler assigned to the .php extension, so that I can view the source code of scripts in my browser? And another:

    AddType application/x-httpd-php5 .php

    Why does this switch the PHP handler on? It should simply send the header "Content-Type: application/x-httpd-php5" to my browser, and that should be the only effect of mod_mime's AddType directive. I'm confused. Thanks for your replies.
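    For context, with mod_php these are not ordinary MIME types: the PHP module registers itself for types such as application/x-httpd-php (run the script) and application/x-httpd-php-source (show syntax-highlighted source), so AddType with those values effectively selects a handler rather than a Content-Type sent to the client. A quick way to see what the browser really receives; the host and path are assumptions:

        # The Content-Type of an executed script is whatever PHP sends
        # (text/html by default), not application/x-httpd-php
        curl -sI http://localhost/info.php | grep -i '^Content-Type'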

  • Domain Computers Not Listed In Network

    - by Giawa
    Our network computers are all joined to a domain, and I can see them if I search Active Directory (I can click 'Search Active Directory', select 'Computers', click Find Now, and all of the computers appear). However, the computers are not listed in the network browser on any of our machines (Windows XP, Windows 7, Linux, etc.) that are connected to the domain. The DC is running Windows Server 2008 (Windows Server Standard) with configured DNS and DHCP servers. All of the IPs on our local network are static, although I can't see how that would make a difference. I can still connect to computers on the network via \\computer_name, but I cannot browse them in 'Network' or 'My Network Places'. The Computer Browser service was not started on the DC, but I tried starting it and it had no effect. The DC currently has its firewall turned off to try to debug this problem. Thanks in advance.

  • Can't connect to YouTube from specific network

    - by Tyilo
    On my current network I am unable to connect to http://www.youtube.com/. It doesn't matter what browser I use or whether I use a CLI command (wget, curl).

    Error in Google Chrome: Oops! Google Chrome could not connect to www.youtube.com
    Error using curl: curl: (7) couldn't connect to host

    If I use nslookup to get the IP address of YouTube, I get 173.194.32.32. If I go to http://173.194.32.32/ in my browser it can connect, but as Google is probably checking the Host HTTP header, it shows Google's front page instead. There are no blocked websites on the router, and other devices on the network seem to work. My computer only has this problem on this specific network. I am using Mac OS X 10.8.2 on a MacBook (mid 2009).
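    A few checks that help separate a DNS or hosts-file problem from a blocked route; this is only a sketch, and the cache-flush command shown is the OS X 10.7/10.8 variant:

        # Is something locally overriding the name, and what does DNS return right now?
        grep -i youtube /etc/hosts
        nslookup www.youtube.com

        # Flush the local DNS cache (10.7/10.8 syntax)
        sudo killall -HUP mDNSResponder

        # Talk to the resolved IP while still sending the proper Host header
        curl -v --resolve www.youtube.com:80:173.194.32.32 http://www.youtube.com/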

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I set up about 3 months ago, and I have just started having a strange problem in the last few weeks. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears not to finish serving all the resources for that page: Firefox shows the "Transferring data from servername..." message and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resource. (Not sure why Firebug doesn't show this.) It doesn't happen on every page request; for pages where most of the resources are cached in my browser, or for very light pages, the request works with no problems. However, if I "hard" refresh the page I will have this problem (probably because all the page resources are requested). Does anyone know what this could be? It is so strange that it has only just started happening, and I did not make any changes to my system that I am aware of. I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference.

    UPDATE: I have been doing some more tests, serving the most basic of pages, just a plain HTML file:

    <html>
    <body>
    <h1>testing</h1>
    </body>
    </html>

    If I request this page multiple times in a row, and each request occurs immediately after the previous one has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, there is no problem. This correlates with what I have observed when the browser requests a real application page: when the browser has nothing cached, all of the page resources are requested in a short amount of time, and this appears to trigger the problem.

    UPDATE 2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird; it is like the server has a hiccup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server: the server still has active threads for each previously attempted connection, but they just sit there, not sending any data and never terminating (even though the client is now closed). Only a restart of the server seems to terminate them.
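    As a sketch of the back-to-back reproduction described in the first update; this assumes a Unix-like shell (e.g. Cygwin or Git Bash) is available on the XP machine and that test.html sits in the document root:

        # Ten requests with no pause between them; a stalled request shows up as
        # HTTP 000 after the 10-second cap instead of a quick 200
        for i in $(seq 1 10); do
          curl -s -o /dev/null --max-time 10 -w "request $i: HTTP %{http_code} in %{time_total}s\n" http://localhost/test.html
        done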

  • Program broke my windows. Everything randomly minimizes even after multiple restarts

    - by Xitcod13
    I just installed LoL (League of Legends) on my computer. The problem is that almost every time after I play the game, it minimizes all my windows, which is really annoying. What is worse, afterwards other programs do this as well. The internet browser doesn't minimize, but it does lose focus: for example, when I type it randomly loses focus and I have to click on the browser window again so that it registers my typing. Is there any way to fix this? What can cause such strange behavior? In addition, this problem persists even after restarting my computer. I previously thought it was the network driver, but I no longer think so; this happened right after I started playing the game and seems pretty persistent. Please read the comments for further information. This pretty much renders my computer useless; even typing this takes a huge effort.

  • Escaping query strings with wget --mirror

    - by Jeremy Banks
    I'm using wget --mirror --html-extension --convert-links to mirror a site, but I end up with lots of filenames in the format post.php?id=#.html. When I try to view these in a browser it fails, because the browser ignores the query string when loading the file. Is there any way to replace the ? character in the filenames with something else?

    The answer of --restrict-file-names=windows worked correctly. In conjunction with the flags --convert-links and --adjust-extension/-E (formerly named --html-extension, which also works but is deprecated), it produces a mirror that behaves as expected:

    wget --mirror --adjust-extension --convert-links --restrict-file-names=windows http://www.example

  • Can't see any YouTube videos

    - by André
    I have a problem watching YouTube videos: it says "An error occurred, please try again later." I've tried loading different videos, and that's what it says for every video I try to watch. I've tried using another browser, clearing the cache and cookies, etc., but none of that really worked. My operating system is Windows 7 Home Premium and I use Google Chrome as my browser. I was able to watch YouTube videos earlier. I suspect it has something to do with this PC, since I've had YouTube working on my laptop before (not sure if it still works there, though). I hope I've given enough information to help me out with this problem; feel free to ask if there's anything else you need to know. Can anyone help me out?

  • multiple domains, one static IP address and latency

    - by shirish
    How is latency affected when multiple domains use one single static IP address? The scenario is shared web hosting, and by latency I mean the DNS lookup the client has to do. As far as I understand it, the browser hits the root servers to figure out the IP address and where it belongs, and then when the request reaches the correct server, the server presumably looks up some sort of table to determine which site name matches and serves that site to the user via the browser. Is my understanding correct, backwards, or what?
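    For illustration, name-based virtual hosting on the server side is driven by the Host header the browser sends, not by a per-site IP, so the per-domain cost is the DNS lookup for that hostname (which browsers and resolvers cache). A small sketch; the domains and IP below are made up:

        # Two sites sharing 203.0.113.10: the same IP answers differently
        # depending purely on the Host header
        curl -s -H 'Host: www.site-a.example' http://203.0.113.10/ -o /dev/null -w 'site-a: %{http_code}\n'
        curl -s -H 'Host: www.site-b.example' http://203.0.113.10/ -o /dev/null -w 'site-b: %{http_code}\n'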

  • How can I make full-screen desktop applications only cover the snapped desktop?

    - by nhinkle
    In Windows 8, you can "snap" two apps next to each other, and one of those apps can be the legacy Windows desktop environment. A convenient application of this (or so I thought) would be to snap a chat client, small browser or other app alongside while watching content in full screen on the desktop. The problem is that full-screen desktop applications take over the entire screen, even if the desktop is snapped to occupy only 3/4 of the display. What I would like is some way to force "full screen" desktop apps to cover only the snapped desktop area, and to go truly full screen only if the desktop is snapped to full width. Is there some way to configure this? If that didn't make sense, let me illustrate with pictures: the first screenshot shows the desktop in snapped view with SU chat in a mini-browser, the second shows what happens when you click "full screen", and the third (digitally altered, not a real screenshot) shows what I want to happen when I click "full screen".

  • nginx status code 200 and 304

    - by Chamnap
    I'm using nginx + Passenger, and I'm trying to understand the nginx responses 200 and 304. What do they both mean? Sometimes it responds with 304 and other times with 200. Reading the YUI blog, it seems the browser needs the "Last-Modified" header to verify the resource with the server, and I'm wondering why the browser needs to verify the last modified date. Here is my nginx configuration:

    location / {
        root /var/www/placexpert/public;   # <--- be sure to point to 'public'!
        passenger_enabled on;
        rack_env development;
        passenger_use_global_queue on;
        if ($request_filename ~* ^.+\.(jpg|jpeg|gif|png|ico|css|js|swf)$) {
            expires max;
            break;
        }
    }

    How would I add the header "Last-Modified" to the static files? Which value should I set?
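    For background: a 200 means the full response body was sent, while a 304 is the reply to a conditional request (If-Modified-Since / If-None-Match) and tells the browser its cached copy is still valid, so no body follows. nginx adds Last-Modified (taken from the file's mtime) to static files it serves from disk itself; requests handled by Passenger get whatever headers the application sets. A small sketch to observe both cases; the URL and date are assumptions:

        # Plain request: expect 200 plus Last-Modified/ETag validators on a static file
        curl -sI http://localhost/stylesheets/app.css

        # Conditional request: a 304 with no body means "use the cached copy"
        curl -sI -H 'If-Modified-Since: Thu, 01 Jan 2015 00:00:00 GMT' http://localhost/stylesheets/app.css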

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have several development servers and a production server; testing is done on the development servers, and as a QA engineer I switch between them quite often throughout the day. In Chrome, I sometimes need to reload a page a few times to get it to pull from the newly switched server. In Firefox, I sometimes need to quit the browser entirely to get it to pull from the newly switched server. (We have small tags on the pages that indicate which server you are pulling from, which is how I know in-browser.) Why does that happen? I'd love to know how it happens (maybe what it's called?) and the best way to deal with it. (I know Firefox has an extension for domain switching; is that the best solution?)

  • Programs still opening websites through Google Chrome, despite its removal

    - by Russsell Feldman
    Even after I've uninstalled Google Chrome, when other programs want to open a website (e.g. Yahoo! Messenger opening a profile) they still attempt to do so through Chrome, and fail because they can't find it. I've read all the advice on how to make Firefox or IE the default browser on Windows 7. I don't think Google would do this sort of "hijack the default browser" thing, and I'm convinced it must be a trojan, a virus or even a registry hack. If so, any ideas on how I would go about fixing this without buying every antivirus/antitrojan program until it is removed? That method could be an expensive fix.

  • URI Scheme, launch program in its directory

    - by ZaKlaus
    I have registered a URI scheme for my app. When I open it with "Run..." or in a browser, it runs with the caller's working directory; for example, if I open the URL from a web page, the program's working directory is the browser's. What do I want? I want the program test.exe located at C:\data\test.exe to run with C:\data as its working directory, so it can use other data via relative paths, i.e. test.exe should be able to access the file .\file.txt without using an absolute path. I hope I wrote that understandably; sorry for my bad English.
