Search Results

Search found 23449 results on 938 pages for 'browser close'.

Page 178/938 | < Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >

  • How do I print unprintable web pages?

    - by user1413
    I want to print a web page that seems to be unprintable in both Firefox and Chrome: it spans multiple printed pages, but both browsers print only the first one. The only way I have found to print it is through IE with the XPS Document Writer. Is there a tool (e.g., a web browser or browser plugin) that would help? Or is there a setting I can use in Firefox or Chrome?
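
    One tool worth trying for pages like this is wkhtmltopdf, which renders the page with a WebKit engine and writes the full result to a PDF you can then print. A minimal sketch (the URL is a placeholder for the unprintable page):

        # render the whole page, however many printed pages it spans, into a PDF
        wkhtmltopdf "http://example.com/long-article" article.pdf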

    Read the article

  • Error in IE8 Win7: this tab has been recovered

    - by Sohail
    Hello everybody. I get this error every time I open IE8 on Win7: a popup appears which says "this tab has been recovered", and it keeps closing and reopening the tab so that I can't surf any page at all with IE8; after a while the program crashes and closes with no obvious error. I tried these methods to solve the problem, but none of them worked:

    1. Reset Internet Explorer settings
    2. Run IE without add-ons
    3. Scanned the system for viruses with ESET Smart Security 4; there were no viruses

    I would appreciate any suggestions, thanks.

    Read the article

  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load, and the browser console reports net::ERR_INCOMPLETE_CHUNKED_ENCODING. I can sometimes see some of the images load for a little while (1 second at most) and then they disappear.

    .htaccess:

        RewriteEngine On

        # Serve the favicon file from img folder
        RewriteCond %{REQUEST_URI} ^/favicon.ico$
        RewriteRule ^(.*)$ /img/$1 [NC,L]

        # Redirect HTTP traffic to WWW subdomain
        RewriteCond %{HTTPS} off [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        # Redirect HTTPS traffic to WWW subdomain
        RewriteCond %{HTTPS} on [NC]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]

        # Auto Versioning rules
        RewriteCond %{REQUEST_FILENAME} !-s
        RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L]

        # Default Zend rewrite rules
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    VHost:

        <VirtualHost *:80>
            ServerAdmin admin@localhost
            ServerName localhost
            DocumentRoot /home/mihai/ARTD/www/public/website

            # Omit this in production environment
            SetEnv APPLICATION_ENV local

            <Directory /home/mihai/ARTD/www/public/website>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                #Order deny,allow
                #Allow from all
                Require all granted
            </Directory>

            <IfModule mod_php5.c>
                php_value memory_limit 128M
                php_value upload_max_filesize 20M
                php_value post_max_size 20M
            </IfModule>

            ErrorLog /var/log/apache2/ARTD-error.log
            # Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/ARTD-access.log combined
        </VirtualHost>

        <IfModule mod_ssl.c>
            <VirtualHost *:443>
                ServerAdmin admin@localhost
                ServerName localhost
                DocumentRoot /home/mihai/ARTD/www/public/website

                # Omit this in production environment
                SetEnv APPLICATION_ENV local

                <Directory /home/mihai/ARTD/www/public/website>
                    Options Indexes FollowSymLinks MultiViews
                    AllowOverride All
                    #Order deny,allow
                    #Allow from all
                    Require all granted
                </Directory>

                <IfModule mod_php5.c>
                    php_value memory_limit 128M
                    php_value upload_max_filesize 20M
                    php_value post_max_size 20M
                </IfModule>

                ErrorLog /var/log/apache2/ARTD-ssl-error.log
                LogLevel warn
                CustomLog /var/log/apache2/ARTD.log combined

                SSLEngine on
                SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
                SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
                #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
                #SSLCACertificatePath /etc/ssl/certs/
                #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
                #SSLCARevocationPath /etc/apache2/ssl.crl/
                #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
                #SSLVerifyClient require
                #SSLVerifyDepth 10
                #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
                #<FilesMatch "\.(cgi|shtml|phtml|php)$">
                #    SSLOptions +StdEnvVars
                #</FilesMatch>
                #BrowserMatch ".*MSIE.*" \
                #    nokeepalive ssl-unclean-shutdown \
                #    downgrade-1.0 force-response-1.0
            </VirtualHost>
        </IfModule>

    Logs:

        Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection)
        127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
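
    As a first diagnostic, it can help to look at the raw chunked response rather than what the browser reconstructs: a body that stops before the terminating zero-length chunk confirms the server is truncating mid-transfer. A minimal check with curl (the hostname is a placeholder; --raw disables curl's decoding of the transfer encoding):

        # fetch a failing image without decoding the chunked framing
        curl -v --raw -o /tmp/test.png http://www.example.com/img/header/top-nav-separator.png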

    Read the article

  • Can't access server running CentOS 6.3 in vmware

    - by localhost
    I just installed CentOS on a VMware machine that uses a bridged connection, then installed Apache, PHP and MySQL, but when I run service httpd start I get a warning:

        Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain for ServerName

    I can connect to the server using PuTTY, so I really have no idea why it won't load in the browser.

    EDIT: httpd starts successfully and returns [ OK ]. Running netstat -tuplen | grep :80 yields:

        tcp    0    0 :::80    :::*    LISTEN    0    40392    15894/httpd

    I am able to connect with PuTTY to 192.168.0.113, but the browser says it can't connect to 192.168.0.113.
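
    On a stock CentOS 6 install the usual suspect is the default firewall, which allows SSH but not HTTP; that would match PuTTY working while the browser fails (the ServerName warning itself is harmless). A quick check and a hedged fix, using standard iptables commands:

        # look for a REJECT rule that precedes any ACCEPT on dport 80
        iptables -L -n --line-numbers

        # open TCP port 80 ahead of the reject rule, then persist the change
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save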

    Read the article

  • Is there a client for Google Talk, for Mac, that allows phone calls?

    - by xtraorange
    I'm having no luck finding a client for Google Talk that is both for Mac (OS X) and allows phone calls (not just chatting with other Google Talk users). If I leave a browser tab open on Gmail, it rings when I receive a call to my Google Voice number, and I can call out from that number, but I don't want to use the browser. Is there any sort of client, for Mac, that will allow calls through Google Talk?

    Read the article

  • Enforce using proxy in all browsers

    - by Petr Marek
    I've configured Squid with SquidGuard, and when the proxy is set in the browser it works fine. But I want to enforce use of the proxy (probably via iptables) in all browsers; right now a user can simply disable it in the browser settings. My setup: one standalone Ubuntu PC running Squid and SquidGuard, and on this very same device I want to somehow enforce use of the proxy. The Squid conf file has:

        http_port 3128 transparent

    Thanks.
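
    Since Squid is already listening in transparent mode, one common approach on a single-box setup is to redirect locally generated web traffic back into it with an OUTPUT-chain NAT rule. A minimal sketch, assuming Squid runs as the user proxy (exempting that user is what prevents a redirect loop):

        # force all local HTTP traffic through Squid on 3128,
        # except traffic generated by Squid itself
        iptables -t nat -A OUTPUT -p tcp --dport 80 \
            -m owner ! --uid-owner proxy -j REDIRECT --to-ports 3128

    Note this covers plain HTTP only; HTTPS cannot be transparently intercepted this way.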

    Read the article

  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making a web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly:

        http://api.soundcloud.com/tracks/59815100/stream

    This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example:

        http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D

    The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHttpRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHttpRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up:

        location /soundcloud/tracks/ {
            # rewrite URL to match api.soundcloud.com's URL structure
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # the 302 will redirect to /soundcloud/media instead of the original domain
            proxy_redirect http://ec-media.soundcloud.com /soundcloud/media;
        }

        location /soundcloud/media/ {
            rewrite \/soundcloud\/media\/(.*) /$1 break;
            proxy_set_header Host ec-media.soundcloud.com;
            proxy_pass http://ec-media.soundcloud.com;
        }

    So myserver/soundcloud/tracks/59815100 returns a 302 to myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com but ak-media.soundcloud.com. There are possibly more servers out there, and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step, so that myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head, so apologies if I've missed something obvious or it's beyond the scope of nginx. Thanks a lot for reading.
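
    For the second question, one pattern worth trying is to have nginx intercept the upstream 302 itself and re-proxy to whatever Location header came back, so any *-media host is handled without listing each one. A hedged sketch (proxy_pass with a runtime variable requires a resolver; the directives are standard nginx, but treat this as a starting point rather than a tested config):

        location /soundcloud/tracks/ {
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # hand 3xx responses to the named location below instead of the client
            proxy_intercept_errors on;
            error_page 301 302 = @follow_redirect;
        }

        location @follow_redirect {
            # required because proxy_pass below uses a variable
            resolver 8.8.8.8;
            # re-proxy to the temporary MP3 URL from the upstream Location header
            set $redirect_target $upstream_http_location;
            proxy_pass $redirect_target;
        }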

    Read the article

  • I can't log in to Nagios web interface

    - by M. Saâd
    When I try to log in to Nagios in my web browser at http://127.0.0.1/nagios/, no matter how many times I enter my username and password, I get this:

        Authorization Required
        This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.
        Apache/2.2.15 (Red Hat) Server at 127.0.0.1 Port 80

    I changed the password:

        htpasswd -c /etc/nagios/htpasswd.users nagiosadmin

    And restarted the server:

        service httpd restart

    But it made no difference.
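
    If the password file is being rewritten but logins still fail, it is worth confirming Apache is reading the same file htpasswd wrote. The auth stanza usually lives in /etc/httpd/conf.d/nagios.conf and looks roughly like the sketch below (paths vary by distribution, so treat them as assumptions to check):

        <Directory "/usr/lib/nagios/cgi-bin">
            AuthName "Nagios Access"
            AuthType Basic
            # must point at the file htpasswd actually modified
            AuthUserFile /etc/nagios/htpasswd.users
            Require valid-user
        </Directory>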

    Read the article

  • Surfing the internet while downloading becomes painful

    - by Mehper C. Palavuzlar
    When I try to surf the internet while downloading files (HTTP, torrent, etc.), opening web pages becomes painful, since the downloads consume most of the bandwidth. Is there a program that can manage this, so that while I surf, priority is given to the browser, and when the browser is idle, priority shifts back to the downloads? I'm using Firefox 3.6.2 on Win7, so any good Firefox addon or Windows-based freeware is acceptable.

    Read the article

  • Control-less window gets stuck in Win7

    - by dkjain
    Hi, there is dialer software (from my ISP) that connects to the ISP's Wi-Fi network. It connects to the network fine, but a control-less window gets stuck in the middle of my Win7 desktop screen and does not go away. I cannot close it or minimize it, as there are no controls on it. I want to know how to minimize or close the window without killing the app via Task Manager. Regards, DK Jain.

    Read the article

  • dd-wrt and squid proxy

    - by Soe Naung Win
    I am using a Linksys router with dd-wrt firmware and an off-site Squid proxy (over VPN) for anonymity. The problem is that I have to configure the proxy setting in my browser to use that proxy. What I would like is to have all my traffic pass through the Squid proxy via the router, without configuring anything in the browser. I can't use OpenVPN due to port blocking in my country. My Squid proxy currently listens on port 443.
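
    On dd-wrt the usual mechanism is a firewall script (Administration → Commands) that rewrites outbound web traffic toward the proxy. A rough sketch, where 203.0.113.10 is a placeholder for the proxy's address; note this only works if Squid on that box is configured for interception, and it covers plain HTTP only:

        # send LAN clients' HTTP traffic to the remote proxy listening on 443
        iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 \
            -j DNAT --to-destination 203.0.113.10:443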

    Read the article

  • How do I disable location services system wide?

    - by Daisetsu
    Google has an API which can determine someone's location based on the Wi-Fi router names a user's computer can see. You can see this in action on Google Maps: the browser asks whether you would like to share location data. I am wondering if there is any way to disable this with a system-wide setting rather than in each browser separately (Chrome can do this too). Is there any way I can limit which applications can get the list of wireless routers my machine sees?

    Read the article

  • How do I make Solr run on startup?

    - by Blankman
    I asked this question before, but the answers I got were cron-job-type answers. I want the Solr service to start when I reboot the machine, and I also want it to run by itself instead of me opening cmd.exe, running it manually, and then disconnecting my terminal session (not logging off, as that would kill it). What options do I have? I am on Windows Server 2008 with IIS7, running an ASP.NET application that uses Solr.
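
    One common route is to wrap Solr's Java process in a Windows service with a service wrapper such as NSSM, which gives you start-on-boot and detaches Solr from any console session. A sketch assuming the stock Jetty-based layout (all paths are placeholders):

        REM install a service named "Solr" that launches Jetty's start.jar
        nssm install Solr "C:\Program Files\Java\jre6\bin\java.exe" "-Djetty.home=C:\solr\example -jar C:\solr\example\start.jar"

        REM start it now; as a service it will also start automatically on boot
        net start Solr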

    Read the article

  • What is Error 324? Is it related to Google Chrome? Or Verizon Webmail?

    - by Jason Rhodes
    My in-laws are having trouble signing into their Verizon Webmail account at webmail.verizon.net, but only when they are on their wireless network. When they try to log in over wireless they get "Error 324" in the browser, in both Google Chrome and IE8. Yet they can access any other site, and they can get to their Verizon email when they plug in directly with a network cable. Does anyone know why this is?

    Read the article

  • MediaContextNotificationWindow Error Occurs When Trying to Shut Down Windows XP

    - by Psycho Bob
    I am working with a Windows XP laptop that, whenever it is shut down, displays a warning that the "MediaContextNotificationWindow" is attempting to close, eventually requiring you to force it to close. It does not appear to matter whether the laptop is on or off the network; the message still appears. I have done a little research into MediaContextNotificationWindow, and it seems to be part of either Windows Media Player or .NET 3.5, though I'm not entirely sure. Has anyone come across this error and found out how to resolve it?

    Read the article

  • How do I make mailto: links open gmail in Ubuntu?

    - by Matthew
    I've tried using this How-To Geek guide, but it doesn't work. Running the script from the terminal works (although I had to change its permissions first), but clicking a mailto: link does nothing. Note: I am using the Chromium Daily Builds as my browser. I would like mailto: links in all applications to point to Gmail, not just the ones I click in my browser.
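
    One desktop-wide approach is to register a small handler for the mailto: scheme that opens Gmail's compose view. A sketch under the assumption that a compose URL of the form https://mail.google.com/mail/?view=cm&to=... is acceptable; the script and .desktop file names are made up for this example:

        #!/bin/sh
        # ~/bin/gmail-compose.sh -- strip the mailto: prefix and open Gmail compose
        addr=${1#mailto:}
        exec xdg-open "https://mail.google.com/mail/?view=cm&to=$addr"

    Then a desktop entry at ~/.local/share/applications/gmail-compose.desktop:

        [Desktop Entry]
        Type=Application
        Name=Gmail Compose
        Exec=/home/YOU/bin/gmail-compose.sh %u
        MimeType=x-scheme-handler/mailto;

    And register it (on desktops that support scheme handlers):

        xdg-settings set default-url-scheme-handler mailto gmail-compose.desktop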

    Read the article

  • Web Service becomes unavailable after several concurrent calls

    - by Roman
    We are testing GoDaddy's Virtual Data Center and have hit a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    TEST ENVIRONMENT
    Virtual Data Center with Windows hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter, IIS 7. Server One has IP address 10.1.0.4; Server Two has IP address 10.1.0.3. Both servers are on a private network not visible from outside. A Port Forward with IP address 50.62.13.174 is assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each, with an interval of 1 second between requests. HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    The test is run from a computer in our office. Almost immediately after JMeter starts, the link above becomes unavailable in a browser, and it stays unavailable for about 5 minutes after the test completes. Remote Desktop keeps working, so we can still connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    The test is run from Server Two (which is part of our virtual data center). It works very well, with no visible delays in processing; the link stays available in a browser the whole time.

    TEST THREE
    The test is run from Server One using localhost. The result is the same as in TEST TWO: no issues.

    TEST FOUR
    We repeated TEST ONE from other computers located in different countries, all with the same result as TEST ONE.

    CONCLUSION
    Since the test works well from Server Two but not from outside our virtual data center, we suspect the network or its capacity; the behaviour looks as if requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there any chance something is wrong with our server settings?

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, I just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not deliver images larger than about 2 KB to a browser; small images display fine, and static HTML and PHP continue to work as well. Installed packages:

        apache2              2.2.14-5ubuntu8.4
        apache2-mpm-prefork  2.2.14-5ubuntu8.4
        apache2-utils        2.2.14-5ubuntu8.4
        apache2.2-bin        2.2.14-5ubuntu8.4
        apache2.2-common     2.2.14-5ubuntu8.4

    Here is an ngrep of an image request that doesn't display in the browser:

        T 192.168.0.4:32907 - 192.168.0.54:80 [AP]
        GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..
        Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 - 192.168.0.4:32907 [A]
        HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..
        Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..
        Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME.. etc...

    This looks OK to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt; it also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so (vi logo.png):

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave

    PS: just tried IE from a different computer, same problem :(
    PPS: rebooted the server, no help.
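
    The packet headers ending up inside the saved file suggest the kernel-level sendfile path is handing out bad data, a known failure mode on some virtualized NICs and network filesystems. A hedged thing to try is disabling Apache's sendfile usage (EnableSendfile is a standard Apache directive; that it is the culprit here is an assumption):

        # in /etc/apache2/apache2.conf or the affected vhost:
        # serve static files via normal read/write instead of kernel sendfile
        EnableSendfile Off

    Then reload Apache and re-test with a large image.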

    Read the article

  • Printing PDF books

    - by Robert Munteanu
    I've taken up reading some of Sonatype's Maven books, which are available online, but I find it more relaxing to read on paper than on screen. Still, loose A4 sheets do not bring me close enough to a book-reading experience. Aside from ordering a print copy, what can I do to print and bind a PDF so it is as close as possible to an actual book?
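
    One way to get closer is booklet imposition: reordering and 2-upping the pages so duplex-printed sheets fold into signatures you can staple or bind. A sketch using pdfbook from the pdfjam tools, assuming they are installed (the signature size is a matter of taste, but must be a multiple of 4):

        # reorder into 16-page signatures, two pages per printed side
        pdfbook --signature 16 maven-book.pdf
        # print the output duplex (flip on short edge), fold each signature, bind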

    Read the article

  • How can I install firefox automatically, and set some settings?

    - by James.Elsey
    Hi. I need to install Firefox on 100+ machines. I've done a few manually, but it's too much of a task to do the whole lot. I need to:

    - Install the Firefox executable
    - Set "don't import anything" (from Internet Explorer)
    - Set default browser to "no"
    - Set it not to check the default browser again
    - Copy a shortcut onto the desktop that goes to a specific URL

    I know this should be possible with an MSI package, but could I also set those settings that way? Thanks
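
    For the settings, Firefox reads locked preferences from a config file at startup, which covers the default-browser and import prompts without repackaging the installer. A sketch as a batch script, assuming the stock installer; the preference names and the -ms silent switch are standard Firefox, while file names like intranet.url are placeholders:

        REM silent install
        "Firefox Setup.exe" -ms

        set FF=%ProgramFiles%\Mozilla Firefox

        REM point Firefox at a locked config file
        > "%FF%\defaults\pref\local-settings.js" echo pref("general.config.filename", "mozilla.cfg");
        >> "%FF%\defaults\pref\local-settings.js" echo pref("general.config.obscure_value", 0);

        REM first line of mozilla.cfg must be a comment
        > "%FF%\mozilla.cfg" echo // locked defaults
        >> "%FF%\mozilla.cfg" echo lockPref("browser.shell.checkDefaultBrowser", false);

        REM skip the import-from-IE wizard on first run
        > "%FF%\override.ini" echo [XRE]
        >> "%FF%\override.ini" echo EnableProfileMigrator=false

        REM drop a prepared URL shortcut on the shared desktop
        REM (%ALLUSERSPROFILE%\Desktop on XP, %PUBLIC%\Desktop on Vista/7)
        copy intranet.url "%ALLUSERSPROFILE%\Desktop\"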

    Read the article
