Search Results

Search found 13940 results on 558 pages for 'chromium browser'.


  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without being processed. I want to deny access to all files on the website for all IPs but mine. How should I do that? Here's the config I have:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;     ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler;  ## If missing pass the URI to front handler
                expires 30d;                    ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler {  ## Common front handler
                rewrite / /index.php;
            }

            location ~ .php/ {  ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*.php)/ $1 last;
            }

            location ~ .php$ {  ## Execute PHP scripts
                if (!-e $request_filename) { rewrite / /index.php last; }  ## Catch 404s that try_files miss
                expires off;  ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;  ## See /etc/nginx/fastcgi_params
            }
        }
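
    A likely explanation: nginx serves each request from exactly one location block, so /index.php is handled by "location ~ .php$", which has no allow/deny rules of its own; the rules inside "location /" are never consulted for it. A minimal sketch of one common fix (keeping my.public.ip as the placeholder from the question) is to lift the access rules to the server context, where every location inherits them unless it declares its own:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            ## Inherited by all locations below that define no allow/deny of their own
            allow my.public.ip;
            deny all;

            location / { ... }          ## unchanged
            location ~ .php$ { ... }    ## now also restricted
        }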

    Read the article

  • Different external ip addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). On it I have two external (internet) connections from two ISPs: the primary is eth2, connected to a cable modem, and the secondary is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); this is done through ClearOS's multiwan web interface. I have a Nagios test to monitor whether the primary connection is in use, based on the result of curl ifconfig.me. But ifconfig.me always reports the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary internet's (ppp0) IP address, yet whatismyipaddress.[com|org] give my primary IP address (eth2). I checked the default route on the router with ip route list 0/0, which shows the primary connection (eth2) as the default route. traceroute www.google.com and traceroute ifconfig.me both seem to go out through the primary connection (eth2). As our secondary connection has only a limited download allowance, I don't want to end up paying a large sum at the end of the month. Does anybody have an idea why ifconfig.me shows my secondary address? And what is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
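
    Multi-WAN setups commonly balance outbound connections across links per destination or per flow, so two "what is my IP" sites can legitimately see different addresses. To make the Nagios check deterministic, curl can be pinned to a link explicitly (a sketch; interface names as in the question):

        curl --interface eth2 ifconfig.me   # should print the cable ISP's address
        curl --interface ppp0 ifconfig.me   # should print the DSL ISP's address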

    Read the article

  • Pinging computer name in LAN results in public IP?

    - by Bob
    Hi, I recently introduced a new machine to my LAN. The computer name for this machine is 'server'. Historically I've been able to access machines on my home network (from a web browser or RDP) using the machine name, and it resolves to a local IP address just fine. However, I can't seem to do this anymore. When I ping the computer name, I get the following:

        C:\Users\Robert>ping server

        Pinging server.router [67.215.65.132] with 32 bytes of data:
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=23ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54
        Reply from 67.215.65.132: bytes=32 time=24ms TTL=54

    I notice also that it appends the 'router' suffix to the name for some reason; 'router' is the name of my router, obviously. I'm also using OpenDNS as my DNS provider (configured through my router, so it gets passed down through DHCP). Why is this not working for me? Can someone explain how the DNS resolution should take place? For LAN resolution, it shouldn't go straight to OpenDNS. I thought that each Windows machine kept its own sort of "mini DNS server" that knows about all machines on the local network, and that it first tries to resolve using that. Please let me know what I can do to get this working!
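
    Worth noting: 67.215.65.132 belongs to OpenDNS, which by default answers unresolvable names with the address of its own "guide" server. That is consistent with 'server.router' not existing in any real zone and the query escaping the LAN. A few diagnostic commands to confirm where resolution is happening (a sketch; substitute your router's actual address):

        :: likely answered by OpenDNS with its guide IP
        nslookup server.router
        :: ask the router's own resolver directly
        nslookup server 192.168.1.1
        :: the trailing dot suppresses the DNS suffix search
        ping server.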

    Read the article

  • Remote access to phpMyAdmin from a computer on the same LAN

    - by Charles
    OK... I solved it. It was because I had not configured httpd.conf to make the CentOS box listen on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer, on the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help?

    config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access a webpage stored on the CentOS box from my other computer without problems. Using wireshark and tcpdump, I found that the server (the CentOS box) keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is my CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have already disabled the iptables service on the CentOS box.
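
    As the opening resolution says, the RST on port webcache (8080) simply means nothing was listening there yet. A quick way to verify which ports httpd is actually bound to after editing httpd.conf (standard commands on CentOS 6; a sketch):

        netstat -tlnp | grep httpd    # which ports is Apache listening on?
        service httpd restart         # pick up the new Listen directives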

    Read the article

  • Very Slow DSL (ethernet) speed [New Interesting Update]

    - by Abhijit
    Very IMPORTANT and INTERESTING UPDATE: For some reason I decided to do a completely new setup, and this time I decided to again have openSUSE plus Ubuntu. So I first reinstalled Lubuntu and then installed openSUSE 12.2 (64-bit). Now my DSL speed is working very normally on openSUSE. So this is very scary. Is it possible for an operating system to manipulate my NIC so that it works fine only on that operating system and not on another? Regarding positive thinking and not being paranoid: what is it that makes ONLY SUSE get my NIC to work at normal speed, while Ubuntu cannot? Nor Fedora? Nor Linux Mint? What are all these OSes lacking that lets SUSE work great? == ORIGINAL QUESTION == I 'was' on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed decreased, down to the range of 7-10-13-20-25 kbps. Then I switched to Linux Mint, and then to Fedora. Still slow. When I was on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, but this time with a DIFFERENT ISP, and I am still getting very slow speed. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC's life end over time, as with a keyboard or mouse? Help please. All the OSes I used are 64-bit and my laptop is a Compaq Presario A965Tu, Intel Centrino Dual Core. An interesting thing to notice is that I get normal speed while downloading torrents inside torrent clients. The slow speed applies to downloads from any web browser and to installing software from the terminal.
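
    Since torrent clients are fast while browser and terminal downloads crawl, one thing worth comparing across the distributions is the NIC's negotiated link and offload settings, which different drivers or defaults can configure differently. A diagnostic sketch (assuming the interface is eth0):

        ethtool eth0                           # negotiated speed and duplex
        ethtool -k eth0                        # offload settings (tso, gro, ...)
        sudo ethtool -K eth0 tso off gro off   # example toggle to test against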

    Read the article

  • SAFE MODE Restriction in effect. The script not allowed to access directory owned by uid

    - by user57221
    I am running a dedicated server with multiple websites. I have created a global directory of common scripts for all websites, rather than repeating them in every website's directory. How can I make this global directory accessible to all websites? I am getting the following error:

        Warning: require_once() [function.require-once]: SAFE MODE Restriction in effect. The script whose uid is XXXX is not allowed to access /vhosts/globallibrary/Zend/Application.php owned by uid XXXX

    I changed the ownership of the global directory for website X, so it works fine for X. Later I added another website, Y, and now I am getting the same error again. If I change the CHOWN for website Y, then website X gets the same error. I don't want to disable the safe mode restriction. Is there a workaround so that this global directory is accessible by all websites? I get the above error in my browser when I try to access the global directory. The global directory is at the same level as all the other websites. Is it good practice to enable safe mode for websites?
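
    PHP's safe mode has an escape hatch for exactly this layout: directories listed in safe_mode_include_dir are exempt from the UID/GID check when files are pulled in via include/require. A sketch for php.ini (this applies to PHP before 5.4, where safe mode still exists; path taken from the error message):

        ; skip the safe-mode UID check for the shared library
        safe_mode_include_dir = /vhosts/globallibrary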

    Read the article

  • What can prevent a Server 2008 machine accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself; unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall, hosted with GoDaddy. I just cannot get the server to access itself, and get this error:

        Windows cannot access \\SERVER\SHARE
        Check the spelling of the name... etc.

    I've found numerous questions about this but no answer to my problem:

        Server 2008 Standard x64 SP2
        Workgroup - not domain
        Windows Firewall is off
        Computer Browser service is on
        I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in - or double-clicking. Neither works.
        Machine has a single network card
        File sharing wizard says the share was OK
        Share was showing under 'Computer Management'
        Permissions are set to 'Everyone' full control
        No obvious errors in the event log
        Reboot didn't fix it

    Unfortunately I cannot try to access other shares into or out of this machine, because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. I don't think it is this, because we have a 2003 Server machine behind a different hardware firewall and that one works fine. What on earth is left?!
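
    A few loopback-specific checks worth running from an elevated prompt (a sketch using the names from the question; note the share itself is served by the "Server" service, LanmanServer, not by Computer Browser):

        :: is TFS-BUILDS actually exported, and is the Server service running?
        net share
        sc query lanmanserver
        :: try mapping explicitly, then bypass name resolution entirely
        net use \\MYMACHINE\TFS-BUILDS
        net use \\127.0.0.1\TFS-BUILDS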

    Read the article

  • Can't get php+sqlite working

    - by facha
    Hi, everyone. I've been struggling all morning to make PHP work with an SQLite database. Here is the piece of PHP code that I try to execute:

        # less /var/www/html/test.php
        <?php
        $db = new PDO("sqlite:/var/www/test.sql");
        $sql = "insert into test (login,pass) values ('login','pass');";
        $db->exec($sql);
        ?>

    Here is how I've done the tests:

        # sqlite3 /var/www/test.sql
        sqlite> create table test (login varchar,pass varchar);
        # chown apache:apache /var/www/test.sql
        # chmod 644 /var/www/test.sql

    Here is the stuff that drives me mad: when I execute the script from the command line (php test.php), everything goes well. The SQL is executed and I can see a new row appear in the database. When I execute the same script from a browser, the SQL is not executed; I don't get a new row in the database, and there are no errors in the Apache log file. Please, help.
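
    Two likely culprits, given that the CLI run works (as root) while the Apache run fails silently: SQLite needs write access to the directory containing the database (it creates a journal file next to it), and PDO swallows errors unless told to throw. A diagnostic sketch (on distributions with SELinux enabled, httpd may additionally be blocked from writing outside its allowed contexts):

        <?php
        // Make PDO raise the real error instead of failing silently.
        $db = new PDO("sqlite:/var/www/test.sql");
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $db->exec("insert into test (login,pass) values ('login','pass');");
        // Note: the apache user needs write permission on /var/www itself
        // (for the journal file), not just on test.sql.
        ?>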

    Read the article

  • HAProxy crashes on all requests in 1.5-dev12

    - by Daniel Hough
    I'm having an issue where HAProxy crashes with no explanation when I switch from 1.4.12 to 1.5-dev12. The reason I'm switching is the SSL offloading. My config file doesn't have any errors; it's quite simple and it works well with 1.4. But for some reason, when I run it with 1.5-dev12, I see log lines noting that my two backends have been set up, and then when I hit one of the frontends I get an HTTP 400 in the browser and HAProxy is suddenly no longer running when I check. I understand that a common workaround for the lack of SSL support in HAProxy is to use Stud, and I may go with that since I need an SSL solution for my service, but before I delve into that world I thought I'd see if anybody has experienced the same problem and might know how to fix it. The server is Ubuntu 10.04 and I followed the make instructions on the Exceliance blog here. EDIT: On the advice of Kyle Brandt, I did a bit more investigation. I attached gdb to the haproxy process, and when the crash occurred this is what I got:

        Program received signal SIGSEGV, Segmentation fault.
        0x0804e5c2 in dequeue_all_listeners (list=0x9e1a418) at src/protocols.c:184
        184         list_for_each_entry_safe(listener, l_back, list, wait_queue) {

    P.S. HAProxy is awesome, so thank you Exceliance for providing us with something so useful :)

    Read the article

  • ping incorrectly pinging 127.0.0.1

    - by AlexW
    I've got an odd DNS issue. I'm running a dual IPv4/IPv6 environment on Linux. Pinging some sites results in ping pinging 127.0.0.1, e.g.:

        #> ping authserver.mojang.com
        PING authserver.mojang.com (127.0.0.1) 56(84) bytes of data.
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.045 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.043 ms
        64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.058 ms

        --- authserver.mojang.com ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2000ms
        rtt min/avg/max/mdev = 0.043/0.048/0.058/0.010 ms

    Dig, however, correctly returns the following:

        # dig authserver.mojang.com

        ; <<>> DiG 9.9.3-P2 <<>> authserver.mojang.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15800
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

        ;; OPT PSEUDOSECTION:
        ; EDNS: version: 0, flags:; udp: 512
        ;; QUESTION SECTION:
        ;authserver.mojang.com.     IN  A

        ;; ANSWER SECTION:
        authserver.mojang.com.  5   IN  A   54.235.119.47

        ;; Query time: 14 msec
        ;; SERVER: 2001:4860:4860::8888#53(2001:4860:4860::8888)
        ;; WHEN: Sat Nov 09 15:34:40 GMT 2013
        ;; MSG SIZE  rcvd: 66

    I'm confused! My web browser returns the correct website, and the same computer booted into Windows also works correctly.
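
    The split between ping and dig points at the name service switch rather than DNS itself: ping resolves through the system resolver (which consults /etc/hosts first), while dig asks the DNS server directly. A diagnostic sketch:

        getent hosts authserver.mojang.com   # what the resolver library actually returns
        grep -i mojang /etc/hosts            # any stale override, e.g. from a launcher or blocklist?
        sudo nscd -i hosts                   # flush nscd's hosts cache, if nscd is running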

    Read the article

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for example.com, but some people will still hit the old IP after I change the domain's address. Both machines run CentOS 5.8 with iptables, and nginx as the webserver. I want to forward all traffic that still hits server 1.1.1.1 over to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
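
    One probable bug: -s matches the source address of incoming packets, which is the visitor's IP, never 1.1.1.1, so the DNAT rule never fires. Matching on the destination instead should do what the tutorial intends (a sketch with the same placeholder addresses):

        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE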

    Read the article

  • Best way to run site through https on server which can't add additional certs

    - by penguin
    So I'm in a curious situation: I am using a particular server to host things which I can't host anywhere else (it has access to user databases etc. which can't otherwise be accessed). I've been in quite a bit of discussion with the sysadmin, and it looks like the only way to run our site www.foo.com over https may be through some sort of proxy. Currently, users go to www.foo.com and are redirected to https:// host-server.com/foo, as there is an SSL cert installed on that. I want users to be on https:// www.foo.com. I'm told that for various reasons it's going to be very difficult to add an additional SSL cert to the host server. So I was wondering if it is possible to have the DNS records point to a new server, which then establishes the HTTPS connection with the browser, forwards requests to https:// host-server.com/foo, and feeds the replies back to the original requester. Does this make sense? And would it be at all feasible? My experience with SSL is limited at best, so thanks in advance for your help :) ps gaps in hyperlinks as ServerFault was getting unhappy with the number of links I was posting!
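
    What's described is a TLS-terminating reverse proxy, and it is entirely feasible. A minimal sketch in nginx (nginx chosen arbitrarily; hostnames from the question, certificate paths hypothetical):

        server {
            listen 443 ssl;
            server_name www.foo.com;
            ssl_certificate     /etc/nginx/ssl/www.foo.com.crt;  # cert for www.foo.com only
            ssl_certificate_key /etc/nginx/ssl/www.foo.com.key;

            location / {
                proxy_pass       https://host-server.com/foo/;   # re-encrypted upstream leg
                proxy_set_header Host host-server.com;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }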

    Read the article

  • Share one ssl certificate between multiples vhost

    - by Cesar
    I have a setup like this:

        <VirtualHost 192.168.1.104:80>
            ServerName domain1
            DocumentRoot /home/domain/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            ServerName domain2
            DocumentRoot /home/domain2/public_html
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:80>
            DocumentRoot /home/domain3/public_html
            ServerName domain3
            ...
        </VirtualHost>

        <VirtualHost 192.168.1.104:443>
            ServerName domain3
            SSLCertificateFile /usr/share/ssl/certs/certificate.crt
            SSLCertificateKeyFile /usr/share/ssl/private/private.key
            SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
            ...
        </VirtualHost>

    I want to use domain3's certificate for the other domains, preferably without having to repeat the whole <VirtualHost 192.168.1.104:443> config. In other words, I want something like this: if a vhost has no explicit SSL config, use the cert for domain3 (/usr/share/ssl/certs/certificate.crt).

    Notes:
    1.- I will for sure be setting up more vhosts in the future
    2.- I know (and don't care) about the SSL warnings the browser will show (hostname mismatch)

    Is this possible? How?
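
    Since this setup has one IP and (apparently) no SNI, Apache can only ever present one certificate on 192.168.1.104:443 anyway, so a single shared HTTPS vhost with ServerAlias entries gets the same effect without repeating the config per domain. A sketch (SSLEngine on is assumed to be part of the elided "..." above):

        <VirtualHost 192.168.1.104:443>
            ServerName domain3
            ServerAlias domain1 domain2
            SSLEngine on
            SSLCertificateFile /usr/share/ssl/certs/certificate.crt
            SSLCertificateKeyFile /usr/share/ssl/private/private.key
            SSLCACertificateFile /usr/share/ssl/certs/bundle.cabundle
            # Serving different content per name would still need
            # mod_rewrite or similar, since one vhost has one DocumentRoot.
        </VirtualHost>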

    Read the article

  • Macports install of ack doesn't create correct executable

    - by user1664196
    I am trying to install the p5-app-ack port from MacPorts, but it seems it doesn't create a /opt/local/bin/ack binary at the end:

        $ sudo port search *app-ack
        Password:
        p5-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.8-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.10-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.12-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.14-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.16-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories

        Found 6 ports.

        $ perl --version

        This is perl 5, version 12, subversion 4 (v5.12.4) built for darwin-thread-multi-2level

        Copyright 1987-2010, Larry Wall

        Perl may be copied only under the terms of either the Artistic License or the
        GNU General Public License, which may be found in the Perl 5 source kit.

        Complete documentation for Perl, including FAQ lists, should be found on this
        system using "man perl" or "perldoc perl". If you have access to the Internet,
        point your browser at http://www.perl.org/, the Perl Home Page.

        $ sudo port install p5-app-ack
        --->  Computing dependencies for p5-app-ack
        --->  Cleaning p5-app-ack
        --->  Updating database of binaries: 100.0%
        --->  Scanning binaries for linking errors: 35.0%
        --->  No broken files found.
        $
        $ ls /opt/local/bin/ac*
        /opt/local/bin/ack-5.12  /opt/local/bin/aclocal  /opt/local/bin/aclocal-1.12  /opt/local/bin/activation-client  /opt/local/bin/acyclic
        $ which ack
        $ ack
        -bash: ack: command not found

    Update: If I then try to install p5.12-app-ack afterwards, I get:

        $ sudo port install p5.12-app-ack
        Password:
        --->  Computing dependencies for p5.12-app-ack
        --->  Cleaning p5.12-app-ack
        --->  Scanning binaries for linking errors: 100.0%
        --->  No broken files found.
        $
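
    The ls output shows the script did get installed, just under a versioned name, ack-5.12. A workaround sketch until the port provides an unversioned link (paths taken from the output above):

        sudo ln -s /opt/local/bin/ack-5.12 /opt/local/bin/ack
        # or, per-user only:
        echo "alias ack='ack-5.12'" >> ~/.bash_profile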

    Read the article

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html, and placed it in /var/www (the document root). I am able to access this web page from my host OS (Windows 7) by pointing the browser to http://192.168.2.2/index.html, where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various configurations of .htaccess files, I created a new directory in /var/www called members. Inside this directory, I created and placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created the directory path /var/www/Neon/auth/ and then placed a .htpasswd file inside it. To put the username and hash inside the .htpasswd file, I created a username "neon", calculated the DES hash of a password, and placed them inside .htpasswd in the format username:hash. Now, when I try to access the web page http://192.168.2.2/members/, it does not prompt me to enter the username and password with a popup box. Instead it just displays the index.html placed inside the members directory. I would like to get this configuration working :)
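
    Two things stand out. First, Ubuntu's stock Apache config ships AllowOverride None for /var/www, which makes Apache ignore .htaccess files entirely; second, AuthUserFile points at /www/Neon/auth/.htpasswd while the file was created under /var/www/Neon/auth/. A sketch of the fixes (the sites-available filename may differ by setup):

        # /etc/apache2/sites-available/default
        <Directory /var/www/>
            AllowOverride AuthConfig    # the default None silently disables .htaccess
        </Directory>

        # members/.htaccess - match the real path
        AuthUserFile /var/www/Neon/auth/.htpasswd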

    Read the article

  • How to configure default text selection behavior in Windows XP, 7? (e.g. mouse click selects entire word vs. mouse click inserts an active cursor)

    - by Mouse of Fury
    I find the mouse click behavior of Windows XP and Windows 7 annoying and intrusive. I don't remember Windows NT being quite this bad, or Mac OS 7-10, which I used in the nineties. When I'm using a browser and I click on a text field (for example, the address bar or a search box), the first thing that happens is the entire field is selected. Subsequent clicks seem to select parts of words, often deciding arbitrarily to exclude or include adjacent punctuation. The same happens in Excel and other apps, and when trying to rename files, so I'm assuming this behavior comes from a system-wide text handling routine. I frequently want to edit text, cutting out or replacing odd parts of the insides of words or chunks of sentences, and I often find that to get a simple insertion cursor I have to click the mouse up to 4 times in succession. I've had to do a lot of this recently and it has been driving me insane. Is there a place at the system level where this can be configured? In a perfect world, I'd like a single click on a new text area to insert a cursor point, and a rapid double click to select the entire area. Words or text within the area could be selected by inserting a cursor, holding down the mouse button, and dragging to the exact point where I want the selection to end, even if that's in the middle of a word. No, I don't need or want Windows to "smart select" a word or sentence for me. I've looked in the Mouse and Accessibility Options control panels (Windows XP) and haven't found anything even close. Thanks.

    Read the article

  • Exchange 2003: Accounts with only OWA access unable to change passwords when expired or forced

    - by radioactive21
    We have accounts with only OWA access, because they are generic accounts and we do not want them used as machine logins. We have a password policy that users must change their passwords every 6 months. The problem we are having is that since the accounts do not log into machines, when the password policy kicks in it prevents users with OWA-only access from changing their password. Selecting "User must change the password at next logon" causes the same issue. We have two Exchange servers, the main one and a front-end one. What we have been doing with these generic accounts is, in properties, under the "Account" tab, restricting "Log on to" to the front-end server. Just to clarify: when we have no restrictions, users can change their passwords via the web without any issues; it is only when we force them to log in only via OWA that they can't change passwords. I tried adding our domain controller and main Exchange server to "This user can log on to the following computers" in the Account tab, but it still does not allow them to change passwords. Currently I have to reset the passwords for OWA-only accounts manually. Is there any way to allow OWA accounts to change passwords? EDIT: Users restricted to OWA only can change their password via the web browser without any issues when there are no restrictions. In other words, normally they can just log into Outlook via the web and change their password, but when the password policy expires their password, or we force them to change it at next login, they are unable to.

    Read the article

  • Nginx Server Block Not Working? - Other vhosts already running, just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for 5 or so sites. But I've just tried adding another and for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.

    Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;

            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    Emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
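
    One detail that stands out: the glibc resolver does not expand wildcards in /etc/hosts, and in any case a hosts entry on the server is invisible to Chrome running on another machine, whose own resolver must find the name in public DNS (or its own hosts file). A sketch:

        # /etc/hosts needs explicit names, one per entry - wildcards are ignored:
        xxx.xxx.xx.xxx  subdomain.example.com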

    Read the article

  • Apache2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running with no problems, but all of a sudden requests have just stopped responding. They don't time out as such; the browser just keeps waiting forever. Nothing is recorded in either the error log (set to debug level), the access log, or Windows' Event Log. The problem showed up when I added a new vhost and restarted; however, a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error-free. I've also disabled vhosts and tried with just localhost. I've tried to telnet to the web server, and it connects, but nothing happens: the prompt just goes blank, I can't type anything, and I effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the entire thing just to check it wasn't the cause. Still the same. If I stop Apache, however, the request fails immediately. I've uninstalled and reinstalled Apache in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port, but nothing different. Does anybody have any suggestions to fix this? Or to perhaps try and figure out whether it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this and I've been searching for hours but not found anything of use to me. Cheers, Adam
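
    The symptom set (TCP connects, nothing is served, nothing is logged) matches the long-known AcceptEx problems Apache has on some Windows systems. A long-shot sketch for httpd.conf on Apache 2.2, worth trying before deeper debugging:

        Win32DisableAcceptEx    # fall back from the AcceptEx winsock call
        EnableSendfile Off
        EnableMMAP Off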

    Read the article

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et al.) and renders its contents in the browser. For example, this URL to a PDF:

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to the Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of the Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature?

    P.S. Here is a sample snippet for an embedded document. This is what I want, but pointing to an existing Google Docs document:

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true"
                width="600" height="780" style="border: none;"></iframe>

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have an up-to-date AV, this generates a pause of several seconds after the download (during which I can't open the file from within the download manager), and I have no idea what FF might be trying there. I know I can turn it off (using FF only at work anyway), but I'm wondering. I can think of some things it might be:

        FF itself is an AV scanner and loads signatures in the background and whatnot. Sounds highly unlikely, and it shouldn't need tens of seconds for 20 KiB files.
        FF talks to the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway (and therefore would have caught a virus already), and also because FF does this on systems without any AV installed.
        FF uploads the file to some online virus checker. Unlikely and stupid.
        FF instructs some online virus checker to download and check the file. Unlikely, and that would be a nice target for DoSing the service.
        FF generates a hash of the file and sends it somewhere (presumably Google) to check, and they respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me".

    I'm running out of better ideas. Does anyone have a clue?
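
    For what it's worth, the second guess is closest: on Windows, Firefox hands the finished download to the operating system's attachment-scanning hook, which invokes whatever AV is registered there, and the pause is that call blocking. In Firefox versions of this era the behavior was governed by an about:config preference (name as I recall it; a sketch):

        // about:config
        browser.download.manager.scanWhenDone = false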

    Read the article

  • Computer randomly shuts itself off

    - by Decency
    I have not been able to determine a pattern for why this happens, despite my best efforts. I've run the machine at full power with Prime95, and this doesn't trigger a restart. Generally the restarts occur while I'm playing games, watching videos, or even just having multiple tabs open in a browser. However, I often play processor-intensive games for hours without any restarts occurring, and sometimes they'll happen 3-4 times in an hour during less intense activity, so I don't think that is the problem. I imagine it has something to do with overheating or power consumption, so I've been monitoring CPU temperature and cleaning with compressed air, but the problem keeps happening. I don't know how to track power consumption, and assume that this is the problem. Whenever this occurs, the sound gets stuck in a short loop of whatever was playing at the time, though restarts also occur when nothing is playing. (Screenshots of idle and under-load temperatures omitted.) Here's the parts list: http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=10546754 As shown in the list, the case includes a 585W power supply, which I've been told should be plenty. I built the computer myself with a friend's guidance, but it's very possible I did something wrong. Right now I'm looking into ensuring that I have the latest drivers for all components. Any help would be appreciated. Thanks.

    Read the article

  • NVidia TwinView - slow rendering on dual desktop [closed]

    - by lisak
    Hey, does anybody have experience with this? I've set it up 4 times on 4 different machines, and there were always problems with slow rendering (for instance, scrolling pages in a browser is not fluent). But there was always something that finally made it work perfectly. I remember that one time this option helped, but not now:

        Option "RenderAccel" "1"

    Nvidia GeForce 8400GS or Zotac GeForce 9500GT; monitors connected via DVI and HDMI connectors; proper nvidia driver installed.

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
            ModulePath   "/usr/lib64/xorg/modules"
            FontPath     "/usr/share/fonts/local"
            FontPath     "/usr/share/fonts/TTF"
            FontPath     "/usr/share/fonts/OTF"
            FontPath     "/usr/share/fonts/Type1"
            FontPath     "/usr/share/fonts/misc"
            FontPath     "/usr/share/fonts/CID"
            FontPath     "/usr/share/fonts/75dpi/:unscaled"
            FontPath     "/usr/share/fonts/100dpi/:unscaled"
            FontPath     "/usr/share/fonts/75dpi"
            FontPath     "/usr/share/fonts/100dpi"
            FontPath     "/usr/share/fonts/cyrillic"
        EndSection

        Section "Module"
            Load  "dri2"
            Load  "glx"
            Load  "extmod"
            Load  "record"
            Load  "dbe"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Acer AL1715"
            HorizSync    30.0 - 83.0
            VertRefresh  50.0 - 75.0
        EndSection

        Section "Device"
            Identifier  "Nvidia"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "MSI big bang-fuzion"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 8400 GS"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Device0"
            Monitor    "Monitor0"
            DefaultDepth 24
            Option     "RenderAccel" "1"
            Option     "AllowGLXWithComposite" "1"
            Option     "TwinView" "1"
            Option     "TwinViewXineramaInfoOrder" "DFP-1"
            Option     "metamodes" "CRT: 1280x1024 +1920+0, DFP: 1920x1080 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • unicorn and nginx, went wrong

    - by achempion
    I'm trying to deploy my app via Capistrano. The deploy completes, but when I start nginx and open my site in the browser, I see 'We're sorry, but something went wrong.' That is bad. I use unicorn. See my configs: https://gist.github.com/3904032 If I start the server via rails s -e production, it works! I think this error may be because I can't restart the server:

        root@li272-194:~# /etc/init.d/nginx restart
        Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: still could not bind()

    Any ideas? The nginx log shows:

        2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
        2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
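
    There look to be two separate problems here: something is already bound to port 80 (likely a stray nginx master or another web server), and unicorn either isn't running or isn't creating the socket nginx expects. A diagnostic sketch:

        netstat -tlnp | grep ':80 '              # who owns port 80 right now?
        nginx -s stop || killall nginx           # stop the stray master cleanly
        /etc/init.d/nginx start
        ls -l /srv/zarcon/shared/unicorn.sock    # did unicorn create the socket nginx looks for?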

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003 / IIS 6. I published the same exact application to IIS 7.5 (Windows 2008 R2) on a faster box (i5, 8GB RAM) and it is significantly slower: 5-6 sec per page vs. 1-2 sec per page. During that time the Task Manager CPU is always under 10%. Both attach to the same database on another box. The benchmark is consistent from any client browser or machine. I have connection pooling on both, compression on both, the same network subnet, and forms authentication (no SSL yet). Can you give me steps on how to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.

    Read the article
