Search Results

Search found 7625 results on 305 pages for 'scraper sites'.

Page 67/305 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Safari corrupting downloads?

    - by Kaji
    First off, a bit of background: I had to do an erase and install about 2-3 weeks ago, so this is a fresh, up-to-date installation of Snow Leopard we're dealing with. That said, I decided recently to branch out from simply programming PHP in a text editor and explore some of the other technologies I keep hearing about, and picked up Drupal, Joomla, and the Zend Framework from their respective official sites. Latest complete, stable builds for all 3. Drupal and Joomla downloaded without a problem, but when I put them in my /~username/Sites folder, XAMPP pretends they're not there, even if I restart Apache or the laptop itself. Zend's archive won't open at all. Is Safari corrupting the downloads, or are there other issues in play that can be investigated?

    Read the article
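    A few hedged checks that can separate a download problem from a permissions problem. The filenames are placeholders for whatever Safari actually saved, and the checksum line assumes the project publishes one on its download page:

        # compare against the checksum published on the project's download page
        shasum -a 1 ~/Downloads/ZendFramework-x.y.z.tar.gz

        # see whether the archive itself is intact without extracting it
        tar -tzf ~/Downloads/ZendFramework-x.y.z.tar.gz > /dev/null && echo "archive OK"

        # check that the account XAMPP's Apache runs as can traverse the Sites folder
        # (-e also lists any ACLs OS X may have attached)
        ls -le ~/Sites

    If the checksum matches and the archive lists cleanly, the download is fine and the XAMPP side (DocumentRoot, permissions) is the more likely culprit.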

  • Assigning multiple IPv6 addresses on a Server

    - by andrewk
    Let me explain my intent. My host provides hundreds of IPv6 addresses for free, but charges for each IPv4 address. I have several sites on one server, and I was wondering if I can give each site/domain its own IPv6 address. Is that even possible? If so, how? I've read quite a bit about IPv6, but I don't understand it as clearly as I'd like. My main goal is for each domain/site to have its own unique IP, so someone can't do a reverse IP lookup and see what other sites I have on that server. Thanks in advance for your patience.

    Read the article
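    Yes, this is possible: each site gets its own address from the allocated prefix, an AAAA record pointing at that address, and a vhost bound to it. A minimal sketch, assuming a Linux server, Apache, and 2001:db8:1::/64 standing in for the real prefix:

        # attach two addresses from the prefix to the interface
        ip -6 addr add 2001:db8:1::10/64 dev eth0
        ip -6 addr add 2001:db8:1::11/64 dev eth0

        # bind one vhost to each address
        <VirtualHost [2001:db8:1::10]:80>
            ServerName site-one.example.com
            DocumentRoot /var/www/site-one
        </VirtualHost>

        <VirtualHost [2001:db8:1::11]:80>
            ServerName site-two.example.com
            DocumentRoot /var/www/site-two
        </VirtualHost>

    Each domain's AAAA record then points at its own address. Note that reverse lookups stay separate only if distinct PTR records are also published, and visitors without IPv6 connectivity will still need a shared IPv4 path of some kind.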

  • Load balanced asp.net websites and required memory usage

    - by Matt
    Each of my servers has 8 GB RAM and the memory usage hovers around 7 GB. I have a load balancer available to me, but at the moment I'm worried that putting my sites through it will cause the platform to fall over. The load balancer would be configured with sticky round-robin: a new connection is round-robin, but subsequent connections from the same source IP remain on the same server (until a limit is reached). That's all standard stuff. How do I know what memory usage my sites will need across the platform when I put them through the load balancer? Rather than knowing that a site uses 150 MB on a particular server, I could face a situation where that 150 MB is taken up on each of the servers. I know that with only 1 GB free I could have a serious problem on my hands. If I free up some memory, how can I work out what I need to keep free to prevent this from happening? Thanks Matt

    Read the article

  • Which user account should be used for WSGIDaemonProcess?

    - by Nathan S
    I have some Django sites deployed using Apache2 and mod_wsgi. When configuring the WSGIDaemonProcess directive, most tutorials (including the official documentation) suggest running the WSGI process as the user in whose home directory the code resides. For example:

        WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi
        WSGIDaemonProcess example.com user=joe group=joe processes=2 threads=25

    However, I wonder if it is really wise to run the WSGI daemon process as the same user (with its attendant privileges) who develops the code. Should I set up a service account whose only privilege is read-only access to the code in order to have better security? Or are my concerns overblown?

    Read the article
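    For reference, a minimal sketch of the service-account variant on Debian/Ubuntu; the account name wsgi-svc is a placeholder, and the chmod lines assume the code tree can safely be group-readable:

        # create a locked-down system account
        sudo adduser --system --no-create-home --group wsgi-svc

        # grant it read-only access to the code
        sudo chgrp -R wsgi-svc /home/joe/sites/example.com
        sudo chmod -R g+rX /home/joe/sites/example.com

        # run the daemon process as that account instead of the developer
        WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi
        WSGIDaemonProcess example.com user=wsgi-svc group=wsgi-svc processes=2 threads=25
        WSGIProcessGroup example.com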

  • customErrors="RemoteOnly" not working properly in Server 2008

    - by Atomiton
    It would appear that on my brand new Windows Server 2008 box with IIS7, customErrors is not working. We have customErrors set to RemoteOnly in the web.config of our ASP.NET sites and applications. However, no matter what we do, our sites act as if it were set to On, and we can't get any detailed error messages to show up in our applications when remoted into our servers. I'm not entirely sure how to trace where this is being overridden, or whether there is something in the way the server is configured that keeps it from treating these requests as local. How does this actually resolve correctly, anyway? Any help is appreciated. Our network admin has added the domains to our hosts file to direct the applications to the server's IP address.

    Read the article
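    One hedged test, since RemoteOnly only reveals detailed errors when ASP.NET considers the request local: from the server itself, browse the site via an address that is unambiguously local and see whether the behaviour changes. If the hosts file points the name at a load-balanced or shared IP rather than the machine you are on, the request may no longer look local to ASP.NET. A sketch of the pieces involved (example.com is a placeholder):

        <!-- web.config: detailed errors for local requests only -->
        <system.web>
          <customErrors mode="RemoteOnly" />
        </system.web>

        <!-- on IIS7, server-level error pages have their own, separate switch -->
        <system.webServer>
          <httpErrors errorMode="DetailedLocalOnly" />
        </system.webServer>

        # hosts file on the server, for the test only
        127.0.0.1    example.com

    If http://localhost/... or the loopback mapping shows the detailed error while the normal name does not, the hosts-file/IP situation is the thing to chase rather than the web.config.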

  • IPv6 local address in hosts file

    - by Dan
    I have set up a local domain on my Apache server, then added the following line to my /etc/hosts file:

        ::1 exampledomain.local

    When I try to navigate to it (I tried Firefox and Chromium) I get a "server not found" error. ping6, however, works:

        dan@danny:~$ ping6 exampledomain.local
        PING exampledomain.local(exampledomain.local) 56 data bytes
        64 bytes from exampledomain.local: icmp_seq=1 ttl=64 time=0.032 ms

    If I replace ::1 with 127.0.0.1 in my hosts file, it works fine. I'm not sure if this is relevant, but this is my virtual host configuration in Apache2:

        <VirtualHost *:80>
            ServerAlias exampledomain.local
            DocumentRoot /home/dan/sites/exampledomain
            <Directory /home/dan/sites/exampledomain>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/exampledomain-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog ${APACHE_LOG_DIR}/exampledomain-access.log combined
        </VirtualHost>

    My question is: how can I make it work with the IPv6 address?

    Read the article
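    Since ping6 resolves the name but the browsers can't connect, the first hedged thing to check is whether Apache is listening on IPv6 at all, and then to test the IPv6 path without a browser in the way:

        # is anything bound to port 80 over IPv6? look for ":::80" in the output
        sudo netstat -tlnp | grep ':80'

        # bypass the browser and name resolution, talk to the IPv6 loopback directly
        curl -6 -v -H 'Host: exampledomain.local' 'http://[::1]/'

        # if nothing listens on IPv6, a "Listen [::]:80" line in ports.conf is the
        # usual fix -- but check it does not clash with an existing wildcard Listen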

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client who has a hosting arrangement with 400 customer sites all hosted through SuPHP in CGI mode on Apache. The sysop is now gone and the client is calling on me for rolling out a new PHP thing. Trouble is -- server load is very high right now and we have found that it's due to the crawlers. We had one customer in particular who complained of slow websites, and we engaged a 304 header plugin in his site against most crawlers, and his site perked right up. We'd like to lower that load by issuing a global 304 header to all the crawlers, letting human visitors through. I have a long list of user agent keywords to trap for. What's the best way to temporarily engage that global 304 header, while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in like one central Apache config and then it automatically affect all the sites at once.

    Read the article
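    A sketch of how this might be centralised in the main Apache configuration rather than 400 .htaccess files; the user-agent list is a placeholder, whether a bare 304 comes out of a "-" substitution should be verified on one test vhost first, and on some Apache versions the vhosts only pick up server-level rewrite rules if RewriteOptions Inherit is set:

        # httpd.conf / an included conf.d file
        SetEnvIfNoCase User-Agent "(googlebot|bingbot|slurp|baiduspider|yandex)" IS_CRAWLER

        RewriteEngine On
        RewriteCond %{ENV:IS_CRAWLER} =1
        RewriteRule .* - [R=304,L]

    Since the sites run PHP under SuPHP, an alternative single point of change is an auto_prepend_file set in the global php.ini that checks the user agent and emits the 304 before the application runs; that sidesteps the rewrite-inheritance question, at the cost of only covering PHP requests.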

  • Nginx Installation on Ubuntu giving 500 error

    - by user750301
    I just installed nginx on Ubuntu 12.04 LTS. When I access localhost it gives me:

        500 Internal Server Error
        nginx/1.2.3

    The error log has the following:

        rewrite or internal redirection cycle while internally redirecting to "/index.html",
        client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"

    This is the default nginx configuration. nginx.conf has:

        include /etc/nginx/sites-enabled/*;

    and /etc/nginx/sites-enabled/default has the following:

        root /usr/share/nginx/www;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;

                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
        }

    Read the article
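    That "internal redirection cycle" message usually means the final try_files fallback can't itself be served: if /usr/share/nginx/www/index.html does not exist, nginx keeps redirecting to it internally until it gives up with a 500. Two hedged ways out:

        # either confirm the fallback target actually exists...
        ls -l /usr/share/nginx/www/index.html

        # ...or change the fallback so a miss returns a plain 404 instead of looping
        location / {
                try_files $uri $uri/ =404;
        }

    After editing, run nginx -t to validate the config and then reload the service.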

  • Windows Server 2003 seems to pick the 'outgoing' IP address at random from all the ones configured in IIS, how can I make it just use one?

    - by Ryan
    We have multiple sites in IIS with different IP addresses. This is fine: we want different IPs to all reach this server and serve the proper site. However, I discovered that when the server makes an outgoing connection, I cannot predict which IP it will use. I had to have one client add ALL the IPs to their firewall so that a certain service could communicate with their server. Now the time has come to add another IP/site to IIS, but I had told them they would not need to add any more IPs. So the question is: how can I make Windows Server 2003 use only ONE specific IP for outgoing calls instead of it being unpredictable? In case that isn't a good enough description: when I was RDPed into the server, opened IE and went to a "what is my IP" site, the reported address was sometimes different, which is how I discovered why the one client's firewall was suddenly refusing the connections. How can I make outgoing calls originate from a single static IP, yet still allow multiple IPs pointing to different sites in IIS?

    Read the article

  • Options for a site-specific-browser app

    - by cbp
    I would like to have our intranet site accessed through Firefox or Chrome rather than IE. However, we don't want users having access to any other internet sites apart from our intranet unless they are using IE. I notice that Chrome has what it calls "Hosted Apps", and there is a Firefox spinoff called Prism. Does anyone know whether either of these is suitable? Can you install a Chrome hosted app without giving the user access to other sites through Google Chrome? What about Prism? Are these products stable?

    Read the article

  • How to force Chrome to make bookmarks the priority for auto-complete in the address bar?

    - by NoCatharsis
    As it is right now, if I start typing, for instance, "dictionary" into the address bar, Chrome immediately returns a list of bookmarks, history, and related sites. However, the first and highlighted option is to search Google for "dictionary". I want Chrome to immediately recognize that I have a bookmark specifically named "Dictionary" that links to the site www.dictionary.com. But, that's the second choice, not the first. So I have to type a few letters, get auto-complete to suggest some sites, then key down to my bookmark item before pressing Enter. How annoying. Any way to cut the middle man and make my bookmark the top result?

    Read the article

  • Setting up a vpn and IIS IP address restrictions

    - by carpat
    I'm trying to get a VPN set up for sites that should only be accessible internally. I have set up a VPN on a Windows server (a single VPS), and I can connect from a remote computer and get an IP assigned correctly (from 192.168.1.1 - 255). Next I configured IIS (running on the same machine) with IP Address and Domain Restrictions to allow only the IP address range 192.168.1.0 with subnet mask 255.255.255.0. When I connect to the VPN with "Use default gateway on remote network" (so that requests must go through the VPN), I get a 403 from the internal sites. What did I miss?

    Read the article
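    One hedged place to look is what source address IIS actually records for the VPN clients: if the requests arrive from an address outside 192.168.1.0/24 (the server's own LAN address, or an IPv6 address, for instance), the restriction will return the 403 even though the VPN itself is working. The c-ip field in the W3C logs shows the address being evaluated. For reference, the web.config form of the rule looks roughly like this, with the range as a placeholder:

        <system.webServer>
          <security>
            <ipSecurity allowUnlisted="false">
              <!-- allow the VPN client pool only -->
              <add ipAddress="192.168.1.0" subnetMask="255.255.255.0" allowed="true" />
            </ipSecurity>
          </security>
        </system.webServer>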

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites on the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but I don't know why. The following is the only file in /etc/nginx/sites-enabled:

        server {
                listen 80;              ## listen for ipv4
                listen [::]:80 default ipv6only=on;     ## listen for ipv6

                server_name dev.my.site;
                access_log /var/log/nginx/localhost.access.log;

                location / {
                        root /var/www;
                        index index.html index.htm;
                }

                location /myNodeSite {
                        proxy_pass http://127.0.0.1:8080/;
                        proxy_redirect off;
                        proxy_set_header Host $host;
                }
        }

    I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.

    Read the article
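    That guess is probably close: if the Node app links its assets with absolute URLs such as /stylesheets/style.css, the browser requests them at the site root, where they match location / instead of location /myNodeSite, so they never reach Node. Two hedged options, with the asset paths as placeholders for whatever the app actually uses:

        # Option 1: forward the asset prefixes to the Node server as well
        location /stylesheets/ { proxy_pass http://127.0.0.1:8080/stylesheets/; }
        location /images/      { proxy_pass http://127.0.0.1:8080/images/; }

        # Option 2: give the app its own hostname so every path goes through the proxy
        server {
                listen 80;
                server_name node.dev.my.site;
                location / {
                        proxy_pass http://127.0.0.1:8080/;
                        proxy_set_header Host $host;
                }
        }

    A third route is to have the app emit URLs relative to /myNodeSite/, which keeps the single-hostname layout.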

  • OSX: DNS records won't change?

    - by Marko
    I had two sites on my local host, with mysite1.local and mysite2.local in my /etc/hosts file pointing at localhost. Now I have moved those sites onto my home server (Ubuntu, local network) and changed the hosts file accordingly (/etc/hosts and /private/etc/hosts are the same file):

        192.168.0.50 mysite1.local
        192.168.0.50 mysite2.local

    I flushed my DNS cache with

        sudo dscacheutil -flushcache

    rebooted the machine and reset Safari, but still no change: if I go to either mysite1.local or mysite2.local, it still points at localhost. What could be the problem? The OS is Snow Leopard 10.6.8.

    Read the article
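    One hedged thing to test: on OS X the .local suffix is reserved for Bonjour/multicast DNS, and names ending in .local are not reliably resolved through /etc/hosts, which would explain entries that appear to be ignored. Trying the same addresses under a different suffix is a quick way to confirm:

        # /etc/hosts -- same server, non-.local names (suffix is a placeholder)
        192.168.0.50 mysite1.test
        192.168.0.50 mysite2.test

        # flush the cache again and ask the resolver directly
        sudo dscacheutil -flushcache
        dscacheutil -q host -a name mysite1.test

    If the .test names resolve to 192.168.0.50 while the .local ones still go to localhost, renaming the dev domains (and the matching Apache ServerName/ServerAlias entries) is the simplest fix.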

  • server attack monitor

    - by Basit
    We have been getting what I think are attacks on our server; it goes down every day now. I want to monitor what is causing the server to go down, and whether it is an attack from some site or a crawler hammering it. Is there a tool for this? If not, what should I do to find out what is causing the problem?

    Edited: The server runs Linux with the cPanel control panel. I haven't checked the logs and haven't done anything yet to track down the cause; that is why I came here to ask how to find out. Someone from our host said it is the server RAM and told us to add more, but there aren't many sites on the box and not much load on those sites either, so I don't see where our 2 GB of RAM is going. So I want to find out.

    Read the article
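    A few hedged first steps on a stock cPanel/Linux box; the log paths are the usual cPanel locations and may differ:

        # what happened just before the last crash?
        last -x | head                          # recent reboots and shutdowns
        dmesg | tail -50                        # look for OOM-killer messages
        grep -i 'out of memory' /var/log/messages

        # who is holding the most memory right now?
        ps aux --sort=-rss | head

        # which addresses have the most open connections to the box?
        netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

        # watch web traffic live; per-site logs live under /usr/local/apache/domlogs/
        tail -f /usr/local/apache/logs/access_log

    If dmesg or the messages log shows the OOM killer firing, the host's RAM explanation fits and the ps listing will show which process is eating it; if not, the connection counts and access logs are where a crawler or an attack would show up.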

  • Apache Passenger can't find gem

    - by purpletonic
    I'm running Ubuntu 10.04 and I've transferred over some sites built in Sinatra. I've set up Phusion Passenger, but when I visit the sites I get a Passenger LoadError claiming "no such file to load -- sinatra", yet when I run gem list or sudo gem list, I clearly see sinatra listed. Why can't Passenger find this gem? My sudo gem env output looks like this:

        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.5
          - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux]
          - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/local/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/local/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
            - /usr/local/lib/ruby/gems/1.8
            - /root/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
          - REMOTE SOURCES:
            - http://gems.rubyforge.org/

    Running sudo ruby -v I see the following:

        ruby 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux], MBARI 0x6770, Ruby Enterprise Edition 2010.01

    Is that correct, or should the two Ruby versions match up, displaying REE in both? Thanks in advance!

    Read the article
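    The mismatch at the end is the most likely culprit: gem env points at the Ruby under /usr/local, while ruby -v reports Ruby Enterprise Edition, and Passenger only sees gems belonging to the Ruby it was built against. Some hedged checks and fixes (the REE path below is a guess; adjust to wherever REE is actually installed):

        # which Ruby is Passenger using?
        passenger-config --root
        grep -ri PassengerRuby /etc/apache2/

        # either install the gem into that Ruby, e.g. for REE:
        sudo /opt/ruby-enterprise/bin/gem install sinatra

        # ...or point Passenger at the /usr/local Ruby that already has the gem
        # (Apache config, then restart Apache)
        PassengerRuby /usr/local/bin/ruby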

  • Squid 3 reloading makes it stop serving requests

    - by coredump
    So, we use Squid 3 here (3.0.STABLE8-3+lenny4), pretty standard configuration (no DansGuardian or similar) plus NTLM authentication with an LDAP backend, circa 1000 users on a busy day, and our ACLs reference some external files (allowed/blocked sites and IP addresses). On Squid 2.x we were able to reload its configuration (to add sites or addresses to rules, etc.) and Squid would not stop serving during the reload. Since we changed to 3.0, that seems to be impossible: every time we reload (or run squid -k reconfigure) it stops serving requests for as long as 2 minutes, and clients receive a "Configured proxy is not accepting connections" message. I checked the documentation and found nothing about it. Does anyone else suffer from this problem, or is it an isolated case in my setup? Also, if you have Squid 3.0 and don't suffer from this problem, how is your Squid configured?

    Read the article

  • How to efficiently merge a lot of vCard files for the same person?

    - by mihi
    I currently have contact information in several places:

        - old PDA's address book
        - mobile phone's phone book (primarily name, phone number)
        - email client's address book (primarily name, email)
        - web mailer's address book (primarily name, email)
        - instant messenger's contact list (primarily name, IM, email, birthday)

    And there are several social or business networking sites on the Internet where contacts provide information about themselves, like LinkedIn or XING. All those sources can export as vCard, but as you might imagine, I get a lot of vCards for the very same contact that way. Are there any tools where I can import them and then merge them (it may ask me which phone number is more current in case of field clashes, of course)? Bonus points if it can track which information I have discarded, so that when I re-export all information from one of the sources I can't import into (the networking sites), it won't ask me again whether I want to overwrite the phone number of person X with the same ancient number... I hope you understand what I'm trying to accomplish; if not, just ask :-)

    Read the article

  • How can I block w3schools from appearing in my google search results?

    - by zzzzBov
    A while ago I got fed up with continuously finding w3schools in my search results when I wanted detailed, technically correct information. To fix this without having to continuously append -site:w3schools.com to my search queries, I used google's Manage Blocked Sites page. For a while this worked perfectly, no more results from unwanted sites. Recently, however, I've been seeing w3schools litter my search results. Is there a new way to remove a site from google search results? -or- Is this just a bug that I should report?

    Read the article

  • adding mongo to path

    - by Mike
    Bit of a noob question. I have downloaded MongoDB and installed it here: /Users/mike/downloads/mongodb. In order to start it, I then have to cd into the bin directory, /Users/mike/downloads/mongodb/bin, and run ./mongod (to start the database) and ./mongo (to start the mongo shell). The problem is that I can only work with Python and Ruby scripts using the mongo shell if I have those scripts stored in the same bin directory, and I don't think that's the ideal setup. Will exporting the path allow me to access mongo from outside the bin? For example, I would prefer to have my Ruby scripts in /sites/ruby and be able to access mongo by starting Ruby in /sites/ruby. If exporting the path is the solution, how do I do that? I'm using a Mac.

    Read the article
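    Yes: adding the bin directory to PATH lets you run mongod and mongo from any directory, including /sites/ruby. A minimal sketch for a Mac, assuming the default bash shell and the install path from the question:

        # append to ~/.bash_profile so it applies to every new Terminal window
        echo 'export PATH="$HOME/downloads/mongodb/bin:$PATH"' >> ~/.bash_profile

        # load it into the current shell and verify
        source ~/.bash_profile
        which mongod mongo

    Note that scripts talking to MongoDB through a driver connect over localhost:27017 and can live anywhere regardless of PATH; the PATH change only affects where the mongod and mongo binaries can be launched from.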

  • Apache VirtualHost running very slow on OS X 10.7 (Lion)

    - by jwerre
    I've set up a few virtual hosts in Lion and it's running very slowly.

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName dev.local
            DocumentRoot "/Users/me/mysite"
            <Directory /Users/me/mysite>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Then in /etc/hosts I added:

        127.0.0.1 dev.local

    Everything works fine, but it's so slow: five or so seconds to reload a simple "Hello World" HTML page. Here is the strange part. If I make a symbolic link to the site in my ~/Sites folder (ln -s ~/mysite ~/Sites/mysite) and navigate to http://localhost/~me/mysite, it's nice and fast, the way it should be.

    Read the article
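    One hedged thing to try, since this multi-second pause is a common symptom on OS X when a hosts-file name only has an IPv4 entry: the browser attempts an IPv6 lookup for dev.local first and waits for it to fail before falling back. Adding IPv6 loopback entries alongside the existing line often removes the delay (the .local suffix also collides with Bonjour's multicast DNS, so renaming the host to something like dev.test is worth a try as well):

        # /etc/hosts
        127.0.0.1    dev.local
        ::1          dev.local
        fe80::1%lo0  dev.local

        # then flush the resolver cache
        sudo dscacheutil -flushcache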

  • Do browsers allows pages loaded on one tab to access/intercept/inject data in other tabs?

    - by jairo
    I was surprised to hear from this Reuters video that it is possible for a page loaded in one tab to access and/or inject data into another page loaded in a different tab. TL;DW (too lazy; didn't watch): the interviewee in the video suggests that when doing online banking, the user exit the browser (thus closing all windows) and start a new browser session with just the banking page/tab open. Allegedly, malicious sites can check whether you have your banking site open and inject commands into it. Can someone confirm or deny this claim? Is it possible even when there is no parent/child relationship between the windows/tabs?

    Read the article

  • Improving Chrome performance on OSX

    - by Giannis
    There are a number of sites that do not display properly in Safari, so I need to switch to Chrome. However, when a site's content requires Flash Player, Chrome consumes a significant amount of CPU. Running more than three windows will cause my MBP to overheat, start the fans, and reduce battery life far more than Safari does. What I am looking for is suggestions on ways to improve the performance of Chrome running Flash. I know Safari is optimised for OS X, but any improvement is welcome. As a demonstration of the issue, I ran the same YouTube video in Safari 6 and Chrome 21, both up to date, at the same time. Both browsers had been reset and have no extensions. This is on a 13" 2012 MBP with a 2.9 GHz i7 running OS X 10.8.1. P.S. If any additional details would help, please let me know.

    Read the article

  • ssh all machines behind a router

    - by Luc
    Hello, I have several machines on my LAN. One is used as an HTTP proxy to reach web sites located on the others (that's working fine now, thanks to Server Fault). On my router, port 22 is NATed to this proxy machine. I would like to be able to access the other machines, from the internet, with something like:

        ssh user@first_machine.my_domain.tld
        ssh user@second_machine.my_domain.tld

    Could I use the proxy machine to "filter" the incoming SSH requests and route them to the correct machine? (In the same way it's possible to do so for web sites, using a mix of mod_proxy and NameVirtualHost in Apache.) Thanks a lot, Luc

    Read the article
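    SSH has no equivalent of name-based virtual hosting (the client never sends the hostname the way HTTP does), so the usual approach is to hop through the exposed machine. A hedged sketch of the client-side ~/.ssh/config, with the host names as placeholders and assuming the proxy box can resolve the LAN machines by name:

        # ~/.ssh/config on the machine you are connecting from
        Host gateway
            HostName my_domain.tld
            User user

        Host first_machine second_machine
            User user
            # tunnel through the gateway to reach the LAN-only hosts
            ProxyCommand ssh -W %h:%p gateway

    After that, "ssh first_machine" works from outside. The -W option needs a reasonably recent OpenSSH on the gateway; older setups use ProxyCommand ssh gateway nc %h %p instead. The other common route is to NAT additional ports (for example 2201, 2202) on the router straight to the internal machines.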
