Search Results

Search found 7625 results on 305 pages for 'scraper sites'.


  • How to figure out why sites on my server aren't loading

    - by Derek
    I seem to randomly receive "page cannot load, cannot connect to server" errors for sites on one of my servers. When this happens, it seems to affect only certain IPs or IP ranges at a time: while I'll get the error from my home laptop, I'll be able to access the site fine from my work computer or from an offsite VPS. DNS records should already be fully propagated, as they were updated months ago. I have no idea how to diagnose what's going on. Is there a tool in cPanel, or out on the web, that can help me figure out what's happening?
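
    A few standard command-line checks, run from an affected machine, can narrow down whether the failure is DNS, routing, or the web server itself. A sketch (mysite.com stands in for one of the affected sites):

        # Does the name resolve, and to the IP you expect?
        dig +short mysite.com

        # Can the server be reached at all?
        ping -c 4 mysite.com
        traceroute mysite.com

        # Does the web server answer on port 80?
        curl -v http://mysite.com/

    If dig returns the right IP but curl times out from one network and works from another, the problem is likely routing or a firewall rather than DNS or Apache.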

  • Method to control multiple sites using the same cookie?

    - by Frost Shadow
    Is it possible for two different web pages to use the same cookie? For example, some news sites now have buttons on the bottom where, if you are logged into Facebook, you can just click the button to "like" the article. Is this a case of a third-party website using Facebook's cookie to know which account is yours, and if so, is there a way I can control it? I'm not sure how the new "like" system works, so maybe the button part isn't actually on the news site but hosted on Facebook's servers or something, so it's really Facebook itself accessing its own cookie. If that's the case, is there a way I can choose when a site accesses its own cookie? Thanks for any help!

  • Ideal permissions scheme for multiple Apache/PHP sites...

    - by Omega
    I'm hosting multiple sites from one server, where each site has its own user and a www directory in its home dir. Currently our web server runs as user nobody(99). We're noticing that several popular scripts and engines require write access to their own files in order to run. As the home directory is owned by the user, not nobody(99), what is the best policy or change in hosting configuration that would: ...make it so that all the various engines and platforms work? ...still allow us to work with files and edit them without having to fiddle with permissions as root? Thanks for the advice!
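
    One common group-based arrangement, as a sketch: the site user and path below (siteuser, /home/siteuser/www) are placeholders, and the web server is assumed to keep running as nobody(99). The web server's user joins each site's group, and the setgid bit keeps new files in that group:

        # let the web server's user read/write the site's files via the group
        usermod -a -G siteuser nobody

        # group-writable www directory; setgid so new files inherit the group
        chown -R siteuser:siteuser /home/siteuser/www
        chmod -R g+rw /home/siteuser/www
        find /home/siteuser/www -type d -exec chmod g+s {} \;

    The cleaner alternative is to run PHP as each site's own user via suexec+FastCGI, suPHP, or mpm-itk, which avoids sharing write access with nobody(99) entirely.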

  • Sites now not responding on port 80 [closed]

    - by JohnMerlino
    Possible duplicate: unable to connect site to different port

    I was trying to resolve an issue with getting a site running on a different port (unable to connect site to different port), but somehow it took out all my other sites. Now even the ones that were responding on port 80 are no longer responding, even though I did not touch their virtual hosts. I now get this message:

        Oops! Google Chrome could not connect to mysite.com

    However, ping responds:

        $ ping mysite.com
        PING mysite.com (64.135.12.134): 56 data bytes
        64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms
        64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms

    The result of telnet:

        $ telnet guarddoggps.com 80
        Trying 64.135.12.134...
        telnet: connect to address 64.135.12.134: Connection refused
        telnet: Unable to connect to remote host
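
    "Connection refused" on the IP usually means nothing is listening on port 80 at all, so a first step, as a sketch, is to check on the server itself whether Apache is still up and bound (the log path varies by distribution):

        # is the configuration still valid after the recent edits?
        apachectl configtest

        # is anything listening on port 80?
        netstat -tlnp | grep ':80'

        # restart and watch for a failed bind or syntax error
        apachectl restart
        tail /var/log/apache2/error.log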

  • Warning popups that direct to third-party sites

    - by Kingamoon
    Lately, I've been getting warning popups in my browser (latest version of Chromium) telling me that my browser's Java version is outdated and needs to be updated. What's alarming to me is that they send me to sites I've never heard of, like Malest.com. When I block a site, it redirects me to a different one. I don't know how to track down what's causing these alerts. I ran Microsoft Security Essentials and it found nothing. Any suggestions on what to do to nail down this irritating problem?

  • Changing Launchpad username, and how to know what sites will be affected?

    - by Daniel Clem
    I am setting up my developer profile on Launchpad and would like to change my username so it is the same as on other sites I use, as well as to better reflect me as a person (that's a much more important thing than it sounds). I want to do this now while I can, because as I understand it, once I set up a PPA it will be impossible to change, since the username is locked into the PPA URLs to prevent breakages and other problems. But when I try to change my username, it warns me with this message:

        "Changing your name will change your public OpenID identifier. This means that you might be locked out of certain sites where you used it, or that somebody could create a new profile with the same name and log in as you on these third-party sites."

    How can I find out which sites will be locked out, and how can I still change the username while preventing problems with other sites? Sorry if this is actually a question for Launchpad itself, but I don't know where to post questions like this on the Launchpad site.

    Edit: I understand that it is an issue with OpenID. But how am I to know which sites will be affected? And how do I fix the problems this will cause? Can't I just reset the password (and, as a side effect, re-establish the connection with the new username) using my email address?

  • Python's urllib2 doesn't work on some sites

    - by Binny V A
    I found that you can't read from some sites using Python's urllib2 (or urllib). An example:

        urllib2.urlopen("http://www.dafont.com/").read()  # returns ''

    These sites work when you visit them in a browser. I can even scrape them using PHP (didn't try other languages). I have seen other sites with the same issue, but can't remember the URLs at the moment. My questions are: what is the cause of this issue, and is there any workaround?
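
    One frequent cause is the server rejecting the default Python user agent (Python-urllib/x.y) rather than the request itself. A common workaround, as a sketch, is to send a browser-like User-Agent header:

        import urllib2

        # Some servers return an empty or error page for the default
        # "Python-urllib" User-Agent; identify as a regular browser instead.
        request = urllib2.Request(
            "http://www.dafont.com/",
            headers={"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1)"},
        )
        html = urllib2.urlopen(request).read()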

  • Self-signed certificates for many users/browsers/sites

    - by Demiurg
    Here is my problem: I have a lot of users using different browsers to access many internal web sites over HTTPS. I can create my own Certificate Authority, then create a certificate for each server, and after that have all the users import it. Obviously, that cannot work in practice: there are too many users and too many sites, and more sites will be added in the future. I'm looking for a way to automate this. Is there a way to create a certificate that all major browsers (IE, FF, Opera, Chrome and Safari) would trust for all servers? If so, what is the best way to install it automatically in all major browsers?
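
    No single certificate can be trusted by all browsers out of the box; browsers only trust what is in their CA stores. The usual automated compromise is one internal root CA, deployed once per machine (e.g. via Active Directory group policy for the Windows store, which IE, Chrome and Safari on Windows use; Firefox keeps its own store and needs a separate import), with per-server or wildcard certificates signed from it. A sketch with openssl, where file names and CNs are placeholder assumptions:

        # one-time: create the internal root CA (deploy ca.crt to all machines)
        openssl genrsa -out ca.key 2048
        openssl req -x509 -new -key ca.key -sha256 -days 3650 \
            -subj "/CN=Internal Root CA" -out ca.crt

        # per server, or once with a wildcard CN for all internal sites
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -subj "/CN=*.internal.example" -out server.csr
        openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
            -CAcreateserial -sha256 -days 825 -out server.crt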

  • Consolidating data in SharePoint from different sites based on variable site naming

    - by Mughrabi
    Hi, I am working on a project where I want to pull data from different lists in SharePoint and have this data imported into a single list. The lists have the same attributes everywhere but are located in different sites. I have a list which contains all the site names and the URL of each of those sites. The idea is to read all the site names from this list, then go to each of those sites and pull the information from the list under that particular site, in a synchronous manner. Data pulled by last week's process does not need to be pulled again. Can someone explain what would be the best way to build this solution? I am using SharePoint 2007.

  • Getting all PDF files from a domain (for example *.adomain.com)

    - by Zack
    I need to download all the PDF files from a certain domain. There are about 6000 PDFs on that domain, and most of them don't have an HTML link (either the link was removed or it was never there in the first place). I know there are about 6000 files because I'm googling:

        filetype:pdf site:*.adomain.com

    However, Google lists only the first 1000 results. I believe there are two ways to achieve this: a) Use Google. However, how can I get all 6000 results from Google? Maybe a scraper? (Tried scroogle, no luck.) b) Skip Google and search directly on the domain for PDF files. How do I do that when most of them are not linked?
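
    For whatever subset is linked, a recursive fetch restricted to PDFs is the standard approach; a sketch with wget (adomain.com is the question's placeholder). Note this can only discover files that some page links to:

        # crawl the domain and its subdomains, keeping only PDFs
        wget --recursive --level=inf --span-hosts --domains=adomain.com \
             --accept=pdf http://www.adomain.com/

    For the unlinked remainder, Google's 1000-result cap can sometimes be worked around by slicing the query into smaller ones (per subdomain with site:sub.adomain.com, or with inurl:/intitle: filters), each returning under 1000 hits.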

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. What I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm at a bit of a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet?

    EDIT: After further Googling, I added an iptables rule with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I can't reach the site.
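
    Given that curl on localhost works, the usual suspects are the bind address, firewall rule order, or an upstream (provider) filter. A few checks, as a sketch:

        # confirm Apache is bound to all interfaces on 4204, not only 127.0.0.1
        netstat -tlnp | grep 4204

        # make sure no earlier rule in the chain rejects the port
        iptables -L INPUT -n --line-numbers

        # test from a machine outside the host; if this still fails while the
        # above look right, a hosting-provider firewall is likely blocking 4204
        curl -v http://test.mydomain.com:4204/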

  • Configs for several sites in Apache with SSL

    - by elCapitano
    I need to secure two different sites in Apache. One of them should only be a proxy for a different server which is running on port 8069. One site (natively included in Apache) runs with SSL:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            DocumentRoot /var/www/cloud
            ServerPath /cloud/
            #CustomLog /var/www/logs/ssl-access_log combined
            #ErrorLog /var/www/logs/ssl-error_log
        </VirtualHost>

    The other one is not running and not even registered. When I try to access it, I get an exception (ssl_error_rx_record_too_long):

        <VirtualHost *:443>
            ServerName 192.168.1.20
            ServerPath /erp/
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key
            ProxyRequests Off
            ProxyPreserveHost On
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyVia On
            ProxyPass / http://127.0.0.1:8069/
            ProxyPassReverse / http://127.0.0.1:8069
            RewriteEngine on
            RewriteRule ^/(.*) http://127.0.0.1:8069/$1 [P]
            RequestHeader set "X-Forwarded-Proto" "https"
            SetEnv proxy-nokeepalive 1
        </VirtualHost>

    My wish is the following configuration:

        192.168.1.20         ->> unsecured local path to website
        192.168.1.20/cloud/  ->> secured local document path for the cloud
        192.168.1.20/erp/    ->> secured proxy on port 80 for http://192.168.1.20:8069

    How is this possible? Is it even possible? Perhaps cloud.192.168.1.20 and erp.192.168.1.20 would be better?! Thank you
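
    Since both paths share one IP and one certificate, one hedged approach is a single SSL virtual host that serves /cloud/ locally and proxies only /erp/, rather than two *:443 vhosts with the same ServerName (ServerPath does not distinguish SSL vhosts, and name-based SSL vhosts would need SNI). A sketch reusing the question's paths:

        <VirtualHost *:443>
            ServerName 192.168.1.20
            SSLEngine on
            SSLCertificateFile /etc/ssl/erp/oeserver.crt
            SSLCertificateKeyFile /etc/ssl/erp/oeserver.key

            # /cloud/ is served locally, from /var/www/cloud
            DocumentRoot /var/www

            # only /erp/ is proxied to the backend on port 8069
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass /erp/ http://127.0.0.1:8069/
            ProxyPassReverse /erp/ http://127.0.0.1:8069/
            RequestHeader set "X-Forwarded-Proto" "https"
        </VirtualHost>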

  • Relation between server_name in nginx sites-available, /etc/hosts file and A-records

    - by user2818584
    I have the following two server blocks in my config file in sites-available:

        server {
            listen 80;
            server_name www.mydomain.be;
            root /usr/share/nginx/html;
            index index.html index.htm;
            location / {
                try_files $uri $uri/ =404;
            }
        }

        server {
            listen 80;
            server_name sub.mydomain.be;
            root /usr/share/nginx/sub;
            index index.html index.htm;
            location / {
                try_files $uri $uri/ =404;
            }
        }

    I also created an A-record for both www.mydomain.be and sub.mydomain.be with the IP of my server as the value. Yet when I try to reload my nginx configuration with service nginx reload, it fails. When I remove the second server block, it reloads as expected. I know this topic is popular, and that there are loads of such [nginx][subdomain] questions here, but none of them seems to discuss explicitly how the following three things hang together: virtual hosts or server blocks in nginx (esp. server_name matching), the effect of A-records on how nginx processes requests, and the need to add hosts to /etc/hosts. Right now I have the impression that a lack of this bigger picture, rather than of specific nginx configuration knowledge, is what prevents me from making this work.
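
    Two notes tying the three together, plus a sketch of how to surface the actual reload error. nginx never consults A-records or /etc/hosts when matching a request: it compares the request's Host header against each server_name, and DNS only determines which machine the browser contacts in the first place; /etc/hosts matters only for testing names that aren't in public DNS. For the failing reload, the config test prints the file and line of the problem:

        # validate the configuration; errors are printed with file and line
        sudo nginx -t

        # on Debian-style layouts the block must also be symlinked into
        # sites-enabled before nginx will load it
        ls -l /etc/nginx/sites-enabled/
        sudo service nginx reload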

  • Trouble downloading from some sites

    - by Fletch
    I am trying to download the new Microsoft Security Essentials, but when I click on the Download button, instead of getting the download dialog, nothing comes up. The progress bar at the bottom shows it doing something, then when it reaches 100%, nada. I can download from HP (drivers) and sites like MajorGeeks with no problem. I also have this problem on the Adobe download page when trying to get the Shockwave and Flash players. I am fixing my granddaughter's laptop that she got from someone else. There were over 26 trojans listed on it when I installed AVG, and they would not go away. I used CCleaner and HijackThis, deleted everything I could, and wiped the free space. Then I ran AVG again, and this time, after finding a few trojans and deleting them, the system was reported as clean. IE8 then would not connect to the net, so I used my computer to download a copy and put it on the laptop; after that I was able to use the laptop to connect to the net and download a driver to get the sound working again. Laptop: HP dv4000, XP Pro.

  • Exchange 2010 mail routing with Hub Transport in multiple sites

    - by jmreicha
    I have two separate physical sites, Site A and Site B. In site A, I have the following:

        2 CAS servers
        2 Hub Transport servers
        2 Mailbox servers
        2 Edge servers

    In site B, I have the following:

        1 CAS
        1 Hub
        1 Mailbox
        1 Edge

    Currently everything is working out of site A. That is, all users are housed on mailboxes in site A, and all inbound mail flow points to site A. I would eventually like to be able to move some of the mailboxes to site B without causing a disruption, for resiliency and redundancy purposes, but I am not quite sure how to go about setting this up or whether it is even possible. So far I have created an Edge subscription in site B and am able to send emails out from test accounts set up with mailboxes on the site B Mailbox server. However, I am unable to receive incoming mail messages and am confused. So I'm thinking incoming mail messages are still being directed to site A and then getting stuck because there is no way to route the mail to the site B mailboxes. Is this assumption correct? I am unfamiliar with mail flow and routing, so I am not really sure what I need to be looking at. Would I add the site B Hub Transport to the Edge subscription in site A? Or, more specifically, how would I go about enabling communication and mail flow between mailboxes split across sites A and B?

  • How to store wiki sites (VCS)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites. I have three approaches in mind and would like to know your suggestion.

    Flat files: I considered a flat-file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) could have an offline copy of the wiki. On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with it. Besides, I don't know if it is a good idea to have all the wiki sites in one repository; I imagine it is more natural to have something like a repository per wiki site or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because web hosts may not allow creating files (I'm thinking, for example, of Google App Engine).

    Storing in a database: By storing the wiki sites in the database I can utilize Django models and associate arbitrary fields with each wiki site. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution per se. I searched for Django extensions to help me and found django-reversion. However, I do not fully understand if it fits my needs: does it track model changes, as in changes to the Django model file, or does it track the content of the models (which would fit my need)? Plus, I do not see whether django-reversion would help me with edit conflicts.

    Storing a VCS repository in a database field: This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages. That is, I would have VCS features but would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/mercurial repository in a database field. Yet, I somehow doubt database fields work like that.

    So, I'm open to any other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test
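
    For the database approach, a minimal sketch of hand-rolled history with Django models (the model and field names here are made up for illustration). For the record, django-reversion versions the content of model instances, not the model definition; a plain revision table like this shows the same idea without the dependency:

        from django.db import models

        class WikiPage(models.Model):
            """Current state of a wiki page."""
            title = models.CharField(max_length=200, unique=True)
            content = models.TextField()
            category = models.CharField(max_length=100, blank=True)

        class WikiRevision(models.Model):
            """One row per edit: gives history and a basis for conflict checks."""
            page = models.ForeignKey(WikiPage, related_name="revisions")
            content = models.TextField()
            author = models.CharField(max_length=100)
            created = models.DateTimeField(auto_now_add=True)

            class Meta:
                ordering = ["-created"]

    An edit view would compare the revision the user started from with the page's latest revision to detect conflicts, which is roughly what a VCS merge does for flat files.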

  • Which CMS or blogging engine supports multiple sites?

    - by Lamnk
    Don't know if SO is the appropriate place to ask this question, but anyway... I have some sites running WordPress, and maintaining/managing them is a pain. Is there any CMS or blogging platform out there that supports multiple sites/blogs in one codebase? I know there are some hacks for WordPress, but they are quite ugly and do not scale (I need 100-1000 blogs supported). WPMU, AFAIK, runs with subdomains only. Thanks in advance.

  • WordPress install on OS X Snow Leopard under Sites/public for use with Capistrano

    - by snowmaninthesun
    I would like to put my local OS X WordPress site in my ~users/Sites/public directory, instead of ~users/Sites, so I can properly set up a Capistrano deployment. When I try this, I can visit the site locally at http://localhost/public and everything looks and works great, but if I ever go to http://localhost/public/wp-admin to administer my local copy of WordPress, it tries to redirect me back to http://localhost/wp-admin, which doesn't exist. Do you know how I might be able to work around this problem?
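
    A sketch of the usual fix: WordPress stores its base URL in the siteurl and home options, and both can be overridden in wp-config.php so the admin redirect points at the new location (the URL below assumes the site really lives at http://localhost/public):

        // in wp-config.php: override the stored siteurl/home options
        define('WP_HOME',    'http://localhost/public');
        define('WP_SITEURL', 'http://localhost/public');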

  • Sharing files between multiple sites using only desktop software

    - by perlyking
    Our organisation has three sites: a head office, where the master copies of company files are stored, plus two branch offices using only workstations and a NAS or two. Currently we're talking about <10GB. At the main office, we have no admin access to the file server, as this is entirely controlled by the larger institution where we are located. For the same reason, we have no VPN remote access to this network. Instead, we simply have access to a network share over a Novell LAN.

    Question: how can we share files between offices in a way that minimises latency, i.e. that gives us a mirror of the main network share at each site? (There is little likelihood of concurrent editing, and we can live with the odd file conflict now and again.)

    Up to now, branch office staff have had to use GotoMyPC-type solutions to remotely access files held at the main office. Or email. I was hoping to use Google Drive on a dedicated workstation at each office to sync the contents of the network share (head office) or NAS (branch offices) via the cloud, but at my last attempt (29 Jun '12), the Google Drive installer would not allow me to designate the remote network share as the "target" folder. (I chose Google Drive over Dropbox et al. as we already use Gmail for corporate mail.)

    The next idea was to use a designated workstation at head office to mirror the network share to a local drive, then use Google Drive to push that to the cloud. This seems a step too far. Nor do I have any good ideas about how to achieve this network/local mirroring, as we can't, for example, install the rsync daemon on the server. I do not want to use Google Drive locally on each workstation, as this will inconvenience users and, more importantly, move files off the backed-up, well-maintained (UPS, RAID etc.) network share at head office.

    Our budget is only in the £100s. Should we perhaps just ditch the head office server and use something like JungleDisk? At least this presents the user with what appears to be a mapped drive.

  • Form-creating sites with output

    - by Alex
    Sites like faary.com, wufoo.com, and theformsite.com help you build forms, but the output tables that are created are password-protected as far as I know. Are there sites/scripts like the above that can make the output visible to all? Something like a "guest book" script/form where you can edit the fields and the output shows immediately? Thank you.

  • Storing images for multiple web sites

    - by Orkun Balkanci
    I've got two web sites (written in C#) in a pretty common setup: one is an admin site (CMS) where you add images into content as needed, through editor pages; the second is the site where the content with the images is shown. The images should be stored outside these two sites, but where and how?

  • Google Sites creation

    - by bhuvi
    Hi, I am creating sites programmatically in Java using the Google Sites API developer guide. I have easily created different types of pages, both as parent pages and as sub-pages. My problem is that I am not able to create a web page as a parent page with a file cabinet, announcement, or list page as its sub-page: the parent page and sub-page are created as the same kind of page, not as different pages. Please tell me a solution.

  • Web crawling sites with JavaScript or web forms

    - by Jojo
    Hello everyone, I have a web crawler application. It successfully crawls most common and simple sites. Now I've run into some types of websites wherein the HTML documents are dynamically generated through forms or JavaScript. I believe they can be crawled, and I just don't know how. These websites do not show the actual HTML page: if I browse the page in IE or Firefox, the HTML code does not match what's actually shown on screen. These sites contain textboxes, checkboxes, etc., so I believe they are what they call "web forms". Actually, I am not very familiar with web development, so correct me if I'm wrong. My question is: has anyone been in a similar situation and successfully solved these types of "challenges"? Does anyone know of a certain book or article regarding web crawling, particularly one that pertains to these advanced types of websites? Thanks.
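
    For plain (non-JavaScript) web forms, the usual approach is to reproduce the POST request the browser would send instead of rendering the page. A Python 2 sketch, where the URL and field names are hypothetical placeholders:

        import urllib
        import urllib2

        # values a browser would submit from the form's input fields
        form_data = urllib.urlencode({
            "query": "search term",
            "page": "1",
        })

        # passing a data argument makes urllib2 issue a POST instead of a GET
        request = urllib2.Request("http://www.example.com/search", form_data)
        html = urllib2.urlopen(request).read()  # the generated result page

    Pages built by JavaScript after load are a different problem; those generally need a scriptable browser engine (e.g. Selenium) rather than a plain HTTP fetch.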
