Search Results

Search found 7625 results on 305 pages for 'scraper sites'.


  • Fetching data from other sites and displaying it in our page?

    - by user622688
    Is there any way to get data from other sites and display it in our JSP pages dynamically? See this URL: http://www.dictionary30.com/meaning/Misty. On that page there is a block titled "Wikipedia Meaning and Definition on 'Misty'"; in that block they are fetching the data from Wikipedia and displaying it on dictionary30. Question: how are they fetching the Wikipedia data into their site? I need to display data like that in my JSP page by fetching it from another site.
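    The usual pattern is to fetch the remote page (or, better, its API) on the server, pull out the fragment you want, and write that into your own response; from a JSP you would do this in a servlet or bean with java.net.URL or an HTML parser such as jsoup, and Wikipedia content in particular is normally pulled through its public API rather than scraped. Below is a rough sketch of the fetch-and-extract step in Python; the URL and the marker strings are placeholders, not the markup any real site uses.
      # Rough sketch: fetch a remote page server-side and extract one block of it.
      # The URL and markers are placeholders; a real implementation would use a
      # proper HTML parser or the remote site's API instead of string matching.
      from urllib.request import urlopen

      def fetch_fragment(url, start_marker, end_marker):
          html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
          start = html.find(start_marker)
          end = html.find(end_marker, start)
          if start == -1 or end == -1:
              return ""                      # markers not found: return nothing
          return html[start:end + len(end_marker)]

      if __name__ == "__main__":
          # Hypothetical example: grab a <div id="meaning">...</div> block.
          block = fetch_fragment("http://example.com/some-page",
                                 '<div id="meaning">', "</div>")
          print(block)                       # embed this string in your own page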

    Read the article

  • DNS hijack - prevention tips

    - by user578359
    Hi there, over the weekend it looks like the DNS was hijacked on two of my domains. My setup: the sites are registered on 1and1.co.uk, with DNS nameservers pointing to HostGator in the US, where the sites are hosted. I also had the Cloudflare CDN running on the sites (via the HostGator cPanel). My question is: any ideas as to how this happened, and how I could either monitor it so I know if it occurs again, or strengthen the setup/service to minimise the risk? History: I received a ping from my site monitoring service that the sites were down. When I checked, the sites were up, so I assumed the problem was local to the monitoring service. I received another ping last night that the sites were up. When I checked, one site was redirecting to download-manual.com (and checking that URL now, the home page is not the same as the one I saw, so they too may have been hijacked/hacked). The other site's URL remained the same but showed one of those standard site-search pages which bounce you off to either phishing or paid-for search sites. I notified HostGator, who told me Cloudflare or 1and1 were the issue. I removed Cloudflare, and contacted both them and HostGator, and am awaiting a response, but am not holding my breath. Is this common? I've never heard of this or come across this before. It's pretty scary that this can happen so easily. Appreciate any input. **Update: I've now spoken to support at 1and1, HostGator, and Cloudflare, and each one claims it has nothing to do with them and it must be one of the others. Larry, Curly, Moe.
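    For the monitoring part, one low-effort option is an external job that periodically resolves your domains and alerts you when the answers change, since a hijack shows up as the records suddenly pointing somewhere new. A minimal sketch (the domain names and expected addresses below are placeholders for your own):
      # Tiny watchdog sketch: resolve each domain's A record and compare it with
      # the address you expect. The domains and IPs here are placeholders.
      import socket

      EXPECTED = {
          "example.com": {"203.0.113.10"},   # hypothetical hosting IP
          "example.org": {"203.0.113.11"},
      }

      def check(domains=EXPECTED):
          for name, expected_ips in domains.items():
              try:
                  _, _, ips = socket.gethostbyname_ex(name)
              except socket.gaierror as exc:
                  print(f"{name}: lookup failed ({exc})")
                  continue
              if set(ips) - expected_ips:
                  print(f"{name}: WARNING, now resolves to {sorted(ips)}")
              else:
                  print(f"{name}: OK")

      if __name__ == "__main__":
          check()   # run from cron every few minutes and alert on WARNING lines
    Checking the NS records as well (for example with dig NS yourdomain.com, or with the dnspython package) would also catch a nameserver change made at the registrar, which is often where this kind of hijack happens.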

    Read the article

  • general questions about link spam

    - by hen3ry
    Hello, A CMS-based site I manage is suffering from a small but ominously growing number of almost certainly bot-emplaced, invisible spam links placed in registered-user-only shoutboxes and user forums. "Link Spam", yes? Until recently, I've kept my eyes on narrow tech issues, and I'm having trouble understanding what's going on. I understand that we need to tighten up our registration procedures, but more generally... Do I understand correctly that our primary interest in combatting link spam on our site is that major search engines reduce or zero the search visibility of sites that contain link spam? Although we're non-commercial, we don't want to be at the bottom of the rankings, or eliminated altogether. Are the linked-to sites the direct beneficiaries of the spam links, or is there some kind of indirection? What is the likely relationship between the link-spammers and the owners of the (directly or indirectly) linked-to sites? Are the owners of the linked-to sites paying the link-spammers for higher visibility? Are the owners aware that this method is being used? It is my impression that major search engines are capable these days of detecting that given sites are being promoted by link spam, and that these sites may consequently be reduced in search rank or dropped altogether. Do these sanctions occur frequently? Is there any potential value in sending notifications to the owners of the linked-to sites that their visibility is at risk? TIA, hen3ry

    Read the article

  • HTG Explains: What Is RSS and How Can I Benefit From Using It?

    - by Jason Fitzpatrick
    If you're trying to keep up with news and content on multiple web sites, you're faced with the never-ending task of visiting those sites to check for new content. Read on to learn about RSS and how it can deliver the content right to your digital doorstep. In many ways, content on the internet is beautifully linked together and accessible, but despite the interconnectivity of it all we still frequently find ourselves visiting this site, then that site, then another site, all in an effort to check for updates and get the content we want. That's not particularly efficient, and there's a much better way to go about it. Imagine, if you will, a simple hypothetical situation. You're a fan of a web comic, a few tech sites, an infrequently updated but excellent blog about an obscure music genre you're a fan of, and you like to keep an eye on announcements from your favorite video game vendor. If you rely on manually visiting all those sites (and, let's be honest, our hypothetical example has a scant half-dozen sites while the average person would have many, many more), then you're either going to waste a lot of time checking the sites every day for new content or you're going to miss out on content as you either forget to visit the sites or find the content after it's no longer as useful or relevant to you. RSS can break you free from that cycle of either over-checking or under-finding content by delivering the content to you as it is published. Let's take a look at what RSS is and how it can help.
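    Under the hood an RSS feed is just an XML file that a site republishes whenever it posts something new; a reader periodically fetches that file and shows you the items you haven't seen yet. A rough sketch of that fetch-and-list step is below (the feed URL is a placeholder, and a real reader would use a library such as feedparser, which also handles Atom feeds):
      # Sketch: fetch an RSS 2.0 feed and print the latest item titles and links.
      # The feed URL is a placeholder; any site that offers RSS publishes its own.
      from urllib.request import urlopen
      import xml.etree.ElementTree as ET

      def latest_items(feed_url, limit=5):
          xml = urlopen(feed_url, timeout=10).read()
          root = ET.fromstring(xml)
          # In RSS 2.0 every story is an <item> with <title> and <link> children.
          for item in list(root.iter("item"))[:limit]:
              yield (item.findtext("title", default="(no title)"),
                     item.findtext("link", default=""))

      if __name__ == "__main__":
          for title, link in latest_items("https://example.com/feed.xml"):
              print(title)
              print("  " + link)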

    Read the article

  • Drupal migration failed

    - by Marco
    First of all, I'm new to Drupal and the work I have to do is a bit over my head. My old colleague (the webmaster) had a server with a multisite Drupal 6 installation. The sites and their directories were (e.g.):
      Site                  Site directory
      b.a.mycompany.com     /drupal_install_dir/sites/b.a.mycompany.com
      c.a.mycompany.com     /drupal_install_dir/sites/c.a.mycompany.com
      d.a.mycompany.com     /drupal_install_dir/sites/d.a.mycompany.com
    Unluckily my colleague moved on and the server's hard drives aren't in my hands: all I have is a backup of /drupal_install_dir and three SQL dumps (one for each site). I had to restore the three sites, but changing their addresses to z.mycompany.com/b, z.mycompany.com/c and z.mycompany.com/d. Being a sysadmin, I extracted the tar.gz backup file under wwwroot (let's call the full path to the extracted directory /new_install_dir), restored the three databases, and created MySQL users and gave them the correct GRANTs on the databases. Then (trying to restore at least the first site) I changed /new_install_dir/sites/settings.php, putting in the correct database connection data and the new base path. But there is no way I can see my new site; it simply doesn't work. Watching /var/log/apache2/error.log I saw Drupal searching for the main Drupal database, so I created that DB too, setting the user and grants, but the dump file for it is empty. Well, now I can run something like install.php or update.php, but my site is not shown. Is there something I can do? Do I have to take another approach? Consider that I searched the web, but I'm not able to find a guide that helps with my problem. Ah, I forgot: before producing the backup, my colleague set the sites to maintenance mode. When I try to open z.mycompany.com/?q=user (trying to log in) nothing happens. I'm really stuck...

    Read the article

  • Deploying a Play! 2.0 application on an Apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my Play! 2.0 application on an Ubuntu 11.10 server and I have been running into error after error, so I hope someone can help me here. I am trying to deploy the application behind a reverse proxy on Apache 2. I have enabled the Apache proxy modules and configured the proxy.conf file in mods-enabled. The vhost for my domain looks like this:
      <Directory /var/www/stage.domain.com>
        AllowOverride None
        Order Deny,Allow
        Deny from all
      </Directory>
      <VirtualHost *:80>
        DocumentRoot /var/www/stage.domain.com/web
        ServerName stage.domain.com
        ServerAdmin [email protected]
        # ProxyRequests Off
        # ProxyPreserveHost On
        <Proxy *>
          Order allow,deny
          Allow from all
        </Proxy>
        # ProxyVia On
        # ProxyPass /play/ http://localhost:9000/
        # ProxyPassReverse /play/ http://localhost:9000/
        ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log
        ErrorDocument 400 /error/400.html
        ErrorDocument 401 /error/401.html
        ErrorDocument 403 /error/403.html
        ErrorDocument 404 /error/404.html
        ErrorDocument 405 /error/405.html
        ErrorDocument 500 /error/500.html
        ErrorDocument 502 /error/502.html
        ErrorDocument 503 /error/503.html
        <IfModule mod_ssl.c>
        </IfModule>
        <Directory /var/www/stage.domain.com/web>
          Options FollowSymLinks
          AllowOverride All
          Order allow,deny
          Allow from all
        </Directory>
        <Directory /var/www/clients/client2/web7/web>
          Options FollowSymLinks
          AllowOverride All
          Order allow,deny
          Allow from all
        </Directory>
        # Clear PHP settings of this website
        <FilesMatch "\.ph(p3?|tml)$">
          SetHandler None
        </FilesMatch>
        # mod_php enabled
        AddType application/x-httpd-php .php .php3 .php4 .php5
        php_admin_value sendmail_path "/usr/sbin/sendmail -t -i [email protected]"
        php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp
        php_admin_value session.save_path /var/www/clients/client2/web7/tmp
        # PHPIniDir /var/www/conf/web7
        php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$
        # add support for apache mpm_itk
        <IfModule mpm_itk_module>
          AssignUserId web7 client2
        </IfModule>
        <IfModule mod_dav_fs.c>
          # Do not execute PHP files in webdav directory
          <Directory /var/www/clients/client2/web7/webdav>
            <FilesMatch "\.ph(p3?|tml)$">
              SetHandler None
            </FilesMatch>
          </Directory>
          # DO NOT REMOVE THE COMMENTS!
          # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE!
          # WEBDAV BEGIN
          # WEBDAV END
        </IfModule>
        # <Location /play/>
        #   ProxyPass http://localhost:9000/
        #   SetEnv force-proxy-request-1.0 1
        #   SetEnv proxy-nokeepalive 1
        # </Location>
        ProxyRequests Off
        ProxyPass /play/ http://localhost:9000/
        ProxyPassReverse /play/ localhost:9000/
        ProxyPass /play http://localhost:9000/
        ProxyPassReverse /play http://localhost:9000/
        # SetEnv force-proxy-request-1.0 1
        # SetEnv proxy-nokeepalive 1
      </VirtualHost>
    This vhost file was generated by ISPConfig and I have not touched anything that was there before, only added to it. As you can see from the commented-out parts, I have tried a lot of different things based on random tutorials I found, but all of them ended in an Internal Server Error, a 503, and most often a 502 Bad Gateway. I can start Play and it does connect successfully to my database. I can get a page to show up when there is an error (the Play stack-trace error page comes up), but when everything should be fine I get one of the errors above. My application.conf file looks like this:
      db info .......
      application.mode=PROD
      logger.root=ERROR
      # Logger used by the framework:
      logger.play=INFO
      # Logger provided to your application:
      logger.application=DEBUG
      http.path="/play/"
      XForwardedSupport="127.0.0.1"
    And my hosts file looks like this (I have never changed or added anything to the hosts file):
      127.0.0.1       localhost
      127.0.1.1       matrix
      # The following lines are desirable for IPv6 capable hosts
      ::1     ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
    If you have any insight into what I might be doing wrong, or anything I can try, please let me know! Thanks!!
    Edit: Again, the reverse proxy itself does work (I checked by pointing it at google.com). The failures happen when there is a successful connection to Netty; it's as if Netty refuses the connection to the page.
    Edit 2: Output from apachectl -S:
      _default_:8081         127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10)
      *:8090                 is a NameVirtualHost
               default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
               port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
      *:80                   is a NameVirtualHost
               default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
               port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
               port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7)
               port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)

    Read the article

  • I want to combine the databases from two different sites under one URL. How is this possible?

    - by Punct Ulica
    I have a small site that I want to merge with a bigger one. How can I merge the second one with the first? I know that one solution would be to make the smaller one a subdomain of the bigger one, but I would like the following thing to happen: when I click on a category or a tag, posts from both sites/databases would appear. Something like Smashing Magazine did when it assimilated designinformer.com. The other solution and the one that I would prefer would be to merge the two databases, but I don't know if this is possible.

    Read the article

  • Urllib's urlopen breaking on some sites (e.g. StackApps api)

    - by Edan Maor
    I'm using urllib2's urlopen function to try and get a JSON result from the StackOverflow api. The code I'm using:
      >>> import urllib2
      >>> conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
      >>> conn.readline()
    The result I'm getting:
      '\x1f\x8b\x08\x00\x00\x00\x00\x00\x04\x00\xed\xbd\x07`\x1cI\x96%&/m\xca{\x7fJ\...
    I'm fairly new to urllib, but this doesn't seem like the result I should be getting. I've tried it in other places and I get what I expect (the same as visiting the address with a browser gives me: a JSON object). Using urlopen on other sites (e.g. "http://google.com") works fine, and gives me actual html. I've also tried using urllib and it gives the same result. I'm pretty stuck, not even knowing where to look to solve this problem. Any ideas?
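    Those leading bytes (\x1f\x8b\x08...) are a gzip header: the Stack Overflow API of that era compressed every response regardless of what the client asked for, so a browser (which decompresses transparently) shows JSON while urllib2 hands back the raw compressed bytes. A small sketch of the fix, matching the Python 2 / urllib2 code above:
      # Decompress the gzip-encoded body to get at the JSON text.
      import gzip
      import urllib2
      from StringIO import StringIO

      conn = urllib2.urlopen("http://api.stackoverflow.com/0.8/users/")
      compressed = conn.read()
      body = gzip.GzipFile(fileobj=StringIO(compressed)).read()
      print body[:200]   # plain JSON from here on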

    Read the article

  • Rackspace Cloud Sites: Compute Cycles exploding. Very expensive.

    - by Jaap
    Hi all, since last week my compute cycles (CC) have gone through the roof (Rackspace Cloud Sites). Normally I stay under 10,000 cycles per month; this month I already have more than 75,000 compute cycles. I don't have more visitors and I did not change anything in the code. I looked in the raw log files, but that didn't help either. This explosion of CC has already cost me more than 750 USD, and it's still counting. Anyone know what to do? I contacted Rackspace last week, but still no solution/answer. Looks like Rackspace is liking the money! Help! Thanks.
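    One way to narrow it down yourself is to tally the raw access logs: compute cycles roughly track the work the site does, and a scraper or bot hammering one URL stands out immediately in a per-path and per-user-agent count. A rough sketch, assuming a combined-format access log (the file name is a placeholder):
      # Rough log-tally sketch: count requests per path and per user agent in a
      # raw access log. Assumes the combined log format; the file name is a
      # placeholder. A runaway bot or hot URL usually jumps straight out.
      from collections import Counter

      paths, agents = Counter(), Counter()

      with open("access.log", encoding="utf-8", errors="replace") as log:
          for line in log:
              parts = line.split('"')
              if len(parts) < 6:
                  continue                   # not a combined-format line
              request = parts[1].split()     # e.g. GET /index.php HTTP/1.1
              if len(request) >= 2:
                  paths[request[1]] += 1
              agents[parts[5]] += 1

      for path, hits in paths.most_common(10):
          print(hits, path)
      print("---")
      for agent, hits in agents.most_common(10):
          print(hits, agent)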

    Read the article

  • Low noise sites to keep track of news related to computers & programming?

    - by Sridhar Ratnakumar
    I am aware of sites like Slashdot and Ars Technica. Unfortunately they publish way too many articles per day: Ars Technica runs about 100 posts per week and Slashdot about 179 posts per week, and I prefer not to waste my attention reading (or even skimming) roughly 180 posts every week. Is there a site that provides only important news, excluding not-so-important items like the top 6 iPad apps someone is dying to try out (but hasn't yet, and still thinks their readers/advertisers care about)?

    Read the article

  • Does Apache need to be stopped to edit "/etc/apache2/sites-available/default"?

    - by webworm
    I am attempting to edit the "default" file located at "/etc/apache2/sites-available/default" on my Ubuntu machine running Apache 2.2.8. I want to do this in order to enable the use of .htaccess files. I have downloaded the "default" file and edited it, and now I am trying to upload it back to the server via SFTP, but I keep getting permission-denied errors. Could it be because Apache is running and making use of the file? I am an admin on the machine, so I would expect to be able to overwrite the file. Thanks for any assistance.

    Read the article

  • Local sites not displaying in VirtualBox when using Django's local development server?

    - by littlejim84
    Hello. I develop web applications using Django on Mac OS X 10.6. I use Django's built-in local development server, which I run on my computer's IP (such as http://192.168.0.11:8001/). I test my applications in Firefox, Safari and Chrome and they all display fine. I use Sun's VirtualBox with 3 different instances of Windows XP that have IE6, IE7 and IE8 on them. For whatever reason, these sometimes just don't display the Django sites; they come up with 'The page cannot be displayed'. Eight times out of ten they display fine and function normally, but occasionally, for no apparent reason, they won't display at all. Sometimes restarting Django's local development server from the Terminal will fix the problem, sometimes it won't. Is there some sort of VirtualBox or Django setting that I need to set to ensure smooth operation of this? Am I overlooking something? Has anyone else had these problems?

    Read the article

  • how do copyright permission systems for content hosting sites work?

    - by zebraman
    I am wondering about subscription sites that host content, like recorded performances from concerts. I'm sure there is a tangle of copyright permissions that must be granted for these video/audio files to be hosted. For example, if a band plays a cover of another band's song, permission must be obtained not only from the band that performed, but also from the band that owns the song, and perhaps even from the venue that hosted the performance, to record the video and post the content. I am curious how websites that host content like this work. How might an automated copyright system keep track of who owns certain performances and obtain permission from those owners to record and post their content?

    Read the article

  • How to suppress javascript errors for sites I'm not developing?

    - by Simon_Weaver
    I like to keep JavaScript debugging enabled in my browser so that when I'm developing my own code I can instantly see when I've made an error. Of course this means I see errors on apple.com, microsoft.com, stackoverflow.com, cnn.com, facebook.com. It's quite fun sometimes to see just how much awful code there is out there being run by major sites, but sometimes it gets really annoying. I've wondered for YEARS how to change this but never really got around to it. It's particularly annoying today and I'd really like to know of any solutions. The only solution I have is: use a different browser for everyday browsing. I'm hoping there's some quick and easy plugin someone can direct me to where I can toggle it on and off based upon the domain I'm on. Edit: I generally use IE7 for everyday browsing.

    Read the article

  • Why is Drupal writing to root and not sites/default/files?

    - by Candland
    I'm using Drupal 6.14 on Windows 7. Everything seems to work except that files which should be written to sites/default/files are instead being written to /. The site was moved from a Linux installation, which was writing the files correctly. I have set up a web.config with the rewrite rules for Drupal. I'm not sure what or where else I should check. Thanks for any help.
      <rule name="Drupal Clean URLs" stopProcessing="true">
        <match url="^(.*)$" />
        <conditions>
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        </conditions>
        <action type="Rewrite" url="index.php?q={R:1}" appendQueryString="true" />
      </rule>

    Read the article

  • Interpreted vs. Compiled Languages for Web Sites (PHP, ASP, Perl, Python, etc.)

    - by Andrew Swift
    I build database-driven web sites. Previously I have used Perl or PHP with MySQL. Now I am starting a big new project, and I want to do it in the way that will result in the most responsive possible site. I have seen several pages here where questions about how to optimize PHP are criticized with various versions of "it's not worth going to great lengths to optimize PHP since it's an interpreted language and it won't make that much difference". I have also heard various discussions (especially on the SO podcast) about the benefits of compiled vs. interpreted languages, and it seems as though it would be in my interest to use a compiled language to serve up the site instead of an interpreted language. Is this even possible in a web context? If so, what would be a reasonable language choice? In addition to speed, one benefit I foresee is the possibility of finding bugs at compile time instead of having to debug the web site. Is this reasonable to expect?

    Read the article

  • WordPress: can't access WordPress.com and other external sites?

    - by Rax Olgud
    Hello, I recently started a WordPress blog using hosting at MyDomain (they offer the application "natively"). The blog works fine, however I have two plugins I can't seem to install correctly. First, the WordPress.com Stats plugin requires the API Key. When I input it, I get the following message: Error from last API Key attempt: Your blog was unable to connect to WordPress.com. Please ask your host for help. (transport error - could not open socket: 110 Connection timed out) Second, the Akismet plugin is not configured. When I go to Akismet page to insert my API key, it has the following message: There was a problem connecting to the Akismet server. Please check your server configuration. I assume the two issues are related... I approached my hosting provider about the subject and all they said is that they don't support WordPress, only provide means to install it. To clarify, up to this point I have only been able to install plugins that don't require an API key. What can I do to diagnose the problem and fix it? As a work-around, are there comparable stats and anti-spam plugins that don't require an API key? Many thanks.
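    Both messages point the same way: the "could not open socket: 110 Connection timed out" transport error means the web server itself cannot open outbound connections, which some shared hosts block by default, and the Akismet message is the same failure seen from the other plugin. If you can run anything on the host (via SSH or a cron job), a quick check is to try opening a socket to the services the plugins talk to; the host names below are examples of such endpoints:
      # Outbound-connectivity check. Host names are examples of endpoints the
      # plugins need to reach; a timeout here matches the "could not open socket"
      # error and means outgoing requests are being blocked by the host.
      import socket

      TARGETS = [("wordpress.com", 443), ("rest.akismet.com", 443)]

      for host, port in TARGETS:
          try:
              socket.create_connection((host, port), timeout=10).close()
              print("%s:%d reachable" % (host, port))
          except OSError as exc:
              print("%s:%d blocked or unreachable (%s)" % (host, port, exc))
    If these time out, the fix is on the hosting side (allowing outbound connections) rather than in WordPress itself; most stats and anti-spam plugins ultimately need to call home, so alternatives that skip the API key will hit the same restriction unless they do everything locally.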

    Read the article

  • How do I Set Up Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it was starting to get very expensive to run each on an individual hosting plan. I purchased the HostGator Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain name, which is fine (I used my portfolio domain name). As far as file structure goes, I wanted a folder for each site in /public_html, so the structure looks like this:
      - public_html/
        - myportfolio.com/
          - ... my files ...
        - anothersite.com/
          - ... my files ...
        - thirdsite.com/
          - ... my files ...
    I set up add-on domains and pointed them to their respective folders, which works fine. My problem is that the root domain (e.g. myportfolio.com) expects its files to be contained at the root of /public_html rather than within the folder I created. I set up a redirect to point requests for myportfolio.com to myportfolio.com/myportfolio.com/, which works initially except that (at least in my WordPress installation) it still references its root folder as public_html. TL;DR: What is the best way to go about setting up multiple-site hosting in a shared hosting environment (i.e. I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more clearly? Thanks.

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, and Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects: What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances. Right now we can afford just one dedicated server, so I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future? Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one 50 KB JavaScript file, is there any affordable CDN we could consider from the beginning? Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later (both in terms of server and JavaScript/Ajax limitations)? Thanks in advance.

    Read the article

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a lot of different View blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out the source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site; frankly, the production site should use even less memory, as I've disabled some of the modules running on the dev site. Any idea why this might be the case?

    Read the article
