Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.

Page 142/389 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and I don't have "x.php" on my site anymore. As happens with major public vulnerabilities, script kiddies are running tools that hit my site looking for "x.php" throughout the entire structure of the site - constantly, 24/7. This is wasted bandwidth, traffic and load that I don't really need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks! EDIT: I should add that, as part of hardening my site, I've started using ZBBlock: this PHP security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It will then send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, they will be served a permanently recurring 503 OVERLOAD message with a 24-hour timeout. But ZBBlock doesn't do quite exactly what I want to do; it does help with other spam/script/hack blocking.
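
    A minimal sketch of the custom-404 idea from the question, assuming an ErrorDocument 404 directive pointing at this script; the file names and flat-file ban list are illustrative, not part of the question. Note that only requests reaching this handler are blocked, so a full site-wide ban would need the check on every request (or a tool such as ZBBlock or fail2ban reading the same list).

    ```php
    <?php
    // 404.php - hypothetical custom 404 handler; wire it up in .htaccess with:
    //   ErrorDocument 404 /404.php
    $banFile = __DIR__ . '/banned-ips.txt';
    $ip      = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
    $uri     = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';

    // Deny IPs that were caught earlier. This only fires for requests that
    // reach the 404 handler, not for every request to the site.
    $banned = file_exists($banFile) ? file($banFile, FILE_IGNORE_NEW_LINES) : array();
    if ($ip !== '' && in_array($ip, $banned, true)) {
        header('HTTP/1.1 403 Forbidden');
        exit('Forbidden');
    }

    // Record anyone probing for the removed vulnerable script.
    if ($ip !== '' && stripos($uri, 'x.php') !== false) {
        file_put_contents($banFile, $ip . PHP_EOL, FILE_APPEND | LOCK_EX);
    }

    header('HTTP/1.1 404 Not Found');
    echo 'Not Found';
    ```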

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'. Application server: should be able to run CentOS (or another light Linux distro), Apache, PHP and GD (so it does rely on its CPU), and should be extremely reliable and fast. Database server: should be able to run MySQL and... well, do nothing else :P, and should be extremely reliable and fast. Storage server: should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.), should do nothing else, and should be extremely reliable and fast. So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay for both "including CDN" and "excluding CDN" when you decide to enable the delivery network, or do you only pay for "including CDN"? Should I use my own nameserver too, or can I use my domain host's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answer so far: I shouldn't write my own nameserver software; instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Is a merchant account a requirement for a website to take payments?

    - by calum
    Hi, I have had a quick look but couldn't see anything related. Basically, if we were to accept payments for events on our website via PayPal (essentially a Buy It Now! button), as a business, do we need a merchant account, or will a regular bank account be acceptable? I may have some confusion in terms. My understanding is that you need a merchant account to accept credit card payments, but as we are using PayPal, is this necessary? Thank you for any clarification. Disclaimer - I've read "What are some options for taking payments on my website?" but it doesn't explicitly say whether we require a merchant account or not. Thank you.

    Read the article

  • SH404SEF URLs in Joomla 1.5

    - by Tao Bellamine
    I have two modules to play with URLs: the global configuration module and the sh404sef module. The global config is set to "SEF URLs: YES" and "mod_rewrite enabled: YES", and sh404sef is set to "URL optimization: NO". My problem is that even with "SEF URLs" set in the global config, my URLs still don't seem to be that "user friendly", so I turned on "URL optimization" in the sh404sef module and got better, more descriptive URLs. However, the problem I inherit from doing this is that my dynamically populated ChronoForms get messed up (only the ChronoForms; other forms are fine): these forms now show up on the homepage instead of their own reserved page. Here's an example: Old form "GOOD" URL: http://www.mycraftwork.com/index.php?option=com_content&view=article&id=94 New optimized "BAD" URL: http://www.mycraftwork.com/handthrown-pottery/alladin-teapot/index.php?option=com_content&view=article&id=94 Any help would be GREATLY appreciated! I can even turn sh404sef on and off if people are interested in seeing the issue LIVE. Thanks!! Tao Bellamine

    Read the article

  • Wordpress .htaccess preventing subfolder access

    - by John K.
    This is sort of a goofy setup, but it's not in my power to reconfigure it at this time. I'm running in a shared hosting environment. The domain is example.com. This is an add-on domain on the host side, with example.com being redirected to the www/example.com sub-directory. That directory houses a standard WordPress site which acts as the main site when you visit example.com. The .htaccess file within that directory is: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteRule ^wp-admin/profile\.php$ /ssm/welcome [R] </IfModule> I have a subdirectory, /tracker, at the root level alongside the /example.com subdirectory, which houses a CakePHP application. My problem is that when I attempt to browse to example.com/tracker, I get a 404 from WordPress because permalinks are on. What I think I need is a rewrite rule in the WordPress .htaccess file that short-circuits the existing rewrite rules and permits example.com/tracker to work independently of the WordPress install, or a rewrite rule at the root level that short-circuits the redirect to the /example.com directory in the first place (see the sketch below). Not sure how well I explained that, so here's a summary. The www/ directory structure: example.com/ and tracker/. Add-on domain of www.example.com redirecting to the /example.com directory with WordPress, and a tracker/ directory running CakePHP which I would like to access via www.example.com/tracker. If you need further info or clarification let me know!
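
    Whether example.com/tracker can reach the CakePHP directory at all depends on how the host maps the add-on domain; assuming requests for /tracker end up in the directory WordPress serves from, a minimal sketch of the usual exclusion rule (the tracker path is taken from the question) is:

    ```apache
    # Hypothetical WordPress .htaccess with an early pass-through rule added
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    # Hand /tracker requests off untouched so WordPress never rewrites them to index.php
    RewriteRule ^tracker($|/) - [L]
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    ```

    The /tracker directory would also still need its own rewrite rules for CakePHP's pretty URLs (CakePHP normally ships its own .htaccess files for this).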

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind me asking this newbie question about Google ranking. I know that in order to get ranked a page has to be crawled by Google's bots, and I have an example page that should help me better understand how the system works with Google. I made a page on my website last month and it got indexed pretty quickly. It started on page 15 of Google's results for my keyword; the next day it made it to page 13, then after a week it was jumping back and forth between pages 17/18 and 20. Now a month has passed and it isn't listed in any position for that keyword; sometimes I will find it on page 30, but later I won't find it anywhere, and it keeps happening this way these days. Even though it isn't listed on any page for my keyword, if I do a search for "site:thepageadress" it will be listed, which means I'm not penalized and my page is there for Google to see - it just isn't in the search results for my keyword. But when I search "site:thepage_adress", hit the "search tools" option and click "Past day" or "Past week", it isn't listed; it is only listed when I click "Past month". I think this means that Google indexed the page, looked at it once when I published it, and never looked at it again - is this a fair statement? So two questions come to mind here. 1 - Should Google keep looking at a page even if I haven't changed any info on it? And is this an indication that my page is doing fine, or is it normal that Google sees it once and that's it? 2 - Why does my page keep jumping back and forth in the ranking results for my keyword, sometimes not even being listed, and how do I fix that? What does it mean? Sorry for the long message; I hope somebody can help me with this. Thank you!

    Read the article

  • URL structure for content that is updated daily

    - by Brendon
    A small, simple site I am working on displays a single page with the day's best offers on it. The user is able to move back and forth between previous days. Which of the following URL structures works best? Structure 1 /index.html -- today's best offers /2013-06-29.html -- yesterday's best offers, etc. Structure 2 /index.html -- 302 redirects to /2013-06-30.html (or whatever today is) /2013-06-30.html -- today's best offers /2013-06-29.html -- yesterday's best offers, etc. I quite like structure 2 from the user's point of view (they can share content easily), but I am a bit concerned about updating the redirect from /index.html every single day -- would this perhaps have unintended SEO consequences?
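
    For structure 2, the daily redirect doesn't have to be edited by hand. A minimal sketch in PHP (the date-based file names follow the question; the daily /YYYY-MM-DD.html pages are assumed to already exist):

    ```php
    <?php
    // index.php - 302-redirect the front page to today's offers page.
    $today = gmdate('Y-m-d');
    header('Location: /' . $today . '.html', true, 302);
    exit;
    ```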

    Read the article

  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around 6 filters. Currently I have the page implemented with one filter as the base filter (part of the URL that would get indexed) and the other filters in the form of hash URLs (which won't get indexed). This way the duplication is less. Something like this: example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value Now as you may see, only one filter is within the reach of the search engine while the rest are hashed. This way I could have 6 pages. Now the problem is that I expect users to use two filters as well at times while searching. As per my analysis using the Google Keyword Analyzer, there are a fair number of users that might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the URL would simply explode the number of pages, and sticking to the current way wouldn't let me target those users. I was thinking of going with at most 2 base filters and the rest as part of the hash URL, but the only thing stopping me is that it would lead to duplication of content as per Google Webmaster Tools' suggestions on URL structure.
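
    One common way to keep two-filter pages crawlable without multiplying duplicates is a canonical link; a minimal sketch (the URLs are the illustrative ones from the question):

    ```html
    <!-- On a hypothetical two-filter page such as
         example.com/filter1value-items-filter2value, point the canonical at the
         preferred single-filter page so the extra combinations are consolidated
         rather than treated as duplicate content -->
    <link rel="canonical" href="http://example.com/filter1value-items" />
    ```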

    Read the article

  • Track anchor links with Google Analytics

    - by Fredrik
    I have searched for how to track anchor links in analytics, but couldn't get it working. I have this code in the header: <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('_setAllowAnchor', true); ga('create', 'UA-*******-1', '****.com'); ga('send', 'pageview'); </script> And my links look like this: <a href='#/contact'><span>Contact</span></a> I also tried to use these links: <a href='#/contact' onClick="_gaq.push(['_trackPageview', location.pathname+location.search+location.hash]);"><span>Contact</span></a> Are there any tips on what I can do?
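
    The header snippet loads analytics.js, while _setAllowAnchor and _gaq.push belong to the older ga.js library, so they are ignored here. A minimal sketch of sending a virtual pageview for each hash link with the analytics.js syntax already on the page (the selector and path format are assumptions based on the links in the question):

    ```js
    // Attach a click handler to every in-page "#/..." link and report it as a pageview.
    document.addEventListener('DOMContentLoaded', function () {
      var links = document.querySelectorAll('a[href^="#/"]');
      Array.prototype.forEach.call(links, function (link) {
        link.addEventListener('click', function () {
          ga('send', 'pageview',
             location.pathname + location.search + link.getAttribute('href'));
        });
      });
    });
    ```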

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the product color specification and the product image. The title, meta and description are identical (by the way, the color is in a select form). I made 4 of the products link canonical to the one that is the master, based on many factors. If the master becomes inactive or goes out of stock, one product from the other 4 will become the new master and the rest will become canonical to it. The question is: if a product goes from being canonical to being the master, will the site suffer a penalty from Google or will it work just fine? What will Google think about this strategy?

    Read the article

  • Google Authorship image for my blogspot has disappeared from Google SERPs. Why?

    - by Sathiya Kumar
    I have a Blogspot blog and I used Google Authorship markup to make my image appear in the Google SERPs for my keywords. My image was shown for the last 2 months, but while checking the SERPs for my blog I found that my authorship markup is no longer working: my image, name and Google+ follower count are not appearing next to my blogspot URL in the SERPs. I didn't make any changes to my Google+ profile or to my blogspot header tag where I had put the authorship code. I tried to find the reason but didn't find any valuable answer. Can anyone answer this question? Please let me know if you have already experienced something like this.
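
    For reference, the markup Google documented for authorship at the time came in two forms; this is a hedged reconstruction (the profile ID is a placeholder) worth checking against what is actually in the blog's header:

    ```html
    <!-- Option 1: a link element in the <head> pointing at the Google+ profile -->
    <link rel="author" href="https://plus.google.com/112233445566778899000" />

    <!-- Option 2: a visible link in the page body carrying rel="author" -->
    <a href="https://plus.google.com/112233445566778899000" rel="author">My Google+ profile</a>
    ```

    Google also required the Google+ profile to link back to the blog in its "Contributor to" section for the picture to show, so that is worth re-verifying as well.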

    Read the article

  • Which E-commerce Platform works well with Flash Product Customization+Social?

    - by Artur
    What's the best platform out there that is flexible enough to easily integrate this custom Flash app? I would like customers to: 1 - select a t-shirt from a gallery of artists; 2 - customize it (using a Flash tool I created); 3 - select a t-shirt size; 4 - order it. All this Flash widget does is generate a JPG on the server; the e-commerce app should assign it to that order/customer and add it to their shopping cart. Social features: customers should also be able to comment on the t-shirts and artist bios. I was thinking of trying WordPress plugins like Shopp or GetShopped or Cart66, then BuddyPress for the social features. Or is Magento a better choice? Thanks!

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As the webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain - here I used mydomain.zzz - what does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the Stack Exchange network WOULD it be on topic? Delivered-To: [email protected] Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000 Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26]) (envelope-sender <[email protected]>) by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP for <[email protected]>; 27 Jan 2011 02:32:47 -0000 X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700 Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500 Return-Path: [email protected] Status: Message-ID: <20110126173109.4d9d6c3f2b@1c3c> From: "Tech Support" <[email protected]> To: <[email protected]> Subject: Information, as instructed. Date: Wed, 26 Jan 2011 17:31:09 -0500 X-Priority: 3 X-Mailer: General-Mailer v.3 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. William Faulkner The Sound and the Fury

    Read the article

  • Webmaster Tools word count

    - by Henrik Erlandsson
    Is there a way to somehow verify that the Googlebot finds the headings and the content, for example by word count? I'm asking because I tried a program called Screaming Frog, which fails to even fetch the first h1 on a validated page - for about 1/3 of all the pages(!) - which made me unsure. Even though the site looks hunky dory in Webmaster Tools, I'd like to know what a Googlebot-like content crawler finds on my page and in what order. Any tips on such tools are appreciated. This is not about keyword count.

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images. All those images are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones - and it is going DOWN! I'd understand if not all of my images got indexed (although that is also unclear and very frustrating for me), but I cannot understand how indexing can go in the negative direction. All the images stay in their places, and the pages containing them stay unchanged - at least, they are intended to. Any thoughts?

    Read the article

  • Google's process for publishing/modifying pages [closed]

    - by Glenn Dayton
    I'm assuming that a group of people at Google have control of certain sections of google.com, but how does Google make sure that employees don't accidentally or intentionally sabotage the website? Does Google use Adobe Contribute or some similar product for sharing/publishing the website? Do employees use WebDAV, FTP, SFTP, or SSH to publish the site? Since Google has hundreds of thousands of servers, it probably takes some time for them all to update - do they transmit the new copy of the website to all servers before publishing it at once? This question does not apply to Google editing a database and having a page reflect the database's changes; it applies to employees editing the source code and/or back end of the site.

    Read the article

  • How can I pass referrer header from my https domain to http domains?

    - by nutcracker
    My website is 100% https. I have links to other http domains. The referrer header is not set when linking from a https page to a http page. From http://en.wikipedia.org/wiki/HTTP_referrer If a website is accessed from a HTTP Secure (HTTPS) connection and a link points to anywhere except another secure location, then the referer field is not sent. I would prefer that other domains can see the referrer so that they know that traffic comes from my domain. Is there a way to force this header or is there another solution? Update I've done some basic testing using a redirect: http page -- link to http --> 301 redirect --> http page = referrer intact https page -- link to https --> 301 redirect --> http page = referrer blank https page -- link to http --> 301 redirect --> http page = referrer blank https page -- link to http --> 302 redirect --> http page = referrer blank The referrer is lost when linking from a https page to a http redirect page on my own domain. So there is no referrer on the redirect.
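
    A later-standardised mechanism, Referrer Policy, lets an HTTPS page opt back in to sending the referrer to HTTP destinations. Browser support postdates this question and varies, so treat this as a sketch rather than a guaranteed fix:

    ```html
    <!-- Page-wide: allow the referrer to be sent even on HTTPS -> HTTP navigation -->
    <meta name="referrer" content="unsafe-url">

    <!-- Or per link -->
    <a href="http://other-domain.example/" referrerpolicy="unsafe-url">outbound link</a>
    ```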

    Read the article

  • Google search results are invalid

    - by Rufus
    I'm writing a program that lets a user perform a Google search. When the result comes back, all of the links in the search results are links not to other sites but to Google, and if the user clicks on one, the page is fetched not from the other site but from Google. Can anyone explain how to fix this problem? My Google URL consists of this: http://google.com/search?q=gargle But this is what I get back when the user clicks on the Wikipedia search result, which was http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U&ei=_4vkT5y555Wh6gGBeOzECg&ved=0CBMQejAe&usg=AFQjeNHd1eRV8Xef3LGeH6AvGxt-AF-Yjw <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html lang="en" dir="ltr" class="client-nojs" xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Gargling - Wikipedia, the free encyclopedia</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <meta name="generator" content="MediaWiki 1.20wmf5" /> <meta http-equiv="last-modified" content="Fri, 09 Mar 2012 12:34:19 +0000" /> <meta name="last-modified-timestamp" content="1331296459" /> <meta name="last-modified-range" content="0" /> <link rel="alternate" type="application/x-wiki" title="Edit this page" > <link rel="edit" title="Edit this page" > <link rel="apple-touch-icon" > <link rel="shortcut icon" > <link rel="search" type="application/opensearchdescription+xml" > <link rel="EditURI" type="application/rsd+xml" > <link rel="copyright" > <link rel="alternate" type="application/atom+xml" title="Wikipedia Atom feed" > <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=ext.gadget.teahouse%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cskins.vector&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">#mwe-lastmodified { display: none; }</style><meta name="ResourceLoaderDynamicStyles" content="" /> <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=site&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">a:lang(ar),a:lang(ckb),a:lang(fa),a:lang(kk-arab),a:lang(mzn),a:lang(ps),a:lang(ur){text-decoration:none} /* cache key: enwiki:resourceloader:filter:minify-css:7:d5a1bf6cbd05fc6cc2705e47f52062dc */</style>
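
    The /url?q=... links are Google's click-tracking redirects; the real destination sits in the q parameter. A minimal sketch of unwrapping it, in PHP (the program's actual language isn't stated in the question, so this is only illustrative):

    ```php
    <?php
    // Recover the real destination from a Google "/url?q=..." redirect link.
    $href   = 'http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U';
    $target = $href;
    $query  = parse_url($href, PHP_URL_QUERY);
    if ($query !== null && $query !== false) {
        parse_str($query, $params);
        if (isset($params['q'])) {
            $target = $params['q'];
        }
    }
    echo $target; // http://en.wikipedia.org/wiki/Gargling
    ```

    For anything beyond a quick experiment, a supported search API is usually a better route than scraping the HTML results page, which can also violate the terms of service.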

    Read the article

  • How to promote a travel blog? [closed]

    - by Tschareck
    Possible Duplicate: What are the best ways to increase your site's position in Google? How can I increase the traffic to my site? I know this question might seem a little off topic, but blogging may be an important part of travel. Nowadays, in the time of Facebook, Twitter and similar services, keeping a travel blog may seem a little archaic - it's not 2005 anymore. But a lot of my travel colleagues update their blogs and have a significant number of readers. I have also tried to keep my blog when I travel; however, it seems that the only reader is my mum ;) What is your advice on promoting a travel blog?

    Read the article

  • How can I assign an id to the subject input box in Simple Machines Forum?

    - by Mahesh Bhat
    I have installed Simple Machines Forum (SMF). For my requirement I need to change the subject input box. Presently the subject input box has the following code: <input type="text" class="input_text" maxlength="80" size="80" tabindex="1" name="subject"> I want it to be changed to: <input type="text" id="subject" class="input_text" maxlength="80" size="80" tabindex="1" name="subject"> How can I do this? I browsed through many files in Simple Machines Forum and couldn't find it. Can anyone familiar with Simple Machines Forum please help me?
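
    If editing the theme isn't an option, a minimal sketch in plain JavaScript can add the id after the page loads. (In SMF the post form markup itself usually lives in the theme's Post.template.php, which would be the cleaner place to change it, but double-check for your version.)

    ```js
    // Add id="subject" to the post form's subject field once the DOM is ready.
    document.addEventListener('DOMContentLoaded', function () {
      var field = document.querySelector('input[name="subject"]');
      if (field) {
        field.id = 'subject';
      }
    });
    ```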

    Read the article

  • WUXGA revisited

    - by John Paul Cook
    I previously blogged about my search for a 17” 1920x1200 laptop. The only one I could find was a 17” MacBook Pro, which has been an excellent machine for running Windows and SQL Server. It is no longer made. Apple has a few refurbished ones available. Just be sure to get a matte display if you buy one. If you want WUXGA resolution or better in a laptop, your only off the shelf option is now the 15” MacBook Pro with the Retina display, which is 2880x1800. This exceeds the resolution of my 30” 2560x1600...(read more)

    Read the article

  • Help selecting a dedicated server with good disk I/O & network

    - by JP19
    Hi, I am looking for a cheap dedicated server. (I was happy with a VPS earlier, until I realized that the disk I/O is not at all reliable and depends on what your neighbours are up to at the moment.) I was browsing through http://www.lowenddedi.net/the-database and I don't understand the memory speed and NIC speed columns at all. What will their effect be? Do I need to worry about them? Also, can someone help suggest a provider with the following criteria: 1) good and reliable network; 2) price <= $60/month. Thanks, JP

    Read the article

  • Are shorter URLs better for SEO?

    - by articlestack
    Many people shorten their URLs, but as per my understanding this creates the overhead of an extra redirection, others cannot guess the target article from the URL, and it should be less friendly for "inurl:..." type searches. Should I shorten the URLs of my sites? Is there any advantage to short URLs besides the fact that they take fewer characters in anchor tags on the page (good for site loading)?

    Read the article

  • Basic Google Analytics Click Tracking and/or Overview

    - by Alan Storm
    This is a really basic Google Analytics question. Apologies in advance if it's not appropriate here, but I've had a lot of luck on Stack Overflow and this seems like the best Stack Exchange site for a question like this. I'm trying to understand how Google Analytics goals work, or whether they're the right feature to be using for my situation. Most of the documentation I find online refers to the old version of the UI, not the new one. I have a website, let's call it blog.example.com. This website drives traffic to an ecommerce store, let's call that store.example2.com. I want to get reports on which links from blog.example.com are being clicked through to store.example2.com. How do you do this in Google Analytics? Are goals the right area to be looking at? Do I set up the goals on store.example2.com or blog.example.com? Or both? Is there any canonical user guide (free or paid) that covers how this works? I'm a competent programmer, but it's years since I dealt with conversion tracking on any serious level, and we've progressed well beyond my frozen-caveman pixel-tracking knowledge. Thanks in advance
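
    Goals on store.example2.com would capture conversions, but the narrower question - which blog links get clicked - can also be answered with outbound-link event tracking on blog.example.com. A minimal sketch using the analytics.js syntax (the selector and event names are assumptions, not part of the question):

    ```js
    // Report every click on a link pointing at the store as a GA event,
    // labelled with the destination URL so individual links can be compared.
    document.addEventListener('DOMContentLoaded', function () {
      var links = document.querySelectorAll('a[href*="store.example2.com"]');
      Array.prototype.forEach.call(links, function (link) {
        link.addEventListener('click', function () {
          ga('send', 'event', 'outbound', 'click', link.href);
        });
      });
    });
    ```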

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    I was wondering if anyone could offer rough estimates of how many hits a day move you into a given Alexa rank: Top 5,000; Top 10,000; Top 50,000; Top 100,000; Top 500,000; Top 1,000,000. I know this is incredibly subjective, and thus the broad brush strokes with the number ranges... BUT I've got a site currently ranked just over 1.2M worldwide and over 500k in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank greater than 100,000 (time to take back that pat on the back, I guess). I know that my hits in the last 30 days are somewhere on the order of 15,000 uniques and 20,000 pageviews. So I'm wondering how much harder I have to work to achieve my next "goals"? I'd like to break into the top million, then re-evaluate from there. It'd be nice to know what those targets translate into (very roughly, of course). I imagine that Alexa ranks and tiers become very much exponential as you move up, but even anecdotal evidence from other webmasters would be really useful to me (i.e.: I have a site that is ranked X and it got Y hits in the last 30 days). Thanks :) - Alex

    Read the article
