Search Results

Search found 9717 results on 389 pages for 'gkt pro'.


  • Will Bingbot index pages with invalid SSL certificates?

    - by Martin
    Bingbot and Yahoo Slurp do not support SNI (Server Name Indication) when using SSL. Ignoring other workarounds (multi-domain certificates, non-SSL content, etc.), will Bingbot index pages that have an invalid SSL certificate, e.g. one issued for example.net but used on example.com? If possible, please provide an example from Yahoo or Bing. I have found websites in Bing that use self-signed certificates and are indexed correctly, but what about invalid certificates?
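
    A quick way to see what certificate a non-SNI client is served is to compare OpenSSL output with and without the -servername flag (example.com here stands in for the real host; older OpenSSL builds only send SNI when -servername is given):

        # certificate presented without SNI (what a non-SNI bot would see)
        openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject

        # certificate presented with SNI
        openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject

    If the two subjects differ, non-SNI crawlers are getting the mismatched certificate.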

    Read the article

  • Getting a lot of postmaster undeliverable notices for non-existent users

    - by Mike Walsh
    I've had my domain (straightpathsql.com) for a few years now. I host my e-mail with Google Apps for Business and have for a while. All of a sudden, in the past week I have started to get a lot of postmaster delivery-fail notices from various domains, most of them involving bogus e-mail addresses at my domain ([email protected], for example). My assumption here is that someone is trying to relay through some other host (not my hosts, which are secured through Google Apps for Business, I presume) and there isn't much I can do to stop it. But I just want to make sure there isn't something else I need to be looking at here. An example delivery-fail notice is below; I know nothing of those addresses and they look like garbage. (Quick edit: the reason I get these messages is that I set myself up as a catch-all, so no matter what address at my domain you send a note to, I'll get it if the account isn't set up. All of the failure messages are sent to bogus addresses on my domain.)

        The following message to <[email protected]> was undeliverable.
        The reason for the problem:
        5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)'
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0 (permanent failure)
        Remote-MTA: dns; [118.82.83.11]
        Diagnostic-Code: smtp; 5.1.0 - Unknown address error 553-'sorry, this recipient is in my badrecipientto list (#5.7.1)'
        (delivery attempts: 0)

        ---------- Forwarded message ----------
        From: Howard Blankenship <[email protected]>
        To: omiivi2922 <[email protected]>
        Cc:
        Date:
        Subject: Hi omiivi2922
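
    This is classic backscatter from forged From addresses, and an SPF record is the usual way to help receiving servers reject the forgeries before they bounce back at you. A minimal sketch, assuming mail for the domain is sent only through Google Apps (~all can be tightened to -all once you're sure nothing else sends for the domain):

        straightpathsql.com.  IN TXT  "v=spf1 include:_spf.google.com ~all"

    _spf.google.com is Google's published SPF include for Google Apps; receivers that check SPF will then treat mail from unrelated hosts claiming your domain as suspect.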

    Read the article

  • Ideal web application framework for newcomers, and whether it is better to use a Java or PHP based framework?

    - by Pawan
    My primary question is whether a Java-based web application framework is better or a PHP-based one, and why? Moreover, if I were just starting web development, what would be some good frameworks to start with, considering I may want to build a full CMS out of it later? I am not looking for a 'best', rather some good recommendations, as I understand CodeIgniter may not have long to go from here: http://heybigname.com/2012/05/06/why-codeigniter-is-dead/

    Read the article

  • Can I find out the number of searches on a given keyword, per state?

    - by Philippe
    I know that Google tells you how many times a certain keyword is used in a search; you can use the Google Keyword Tool for that. This tool also allows you to find out the number of "local" searches: this is the number of times a person from a given country searches for this keyword. My questions: can you also find out how many searches originate from a given American state? In the Keyword Tool, I can only select countries, not states. Are there any other systems I can use to determine where people are searching for a given keyword?

    Read the article

  • Is there any new method for link-backs

    - by Mir Hammad
    As all SEOs know, Google is trying its very best to kill manipulative SEO, and linkbacks are quite a difficult task now. Content may be the key, but my boss is still obsessed with linkbacks. I cannot do directory posting, link exchange, paid linking, Web 2.0 or blog commenting, as they are considered spam now. I do not see what other choice I have except forum posting and article posting. Can someone suggest a new method to acquire linkbacks? I know almost all the traditional methods, so don't just say press releases, etc. If you really have something out of the box, or at least not very common, please share.

    Read the article

  • Geotargeted subfolder questions (Portugal/Brazil and Switzerland)

    - by Lucy
    We are at the beginning of the process of getting multilingual versions of a website. We will be using subfolders working off the core domain (e.g. mydomain.com/fr/), setting the geotargeting in Webmaster Tools and setting the hreflang attribute. I would really appreciate your help with a couple of questions.

    1. Portuguese: we will have a Portuguese-language version of the site. Our intention is to use this to cover users in both Portugal AND Brazil, i.e. we are not going to do separate folders mydomain.com/pt/ and mydomain.com/br/. Can I use 2 hreflang attributes for this language version to tell Google it covers Brazil AND Portugal? What country code should I use for this subfolder?

    2. Switzerland: does anyone have best-practice advice on how to do this? On one hand the subfolder should be mydomain.com/ch/, but Switzerland covers 2 language possibilities (French AND German) - what to do? Thanks
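
    For what it's worth, hreflang targets language-country pairs rather than folders, so a single /pt/ folder can be annotated for both markets, and Swiss users can be pointed at the existing language folders instead of a /ch/ one. A minimal sketch, assuming the folder layout above:

        <link rel="alternate" hreflang="pt-PT" href="http://mydomain.com/pt/" />
        <link rel="alternate" hreflang="pt-BR" href="http://mydomain.com/pt/" />
        <link rel="alternate" hreflang="fr-CH" href="http://mydomain.com/fr/" />
        <link rel="alternate" hreflang="de-CH" href="http://mydomain.com/de/" />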

    Read the article

  • How to get the IP address of the Google search/crawler bot to add to our whitelist of IP addresses

    - by Jayapal Chandran
    Hi, Google Webmaster Tools says "network unreachable". When I contacted my hosting provider, they said that they have installed a firewall which can block frequently recurring incoming IP addresses, and they don't know Google's IP addresses to unblock. So they requested me to find the Google search/crawler bot's IP addresses so that they can add them to their whitelist. How do I find the IP addresses of the Google search bot or crawler bot? My site has stopped appearing in Google search and my hits have gone too low. What should I do? Any kind of reply would be helpful.
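
    Google doesn't publish a fixed list of crawler IPs; its documented advice is to verify an address with a forward-confirmed reverse DNS lookup. A PHP sketch of that check (the IP below is just an example address; in practice it would come from $_SERVER['REMOTE_ADDR']):

        <?php
        // Verify a claimed Googlebot address: the reverse lookup must end in
        // googlebot.com or google.com, and the forward lookup of that host
        // must return the original IP.
        $ip   = '66.249.66.1'; // example address
        $host = gethostbyaddr($ip);
        $ok   = preg_match('/\.(googlebot|google)\.com$/', $host)
                && gethostbyname($host) === $ip;
        echo $ok ? "Verified Googlebot\n" : "Not Googlebot\n";

    Your host could run the same check, or simply loosen the firewall's rate trigger, since any fixed whitelist will go stale as Google's ranges change.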

    Read the article

  • Make my website's dynamically loaded data available to the Facebook Open Graph object scraper

    - by fvaliquette
    Here is the design of my web site:

    1. The user enters myWebsite.com/a/1
    2. .htaccess rules redirect to myWebsite.com/b
    3. The JavaScript ExtJS library loads
    4. It extracts the value from the URL (in this case "1")
    5. It loads ./xml/1.xml
    6. From 1.xml it sets the Open Graph data (title, type, image, etc.)
    7. It loads the data that will be shown to the user from 1.xml into the website

    My question is: how can I make the Open Graph data available to Facebook? Facebook does not load my ExtJS JavaScript library before extracting the Open Graph object values from the HTML. Is there an easy solution to this problem? The only solutions I have found are static web pages or pages rendered dynamically on the server side, but I would like to avoid these since my web page implementation is already finished and I would like to avoid reworking it.
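
    One low-impact option is to special-case Facebook's crawler (its documented user agent contains facebookexternalhit) and emit just the meta tags server-side, leaving the ExtJS app untouched for everyone else. A rough PHP sketch, assuming the rewrite passes the "1" from /a/1 as $_GET['id'] and that the XML has <title> and <image> elements (both assumptions, not confirmed in the question):

        <?php
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (strpos($ua, 'facebookexternalhit') !== false) {
            $id  = basename(isset($_GET['id']) ? $_GET['id'] : '1'); // sanitize
            $xml = simplexml_load_file("./xml/{$id}.xml");           // hypothetical schema
            printf('<meta property="og:title" content="%s"/>',
                   htmlspecialchars((string) $xml->title));
            printf('<meta property="og:image" content="%s"/>',
                   htmlspecialchars((string) $xml->image));
            exit; // the scraper only needs the tags, not the ExtJS app
        }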

    Read the article

  • Does having multiple URIs mapping to the same resource help SEO?

    - by Brian Wheeler
    Let's say I have a site with products that have tags, and each resource is available at

        GET '/products/tagged/:tag_list/:product_permalink'

    Could that be better for SEO than just one permalink? For example, a product tagged "tea" and "coffee" would be available at

        GET '/products/tagged/tea/:product_permalink'
        GET '/products/tagged/coffee/:product_permalink'
        GET '/products/tagged/tea/coffee/:product_permalink'
        GET '/products/tagged/coffee/tea/:product_permalink'

    I would imagine that Google would appreciate this because it gives multiple URIs with different levels of detail about the product, but I can't really be certain. Anyone have any direct knowledge on the topic? --EDIT-- As John Conde points out, this is a horrible idea. What about having the links on my site point to a route such as GET '/products/tagged/:full_tag_list/:product_permalink', and then any time a user changes tags just return an HTTP "moved permanently" status to the new URL? Duplicate URLs would then be highly unlikely and mitigated by the proper response. Would this be better?
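
    Whichever URL scheme wins, the standard belt-and-braces fix for tag-order duplicates is a rel="canonical" link on every variant, pointing at one preferred URL. A minimal sketch (the path is a placeholder):

        <!-- emitted in the <head> of every tagged variant of the product page -->
        <link rel="canonical" href="http://example.com/products/some-product" />

    That way even variant URLs you never link to, but that crawlers discover anyway, consolidate onto the preferred one, complementing the 301 approach from the edit.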

    Read the article

  • My Apache access log contains weird GET and POST requests, what can I do?

    - by Konstantin
    My Apache access log contains weird GET and POST requests. Is it possible to examine which of these are harmful? For example:

        114.232.151.185 - - [11/Jun/2014:20:11:33 +0200] "GET http://hotel.qunar.com/render/hoteldiv.jsp?&__jscallback=XQScript_4 HTTP/1.1" 404 1167
        103.30.175.10 - - [12/Jun/2014:08:35:17 +0200] "GET /vtigercrm/ HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 1034
        69.174.245.163 - - [14/Jun/2014:01:22:38 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 1034
        94.74.229.110 - - [16/Jun/2014:18:46:43 +0200] "GET http://www.msftncsi.com/ncsi.txt HTTP/1.1" 404 1037
        80.73.11.164 - - [20/Jun/2014:01:52:14 +0200] "POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1" 404 1034
        162.253.66.76 - - [24/Jun/2014:23:54:30 +0200] "GET /rutorrent HTTP/1.1" 400 226
        122.226.223.69 - - [25/Jun/2014:01:14:27 +0200] "GET http://todd0738.gotoip4.com//hello.html HTTP/1.1" 404 1041

    My full Apache access log file: http://pastebin.com/2x0naQBK
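
    The percent-encoded POST is easier to judge once decoded; a quick PHP sketch (only the first flag is shown, the full string continues the same way):

        <?php
        // Decode the start of the suspicious /cgi-bin/php query string above.
        echo urldecode('%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E');
        // prints: -d allow_url_include=on
        // Decoded in full, the request injects flags such as safe_mode=off and
        // auto_prepend_file=php://input - the well-known php-cgi argument
        // injection attack (CVE-2012-1823). It is harmless if PHP is not run
        // as CGI, but worth blocking.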

    Read the article

  • Why do my Google sitelinks show gibberish for a PDF link?

    - by Tom
    I have a website which Google lists nicely along with sitelinks. One of the sitelinks - to a PDF file - shows inhuman gibberish, e.g.

        67,8;45:: 56 83 @7<1. (7/0;,*;: /59( (7/0;,;<7, <7)(60:4 (9<7 /+ +2, VU

    I thought it might be due to the PDF's title property, so I changed it, but there hasn't been an improvement to the sitelink. Other PDF sitelinks are fine and display the title property as desired. Does anyone know how I might rectify this problem, or what might be the cause? My uninformed guess is that it's some transliteration problem between code and display text (shifted-character output like this typically means the PDF's embedded fonts lack a usable ToUnicode map, so text extractors can't map glyphs back to characters), which, I suppose, means I ought to recondition the PDF file in some way. Not sure how.

    Read the article

  • Prevent mail flagged as spam when switching mail servers (new SPF records)?

    - by Jakobud
    For our business, we send out a significant number of newsletter alerts to customers who sign up for them on our website. We used to send this mail directly from our web server via PHP, but because the web server limited the number of emails we could send per day, we purchased a VM server at a different host (one that doesn't throttle email) and we are going to use that account solely for sending out the emails. Anyway, now that the SPF records are going to be different from what they used to be and the source mail server is different, what steps need to be taken to prevent these emails being flagged as spam? I know Gmail is pretty smart about determining whether the server actually sending the email is the one it expects (for flagging phishing emails, etc.), and we don't want that to happen to our emails. Just sending a couple of test emails out, Gmail shows the SPF record saying:

        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.23.176 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]

    So is there anything we need to do with regards to SPF records as we move forward?
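
    That spf=neutral line with "best guess record" indicates the domain publishes no SPF record at all, so Google is guessing. The fix is a TXT record that explicitly authorizes the new VM as a sender; a sketch, using the placeholder documentation address 203.0.113.25 for the VM (substitute the real IP, and keep any includes your other mail sources need):

        yourdomain.example.  IN TXT  "v=spf1 ip4:203.0.113.25 ~all"

    Setting up DKIM signing on the new server and matching forward/reverse DNS for its hostname are the other usual steps before bulk sending from a fresh IP.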

    Read the article

  • Creating Google sitemap.xml, is it okay for the images to be wrapped in url tags?

    - by AzizAG
    I'm using a tool to generate the sitemap.xml file for me. It started to crawl my website and got the pages and all images, but when exporting, I reviewed the XML (to make sure nothing was wrong) and noticed that the images on my website are wrapped in url tags (I think they should be in image tags). See this:

        <url><loc>http://mywebsite.com/images/12.jpg</loc><lastmod>2012-05-23T13:39:02+00:00</lastmod><changefreq>weekly</changefreq><priority>0.50</priority></url>

    Shouldn't it be wrapped in an image tag (just like videos are wrapped in a video tag)? Thanks.
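
    For reference, Google's image sitemap extension does use a dedicated tag, but it nests inside the <url> entry of the page that displays the image rather than being a top-level entry for the image file itself. A minimal sketch (the page URL is a placeholder):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
          <url>
            <loc>http://mywebsite.com/page-with-the-image.html</loc>
            <image:image>
              <image:loc>http://mywebsite.com/images/12.jpg</image:loc>
            </image:image>
          </url>
        </urlset>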

    Read the article

  • Self hosted PHP shopping cart with no storefront?

    - by Question
    I am looking for a shopping cart to implement on a simple website instead of the default PayPal cart that is used with their Add to Cart buttons (I don't like the non-styleable new tab/window cart). However, I really like the ability to simply add the buttons to existing pages. I do not have a lot of products and do not want to deal with a storefront and complex templates. The main features I need:

    - Self-hosted
    - Easy to implement with an existing website (copy and paste button code, etc.)
    - Ability to have variations on one button with different prices (dropdown with sizes, colors, etc.)
    - Ability to track inventory and disallow out-of-stock orders
    - Ability to pass cart details to PayPal Website Payments Standard

    I have seen most of the large storefront options: osCommerce, Zen Cart, CubeCart, OpenCart, PrestaShop, Magento, CS-Cart, LemonStand, etc., but these are way more than I need. I don't need the storefront or customer accounts or templated pages, etc. I have seen E-junkie, which is not far off from what I would like, but it is not self-hosted, and I would prefer an in-site cart (or dynamic overlay cart) rather than a lightbox or new tab/window cart. I also love the PayPal Mini Cart and its implementation, but there is no way to track inventory. So, does anyone have any recommendations that might meet these requests?

    Read the article

  • How to Configure Name Servers using Webmin on an Unmanaged VPS on CentOS

    - by John
    I want to configure my site's name servers and all the related stuff. I'm not able to find any good documentation for doing it straightforwardly, without having to understand all the nitty-gritty. I wish I could afford a managed VPS; I feel like the odd one out looking for this documentation. I've followed the docs at these places: http://www.webtop.com.au/blog/how-to-setup-dns-using-webmin-2009052848 , http://www.beer.org.uk/bsacdns and https://www.virtacoresupport.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=134
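
    Under the hood, Webmin's BIND DNS Server module just manages an ordinary zone file, so it may help to see what the finished product looks like. A sketch for a hypothetical example.com zone where the VPS itself (placeholder IP 203.0.113.10) acts as both name servers; the matching ns1/ns2 host records also have to be registered as glue at the domain registrar:

        $TTL 86400
        @       IN  SOA  ns1.example.com. admin.example.com. (
                         2014010101 ; serial
                         3600       ; refresh
                         7200       ; retry
                         1209600    ; expire
                         86400 )    ; minimum TTL
        @       IN  NS   ns1.example.com.
        @       IN  NS   ns2.example.com.
        ns1     IN  A    203.0.113.10
        ns2     IN  A    203.0.113.10
        @       IN  A    203.0.113.10
        www     IN  A    203.0.113.10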

    Read the article

  • 301 redirecting a blog's RSS feed URL?

    - by Marc Charbonneau
    I moved my personal blog from WordPress to Ghost this weekend, which changes the RSS feed URL from /feed/ to /rss/. By default Ghost returns a 301 redirect for /feed/, which I've verified by checking the response header and looking at the logs. In Feedly, though, new posts aren't being picked up (at least not after 24 hours; I'm not sure if they have a waiting period before updating the URL). What's the correct thing to do in this situation? Do I need to keep /feed/ alive instead of returning a 301? If so, is there a rewrite rule that would let me do this in nginx instead of having to modify the Ghost source code?
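
    If it comes to serving the old URL directly, an internal rewrite in nginx can answer /feed/ with the /rss/ content without touching Ghost. A sketch, assuming the usual setup of nginx proxying to Ghost on its default port 2368:

        # inside the existing server { } block
        location = /feed/ {
            rewrite ^ /rss/ break;            # rewrite the URI internally, no redirect
            proxy_pass http://127.0.0.1:2368; # Ghost receives the request as /rss/
        }

    The 301 is arguably the more correct long-term answer, though; well-behaved feed readers are supposed to update their stored URL on a permanent redirect.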

    Read the article

  • DNS query re website Status: inactive

    - by Matthew Brookes
    There is a website I am assisting with which, when you do a DNS lookup on Who.is, returns a website status of "inactive". I also noticed the server type is incorrectly reported. This is not a website I generally use for DNS queries, so I am unsure if it is reliable. Other DNS checking services report what I would expect, and the site is functioning correctly. The research I have done regarding "Website Status: inactive" suggests an issue with the DNS configuration? I am looking for help understanding whether this is something to be concerned about and, if possible, how to update this value or how it gets set in the first place.
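
    To take the lookup site out of the equation, it's easy to query DNS directly and confirm the delegation is healthy; a few examples with the dig tool (substitute the real domain):

        dig NS example.com +short     # the name servers the domain delegates to
        dig A  example.com +short     # the address the site resolves to
        dig example.com @8.8.8.8      # the same query against Google's public resolver

    If those answers are consistent and correct, the "inactive" flag is most likely Who.is's own interpretation (often of registrar status codes) rather than an actual DNS problem.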

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords.

    Background: my server was hacked using the timthumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was so friendly as to modify the index.php file in such a fashion that it generated random sexually oriented keywords if the website was fetched as Google-bot. So if you fetched it as Googlebot (or it got indexed), you would get randomly generated keywords like:

        sex videos teenager
        teen sex
        adult sex
        preteen
        A LINK TO A RANDOM CONTENT OF MY WEBPAGE
        anime sex videos

    a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I've removed all the content, taken the site down, replaced the index.php with a static HTML file and added an "ERROR 410" title to the website so that the content is no longer here and is removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there, and very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools:

    Question: how can I remove these filthy keywords and re-rank my website as a "normal" website in the fastest way? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you
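
    One detail worth checking: a page title of "ERROR 410" doesn't change the HTTP status, and Google only treats a URL as permanently gone if the server really answers 410. A minimal PHP sketch of the difference:

        <?php
        // Send a real 410 status before any output; the title alone is
        // just text, and the page keeps returning 200 OK to crawlers.
        header('HTTP/1.1 410 Gone');
        ?>
        <html><head><title>ERROR 410</title></head>
        <body>This content has been removed permanently.</body></html>

    It's easy to confirm with "curl -I http://yoursite/" that the status line says 410 and not 200.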

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help deciding whether it is worth including gender in page titles. In Webmaster Tools, I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. I am wondering whether that is the right way to assess the benefit of including "women" or "men" in page titles, since it only looks at existing results already pointing to us. Is there another tool where I can check the actual queries that may not include us in search results? Like Google Insights, maybe? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q So it looks like 1.1% of searches for "shoes" are also "shoes for women", is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site if not more, splitting it into 2 segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • Maximum file size for iFrame in IE7

    - by Peter Turner
    I've got a "super secure" JavaScript downloader* that I wrote, and it usually works all right. But while trying to download a 90 MB file with it on a client's machine, I noticed that on IE7 it gets hung up about a third of the way through. I've never tried to send a file that large through the iframe before, and it works fine in other browsers. Is there a size restriction on files that IE7 can read in an iframe?

    * It's really just a PHP line that sets header("location: http://someplace/downloadbigthing.exe"); after it does some logging and verification.
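
    I'm not aware of a documented iframe size limit in IE7, but a common workaround for flaky large downloads is to stream the file with explicit headers instead of redirecting, so the browser gets a Content-Length up front. A rough PHP sketch (the path is a placeholder; the logging and verification would stay as before):

        <?php
        $file = '/var/files/downloadbigthing.exe'; // placeholder path
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="downloadbigthing.exe"');
        header('Content-Length: ' . filesize($file));
        readfile($file); // streams to the client without buffering it all in memory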

    Read the article

  • PayPal PDT and IPN, how do they work?

    - by slow diver
    PDT: Payment Data Transfer means getting the transaction data for the purchase that was made on the PayPal site, so that you can fetch it on your own site and display it to the user. You may also want to store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is this passage:

        Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string.

    IPN: I have set up Instant Payment Notification in the settings according to the documentation. This basically means logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. It is used to complete an order so that the product can be shipped. What I did was set up a PHP page and create a table; whenever that page is called (or hit), it registers an entry in the table, so I know when a notification has come from PayPal. But it does not work either.

    What am I really doing wrong? The first thing I want to troubleshoot is this: when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but the automatic redirection just does not work; instead, the URL is shown to him as an option after the payment confirmation. Can someone walk me through how the PDT process goes? Where do I make the request for PDT - is it along with the very first request (the Buy Now button), or is it sent later?

    Addition: I found some good sample code of how everything should work, but it still does not work. I use this code for IPN: http://officetrio.com/modules/free-php-paypal-ipn-script.php. I am using this one for PDT (it uses SSL; I changed SSL to regular HTTP, copying the PayPal version, and it still does not work): http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/
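
    To make the quoted PDT paragraph concrete, here is a minimal PHP sketch of the return-URL handler. The PDT request is a second, server-to-server POST made after PayPal redirects the buyer back; it is not part of the Buy Now button. YOUR_PDT_IDENTITY_TOKEN is a placeholder for the identity token from your PayPal profile:

        <?php
        if (!isset($_GET['tx'])) { die('No PDT transaction token.'); }
        $fields = http_build_query(array(
            'cmd' => '_notify-synch',           // PDT command from the docs above
            'tx'  => $_GET['tx'],               // token PayPal appended to the return URL
            'at'  => 'YOUR_PDT_IDENTITY_TOKEN'  // placeholder
        ));
        $ch = curl_init('https://www.paypal.com/cgi-bin/webscr');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch); // first line: SUCCESS or FAIL,
        curl_close($ch);            // then key=value lines of transaction details
        echo nl2br(htmlspecialchars($response));

    As for the missing redirect, the usual culprit is Auto Return not being enabled under the account's Website Payment Preferences; PDT alone does not turn the redirect on.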

    Read the article

  • Which token from a long User-Agent should I use in robots.txt?

    - by Gaia
    The definition of User-Agent states that several tokens can be included, as deemed necessary by the client. I want to block certain bots via robots.txt, and I am confused as to which part of the User-Agent string to use, especially for the more obscure bots. For example:

        Mozilla/5.0 (compatible; uMBot-LN/1.0; mailto: [email protected])
        JS-Kit URL Resolver, http://js-kit.com/
        Mozilla/5.0 (compatible; SEOkicks-Robot +http://www.seokicks.de/robot.html

    Do I use the second token? Can tokens contain spaces, or did the SEOkicks folks forget a semicolon after SEOkicks-Robot? I don't actually intend to make my question specific to a couple of bots - I want to know the guideline: which part of the UA do I place in robots.txt for these exotic bots with UAs as long as a haiku?

        User-agent: uMBot-LN/1.0
        Disallow: /

    PS: Thank you, but I do not need to hear that undesirable bots are better blocked with mod_security. I already have commercial mod_sec rules in place.
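
    The usual guideline: crawlers are expected to match the robots.txt User-agent line against their own product token, case-insensitively and without the version number, so the distinctive name token is what goes in the file. A sketch for the three bots above, on that assumption:

        User-agent: uMBot-LN
        Disallow: /

        User-agent: JS-Kit
        Disallow: /

        User-agent: SEOkicks-Robot
        Disallow: /

    Whether an obscure bot honors any of this is its own choice, of course - robots.txt is purely advisory.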

    Read the article

  • How to Export Email "Sent" Folder?

    - by user249493
    A client had her web site and email hosted at "company A". She was switching to "company B" but didn't want to lose her email. I set up a Gmail account, POPed into her webmail account, and pulled the entire inbox into Gmail (for later transfer to her new host). But I forgot about the "Sent" folder. Although the hosting plan is still up and running, she changed her domain record to point to the new host, so I can't access the old webmail account via POP or IMAP: the email address needed for authentication now resolves to the new host. Is there any way I can get the contents of the "Sent" folder without having to do a "forward" one message at a time (there are hundreds)?
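
    If the old account is still live, one workaround is to bypass DNS entirely: point the old mail hostname at company A's server IP in the local hosts file, then connect with any IMAP client and drag the Sent folder out. A sketch, where both the IP and the hostname are placeholders (company A's support can confirm the real server address):

        # /etc/hosts  (on Windows: C:\Windows\System32\drivers\etc\hosts)
        203.0.113.42   mail.clientdomain.example

    Note that POP only ever exposes the inbox; getting the Sent folder needs IMAP (or a webmail export, if company A's control panel offers one).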

    Read the article

  • AdBlock users statistics

    - by DataSmarter
    Are there statistics on the share of internet users who use AdBlock or other ad-blocking plug-ins? Is there some statistical breakdown, for example per country (I assume it must vary a lot)? I was unable to google the information I am looking for. The reason I am asking is that I have just signed up for the "Amazon Partners" program and see that this affiliate program is listed on the AdBlock blacklist.

    Read the article

  • hProduct microformats not working in Google

    - by silverfox
    I'm trying to work with hProduct and was testing with Google's rich snippets testing tool (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data:

    - it does not recognize the photo
    - it does not recognize the price
    - it does not recognize the category
    - it only recognizes the rating

    HTML:

        <div class="hproduct">
          <span class="brand">ACME</span>
          <span class="fn">Executive Anvil</span>
          <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" />
          <span class="review hreview-aggregate">
            Average rating: <span class="rating">4.4</span>, based on
            <span class="count">89</span> reviews
          </span>
          Regular price: $179.99
          Sale: $<span class="price">119.99</span> (Sale ends 5 November!)
          <span class="description">Sleeker than ACME's Classic Anvil, the
          Executive Anvil is perfect for the business traveler looking for
          something to drop from a height.</span>
          Category:
          <span class="category">
            <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span>
          </span>
        </div>

    And it still shows this warning:

        Warning: In order to generate a preview with rich snippets, either price or review or availability needs to be present.

    I used Google's own example: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036 I also tested with microformats.org: http://microformats.org/wiki/google-rich-snippets-examples

    Read the article
