Search Results


  • rel="translation"

    - by Tom Gullen
    I can't find much online about rel="translation". We have tutorials and manual entries which we are going to get users to translate. If the original page in English is http://www.scirra.com/tutorial/start and there are two translations, http://www.scirra.com/tutorial/es/start (Spanish) and http://www.scirra.com/tutorial/de/start (German), how would I correctly link all these up? I'm aware that at the top of each page I need to specify the correct ISO 639-1 code, e.g. <html lang="de">, but I'm more interested in letting Google know that they are not duplicates but translations.
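
    For what it's worth, rel="translation" isn't the documented mechanism here; Google's documented approach is rel="alternate" hreflang annotations, where every version of the page lists every version, including itself. A minimal sketch for the tutorial above:

        <!-- in the <head> of ALL three versions of the page -->
        <link rel="alternate" hreflang="en" href="http://www.scirra.com/tutorial/start" />
        <link rel="alternate" hreflang="es" href="http://www.scirra.com/tutorial/es/start" />
        <link rel="alternate" hreflang="de" href="http://www.scirra.com/tutorial/de/start" />

    Because the set is reciprocal, Google treats the pages as translations of one document rather than duplicates.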


  • Cost-effective way to deliver static media content

    - by james
    I'd like to deliver around 50 MB of static content per request, either as about 30 individual files of up to 10 MB each or grouped into 3 compressed files, between 5,000 and 20,000 times a day. Ideally I'd like some very basic security around providing the data, to ensure that a request comes from the expected source, but if dropping the security buys a big reduction in price, that's an option. Does anyone have suggestions beyond what I've found? Google App Engine charges $0.12/GB and, I believe, has a 10 MB file-size limit, so I'd have to break the data up a bit; a rough calculation puts that at about $30 to $120 a day. I've also seen what seems to be plain static content delivery with no logic capabilities, like Usenet.nl, at what I calculate to be about $0.025/GB, which would cost me about $6 to $25 a day. Am I going about these calculations right, and is there a better option for static-only content at decently high volume? Again, some basic security would be great, but if cost is greatly reduced without it then I'm up for that.
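
    As a sanity check on the App Engine estimate above, the arithmetic can be sketched in a few lines of JavaScript (the figures are just the question's own numbers):

        // 50 MB per delivery at $0.12/GB outbound
        const mbPerDelivery = 50;
        const costPerGB = 0.12;
        const dailyCost = (deliveries) => (mbPerDelivery * deliveries / 1024) * costPerGB;

        console.log(dailyCost(5000).toFixed(2));  // "29.30"  -> the ~$30/day low end
        console.log(dailyCost(20000).toFixed(2)); // "117.19" -> the ~$120/day high end

    So the $30-$120/day range quoted above is consistent with 5k-20k daily deliveries of the full 50 MB.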


  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly which details are relevant to my issue, so here is what I know. Prior to the end of September my websites all loaded quickly in almost all cases; loading time wasn't usually more than a few seconds. Since the end of September, though, I've noticed a big increase in page loading times, in some cases 30 seconds or more. I have a remote monitoring service watching some of the sites, and the image below shows the response times over the past month. The response times at the beginning of the graph are what was usual before this issue occurred; you can see a significant increase from the beginning to the end of the graph. The problem is not happening 100% of the time: if I click through the site, or even just keep refreshing a page, about 25% of the time pages load quickly and the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA, which raises another interesting point: my remote monitor checks the site from two locations, California, USA and London, England, and as the chart below shows, the response time is actually shorter when checked from London. That doesn't seem to make sense, since the server is physically closer to the California monitoring location; I would have expected the London location to show higher response times. I should also point out that in some traceroute tests I've done, the first connection to the server seems to take the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long: it connects, sends the request, and then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading in general, like reducing the number of CSS and JS files used on a page or compressing images, but that is not the source of this problem: nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well. So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down?
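
    A WAIT phase that long is essentially time-to-first-byte: the TCP connection succeeds, the request goes out, and the server stalls before replying. One way to put separate numbers on connect time versus first-byte time is a small Node.js probe (the hostname is a placeholder); timings like these are useful evidence to hand back to the host:

        const http = require('http');

        const t0 = Date.now();
        let tConnect;
        const req = http.get('http://www.example.com/', (res) => {
          res.once('data', () => {
            console.log(`connect: ${tConnect - t0} ms, first byte: ${Date.now() - t0} ms`);
            res.destroy();
          });
        });
        req.on('socket', (socket) => {
          socket.once('connect', () => { tConnect = Date.now(); });
        });

    If connect stays fast while first byte takes ~30 seconds, the stall is on the server side (application, database, or an overloaded box), which would also fit other sites on the same server being slow.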


  • How can I monitor a website for malicious changes to its files?

    - by rossmcm
    I had an occasion recently where our website was compromised: a link farm was added to a couple of the pages on one occasion, and on another a large and nasty .aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaked password; around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway, what I need is a utility or service (preferably free) that takes a snapshot of my website's contents and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
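
    In the absence of a ready-made service, the snapshot-and-compare part is small enough to script and run from cron. A minimal Node.js sketch (the paths are placeholders; note it hashes file contents, which is stronger than size and datestamp, since attackers can forge timestamps):

        const fs = require('fs');
        const path = require('path');
        const crypto = require('crypto');

        const ROOT = '/var/www/mysite';            // directory to watch (placeholder)
        const MANIFEST = '/home/me/manifest.json'; // keep this outside the web root

        function snapshot(dir, out = {}) {
          for (const name of fs.readdirSync(dir)) {
            const p = path.join(dir, name);
            if (fs.statSync(p).isDirectory()) snapshot(p, out);
            else out[p] = crypto.createHash('sha256').update(fs.readFileSync(p)).digest('hex');
          }
          return out;
        }

        const current = snapshot(ROOT);
        if (!fs.existsSync(MANIFEST)) {
          fs.writeFileSync(MANIFEST, JSON.stringify(current)); // first run: record a baseline
        } else {
          const old = JSON.parse(fs.readFileSync(MANIFEST, 'utf8'));
          for (const f in current) {
            if (!(f in old)) console.log('ADDED:   ' + f);
            else if (old[f] !== current[f]) console.log('CHANGED: ' + f);
          }
          for (const f in old) if (!(f in current)) console.log('REMOVED: ' + f);
        }

    Pipe any output into mail and the "alerts me" requirement is covered too.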


  • Amazon Web Services free tier: query about GET and PUT requests

    - by abel
    Amazon recently introduced a free tier for its cloud offering. I signed up for AWS, and while signing up for the free tier of S3 I found this: "As part of AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 Get Requests, 2,000 Put Requests, 15 GB of bandwidth in and 15 GB of bandwidth out each month for one year." (source: aws.amazon.com, emphasis mine). 20,000 GET requests and 2,000 PUT requests mean at most 20,000 page views and 2,000 file uploads per month. Isn't that far lower than what App Engine offers, 43,200,000 requests per day? Am I missing something? Please help.


  • Why is my site not showing up on google.com?

    - by nishant
    I am very tired of worrying about my website's ranking in Google. I am working hard on it but getting nowhere in Google. I am the webmaster of www.panbeli.in, a matrimony website in India. I have been trying to improve its visibility in Google for the last 5 months but am not getting any positive results. Other search engines like Yahoo and Bing give good results, but Google shows no results for my site in the top 20 pages. My website is www.panbeli.in and my keywords are: bari samaj, bari matrimony, bari community, bari shadi, panbeli. Please help me if you can, because I am very frustrated about it. My domain is 4 years old. When I type link:panbeli.in into Google search, no pages from my site appear. What does that mean? Is my site not indexed in Google?


  • Make Offscreen Sliding Content Without Hurting SEO [duplicate]

    - by etangins
    This question already has an answer here: "How bad is it to use display: none in CSS?" (5 answers). On my website I have content which is positioned off the screen and then slides in when you click a button. For example, when you click the news button, content slides in with news. It didn't occur to me until later that this might be labeled a black-hat SEO technique: I have content positioned off screen with CSS that links elsewhere on my site, and a search engine could very easily interpret that as me hiding content for SEO purposes by positioning it off screen. Obviously my intention was not to hide content, but to make a sort of UI/UX content slider where content slides into view when a button is clicked. How can I make something to this effect (where content slides in and out) that would not compromise SEO?
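
    For illustration, one pattern that avoids off-screen positioning altogether: leave the panel in the normal document flow inside an overflow-hidden viewport and animate it with a transform, so crawlers see ordinary in-flow content rather than content parked at left: -9999px. A sketch with made-up class names:

        <style>
          .viewport        { overflow: hidden; }
          .viewport .panel {
            transform: translateX(100%);      /* resting just out of view */
            transition: transform 0.3s ease;
          }
          .viewport .panel.open { transform: translateX(0); }
        </style>

        <div class="viewport">
          <div class="panel" id="news">...news content, fully present in the DOM...</div>
        </div>
        <button onclick="document.getElementById('news').classList.toggle('open')">News</button>

    No technique is a guaranteed safe harbor, but content that is reachable, in flow, and revealed by a visible control is the usual good-faith signal.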


  • Multi-option step-by-step walkthrough? [closed]

    - by James Simpson
    I'm looking for a service ideally (but maybe a script) that would allow me to create a step-by-step walkthrough, customised by the options users choose in earlier steps. It is difficult to describe, so if there is a better term I could Google for, please let me know! Basically, I want to start with a few options for the user to click on, then change what comes next based on that click, and carry this through a whole walkthrough (explaining how to set a service up). As mentioned, if there were a SaaS I could use (in the vein of desk.com) that would be perfect. Thank you, and please ask questions if I've described it poorly!
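
    For what it's worth, the mechanic being described is just a decision tree keyed by the user's clicks, which is small enough to hand-roll if no service fits. A bare-bones JavaScript sketch (all step names invented; assumes #step-text and #step-options elements exist on the page):

        const steps = {
          start: { text: 'Which platform are you on?', options: { Windows: 'win1', Mac: 'mac1' } },
          win1:  { text: 'Run the installer, then...', options: { Next: 'done' } },
          mac1:  { text: 'Drag the app to Applications, then...', options: { Next: 'done' } },
          done:  { text: 'All set!', options: {} },
        };

        function render(id) {
          const step = steps[id];
          document.getElementById('step-text').textContent = step.text;
          const box = document.getElementById('step-options');
          box.innerHTML = '';
          for (const [label, next] of Object.entries(step.options)) {
            const btn = document.createElement('button');
            btn.textContent = label;
            btn.onclick = () => render(next);   // each click picks the next branch
            box.appendChild(btn);
          }
        }
        render('start');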


  • How to point GoDaddy to EntryDNS domain

    - by geminiCoder
    I have a server connected via a dynamic IP, and I have set up EntryDNS to track the changes to my IP: if I put in my EntryDNS URL, it resolves to my server's current IP. I purchased a domain from GoDaddy, but I have been unable to get it to point to my EntryDNS hostname. What I want is to be able to ssh to my server, ideally by using my domain name. I must confess I'm a bit overwhelmed by the GoDaddy interface. So, bottom line: how do I point my GoDaddy domain at my EntryDNS hostname, so that looking up the domain yields the current IP of the server?
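
    The usual pattern (hedged, since EntryDNS's exact hostname format may differ) is a CNAME record in GoDaddy's DNS manager aliasing a subdomain of your domain to the dynamic-DNS hostname; note that a CNAME cannot sit at the bare apex of a domain, so a subdomain such as home is typical. In zone-file terms, with placeholder names:

        ; in the example.com zone managed at GoDaddy
        home.example.com.   3600   IN   CNAME   myserver.entrydns.net.

        ; then:  ssh [email protected]
        ; the lookup follows the CNAME to EntryDNS, which holds the current dynamic IP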


  • Uploading a file automatically for speed test?

    - by Abhi
    I am building a Web UI for a device that provides an internet connection, and one of the requirements is a speed test. I know the basic concept of how a speed test works: a file is downloaded for a limited time, then a file is uploaded, and the speed is tracked at regular intervals. Downloading the file is not an issue, but how am I supposed to upload a file without the client knowing that a file is being uploaded? I've read through a lot of documentation, but I'm still unable to work out how to upload a file from the client's machine without asking the user to select one.
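
    The key realisation is that the "file" never has to come from the user's disk: the browser can synthesize a payload in memory and POST it, timing the transfer as it goes. A hedged sketch using a Blob and XMLHttpRequest (the /speedtest-upload endpoint is hypothetical; the server only needs to accept and discard the bytes):

        // Build ~5 MB of throwaway data in memory -- no file picker involved.
        const chunk = new Uint8Array(1024 * 1024).fill(0x61); // 1 MB of 'a'
        const payload = new Blob([chunk, chunk, chunk, chunk, chunk]);

        const xhr = new XMLHttpRequest();
        const start = Date.now();
        xhr.upload.onprogress = (e) => {
          const seconds = (Date.now() - start) / 1000;
          const mbps = (e.loaded * 8) / 1e6 / seconds; // megabits per second so far
          console.log(`uploaded ${e.loaded} bytes, ~${mbps.toFixed(2)} Mbit/s`);
        };
        xhr.open('POST', '/speedtest-upload'); // hypothetical endpoint
        xhr.send(payload);

    The onprogress events give the regular-interval readings the question asks about.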


  • What's better for SEO for many international markets?

    - by Roy Rico
    Right now, we're working to migrate our company sites for international markets to this scheme: www.company.com/[two-letter country code], e.g. www.company.com/uk for the United Kingdom, www.company.com/au for Australia, www.company.com/jp for Japan, and www.company.com/ for the United States and anyone not identifiable. However, in Google Webmaster Tools we can geo-target each directory but not the root, and if we geo-target the root with US, all the other markets will inherit it. Is it better to move the US market to /us/ or leave it where it is?


  • Good free CSS Sprite for icons

    - by Saif Bechan
    I am working on a small project where I need some of the basic icons: edit, favorite, delete; you know them. I can download them all separately and put them together in a sprite, but I was wondering if there are ready-to-download sprites I can use. I am working on an accounting app, so it would be nice if the icons were not too childish: a little bit fancy, business-type icons. Thanks
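
    Whichever icon set you end up with, the consuming CSS is the same: one background image shifted per icon with background-position. A tiny sketch with made-up coordinates, assuming 16x16 icons laid out in a horizontal strip:

        .icon {
          display: inline-block;
          width: 16px;
          height: 16px;
          background-image: url('sprites/actions.png'); /* hypothetical sprite sheet */
        }
        .icon-edit     { background-position:  0     0; }
        .icon-favorite { background-position: -16px  0; }
        .icon-delete   { background-position: -32px  0; }

    Used as, e.g., <span class="icon icon-delete"></span>.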


  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially on days when I had only one post. The situation is something like this: www.sitename.com/blog/archives/2013/06/01 (the daily archive for 1 June 2013) and www.sitename.com/blog/archives/2013/06/my-post-name.html. Here we have two pages that are basically identical, except that the daily archive has some meaningless title like "Daily Archive for 1 June 2013", and I have no control over which content Google decides is primary. It's quite possible (and likely) that the daily archive could be taken as the "primary" content and the actual post itself as the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later, and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been recrawled yet, but I am worried that the original posts (which should be the primary content) have been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with the archives deindexed AND the original articles still flagged as duplicates, so that nothing shows up in search at all. Have I screwed myself here, or is there a way out?
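
    A common companion to the noindex (assuming the blog templates can emit per-page tags) is a self-referential rel="canonical" on each post, so that whichever URL Google happens to crawl, the post page is declared the primary version:

        <!-- in the <head> of the post template -->
        <link rel="canonical" href="http://www.sitename.com/blog/archives/2013/06/my-post-name.html" />

        <!-- the daily-archive template keeps the noindex from the question -->
        <meta name="robots" content="noindex">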


  • [SSL] Becoming a root CA

    - by Max13
    Hi everybody, I'm the founder of a little non-profit French organization; currently, we're providing free web and shell hosting. With that in mind, is there a way to become a trusted certificate authority, so that I can give free SSL certificates to my customers while avoiding both becoming an intermediate CA (and paying a lot for that) and paying a lot for each certificate? Thank you for your help.


  • Squeezing as much SEO out of a URL as possible

    - by John Isaacks
    I am working on an ecommerce site, and I told our SEO consultant that I plan to make the URL scheme /products/<id>/<name>. This is similar to Stack Overflow's URLs, which are /questions/<id>/<title>. He asked me if I could change the URL scheme to /p/<id>/<name> instead. I know why he wants this change: the word "products" isn't needed to find the correct product and doesn't offer any SEO value, so shortening it to just p would make the relevant keywords in <name> weigh more. His main priority is maximizing SEO, but the part I don't think he is considering is how this affects the semantics of the site. Having the word "products" in the URL looks meaningful and deliberate; a bare p looks chaotic and ugly to me. I also don't think it makes that much of a difference, does it? Stack Overflow doesn't use /q/<id>/<title> and they do just fine, though I realize there are many factors at play here, not just the URL. So I want some outside opinions on which is the better way and why.


  • PayPal Automatic Billing API

    - by Dale Burrell
    PayPal offers Automatic Billing Buttons (https://merchant.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_html_autobill_buttons#id105ED800NBF), which allow regular billing for different amounts. After a couple of hours of googling, I cannot find how to access this functionality through the API so that it can be automated, as opposed to done manually via the PayPal account. Is it possible? Can someone point me to a sample or reference?
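
    For context, and worth verifying against PayPal's current documentation: the hosted Automatic Billing buttons have no direct API twin; the closest programmatic analogue in the Classic NVP API is reference transactions, where the buyer agrees once and the merchant later charges varying amounts with DoReferenceTransaction. A heavily hedged sketch of such a call from Node.js (18+, for global fetch), with every credential and ID a placeholder:

        // Charge a previously established billing agreement for this cycle's amount.
        const params = new URLSearchParams({
          METHOD: 'DoReferenceTransaction',
          VERSION: '204.0',
          USER: 'api_username',          // placeholder Classic API credentials
          PWD: 'api_password',
          SIGNATURE: 'api_signature',
          REFERENCEID: 'B-XXXXXXXXXXXX', // billing agreement ID from the buyer's consent
          PAYMENTACTION: 'Sale',
          AMT: '19.99',                  // can vary per billing run
          CURRENCYCODE: 'USD',
        });

        fetch('https://api-3t.paypal.com/nvp', { method: 'POST', body: params })
          .then((r) => r.text())
          .then((body) => console.log(body)); // NVP-encoded response, e.g. ACK=Success&...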


  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name and domain). All schools in the state are issued a .qld.edu.au domain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. What's happening is that these domains are crossing into each other's sitelinks for searches such as "calvary christian college townsville" (checking the sitelinks, 2 of 6 point to the other domain). I put in a sitelink demotion for this ages ago (we control calvary.qld.edu.au), but we're seeing no change in the results. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime this week. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.


  • The limit of pageviews per month in Google Analytics

    - by crmpicco
    I have been looking around to find confirmation and clarity on the limit of pageviews Google allows per month for a Google Analytics account. I have read that the limit is 10,000,000 hits per month and 5,000,000 pageviews. Putting two and two together, I'm thinking the other 5,000,000 is to allow for events, social clicks and the like? Google's documentation states 5M, but the hits-versus-pageviews distinction is a bit of a grey area, as I've read suggestions that the limit can be considered to be 10M.


  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via a service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require recipients to have (or be assigned) accounts with passwords. So the question is: how can my WordPress site's content be viewable only by clicking the link in the email newsletter? It shouldn't turn up in a Google search, but once at the site the visitor should be able to view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: "Is there a way to password protect a directory only in cPanel, so the user is not prompted for the password when they access it via the web?" And this person's question is the closest I could find to my situation: "Restrict direct folder access via .htaccess except via specific links".
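
    Purely to illustrate the .htaccess angle (a sketch assuming Apache with mod_rewrite and mod_headers enabled; the token, domain and paths are made up): require a secret query parameter that only the newsletter links carry, set a cookie on the first visit so browsing the archive keeps working, and ask search engines not to index any of it:

        # /newsletter/.htaccess (sketch; token, domain and paths are placeholders)
        Header set X-Robots-Tag "noindex, nofollow"

        RewriteEngine On
        # Visitors arriving from the email carry ?key=SECRET; remember them with a cookie.
        RewriteCond %{QUERY_STRING} (^|&)key=SECRET(&|$)
        RewriteRule ^ - [CO=nl_ok:1:.example.com:1440,L]

        # Anyone with neither the token nor the cookie is refused.
        RewriteCond %{QUERY_STRING} !(^|&)key=SECRET(&|$)
        RewriteCond %{HTTP_COOKIE} !nl_ok=1
        RewriteRule ^ - [F]

    The newsletter then links to, e.g., http://example.com/newsletter/issue-12/?key=SECRET.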


  • What alternatives to Animated GIFs are supported on all modern browsers?

    - by Clay Nichols
    I was looking for an alternative to animated GIFs for animated images, but per CanIUse, support for APNG seems to be being phased out, and MNG support isn't even listed there; the pages about it don't even mention Chrome (suggesting those pages are very, very old). Clarification: this is for a web app, so it'll need to support Safari on iPad (so it can't depend on extensions), Chrome on Windows and Mac, Safari 6.0+ on Mac, and Chrome on Android.
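
    One GIF alternative that works without plugins on that browser list is a CSS sprite-strip animation driven by steps() (with -webkit- prefixes for the older Safari builds). A sketch assuming a hypothetical horizontal strip of 10 frames, each 100px wide:

        .anim {
          width: 100px;
          height: 100px;
          background: url('strip.png') 0 0 no-repeat; /* hypothetical 1000x100 strip */
          -webkit-animation: play 1s steps(10) infinite;
          animation:         play 1s steps(10) infinite;
        }
        @-webkit-keyframes play { to { background-position: -1000px 0; } }
        @keyframes         play { to { background-position: -1000px 0; } }

    steps(10) jumps the background through the 10 frames instead of sliding smoothly, which is exactly the flip-book effect of an animated GIF.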


  • Lost traffic from Google after adding a language meta tag

    - by Marian
    I have a site, aroundnails.com, with an English version on the subdomain en.aroundnails.com. Reading about Google's language-related meta tags, I placed this tag on the main page of the main site: <link rel="alternate" hreflang="en" href="http://en.aroundnails.com/" />. In this way I tried to tell Google that the site at en.aroundnails.com is the English version of the main site, not a duplicate. After a fortnight I lost a huge part of my traffic from Google, more than half. At the beginning of September I removed this tag, but traffic has remained at the same level. I hope somebody can help me solve this issue.


  • Best way to address this Magento issue

    - by robgt
    I need some advice and pointers on how best to attack a problem I am faced with in Magento 1.4.0.1. Here is the scenario: consider a product catalog of multiple thousands of products with many and various retail markup percentages. There is a need to sell some of those products to other retailers, and I need a way to easily categorise them into percentage-markup groups, so that the other traders can easily purchase items from us without needing to call and confirm pricing. All product prices are added to the Magento database including VAT, so the actual retail value is the starting point. I think this needs to be done by adding an attribute on the products that determines the maximum discount level that can be applied to any given product. Obviously, this could entail a massive amount of work to update every product. Is there a way to achieve what we need that I don't yet know about (within Magento)? Can single attribute values be mass-populated somehow, without affecting other attributes such as name, description, etc.?


  • How to avoid email replies from my web site being marked as spam? [closed]

    - by Eric
    Possible Duplicate: How could I prevent my mail from being recognized as spam? Here's the situation: a customer fills out an inquiry form on the web site; that inquiry goes to person X; person X goes to my web site (mysite.com), presses some keys, and the customer gets an email from [email protected]. Here's my question: how can I be sure the email from [email protected] always gets through to the customer? Can I help it along by using SPF or some other email authentication framework or solution? Thank you - E
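
    SPF is a reasonable first step: it is a DNS TXT record on mysite.com listing the servers allowed to send mail as @mysite.com, so receivers can verify mail generated by the web server. A sketch in zone-file style, with a placeholder IP (it must be whichever host actually sends the form mail):

        mysite.com.   3600   IN   TXT   "v=spf1 ip4:203.0.113.10 a mx ~all"

        ; ip4:...  the web server that sends the notification mail
        ; a mx     also allow hosts in the domain's A and MX records
        ; ~all     soft-fail everything else

    DKIM signing is the natural companion; neither guarantees inbox placement, but together they stop the mail failing authentication checks.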


  • Google Webmaster Tools Index Status: total indexed = 0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com. I had around 300 posts under the previous domain name, and most of them were showing up on Google. Now that I have changed my domain name, the Index Status shows the total indexed as 0, and when I go to the advanced tab it says 304 "not selected" and 217 blocked by robots. I'm really depressed about this situation. Could you please help out?


  • Why does Google mark one e-mail as spam but not the other?

    - by nKn
    I have a Postfix installation which works fine: I get no trouble with mail sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from Thunderbird or RoundCube, the mail is not marked as spam; if I use PHPMailer, it is always catalogued as spam. So I compared both headers, and I just can't find the reason why the second is marked as spam while the first one is fine.

    The first header, sent from a mail client, which is not marked as spam:

      Delivered-To: [email protected]
      Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS
        id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]>
        (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X;
      Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected]
        designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected]
      Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697;
        bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From;
        b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV
         DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI
         SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4=
      X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net
      X-Spam-Level:
      X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0
      Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits))
        (No client certificate requested) (Authenticated sender: [email protected])
        by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697;
        bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From;
        b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV
         DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI
         SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4=
      Message-ID: <[email protected]>
      Date: Tue, 19 Aug 2014 19:08:24 +0100
      From: My Name <[email protected]>
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
      MIME-Version: 1.0
      To: My other account <[email protected]>
      Subject: .
      Content-Type: text/plain; charset=ISO-8859-1; format=flowed
      Content-Transfer-Encoding: 7bit

    The second header, sent from PHPMailer, which is always marked as spam:

      Delivered-To: [email protected]
      Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS
        id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]>
        (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X;
      Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected]
        designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected]
      Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929;
        bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From;
        b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M
         E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3
         7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E=
      X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net
      X-Spam-Level:
      X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0
      Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
        (No client certificate requested) (Authenticated sender: [email protected])
        by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928;
        bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From;
        b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux
         wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB
         yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ=
      Date: Tue, 19 Aug 2014 20:12:07 +0200
      To: [email protected]
      From: Trying another account <[email protected]>
      Reply-to: Trying another account <[email protected]>
      Subject: .
      Message-ID: <[email protected]>
      X-Priority: 3
      X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net)
      MIME-Version: 1.0
      Content-Transfer-Encoding: 8bit
      Content-Type: text/plain; charset="UTF-8"

    I also tried adding a User-Agent header to match the first one, and removing the X-Mailer header; neither made a difference. Is there some significant difference that is making the second e-mail get marked as spam by Google?

