Search Results


  • How to avoid email reply from my web site being marked as spam? [closed]

    - by Eric
    Possible Duplicate: How could I prevent my mail from being recognized as spam? Here's the situation: a customer fills out an inquiry form on my web site; that inquiry goes to person X; person X goes to my web site (mysite.com), presses some keys, and the customer gets an email from [email protected]. Here's my question: how can I be sure the email from [email protected] always gets through to the customer? Can I help it along by using SPF or some other secure email framework/solution? Thank you -- E
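
    SPF is a reasonable starting point: publishing a TXT record that lists the servers allowed to send mail for mysite.com lets receiving servers verify where the message came from. A minimal sketch of such a record, with placeholder values standing in for wherever the form mail is actually sent from:

        mysite.com.   IN   TXT   "v=spf1 ip4:203.0.113.25 include:_spf.your-mail-provider.example ~all"

    Pairing this with DKIM signing, and making sure the From address matches the sending domain, further reduces the chance of the reply landing in the spam folder.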

    Read the article

  • Good free CSS Sprite for icons

    - by Saif Bechan
    I am working on a small project where I need some of the basic icons: edit, favorite, delete. You know them. Now I can download them all separately and put them together in a sprite, but I was wondering if there are ready-to-download sprites which I can use. I am working on an accounting app, so it would be nice if the icons were not too childish. A little bit of fancy, business-type icons. Thanks
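
    For reference, once a sprite sheet exists it is consumed by setting it as a shared background image and selecting each icon with background-position. A small sketch (the class names, file name, and offsets are made up for illustration):

        .icon {
            display: inline-block;
            width: 16px;
            height: 16px;
            background: url(icons-sprite.png) no-repeat;
        }
        .icon-edit     { background-position:    0     0; }
        .icon-favorite { background-position: -16px    0; }
        .icon-delete   { background-position: -32px    0; }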

    Read the article

  • Make Offscreen Sliding Content Without Hurting SEO [duplicate]

    - by etangins
    This question already has an answer here: How bad is it to use display: none in CSS? (5 answers) On my website I have content which is positioned off the screen and then slides in when you click a button. For example, when you click the news button, content slides in with news. It didn't occur to me at the time that this might be labeled as a black-hat SEO technique, even though I have content positioned off the screen with CSS that links elsewhere on my site, and a search engine could easily interpret that as hiding content for SEO purposes by positioning it off screen. Obviously, my intention was not to hide content, but to make a sort of UI/UX content slider where content slides into view when a button is clicked. How can I make something to this effect (where content slides in and out) that would not compromise SEO?

    Read the article

  • jQuery Mobile list-view is not working after adding some jQuery code [closed]

    - by Kaidul Islam Sazal
    I am using jQuery Mobile and I have an array makeArray in jQuery, and I have created a few listviews from the values of the array. Everything works fine, except that the jQuery Mobile list-view styling is not applied; an ordinary list is shown instead. This is my code:

        $(document).ready(function(){
            var url = "inventory/inventory.json";
            var makeArray = new Array();
            $.getJSON(url, function(data){
                $.each(data, function(index, item){
                    if(($.inArray(item.make, makeArray)) == -1){
                        makeArray.push(item.make);
                        $('.upper_case')
                            .append('<li data-icon="list-arrow"> <a href="trade_form.php?=' + item.make + '"><img src="images/car_logo/buick.png" class="ui-li-thumb"/>' + item.make + '</a></li>');
                    }
                });
            });
        });
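
    A common cause, assuming .upper_case is the listview element, is that jQuery Mobile only enhances list items that are present when the page is initialized; items appended later need an explicit refresh. A minimal sketch of the likely fix:

        // inside the $.getJSON callback, after the $.each loop has appended all items:
        $('.upper_case').listview('refresh');   // re-applies the jQuery Mobile listview styling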

    Read the article

  • How to write good blog post tags

    - by keruilin
    It seems that you have three choices in deciding how you write tags for your blog posts: (1) make them user friendly, (2) make them highly searchable, or (3) a combination of the two. For example, let's say that I have a blog post with write-ups on the top 10 iPad apps for business travel (e.g., Evernote, Dragon Diction, Instapaper, etc.). User-friendly tags: ipad apps, business travel. Searchable keywords (analyzed with Google Keyword Analyzer): ipad apps, ipad travel apps, evernote ipad, instapaper, instapaper ipad. Combo: ipad apps, ipad travel apps. So my question comes down to this: which is really the best choice -- 1, 2 or 3? Note: these visible post tags will also serve as the meta keywords for the post page.

    Read the article

  • Moving from http to https - Google webmaster tools | Bing webmaster tools

    - by user2240778
    I'm moving from http to https for my entire site. The site is currently added to Google Webmaster Tools as www.example.com and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap which has the https URLs, or do I add a new site as https://www.example.com and submit the sitemap with the https URLs there? All the http URLs are set to redirect to their https counterparts.

    Read the article

  • What alternatives to Animated GIFs are supported on all modern browsers?

    - by Clay Nichols
    I was looking for an alternative to animated GIFs for animated images, but per CanIUse, support for APNG seems to be getting phased out, and MNG support isn't even listed there; pages about it don't even mention Chrome (suggesting those pages are very, very old). Clarification: this is for a web app, so it'll need to support: Safari on iPad (so it can't depend on extensions), Chrome on Windows and Mac, Safari 6.0+ on Mac, and Chrome on Android.

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP               Pages   Hits   Bandwidth
        85.xx.xx.xxx       236    236   735.00 KB
        195.xx.xxx.xx      164    164   533.74 KB
        95.xxx.xxx.xxx      90     90   293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way that you could visit my site and use less than 1 MB of bandwidth. You might say that there's the possibility that they could be browsing the site using some browser or plug-in that does not download images, js/css files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site to simply view the HTML/txt/js/etc. files? The only thing that I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs, but I'm curious: is it possible that this person is a legitimate user, or at the very least, not intending to be harmful?

    Read the article

  • [SSL] Becoming Root CA

    - by Max13
    Hi everybody, I'm the founder of a little non-profit French organization. Currently, we're providing free web and shell hosting. On that subject, is there a way to become a trusted certificate authority, in order to give free SSL certificates to my customers, while avoiding both being an intermediate (and paying a lot for that) and paying a lot for each certificate? Thank you for your help.

    Read the article

  • Privacy policy and terms of use language

    - by L. De Leo
    I have a Czech-registered business with which I'm serving a web app mostly (but not exclusively) targeted at Italian customers. The server is in Amsterdam. The site will be multilingual (with 4 languages supported) but for now it's Italian only. What language should the privacy policy and the terms and conditions be in? What law should they refer to? Could I just offer these two docs in English? (Easier to write and to maintain.)

    Read the article

  • Rel = translation

    - by Tom Gullen
    I can't find much online about rel="translation". We have tutorials and manual entries which we are going to get users to translate. If the original page in English is http://www.scirra.com/tutorial/start and there are two translations, http://www.scirra.com/tutorial/es/start (Spanish) and http://www.scirra.com/tutorial/de/start (German), how would I correctly link all these up? I'm aware that at the top of the page I would need to specify the correct ISO 639-1 code: <html lang="de">. But I'm more interested in letting Google know they are not duplicates but are translations.
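
    For what it's worth, the mechanism Google documents for marking up translated equivalents is rel="alternate" hreflang rather than rel="translation". A sketch using the URLs above, placed in the <head> of each of the three pages:

        <link rel="alternate" hreflang="en" href="http://www.scirra.com/tutorial/start" />
        <link rel="alternate" hreflang="es" href="http://www.scirra.com/tutorial/es/start" />
        <link rel="alternate" hreflang="de" href="http://www.scirra.com/tutorial/de/start" />

    Each page lists all language versions, including itself, which tells Google the pages are translations of one another rather than duplicates.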

    Read the article

  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. Since the end of September, however, I have noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I do have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph were the usual response times prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph.

    The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem.

    The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times since it is physically farther away. I should also point out that in some traceroute tests I've done, it seems like the first connection to the server takes the longest, and after that the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server.

    So, what could be causing this problem, and what steps can I take to resolve it or at least narrow down the problem? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well.
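
    To put a number on where that wait goes, one quick client-side check (a sketch; the URL is a placeholder) is to time the individual connection phases with curl:

        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://www.example.com/

    A large gap between "connect" and "first byte" means the server accepts the connection quickly but takes a long time to generate the response, which points at server-side processing (PHP, database, a blocked backend) rather than the network path.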

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, which will grow to about 20 million right away and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records, and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.

    Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR will get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.

    Noteworthy qualities we have in place:

    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have:

        <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script>

    When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain-text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run. Finally I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape character %2E:

        <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script>

    This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually JavaScript?

    Read the article

  • Fix 403 errors in Google Webmaster Tools

    - by Justin
    Hi team, I have a domain that has "fallen off a cliff" for searches in Google. Searches that used to be in position 1-4 are now gone from page 1. The same search in Bing shows the typical expected position (top 5 results). In reviewing Google Webmaster Tools, I am seeing two problems:

    1. The sitemap is reporting two errors: "General HTTP error: HTTP 403 error (Forbidden)" and "URLs not accessible". However, the URL they list as not accessible is accessible; I can click the link Google provides and it works fine.
    2. There are 6,000 crawl errors of type 403. Again, most of these pages that return 403 are accessible in my browser (I tried various browsers as well). About half are from January, the other half from November.

    There are no IP-specific firewall rules on ports 80 and 443 that could block the googlebot. Using the user agent switcher add-on for Firefox, I confirmed that the pages load when the user agent is the googlebot, and I can confirm that most of the pages reported as 403 are accessible. A search of just "site:thedomain.com" does confirm there are over 9,000 pages in the index, but most searches don't return the site. I believe the 403 issues are the cause of the fall in search rankings, but I can't seem to find any information online with ideas about how to address this. Any ideas? jpe
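
    To double-check from the command line that the 403s are not keyed off the user agent, the response status can be fetched while presenting Googlebot's user-agent string (a sketch; substitute one of the affected URLs):

        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://thedomain.com/some-affected-page

    If this returns 200 while Webmaster Tools keeps reporting 403, the block is more likely tied to Google's IP ranges (rate limiting, a security module, or a host-level firewall) than to the user agent.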

    Read the article

  • Uploading a file automatically for speed test?

    - by Abhi
    I am building a web UI for an internet-connection device, and one of the requirements is a speed test. I know the basic concept of how a speed test works: a file is downloaded for a limited time, then the same file is uploaded again, and the speed is tracked at regular intervals. Downloading the file is not an issue, but how am I supposed to upload the file without the client knowing that a file is being uploaded? I've read through a lot of documentation, but I'm still not able to work out how to upload a file from the client's machine without asking them to select it.
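
    One common approach is to skip real files entirely: generate a blob of dummy data in the browser and POST it, so nothing has to be selected from the client's machine. A minimal sketch, assuming a hypothetical /speedtest/upload endpoint on the device that simply discards the request body:

        // Build ~2 MB of dummy data in memory -- no file picker involved.
        var payload = new Blob([new Uint8Array(2 * 1024 * 1024)]);

        var xhr = new XMLHttpRequest();
        var start = Date.now();
        xhr.open('POST', '/speedtest/upload');   // hypothetical endpoint that ignores the data
        xhr.onload = function () {
            var seconds = (Date.now() - start) / 1000;
            var mbps = (payload.size * 8 / 1e6) / seconds;
            console.log('Upload speed: ' + mbps.toFixed(2) + ' Mbit/s');
        };
        xhr.send(payload);

    For the "regular intervals" requirement, xhr.upload.onprogress fires with the number of bytes sent so far, which allows sampling the rate while the upload is still running.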

    Read the article

  • What's better for SEO for many international markets?

    - by Roy Rico
    Right now, we're working to migrate our company sites for international markets to this scheme:

        www.company.com/[2-letter country code]
        www.company.com/uk   # for the United Kingdom
        www.company.com/au   # for Australia
        www.company.com/jp   # for Japan
        www.company.com/     # for the United States, and non-identifiable visitors

    However, in Google Webmaster Tools we can geo-target each directory, but not the root. If we geo-tag the root with the US, all the other markets will inherit it. Is it better to move the US market to /us/ or leave it where it is?

    Read the article

  • the limit of pageviews per month in Google Analytics

    - by crmpicco
    I have been looking around to try to find some confirmation and clarity on the limit of pageviews that Google allows per month for a Google Analytics account. I have read that the limit on hits per month is 10,000,000 and that the limit on pageviews is 5,000,000. Putting 2 and 2 together, I am thinking this is to allow the other 5,000,000 for events, social clicks, and the like? Google's documentation states 5m, but the hits/pageviews distinction is a bit of a grey area, as I've read suggestions that the limit can be considered to be 10m.

    Read the article

  • Why can't GoogleBot forget a 404 File Does Not Exist?

    - by Sam
    Hi folks, for two months now googlebot has been trying to get a file which does not exist anymore. This is just one example out of many: I had renamed the file to a better name and removed the old file. Now, why does Google insist on requesting a file which it has already seen does not exist for months? Doesn't it just give up and get on with being a happy bot? My error log is filled with these repeating lines, all requesting that one file, even though Google has known for a long time that it's not there:

        [Sat Mar 05 01:55:41 2011] [error] [client 66.249.66.177] File does not exist: /var/www/vhosts/website.org/httpdocs/extraNeus.php
        [Sat Mar 05 01:58:20 2011] [error] [client 66.249.66.177] File does not exist: /var/www/vhosts/website.org/httpdocs/extraNeus.php
        [Sat Mar 05 02:03:57 2011] [error] [client 66.249.66.177] File does not exist: /var/www/vhosts/website.org/httpdocs/extraNeus.php
        on and on... and on...

    What should I do in these situations? Are there automatic redirection rules that handle this via a 301 to the home page?
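
    If the goal is simply to stop serving 404s for the old URL, a minimal sketch for Apache (assuming mod_alias is enabled; place it in the vhost config or .htaccess and adjust the target) would be:

        # Point the old URL at its renamed replacement, or at the home page
        Redirect 301 /extraNeus.php http://website.org/

        # ...or, instead, tell crawlers the file is gone for good
        # Redirect gone /extraNeus.php

    A 410 ("gone") response usually convinces Googlebot to drop the URL sooner than a plain 404, while the 301 preserves any link value the old URL had.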

    Read the article

  • Google analytics goal funnel visualization issues

    - by Lauren
    This is the goal funnel for checkout. Does anyone have any idea where the "/" step is coming from? The cart page is at site: game on glove dot com (I don't want this stackoverflow page being indexed in Google particularly well). Go to the site, click on the order button, make your selection, and click the button to enter the cart (it resolves to /Cart and /Shop-Cart). I believe I used regular-expression matching to match "cart". So why the "/"? I don't know what is causing the home page to reload when users are on the cart page, which sits inside a Colorbox lightbox where the only way back to home (i.e. "/") is to hit the exit button in the top right of the lightbox. Here's my one guess, but it doesn't seem likely: see the "check out with PayPal" button? If you hover over it, it does default to the home page, which might be the "/"... but it really redirects the user to the paypal.com page, so it shouldn't also load the home page.

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name and domain). All schools in the state are issued a .qld.edu.au domain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. What's interesting is that these domains are crossing into each other's sitelinks for searches such as "calvary christian college townsville" (if you check the sitelinks, 2/6 point to the other domain). I put a demotion in for this ages ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime this week. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • What good social networking site solutions are there? [closed]

    - by ZetsubouWebmaster
    What good, free social networking site solutions are there? I tried many options, but most of them are either too complicated, too simple, or just do not work... I tried: Dolphin, DZOIC-Handshakes, elgg, Oxwall, SocialEngine, and some plugins for WP and other CMSes. I don't need much, just: groups, chats, forums, profiles, PM, photos, pages, comments, search, and statistics. Most of these are included in pretty much every CMS out there, but not all... So, what good solutions are there? Also, I don't mind paying some money (I guess no more than $200), but I'd prefer a free, open-source engine. Of course it should be PHP + MySQL based.

    Read the article

  • How can I monitor a website for malicious changes to the files

    - by rossmcm
    I had an occasion recently where our website was compromised - a link farm was added to a couple of the pages on one occasion, and on another occasion a large and nasty aspx file was put on the server. I won't mention the host's name (Hostway), but I was pretty annoyed that someone was able to do this. No, it wasn't a leaky password - around 10 sites hosted by HW with consecutive IP addresses got trashed. Anyway, what I need is a utility or service (preferably free) that takes a snapshot of my website's contents and then regularly monitors the files (size and datestamp) for unauthorized changes or additions, and alerts me. I've used web services that monitor one file for changes, but I'm looking for something a bit more aggressive.
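
    If no ready-made service fits, the core idea is small enough to script. A minimal sketch in Node.js (the paths are hypothetical; run it on the server, or against a local mirror of the site, from cron or a scheduled task):

        // baseline-check.js -- record SHA-256 hashes of every file, then report differences.
        var fs = require('fs');
        var path = require('path');
        var crypto = require('crypto');

        var SITE_DIR = '/var/www/mysite';          // hypothetical web root to watch
        var BASELINE = '/home/me/baseline.json';   // hypothetical place to keep the snapshot

        function hashTree(dir, out) {
            fs.readdirSync(dir).forEach(function (name) {
                var full = path.join(dir, name);
                if (fs.statSync(full).isDirectory()) {
                    hashTree(full, out);
                } else {
                    out[full] = crypto.createHash('sha256')
                                      .update(fs.readFileSync(full))
                                      .digest('hex');
                }
            });
            return out;
        }

        var current = hashTree(SITE_DIR, {});

        if (!fs.existsSync(BASELINE)) {
            fs.writeFileSync(BASELINE, JSON.stringify(current, null, 2));
            console.log('Baseline recorded for ' + Object.keys(current).length + ' files.');
        } else {
            var previous = JSON.parse(fs.readFileSync(BASELINE, 'utf8'));
            Object.keys(current).forEach(function (file) {
                if (!(file in previous)) console.log('ADDED:   ' + file);
                else if (previous[file] !== current[file]) console.log('CHANGED: ' + file);
            });
            Object.keys(previous).forEach(function (file) {
                if (!(file in current)) console.log('REMOVED: ' + file);
            });
        }

    Hashing the contents rather than just size and datestamp catches edits that deliberately preserve file metadata; wiring the output into an email alert is the only piece left out here.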

    Read the article

  • Google webmaster Index Status. Total Indexed=0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com. I had around 300 posts under the previous domain name and most of them were showing up on Google. Now that I have changed my domain name, the Index Status shows total indexed as 0, and when I go to the advanced tab it says 304 (not selected) and 217 blocked by robots. I'm really depressed about this situation. Could you please help out?

    Read the article

  • Nginx routing script for NodeJS and Wordpress

    - by Nilay Parikh
    We are moving our blogs and site from WordPress to NodeJS and are ready to move into production. However, I'm not able to figure out how to implement routing from the front server (Nginx) to NodeJS (the preferred web instance) so that, if the data has not yet been synced into the NodeJS site (NodeJS will throw a 404), we fall back via reverse proxy to WordPress and serve the page there during the transition period.

    Q1. Is this approach good for the scenario, or can anyone suggest a better one?
    Q2. Should NodeJS act as the reverse proxy itself (using bouncy: https://github.com/substack/bouncy or a similar package) in the fallback case, or should I stick with Nginx doing it via the FastCGI approach?

    Both NodeJS and WordPress are on a single server. In the first scenario:

        User -> Nginx -> NodeJS (8080) -- if the resource is available, serve it directly;
                                          if not, NodeJS reverse-queries WordPress and serves the content

    In the second scenario:

        User -> Nginx -> NodeJS (8080) -- if the resource is available, serve it directly;
                                          if not, NodeJS returns 404 to Nginx and an Nginx rule falls back to WordPress (FastCGI PHP)

    Later we plan to phase out WordPress and PHP from the server environment completely. I'd like to see any examples of Nginx or Varnish configuration and/or NodeJS scripts that I could refer to. Thanks.
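
    For the second scenario, a minimal Nginx sketch (the port, docroot, and PHP-FPM socket are placeholders) that sends everything to NodeJS first and falls back to WordPress whenever NodeJS answers 404:

        upstream node_app {
            server 127.0.0.1:8080;              # the NodeJS instance
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://node_app;
                proxy_set_header Host $host;
                proxy_intercept_errors on;      # let Nginx act on NodeJS status codes
                error_page 404 = @wordpress;    # a 404 from NodeJS re-routes to WordPress
            }

            location @wordpress {
                root /var/www/wordpress;                     # hypothetical WordPress docroot
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_pass unix:/var/run/php5-fpm.sock;    # hypothetical PHP-FPM socket
            }
        }

    Keeping the fallback in Nginx means NodeJS never needs to know WordPress exists, so phasing WordPress out later is just a matter of deleting the @wordpress block.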

    Read the article
