Search Results

Search found 9728 results on 390 pages for 'zee pro'.

Page 135 of 390

  • Is it a bad practice to register sntsh.com if my name is Santosh?

    - by Santosh
    My name is Santosh, but I can't register santosh.com because it is already taken; most other extensions for Santosh are taken too. I was trying to register a domain with the .sh extension, but santo.sh would cost me a lot and I can't afford ~$100 for just a personal domain, and that is for one year only. Now I am thinking that I should register sntsh.com instead. But there is a problem: will using sntsh.com rather than my name Santosh create an SEO problem? One more thing, totally separate from the topic above: if I register a santosh.name domain, which is not yet registered, won't it create copyright or other legal problems with the other santosh domains?

    Read the article

  • Fixing Google Chrome text antialiasing for .ttf fonts

    - by 71GA
    I have found a topic which presents a solution for getting antialiasing to work in Google Chrome on Windows, but it uses the .svg format. My fonts are in .ttf format, and at the moment I import all of them like this:

        @font-face {font-family: "t1"; src: url(../fonts/title/circle.ttf);}
        @font-face {font-family: "t2"; src: url(../fonts/title/sanserifing.ttf);}
        @font-face {font-family: "t3"; src: url(../fonts/title/serveroff.ttf);}
        @font-face {font-family: "t4"; src: url(../fonts/title/pupcat.ttf);}

    How can I get antialiasing done right in Google Chrome on Windows?
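
    A commonly cited workaround for older Chrome builds on Windows is to serve each face in several formats and list an SVG version first in src, so Chrome picks the format it renders smoothly. The sketch below assumes the .ttf files have also been converted to .svg and .woff; the converted file names and the #circle id fragment are hypothetical:

        @font-face {
          font-family: "t1";
          src: url('../fonts/title/circle.svg#circle') format('svg'),  /* listed first for Chrome on Windows */
               url('../fonts/title/circle.woff') format('woff'),
               url('../fonts/title/circle.ttf') format('truetype');
        }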

    Read the article

  • The actual difference between a stylesheet in the head and a separate file

    - by David Knight
    I'm wondering if someone can give me an opinion on this. I have always been taught to keep all of the CSS in a separate file that is referenced from the head of the page. Reading this article http://www.lukew.com/ff/entry.asp?1792 the author talks about making the Guardian website responsive. One of the things he notes they did to make the site faster and more resilient was to inline the CSS in the head, thus reducing HTTP requests. Now this got me thinking about the right/best/fastest way of using CSS. If you have one main CSS file, it's going to be requested and read on every page, no matter how big it is. So with that in mind, I'm actually starting to think it's better to just inline the whole stylesheet and remove one HTTP round trip. I know that for neatness and ease of editing a separate file is better. But which would you recommend, and which do you think is faster? Thanks!
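
    For context, a common middle ground (a sketch of the idea, not something from the question) is to inline only the critical rules and keep the bulk of the CSS in a cacheable external file:

        <head>
          <style>
            /* critical above-the-fold rules inlined: no extra request before first paint */
            body { margin: 0; font-family: sans-serif; }
          </style>
          <!-- the rest of the stylesheet stays external so it can be cached across pages -->
          <link rel="stylesheet" href="/css/site.css">
        </head>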

    Read the article

  • Does Google AdWords care about duplicate content?

    - by Yarin
    Our site offers several families of products, all of which share a common set of configurations. For simplicity's sake, we'll say we offer products A, B and C, each with configurations 1, 2 and 3:

        Products: A, B, C
        Configurations: 1, 2, 3

    We want to create landing page <- ad group combinations that reflect each possible combination of product and configuration. Each product and each configuration has its own page, and so each landing page would include both the product content and the configuration content:

        ourproducts.com/A-1 (contains copy for A and 1)
        ourproducts.com/A-2
        ourproducts.com/A-3
        ourproducts.com/B-1
        ... etc...

    As you can see, this will lead to duplicate content across our product pages, though in different combinations. My question is: does this matter from an AdWords point of view? Will there be any negative consequence to repeating portions of content this way?

    Read the article

  • Reverse proxying only a specific URL

    - by Bart Silverstrim
    I have a web server at www.ourcompany.com running Apache2. Using the proxy modules, I am able to (for example) get 172.16.0.5, an internal IP device, to be accessed at www.ourcompany.com/device. The trouble is that anyone can play with or explore the device using strings sent to www.ourcompany.com/device/change/settings/here.html. I'd like the reverse proxy to work only for a specific URL, www.ourcompany.com/device/you/must/use/this, while anything else is rejected. Is there a setting that can be used to do this, or is it a simple rewrite condition placed in the virtualhost for the site under sites-enabled? What is the simplest, most maintainable way to sanitize requests to the internal device through the reverse proxy? Running Apache2 on Ubuntu.
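
    One possible direction, sketched here as an assumption about the setup rather than a verified config (the paths and internal IP are taken from the question; the Require directives are Apache 2.4 syntax, older 2.2 installs would use Order/Deny instead): deny the whole /device tree in the virtualhost and proxy only the single permitted path.

        # deny everything under /device by default
        <Location /device>
            Require all denied
        </Location>

        # ...then allow and proxy only the one permitted path
        <Location /device/you/must/use/this>
            Require all granted
            ProxyPass http://172.16.0.5/you/must/use/this
            ProxyPassReverse http://172.16.0.5/you/must/use/this
        </Location>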

    Read the article

  • Social media measurement tools [on hold]

    - by user29187
    I work for a non-profit and we are currently trying to develop a system to measure and evaluate our social media platforms (Facebook and Twitter). We would like to track a variety of social metrics, including but not limited to:

        Twitter: # of followers, # of mentions, # of retweets
        Facebook: # of likes, # of people talking about us, # of page views

    We are currently using a paid platform for this. I wondered if there is a way to configure Google Analytics to do this, or if there are any other free, clever or smart ways to track social engagement with our brand?

    Read the article

  • Page load speed's effect on crawl rate

    - by Sam Pegler
    We've noticed a big drop in the total pages crawled per day on our site. We have no control over the crawl rate in Google Webmaster Tools, so it's possible this has been changed by Google; however, it's a fairly large site and I wouldn't have thought the crawl rate would be decreased. What we have noticed, though, is a sizeable increase in page load times, and in my mind this would be the cause. Can anyone confirm whether the crawl rate is directly correlated with page load time? It seems logical: longer page load times, fewer pages crawled. Any decent documentation on this would be appreciated; I don't normally have any input on SEO, so this is new to me.

    Read the article

  • Bad Bot blocking Revisited

    - by Tom
    I've read a lot about bad bot blocking: PHP scripts, .htaccess techniques, etc. Is this a valid method? Since .htaccess can rewrite and send a bad bot a 403 deny, or forward it to something like Spam Poison, is it possible to Disallow a folder in robots.txt and then, through a .htaccess file in that specific folder, redirect to Spam Poison? Since Apache reads each .htaccess independently and follows its instructions, a bad bot that ignores robots.txt would simply be redirected, as would anyone trying to access /badbot/ or whatever I choose to call my trap folder. Thanks, Tom
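
    For reference, the mechanics described would look roughly like this; a sketch under the assumption that /badbot/ is the trap folder and that the redirect target is purely illustrative:

        # robots.txt at the site root: well-behaved bots never enter the trap
        User-agent: *
        Disallow: /badbot/

        # .htaccess placed inside /badbot/: anything that gets here ignored robots.txt
        RewriteEngine On
        RewriteRule ^ - [F,L]
        # or redirect instead of sending 403 (target is illustrative):
        # RewriteRule ^ http://example.com/spam-trap/ [R=302,L]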

    Read the article

  • Temporary background image while the big one is loading? [migrated]

    - by Mikhail
    Is there a way, without JavaScript, to load a small background image before the real image is downloaded? (Without JavaScript because I already know how to do it that way.) I can't test whether the following CSS3 would work because locally it loads too quickly:

        body { background-image: url('hugefile.jpg'), url('tinypreload.jpg'); }

    If tinypreload.jpg is only, say, 20k and hugefile.jpg is 300k, would this accomplish the task? I assume that both downloads would start at the same time instead of being consecutive.
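
    One detail worth noting as a general CSS fact (not from the question): with multiple backgrounds the first image listed is painted on top, so the order above — hugefile.jpg first, tinypreload.jpg underneath — is what lets the small image show through until the large one arrives. An alternative sketch of the same idea, reusing the question's file names, splits the layers across html and body:

        html { background: url('tinypreload.jpg') no-repeat center center; background-size: cover; }
        body { background: url('hugefile.jpg') no-repeat center center; background-size: cover; min-height: 100%; }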

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect to be able to download all that data at once. I'm ultimately looking for one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
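
    Until such an export exists, one stopgap is to build the merged report offline. A minimal Python sketch, assuming the 404 export is saved as errors.csv and that a second file, linked_from.csv, mapping each broken URL to its referring pages, has been collected separately (both file names and the second file's format are assumptions):

        import csv

        # broken URL -> list of referring URLs
        links = {}
        with open('linked_from.csv', newline='') as f:
            for url, source in csv.reader(f):
                links.setdefault(url, []).append(source)

        with open('errors.csv', newline='') as f, open('report.csv', 'w', newline='') as out:
            reader = csv.DictReader(f)
            writer = csv.writer(out)
            writer.writerow(['URL', 'Response Code', 'Linked From', 'News Error', 'Detected', 'Category'])
            for row in reader:
                # one output row per referring page (blank if none recorded)
                for source in links.get(row['URL'], ['']):
                    writer.writerow([row['URL'], row['Response Code'], source,
                                     row['News Error'], row['Detected'], row['Category']])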

    Read the article

  • .htaccess language redirects with SEO-friendly URLs

    - by jlmmns
    How do I set up my .htaccess file to detect several languages and redirect them to specific SEO-friendly URLs? Basically, every URL needs to go to index.php?lang=(...). So, with English language detection, http://mysite.com has to go to http://mysite.com/en/ (index.php?lang=en). My .htaccess as of now (not working):

        RewriteEngine On
        RewriteCond %{HTTP:HOST} http://mysite.com/
        RewriteCond %{HTTP:Accept-Language} ^en [NC]
        RewriteRule ^$ http://mysite.com/en/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^de [NC]
        RewriteRule ^$ http://mysite.com/de/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^nl [NC]
        RewriteRule ^$ http://mysite.com/nl/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^fr [NC]
        RewriteRule ^$ http://mysite.com/fr/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^es [NC]
        RewriteRule ^$ http://mysite.com/es/ [L,R=301]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(en|de|nl|fr|es)$ index.php?lang=$1 [L,QSA]
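
    A possible fix, offered as an untested sketch rather than a confirmed answer: %{HTTP:HOST} is not a standard variable (the hostname lives in %{HTTP_HOST} and never contains http://), each RewriteCond applies only to the RewriteRule immediately after it, and the final rule does not tolerate a trailing slash. Something along these lines is closer to what the setup seems to intend:

        RewriteEngine On

        # redirect only requests for the bare root, based on the browser language
        RewriteCond %{HTTP:Accept-Language} ^de [NC]
        RewriteRule ^$ /de/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^nl [NC]
        RewriteRule ^$ /nl/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^fr [NC]
        RewriteRule ^$ /fr/ [L,R=301]
        RewriteCond %{HTTP:Accept-Language} ^es [NC]
        RewriteRule ^$ /es/ [L,R=301]
        # fall back to English for everything else
        RewriteRule ^$ /en/ [L,R=301]

        # internally map /en/, /de/, ... to index.php?lang=xx
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(en|de|nl|fr|es)/?$ index.php?lang=$1 [L,QSA]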

    Read the article

  • Protect js code from being stolen

    - by Kaidul Islam Sazal
    I have developed a web app with jQuery and HTML/CSS markup which will be a premium web app, so I have to protect the code from being stolen. As it is all client side there is no 100% secure way to protect it, but I want to make it harder to steal. To that end I have:

    - disabled the right-click button of the mouse;
    - minified and obfuscated the code;
    - used JS code to add the external JS file, and obfuscated that code so that no one can read the name of the external JS file;
    - created an index.html file in the js folder so that no one can browse the js folder.

    Do you think all of this is enough to make stealing harder? Any suggestions or advice for me?

    Read the article

  • Changed URL from non-www to www... Google indexing

    - by user20321
    I recently changed (about 1 week ago) my URL from the non-www version to the www version. I told my hosting company to do this and they did it successfully; all my URLs are redirected to the www version. But Google is still indexing my non-www version in the search results. I have updated new content on my website and Google indexes that content with the changed URL, i.e. with the www prefix, but the main page, i.e. the site name, is still shown without www and is not updated. I have checked that www.sitename.com is listed on Google, but it is not shown when I search for www.sitename.com. So how much time does it take to remove the old URLs from the index and replace them with the new ones?

    Read the article

  • Paid Website Code Review

    - by clifgray
    I have written a pretty extensive web app and it is going to go live in the next few weeks. Before I really publicize it, I want to get some professionals to review it for optimization and best practices. Is there any online service, or another way, to find local software engineers who would be willing to do this? Just to give some specifics that may be helpful: my site is on Google App Engine and written in Python, and it is tough to find someone with extensive experience in that area.

    Read the article

  • HTTP 303 redirection and robots.txt

    - by Ian Dickinson
    On a site I'm working on, we're using the HTTP 303 redirect pattern (see this article for background) to distinguish between information and non-information resources. So: some URLs under /id get redirected to dynamically created pages under /doc. These dynamic pages are built from a database and contain links to other /doc/ resources, so in general we don't want them to be crawled. Our robots.txt contains:

        Disallow: /doc

    However, we do want the non-redirected pages under /id to get indexed by Google et al.:

        Allow: /id

    So the question I have, which I can't find an answer to so far, is: if an allowed /id page 303-redirects to a /doc page, will it still be blocked by robots.txt? If yes, we're OK; otherwise I'm going to disallow all /id resources in the robots file, as having the crawler hammer the db would be worse than losing search indexing for the /id pages.

    Read the article

  • How to easily delete all email forwarders in cPanel?

    - by psoft
    I know that I can import a list of email forwarders using cPanel, but how can I delete a list? I want to manage 300+ addresses as a membership list for my organization, and I want to be able to delete that many without clicking 'Delete' and then 'Confirm' (or whatever it is) 300 times. Even if I could simply delete ALL forwarders and then upload a modified list, that would be fine with me. Note: I'm using a shared hosting package through SiteGround. The tech service rep informed me that I can't use cPanel scripts in a shared environment. Any suggestions?

    Read the article

  • Dealing with blackhat SEO companies and low-quality link-building competitors [closed]

    - by Mikko Ohtamaa
    I have often faced cases where my clients' competitors use blackhat SEO tactics, contracting an SEO company to do link building for their websites and products. Here is a typical example of a fake blog created only for link-building purposes: a very low-content article, http://marshallfab.com/fundus-camera-explained.html, in an obvious fake blog: no author information, partially machine-generated text, and every post exists solely for link building. Following the link you get to the promoted company page, http://www.patternless.com/, which, unsurprisingly, links to the SEO company homepage in the footer text, http://www.affordableseofl.com/, who are not shy about advertising their "extremely aggressive SEO plan". Does Google have any feedback channel where one could submit cases like this, so that Google would punish the link builders? Are there any means of bringing these blackhat companies to public shame, to damage their reputation?

    Read the article

  • Cheaper alternatives to 99Designs.com (outsource CSS design)

    - by Chris Smith
    I'm designing my own website as a side project and I want the site to look professional. (Read: not designed by a programmer.) I don't mind spending a little money to have a professional do it, but design sites like 99designs.com cost way too much (~$500+). Is there a cheaper (~$100 - $200) alternative for getting a designer to improve an existing site? (Things like updating the CSS or suggesting better ways of laying out the navigation.) Or is my best bet trying to pick up a freelancer on Craigslist?

    Read the article

  • Move subdomain into subdirectory SEO question

    - by JMC
    I have read this article: http://www.mattcutts.com/blog/subdomains-and-subdirectories/ but I'm not 100% clear on whether moving my subdomain website into a subdirectory on the main domain would change anything related to SEO. I inherited this structure:

        Informational site related to our specific industry lives at: http://website.com
        Storefront where we sell products related to our industry lives at: http://store.website.com

    The informational site gives a lot of good information on how to use the products we sell. The storefront is primarily used for the ecommerce function of selling the products, but there is a lot of product-specific info on that site too. Question: is our main domain http://website.com getting PageRank credit for the product info contained at http://store.website.com? Would there be a benefit to changing the structure?

    Read the article

  • How to block FeedReader from fetching my content onto their site?

    - by Wei Kai
    As you can see from the picture linked above, my site's content is duplicated by FeedReader and indexed at Google. When I click the FeedReader link, it uses some sort of iframe to draw content from my site live. This amounts to content duplication, and I believe it does harm to my site. (Stack Overflow doesn't allow me to post the image directly because my account is new; please click the link above to see the picture, a million thanks to you.) What can I do to prevent FeedReader from fetching my content onto their site? I know robots.txt can perform such a function, but I don't know how to do it. Any help would be much appreciated. I also highlighted this issue to FeedReader 2 days ago, but have yet to get any reply from them.
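
    If FeedReader's fetcher identifies itself and honors robots.txt, a user-agent block is the usual approach; the token below is an assumption, not FeedReader's documented crawler name, so check the access logs for the real one. The live iframe embedding can also be refused at the HTTP level, independently of robots.txt:

        # robots.txt
        User-agent: FeedReader
        Disallow: /

        # .htaccess (Apache with mod_headers): refuse framing by other sites
        Header always set X-Frame-Options "SAMEORIGIN"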

    Read the article

  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we're seeing mostly are URLs where diacritic characters were converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen in the URL below representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php However, I've also noticed that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen which the browser renders with the actual ü in the address bar. Therefore I'm considering the following approach for creating URL slugs:

    (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München - bayern-muenchen
    (2) also convert strings to percent-encoding: Bayern München - bayern_m%C3%BCnchen
    (3) create a 301 redirect from version (1) to version (2)

    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
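
    As an illustration of versions (1) and (2), here is a minimal Python sketch; the transliteration table is deliberately tiny, and hard-coding language-specific mappings such as ue for ü is an assumption about what "recommended ASCII representation" means in practice:

        import re
        import unicodedata
        from urllib.parse import quote

        # tiny, illustrative transliteration table; a real one would cover far more characters
        TRANSLIT = {'ü': 'ue', 'ö': 'oe', 'ä': 'ae', 'ß': 'ss'}

        def ascii_slug(text):
            """Version (1): closest-ASCII slug, e.g. 'Bayern München' -> 'bayern-muenchen'."""
            text = ''.join(TRANSLIT.get(ch, ch) for ch in text.lower())
            # strip remaining combining marks (á -> a) and drop any other non-ASCII characters
            text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')
            return re.sub(r'[^a-z0-9]+', '-', text).strip('-')

        def encoded_slug(text):
            """Version (2): keep the original letters, percent-encoded for use in a URL path."""
            return quote(re.sub(r'\s+', '-', text.lower().strip()))

        print(ascii_slug('Bayern München'))    # bayern-muenchen
        print(encoded_slug('Bayern München'))  # bayern-m%C3%BCnchen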

    Read the article

  • Google crawler 'not found' error for a link inside the <head> tag

    - by inckka
    I've found a crawl error for my site and it is listed as a page not found (404) link. Here's the broken link: http://mydomain.com/blog/comments/feed/ I'm using Google Webmaster Tools and found that this broken link comes from my web site pages' head tag. Here's the actual code where that link is situated:

        <head>
        <link rel="alternate" type="application/rss+xml" title="My Domain Blog &raquo; Feed" href="http://www.my-domain.com/blog/feed/" />
        </head>

    So Google reports this link as not found. Actually this link target is not an exact page or location, but it is essential for the blog feeds. Anyway, I have to fix this and remove it from the Google crawl errors list, but I haven't got any idea how, because I cannot redirect or send a 404 header for this link target. Has anyone got an idea of how to fix this?

    Read the article

  • Is it good to buy multiple domains for competitive reasons?

    - by lowestofthekeys
    I am attempting to convince the higher-ups at my company that spending $55 to renew one domain for a year is a waste when they end up holding 3-4 domain names for one website. Their reasoning for doing so is to keep these domain names out of the hands of the competition. For example, the company name is Pie Consulting & Engineering. They want to buy up pieforensicconsulting.com to keep it out of the hands of a competitor (we also do forensic engineering). Could a competitor use that domain in any kind of diabolical way? I mean, I figure that if someone is typing pieforensicconsulting into the URL field, they know what they're looking for, and if it redirects to another company they're not just going to stay on that site.

    Read the article

  • Sudden drop in Total Indexed pages and increase in 'Not Selected' number

    - by Pravin
    My blog is around 1 year old and has PR2. The average daily pageviews up to last week were 1800. The total number of posts is 180. Though I have only 180 posts, the total number of indexed URLs kept increasing and got as high as 510. But in September 2012 the total number of indexed pages dropped from 510 to 214. The drop was sudden, and the number is now increasing only very slowly. The other main concern is the huge increase in the 'Not Selected' number, which is currently 814. I have never posted any article twice and never copied any idea from another blog, but I do use internal links to older posts that are related to the new posts. The questions are:

    1. Why was there a sudden drop in the 'Total Indexed' pages?
    2. Why did the total indexed pages increase to 500 even though there are only 180 posts?
    3. The drop in 'Total Indexed' happened in September 2012, yet I kept getting the same organic traffic, which was steadily increasing until last week, when there was a 50 drop in total pageviews. Why?
    4. Now the traffic is becoming normal again, but there is still a problem: is the increase in the 'Not Selected' number a cause of the drop in 'Total Indexed'?
    5. How can I prevent or reduce the 'Not Selected' number, even though I have no duplicate posts within the blog? Is the internal linking to older posts creating the 'Not Selected' problem?
    6. Should I edit my robots.txt to avoid crawling of labels that may be creating duplicate posts or something like that? If so, what is the correct robots.txt?

    I have uploaded a screenshot of the graph from Webmaster Tools. Please take a look and give suggestions. Please help. Thank you in advance.

    Read the article

  • Projects to learn web development

    - by David McDavidson
    I'm trying to get a job as a web developer, but the great majority of job offers require previous experience and a portfolio to prove you've got the required skills. Unfortunately I don't have any real experience or anything to show. The best way to learn is to try to tackle real-world problems, so I'd like to know: what would be some good projects for learning that would also look good in a portfolio?

    Read the article
