Search Results

Search found 9757 results on 391 pages for 'shekhar pro'.


  • Can I remove visible referer from link?

    - by Andreas
    I use referrer info to track which of my campaigns works best. So instead of <a href="someweb.com">someweb</a> I have a link like <a href="http://someweb.com?utm_source=john&utm_medium=email&utm_content=NAME&utm_campaign=campaign">someweb</a>. Now when a user clicks "someweb", the whole URL string is shown in the address bar. Is it possible to mask/hide this somehow? Maybe via .htaccess? Thanks in advance
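
    One hedged approach (a minimal sketch, not a tested recipe): put a short tracking path of your own in the campaign link and have the server expand it to the full UTM URL, so the long string never appears in the email itself. The /c/john path below is hypothetical. Note that after the redirect the full URL still shows in the address bar unless the landing page cleans it up client-side (e.g. with history.replaceState) or the campaign is tracked entirely server-side.

        # .htaccess on someweb.com: expand a short campaign path to the full UTM URL
        RewriteEngine On
        RewriteRule ^c/john$ /?utm_source=john&utm_medium=email&utm_content=NAME&utm_campaign=campaign [R=302,L]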

    Read the article

  • Breadcrumb using schema.org rich snippets

    - by Adam Jenkin
    I am having problems implementing the breadcrumb rich snippets from schema.org. When I construct my breadcrumb using the documentation and run it through the Google Rich Snippet testing tool, the breadcrumb is identified but not shown in the preview.

        <!DOCTYPE html>
        <html>
        <head>
          <title>My Test Page</title>
        </head>
        <body itemscope itemtype="http://schema.org/WebPage">
          <strong>You are here: </strong>
          <div itemprop="breadcrumb">
            <a title="Home" href="/">Home</a> >
            <a title="Test Pages" href="/Test-Pages/">Test Pages</a> >
          </div>
        </body>
        </html>

    If I change to the snippets from data-vocabulary.org, the rich snippets show correctly in the preview.

        <!DOCTYPE html>
        <html>
        <head>
          <title>My Test Page</title>
        </head>
        <body>
          <strong>You are here: </strong>
          <ol itemprop="breadcrumb">
            <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
              <a href="/" itemprop="url">
                <span itemprop="title">Home</span>
              </a>
            </li>
            <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
              <a href="/Test-Pages/" itemprop="url">
                <span itemprop="title">Test Pages</span>
              </a>
            </li>
          </ol>
        </body>
        </html>

    I want the breadcrumb to be shown in the search result rather than the URL of the page. Given that schema.org is the recommended vocabulary for rich snippets, I would rather use it, but as the breadcrumb is not showing in the search-result preview with this method, I'm not convinced it is working correctly. Am I doing something wrong in the markup of the schema.org example?
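
    For what it's worth, a hedged sketch of the markup schema.org later standardised for exactly this case: a BreadcrumbList of ListItem elements. The URLs are the asker's own; the structure is the assumption here.

        <ol itemscope itemtype="http://schema.org/BreadcrumbList">
          <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
            <a itemprop="item" href="/"><span itemprop="name">Home</span></a>
            <meta itemprop="position" content="1" />
          </li>
          <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
            <a itemprop="item" href="/Test-Pages/"><span itemprop="name">Test Pages</span></a>
            <meta itemprop="position" content="2" />
          </li>
        </ol>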

    Read the article

  • CSS just for most basic HTML

    - by Gerenuk
    I've read that my note system, WikidPad, which exports to very simple HTML, can use CSS (http://wikidpad.sourceforge.net/help/HtmlCss.html). The elements in the output are nothing more than basic headings, bullet points and tables. I'd like to try some kind of improved style, but as I have no knowledge of CSS, the best I can do is save some Myfile.css to a directory :) However, if I google "CSS template" I get all sorts of complicated results that I cannot make sense of :( Am I using the wrong terminology? Can you suggest what I should search for, or maybe you even know a resource with a simple CSS file that styles the standard HTML elements decently? I do not wish to make custom adjustments.
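
    A minimal sketch of the kind of drop-in stylesheet being asked for, styling only the plain elements WikidPad emits. Every value is just a reasonable default, not a recommendation:

        /* Myfile.css - basic defaults for body text, headings, lists, tables */
        body  { font-family: Georgia, serif; max-width: 42em; margin: 2em auto;
                line-height: 1.5; color: #333; }
        h1, h2, h3 { font-family: Helvetica, Arial, sans-serif; color: #111; }
        ul, ol     { padding-left: 1.5em; }
        table      { border-collapse: collapse; }
        th, td     { border: 1px solid #ccc; padding: 0.3em 0.6em; }
        a          { color: #2a6496; }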

    Read the article

  • GA UA codes for testing site - set up

    - by Drew
    Does anyone know the process for using a live GA UA code to test a site in development? I.e. I have a live site with a GA UA code attached, tracking live traffic data, e.g. UA-123456. I've been told there is a way to produce another code, associated with the primary one, to use on the testing version of my live site, e.g. the test code could be UA-123457. Can anyone shed some light on this? If it is not possible, should I just set up a completely separate GA account for my testing site?
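
    If it helps, what the asker was likely told refers to Google Analytics property numbering: a full UA code is really UA-<account>-<property>, so a second web property (e.g. the hypothetical UA-123456-2) can be added under the same account and given to the development site, keeping test hits out of the live profile. A sketch of the classic async snippet under that assumption:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          // Live site would use 'UA-123456-1'; the dev/test property ID is hypothetical:
          _gaq.push(['_setAccount', 'UA-123456-2']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>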

    Read the article

  • Highly SEO optimised forum posts

    - by Tom Gullen
    Given the following forum post:

        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does construct work internally? I know it allows python scripting, which itself is "technically" interpreted, though python is pretty fast as far as being interpreted goes. But what about the rest? Is the executable that gets cre...

    The forum software will take the first 150 characters of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as:

        Basics of how internals of Construct work
        I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html

    Now the problem (not so much with this thread, but with other ones) is that the first 150 characters don't always make the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like:

        Internal workings of Construct 2
        Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events...
        http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html

    The H1 on the page is still the original title, but we have overridden the title and description to look more friendly in search results. Is this advantageous, forgetting the obvious time cost?
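
    In markup terms, the hand-tuned override amounts to nothing more than the head of the thread page (the title and description text here are the asker's own example):

        <head>
          <title>Internal workings of Construct 2</title>
          <meta name="description" content="Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events..." />
        </head>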

    Read the article

  • How do bingbot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and that they use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, and do I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?
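
    For reference, a minimal sketch of the 301 itself in Apache (the paths are hypothetical). Both Googlebot and Bingbot are documented as following permanent redirects and transferring the old URL's indexing to the new one, which is why the 301 generally stays the safe choice over a JavaScript forward:

        # .htaccess: permanent redirect from a removed page to its replacement
        Redirect 301 /old-page.html http://www.example.com/new-page.html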

    Read the article

  • DNS and Wildcard CNAME

    - by Thomas Chapman
    Whenever I attempt to make a record for *.schneiderdonnelly.com.au and CNAME it, I get two errors: "You can't mix CNAME/MX records together using the same hostname" and "Domain roots cannot be CNAMEs; however, you can web-forward this record to www.schneiderdonnelly.com.au instead for the same effect." I've read it's possible, so why can't I make it work? I donated $5 to become a premium member, and I've been trying to make it work for yonks. http://i.stack.imgur.com/D9Ui5.jpg This is how I want it to appear: the last record. I am prepared to swap DNS providers as long as they're free.
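
    A sketch of the zone shape that sidesteps both errors (the IP and target host are placeholders). RFC 1034 forbids a CNAME from coexisting with any other record at the same name, which is why the MX at the root collides with it and why the zone apex cannot be a CNAME at all:

        ; schneiderdonnelly.com.au zone (sketch)
        @    IN  A      203.0.113.10                      ; apex stays an A record
        @    IN  MX 10  mail.schneiderdonnelly.com.au.
        www  IN  CNAME  target-host.example.net.
        *    IN  CNAME  www.schneiderdonnelly.com.au.     ; wildcard for everything else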

    Read the article

  • Weird referral traffic [closed]

    - by Noam
    Possible Duplicate: Strange incoming links appearing on site statistics
    I'm getting weird traffic from Japan, from a site called ime.nu. Why weird? Because I'm not able to identify the link, and when going to their homepage it just shows an Apache test page, while analysis sites show it as a pretty big site (Alexa rank 121 in JP). Can someone help me understand the mystery?

    Read the article

  • Apache HTTPS ProxyPass certificate location

    - by oz1cz
    I'm trying to set up an Apache server that uses ProxyPass to pass HTTPS requests on to another server. Let's call the proxy server ALPHA and the target server BETA. ALPHA does not run HTTPS, but BETA does. I first tried using this virtual host specification on ALPHA:

        <VirtualHost *:443>
            ServerName mysite.com
            ProxyPass / https://192.168.1.105/          # BETA's IP address
            ProxyPassReverse / https://192.168.1.105/   # BETA's IP address
            ProxyPreserveHost On
            ProxyTimeout 600
            SSLProxyEngine On
            RequestHeader set Front-End-Https "On"
            CacheDisable *
        </VirtualHost>

    But when I tried this, Apache complained: "[error] Server should be SSL-aware but has no certificate configured [Hint: SSLCertificateFile]". I had to copy the SSL certificate from BETA to ALPHA and add these lines to the host specification on ALPHA:

        SSLEngine on
        SSLCertificateKeyFile /usr/local/ssl/private/BETA_private.key
        SSLCertificateFile /usr/local/ssl/crt/BETA_public.crt
        SSLCertificateChainFile /usr/local/ssl/crt/BETA_intermediate.crt

    Now the system works, but I have a feeling that I have done something wrong or unnecessary. I have the web site's private key and certificate lying on both ALPHA and BETA. Is that necessary? Should I have done it differently?
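
    A hedged reading of why Apache complained: a *:443 virtual host terminates TLS on ALPHA itself, so ALPHA must present a certificate for the name clients connect to (mysite.com). It does not strictly have to be BETA's certificate; reusing it only works because the hostname matches. A sketch of the ALPHA side under that assumption, with hypothetical paths:

        # ALPHA terminates TLS for mysite.com, then re-encrypts the hop to BETA.
        # Paths are hypothetical; the key/cert are for mysite.com, not copied from BETA.
        SSLEngine On
        SSLCertificateKeyFile /etc/ssl/private/mysite.com.key
        SSLCertificateFile /etc/ssl/certs/mysite.com.crt
        # Required so ProxyPass may speak https:// to the backend:
        SSLProxyEngine On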

    Read the article

  • Sending HTML email to Gmail always lands in spam

    - by cartaysm
    I am having an issue with sending HTML emails to Gmail. I can send them to Yahoo, Hotmail, RR, AOL, etc. with no problem at all, but when I send them to Gmail I get kicked to spam. I have checked my IP against a lot of different lists to make sure it is not listed anywhere, and it is not:

        spamhaus = is not listed in the DBL
        abuse.net = is not listed in the SBL
        abuse.net = is not listed in the PBL
        abuse.net = is not listed in the XBL
        spamcop = not listed in bl.spamcop.net

        host 24.172.204.xxx
        xxx.204.172.24.in-addr.arpa domain name pointer xxxevents.com.
        host xxxevents.com
        xxxevents.com has address 24.172.204.xxx
        xxxevents.com mail is handled by 10 mail.xxxevents.com.

    I am just trying to send a very, VERY basic HTML message (listed below). I use an Ubuntu server, Swift Mailer, multipart/alternative (HTML & plain), and SPF = pass, and I am going to set up DKIM today to see if that fixes it (but I doubt it will). For now I will only post the message that gets kicked to spam, and can provide any details needed.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>Triathlon</title></head>
        <body>
          <table cellpadding="0" cellspacing="0">
            <tr>
              <td>
                <p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p>
              </td>
            </tr>
            <tr>
              <td>
                <p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now! <br /> our page and tag yourself at </p>
                <p> test test </p>
                <p>Race day events is professionally managed by Speedy-Feet</p>
              </td>
            </tr>
          </table>
        </body>
        </html>

    Just plain text works great. I thought maybe the wording was tripping me up, but that's not the case. I am almost done installing OpenDKIM, so I will be able to rule that out very soon.

    Edit: OK, I installed OpenDKIM and I am getting passing results, so I sent the HTML posted above and it went through just fine. But now, when I start to add a few more lines, I am getting kicked back to spam again. Here is the updated HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head><title>Triathlon</title></head>
        <body>
          <table cellpadding="0" cellspacing="0">
            <tr>
              <td>
                <center><a href='http://xxxevents.com' target="_blank">
                  <font face="Verdana, Arial, Helvetica, sans-serif" color="#666666" size="2">
                    <img src="http://xxxevents.com/marketemailimages/xxxlogo.png" alt="xxx It Events | Raising funds for Crohns, Colitis, and Muscular Dystrophy" border="0" />
                  </font></a></center>
              </td>
            <tr>
              <td>
                <p>Thank you for attending our 4th annual Triathlon/Duathlon/5k at Hueston Woods State Park on August 12th. This event is held annually to raise research funding for Crohn's Disease, Ulcerative Colitis, and Muscular Dystrophy diseases.</p>
              </td>
            </tr>
            <tr>
              <td>
                <p>As you know the results and pictures have been posted on our home page at since Sunday 8/13/2012. Now we also have updated our Facebook page with those photos and you can start tagging yourself or downloading the pictures now! <br /> our page and tag yourself at </p>
                <p> test test </p>
                <p>Race day events is professionally managed by Speedy-Feet</p>
              </td>
            </tr>
          </table>
          <table width="100%" border="0" cellspacing="0" cellpadding="0">
            <tr>
              <td valign="top">
                <div align="center" style="font-family:Verdana, Arial, Helvetica, sans-serif; font-size:10px;"><br />PO Box xxx Maineville, OH 45039<br /> <a href="mailto:[email protected]">[email protected]</a> | <a href='http://xxxevents.com' target="_blank">xxxevents.com</a><br /> <br /> </div>
              </td>
            </tr>
          </table>
        </body>
        </html>
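
    Since OpenDKIM is already in the mix, a sketch of the DNS records that complete that setup; the selector, key, and netblock below are placeholders, not the asker's real values:

        ; SPF: authorise the sending host for the domain
        xxxevents.com.                    IN TXT "v=spf1 mx ip4:24.172.204.0/24 ~all"
        ; DKIM: public key published under the selector chosen in opendkim.conf
        default._domainkey.xxxevents.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...(key truncated)"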

    Read the article

  • Web hosting locally

    - by Pradyut Bhattacharya
    I have made a website and hosted it on my local computer using a static IP. Where can I buy a domain name such as www.something.com so that it redirects to my static IP, so that a page like http://localhost/index.jsp can be accessed as http://www.something.com/index.jsp? Does it matter whether I run the server locally or buy a managed web hosting server from a big company, if the traffic on my site is low? Thanks
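
    For the DNS side, a sketch of what the registrar's zone entries would look like once the name is bought (the domain and IP are placeholders):

        ; point the new name at the static IP of the local machine
        something.com.      IN  A  203.0.113.25
        www.something.com.  IN  A  203.0.113.25

    After that, http://www.something.com/index.jsp reaches the same server that http://localhost/index.jsp does locally, provided the server listens on port 80 and any router in between forwards it.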

    Read the article

  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    If I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+), what is the best approach to narrowing it down to the top 5-10 keywords I should start with? I'm also wondering whether my top PPC keywords should also be the main keywords I optimise the site for organically. I have another question on this site asking how one estimates where a competitor gets most of their traffic from. Thanks. The website isn't created yet, but will be up in January.

    Read the article

  • Why are Facebook profiles Google-searchable?

    - by Jose
    Facebook has around 1B user profiles, and they can be found by searching in Google. However, I don't think these profiles are linked from anywhere, so how could Google discover them? As far as I know, sitemaps are not enough for that (http://webmasters.stackexchange.com/a/5151), as all URLs should be crawlable anyway. I ask because I also have a site with user profiles and would like to make them discoverable.
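
    For making one's own profiles discoverable, the standard route is still an XML sitemap submitted in Webmaster Tools, despite the linked caveat that sitemap URLs should also be reachable by crawling. A sketch with hypothetical profile URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://example.com/profile/alice</loc></url>
          <url><loc>http://example.com/profile/bob</loc></url>
        </urlset>

    (As for Facebook itself: if memory serves, it exposes a crawlable public directory of profile links precisely so search engines can reach profiles, which would answer the "linked from nowhere" puzzle.)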

    Read the article

  • Should I rely on externally-hosted services?

    - by Mattis
    I am wondering about the dangers and difficulties of using external services like Google Charts on my production website. By external services I mean ones that you can't download and host on your own server.
    (-) The Google service can potentially be down while my site is up.
    (+) I don't have to develop those particular systems for new browser technologies; hopefully Google will do that for me.
    (-) Extra latency while my site fetches the data from the Google servers.
    What else? Is it worth spending time and money to develop my own systems so as to be more in control of things?

    Read the article

  • SEO Service - Refresh SEO

    - by Dan
    I've been approached to possibly take over SEO/marketing work for a site. The guy is currently using a paid service at http://refreshseo.com/ and paying around $80 p/m. From what I can make out, all RefreshSEO does is automatically generate keyword-rich content pages and attach them to the site. These pages aren't actually linked to from within the site. So I'm wondering two things: has anyone had any experience with this particular company or similar ones, and has it been worth it? And how do you think the recent Google Panda updates impact this kind of strategy? Thanks in advance

    Read the article

  • What is the right way to do this HTML: header with icon linked

    - by Hell Awaits
    I know how to make these examples look and behave the same, but I would like to know which is the right way to build the HTML structure.

        <a><img><h1></h1></a> - looks wrong, because a block element (the h1) sits inside an inline element (the a)
        <a><img></a><h1><a></a></h1> - the same a element is defined twice, and I'm not sure about markup inside headers

    Any other solutions?
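
    A hedged note: under HTML5's content model an <a> element may wrap flow content, so the first structure is actually valid if the page uses the HTML5 doctype (it was only illegal in HTML4/XHTML). A sketch:

        <!DOCTYPE html>
        <header>
          <a href="/">
            <img src="logo.png" alt="Site logo">
            <h1>Site name</h1>
          </a>
        </header>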

    Read the article

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my maximum of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and found that it used to be here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts, from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that; in fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page with all the links, or possibly of all the items linked to. Also, I know IIS7's SEO Toolkit 1.0 suggests that pages should have no more than 250 links.

    Read the article

  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) too. Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then staying that way, but I can't for the life of me figure out why. Here's what I've checked:

    Header length is being sent, and transmission looks okay (wget sample below):

        wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00--  http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'

    Images are not being served gzipped (apache conf below):

        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    The site is www.superiorlivestock.com, and here's a sample of the bad page load (screenshot omitted). Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?

    Read the article

  • Integrating eBay and PayPal inventory

    - by JW01
    Say I have an item for sale on eBay and the same item for sale on another site via PayPal. Is it possible to have sales on one site reflected in the inventory of the other, and vice versa? In other words, if I have ten items for sale and one sells on either site, both sites should show nine items left. I know that PayPal has an API for setting the inventory level of an item associated with a button, and eBay also has an API for controlling an item's inventory. I'm wondering if anyone has tried to integrate the two.

    Read the article

  • Google Author information still hasn't displayed my details in search results

    - by Jayapal Chandran
    I followed the instructions below, but I am still not sure whether I have completely understood them.
    http://www.google.com/support/webmasters/bin/answer.py?answer=1408986
    http://www.labnol.org/internet/author-profile-in-google/19775/
    I did the above last week and did not find my picture in Google search results. First I added a Google+ link to certain web pages, and in my Google+ profile I added those pages which carried the Google+ anchor link with the rel=author tag. After updating, I used the following to verify:
    http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fvikku.info%2Fcodesnippets%2Fphp%2F&view=
    You can see that my pic appears at the right; here is a screenshot. So what am I missing? Why is it not in the search results? The author of labnol.org said it would take 3 days for my profile photo to appear. Google has stated the following: "Note that there is no guarantee that a Rich Snippet will be shown for this page on actual search results. For more details, see the FAQ (http://knol.google.com/k/google-rich-snippets-tips-and-tricks#Frequently_Asked_Questions)." Fingers crossed. Thoughtful.
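
    For reference, a sketch of the two-way link that authorship verification expects (the profile ID is a placeholder):

        <!-- On the article page: link to the Google+ profile -->
        <a rel="author" href="https://plus.google.com/112345678901234567890">My Google+ profile</a>

    The profile's "Contributor to" section must then link back to the site; even with both links in place, the photo only appears once Google re-crawls and decides to show it, which, as the quoted note says, is not guaranteed.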

    Read the article

  • Serve most of a domain with Apache, but use mod_proxy to serve some URLs from Lighttpd

    - by Alex Pineda
    So we wish to host some pages on a new server with apache2, and embed some of our old content & functionality from another server running lighttpd in an iframe. I'm looking at this configuration from the Apache docs (http://httpd.apache.org/docs/2.2/vhosts/examples.html#page-header) under "Using Virtual_host and mod_proxy together":

        <VirtualHost *:*>
            ProxyPreserveHost On
            ProxyPass / http://192.168.111.2/
            ProxyPassReverse / http://192.168.111.2/
            ServerName hostname.example.com
        </VirtualHost>

    The only issue is that I want to proxy only on a subdomain, or even better, keep the top domain and proxy only if the URL contains a particular path, e.g. "/myprocess.php". So in essence the DNS will point to apache2 as the "master router".
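
    A hedged sketch of the path-scoped variant: ProxyPass can be limited to a URL prefix, so only that path is handed to the lighttpd box and everything else is served by Apache normally (the backend IP is the asker's own; the vhost details are assumptions):

        <VirtualHost *:80>
            ServerName hostname.example.com
            # Only this path is handed to the lighttpd backend:
            ProxyPass /myprocess.php http://192.168.111.2/myprocess.php
            ProxyPassReverse /myprocess.php http://192.168.111.2/myprocess.php
            # Everything else falls through to Apache's normal handling.
        </VirtualHost>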

    Read the article

  • Canonicalization of single, small pages like reviews or product categories

    - by Valorized
    In general I pretty much like the idea of canonicalization, and in most cases Google explains the possible procedures clearly. For example: if I have duplicates because of parameters (e.g. &sort=desc), it's clear to point the canonical at the main page, via the link element provided in the head tag. However, I'm wondering how to handle small, not to say thin-content, sites. What's my definition of a small site? An example: on one of my main sites we use a directory-based URL structure:

        example.com/ (root)
        example.com/category-abc/
        example.com/category-abc/produkt-xy/

    Moreover we provide one page that includes all products:

        example.com/all-categories/ (lists all products the same way as in the categories)

    In the case of reviews, we use a similar structure:

        example.com/reviews/product-xy/ shows all reviews for one certain product
        example.com/reviews/product-xy/abc-your-product-is-great/ shows one certain review
        example.com/reviews/ shows all reviews for all products (latest first)

    Let's make it even more complicated: on every product page, the latest 2 reviews appear at the end. So you see, a lot of potential duplicates.

    Q1: Should I create canonicals for
    a) example.com/category-abc/ to example.com/all-categories/
    b) example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/
    or none of them?

    Q2: Can I link the collection of categories (all-categories/) and the collection of all reviews (reviews/ and reviews/product-xy/) to the single category and the single review, respectively? Example: example.com/reviews/ includes, let's say, 100 reviews. Can I somehow use markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection, you should rather prefer indexing every single review as a single page!"? In HTML it might be something like this (which, of course, does not work; it's only to show what I mean):

        <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div>

    Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page he will have to search, and might stop out of frustration; the second result, however, might lead to a conversion. The same applies to categories: if the user is searching for category Z, he might land on the all-categories page and have to scroll down to the (last) category to find what he searched for (Z). So what's best practice? What should I do?
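
    For the mechanics, a sketch of the page-level link element involved in all of these choices, shown here for the parameter duplicate from the intro. Note that rel="canonical" exists only at page level, so the per-review <div> markup imagined above has no working equivalent:

        <!-- in the <head> of example.com/category-abc/?sort=desc -->
        <link rel="canonical" href="http://example.com/category-abc/" />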

    Read the article

  • Difference between using third-party social buttons and integrating each social button ourselves

    - by Jayapal Chandran
    I wanted to add specific social buttons to my articles. I used ShareThis, which provides a Facebook Like button, a Google Plus button, etc. by default, whereas in other articles of different modules I had integrated the Facebook Like button myself by following the documentation (including markup in the head section). What is the difference between integrating it manually, with all that markup, and using third-party code? Will it affect SEO or confer any other advantage with the respective social networking site (here, for example, Facebook and Google Plus)?

    Read the article

  • Is there any way to lock down Photoshop to prevent designers from creating styles that cannot be rendered in CSS?

    - by Hugo Rodger-Brown
    Photoshop is a much more powerful design tool than CSS, and given free rein to design at will, designers will often tweak things like font settings to a degree that cannot be recreated on the web. Is there any way to lock down Photoshop, or perhaps run an equivalent of the Office 2010 "Compatibility Report", that shows the designer where they have designed something that cannot be rendered on a web page? Something like the old-school "web-safe" colour palette, but for an overall design.

    Read the article

  • Unindexing my Tumblr blog's content and moving it to another Tumblr blog

    - by sam
    I've been writing a Tumblr blog for the past year or so and have written about 300 articles, but now I need to move the blog to another site. (Before, it was running under blog.mysite.com, and I now want it to run under blog.my*new*site.com.) I want to keep the archived articles and have them on the new site, so what I was hoping to do was: export the blog from Tumblr, go into Webmaster Tools and remove all the blog's indexed URLs from Google, then make a new Tumblr blog and import the posts. Would Google see this as new content, since I've deleted their indexed copy? Or could I just move the mapping of the Tumblr blog to the new subdomain? But in doing that I would lose all the PR, and it would still look like duplicate content. What's the best way to approach this?

    Read the article
