Search Results

Search found 4147 results on 166 pages for 'meta keywords'.

Page 27/166

  • C# HtmlRequest and JavaScript

    - by TechByte
    Is there any way to get, with C#, the same page source that Firebug shows? My source code is shown below. When I saved this file and loaded it in Firefox, the source I retrieved didn't match what Firebug displays. Are there any libraries that can help me get all the data?

        <html lang="pl" xml:lang="pl" xmlns="http://www.w3.org/1999/xhtml"
              xmlns:fb="http://www.facebook.com/2008/fbml" xmlns:og="http://ogp.me/ns#">
        <head>
          <meta property="og:title" content="Mitchel Official FanPage "/>
          <meta property="og:type" content="website"/>
          <meta property="og:url" content="http://facebook.com/MitchelOfficiall"/>
          <meta property="og:locale" content="pl_PL"/>
          <meta property="og:site_name" content="Mitchel Official FanPage "/>
          <meta property="og:description" content="Mitchel Official FanPage "/>
          <meta content="pl" http-equiv="Content-Language" />
          <meta content="text/html; charset=UTF-8" http-equiv="Content-Type" />
          <meta content="text/javascript" http-equiv="Content-Script-Type" />
          <title>Mitchel Official FanPage </title>
          <style> body { overflow: hidden; } </style>
        </head>
        <body>
          <div id="fb-root"> </div>
          <script>
            window.fbAsyncInit = function() {
              FB.init({appId: '0', status: true, cookie: true, xfbml: true});
              FB.Event.subscribe("edge.create", function(targetUrl) { save(1); });
              FB.Event.subscribe("edge.remove", function(targetUrl) { save(0); });
            };
            (function() {
              var e = document.createElement('script');
              e.async = true;
              e.src = 'http://connect.facebook.net/pl_PL/all.js';
              document.getElementById('fb-root').appendChild(e);
            }());
            function save(x) { setTimeout('save2('+x+')',1000); }
            function save2(x) {
              document.location='http://www.likeplus.eu/post/saver?l=pl&t=112499|71272|00cb6c3576a64115878087272c970f751a0418f2e3d7440ca7c84c70b1d91ddb|8904e5cf28544785a42366aa89401017|'+x+'&h='+document.domain;
            }
          </script>
          <div class="fb-like" data-href="http://facebook.com/MitchelOfficiall" data-layout="button_count"
               data-send="false" data-show-faces="false" data-width="120"></div>
        </body>
        </html>
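    For context: a plain HTTP request returns only the markup the server sent, while Firebug shows the live DOM after the page's scripts have run, so the two will never match for a page like this. A minimal sketch of the raw fetch (the URL is a placeholder):

        using System;
        using System.Net;

        class RawSourceDemo
        {
            static void Main()
            {
                // WebClient downloads the served markup only -- no scripts run,
                // so nodes created dynamically (and visible in Firebug) are absent.
                using (var client = new WebClient())
                {
                    client.Encoding = System.Text.Encoding.UTF8;
                    string raw = client.DownloadString("http://example.com/"); // placeholder URL
                    Console.WriteLine(raw);
                }
            }
        }

    To capture the post-script DOM you need something that actually executes the JavaScript, for example the WinForms WebBrowser control or a headless browser, rather than a plain download.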

    Read the article

  • Regex Searching in Emacs

    - by Inaimathi
    I'm trying to write some Elisp code to format a bunch of legacy files. The idea is that if a file contains a section like "<meta name=\"keywords\" content=\"\\(.*?\\)\" />", I want to insert a section that contains the existing keywords. If that section is not found, I want to insert my own default keywords into the same section. I've got the following function:

        (defun get-keywords ()
          (re-search-forward "<meta name=\"keywords\" content=\"\\(.*?\\)\" />")
          (goto-char 0) ;The section I'm inserting will be at the beginning of the file
          (or (match-string 1) "Rubber duckies and cute ponies")) ;;or whatever the default keywords are

    When the function fails to find its target, it signals Search failed: "[regex here]" and prevents the rest of the evaluation. Is there a way to have it return the default string and ignore the error?
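    (In Elisp the usual escape hatch is re-search-forward's NOERROR argument, which makes a failed search return nil instead of signaling.) For readers more at home in C#, the fall-back-on-no-match pattern being asked for looks roughly like this illustrative helper, which is not part of the question:

        using System.Text.RegularExpressions;

        static class KeywordScan
        {
            // Return the captured keywords, or a default string when nothing
            // matches, instead of raising an error on failure.
            public static string GetKeywords(string html)
            {
                var m = Regex.Match(html, "<meta name=\"keywords\" content=\"(.*?)\" />");
                return m.Success ? m.Groups[1].Value : "Rubber duckies and cute ponies";
            }
        }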

    Read the article

  • How to successfully implement og:image for LinkedIn

    - by Sabo
    THE PROBLEM: I am trying, without much success, to implement the Open Graph image on this site: http://www.guarenty-group.com/cz/ The homepage completely bypasses the og:image tag, while internal pages read all the images from the site and offer the og:image only as the last option. Other social networks work fine on both the internal pages and the homepage. THE CONFIGURATION: I have no share buttons or the like; all I want is to be able to share the link via my profile. The image is well over 300x300px: http://guarenty-group.com/img/gg_seal.png Here is what my head tag looks like:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Guarenty Group : Pojištení pro nájemce a pronajímatelé</title>
        <meta name="keywords" content="" />
        <meta name="description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání." />
        <meta name="image_src" content="http://guarenty-group.com/img/gg_seal.png" />
        <meta name="image_url" content="http://guarenty-group.com/img/gg_seal.png" />
        <meta property="og:title" content="Pojištení pro nájemce a pronajímatelé" />
        <meta property="og:url" content="http://guarenty-group.com/cz/" />
        <meta property="og:image" content="http://guarenty-group.com/img/gg_seal.png" />
        <meta property="og:description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání [...]" />
        ...
        </head>

    THE TESTING RESULTS: To get around the cache I tested the site as http://www.guarenty-group.com/cz/?try=N, changing N every time. The strange thing is that the images found differ for each value of N: sometimes there is no image, sometimes 1, 2 or 3 images, but each time a different set. In no case, however, could I find the image specified in og:image! MY QUESTIONS: https://developer.linkedin.com/documents/setting-display-tags-shares says one thing, and the staff on the support forum say "over 300". Does anyone know what the official minimum dimensions of the image are (both width and height)? Can an image be too large? Should I use the xmlns or not, or does it not matter? What are the maximum (and minimum) lengths for the og:title and og:description tags? Any other suggestion is of course welcome :) Thanks in advance, cheers~
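    One way to see exactly what a crawler is served, independent of LinkedIn's cache, is to fetch the page yourself and list the og:image tags it contains. A rough C# sketch (not from the question; the regex is deliberately naive):

        using System;
        using System.Net;
        using System.Text.RegularExpressions;

        class OgImageCheck
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Download the page as a crawler would and print each og:image URL.
                    string html = client.DownloadString("http://www.guarenty-group.com/cz/");
                    foreach (Match m in Regex.Matches(html,
                        "<meta[^>]+property=\"og:image\"[^>]+content=\"([^\"]+)\""))
                        Console.WriteLine(m.Groups[1].Value);
                }
            }
        }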

    Read the article

  • JavaScript DEBUG Issue

    - by Rachel
    I am trying to debug this piece of code:

        $(document).track(
        {
            'module' : 'Omniture',
            'event' : 'instant',
            'args' : {
                'linkTrackVars' : 'products,events,eVar31,eVar32,eVar33,eVar34,eVar35,eVar36,eVar37',
                'linkTrackEvents' : '',
                'linkType' : 'o',
                'linkName' : 'SPM Click'
                'svalues' : {
                    'products' : ';OFFERID1[,;OFFERID2]', // Product added to cart
                    'events' : 'scAdd', // Cart event
                    'eVar31' : this.meta.offer_id,
                    'eVar32' : this.meta.family,
                    'eVar33' : this.meta.component_id,
                    'eVar34' : this.meta.ruleset_id,
                    'eVar35' : this.meta.in_network, // <in-network|out-of-network>
                    'eVar36' : this.meta.customer, // <customer|non-customer>
                    'eVar37' : this.page_tag_spm
                },
            },
            'defer' : '0';
        },
        );

    I am getting the following error message: missing } after property list 'svalues' : {\n Any clue?

    Read the article

  • FACEBOOK LINTER ERROR: value for property 'og:image:url' could not be parsed as type 'url'

    - by Martin Devarda
    I've read all the threads on Stack Overflow about this issue, but my problem persists. THE PROBLEM IS ON THIS PAGE: http://www.organirama.it/minisite-demo/001.html THE PAGE CONTAINS THESE TAGS:

        <meta property="og:title" content="A wonderful page" />
        <meta property="og:type" content="video.movie" />
        <meta property="og:url" content="http://www.organirama.com/minisite-demo/001.html" />
        <meta property="og:image" content="http:/www.organirama.com/minisite-demo/photos-small/001.png" />
        <meta property="og:site_name" content="Organirama"/>
        <meta property="fb:admins" content="1468447924"/>

    LINTER ERROR: Object at URL 'http://www.organirama.com/minisite-demo/001.html' of type 'video.movie' is invalid because the given value 'http:/www.organirama.com/minisite-demo/photos-small/001.png' for property 'og:image:url' could not be parsed as type 'url'. WHAT I DISCOVERED: The problem seems somehow related to the domain. In fact, if I make og:image point to another image on another domain, everything works.

    Read the article

  • WordPress SEO Plugins to make your Blog Search Engine Friendly

    - by Vaibhav
    WordPress is the most common blogging system in use today, and its use as a CMS is also widespread. With hundreds of millions of sites using WordPress, getting SEO right for your WordPress-based blog or site is very important. We get regular queries from people who want search engine optimisation for their site or blog made with WordPress. Here is a list of 16 of the best WordPress plug-ins that can help you achieve better rankings:

    All in One SEO Pack - The most popular of all the SEO plugins for WordPress. It is easy to use and is compatible with most WordPress plugins. It works as a complete SEO package - automatically generating META tags and optimizing your titles for search engines while avoiding duplicate content. You can also set META tags manually (meta title, meta description and meta keywords) for all pages and posts on your website.

    HeadSpace2 - HeadSpace2 is available in different languages. You can manage a wide range of SEO tasks related to metadata, tag your posts, and set custom descriptions and titles, so your pages can rank for the intended relevance on search engines, and you can load different settings for different pages.

    Platinum SEO Plugin - Automatic 301 redirects on permalink changes, META tag generation, duplicate-content avoidance, SEO optimization of post and page titles, and lots of other features.

    TGFI.net SEO WordPress Plugin - A modified version of the All in One SEO Pack. It has some unique features over All in One SEO: it generates titles, meta descriptions and meta keywords automatically when overrides are not present.

    Google XML Sitemaps - Sitemaps generated by this tool are supported by Google, Yahoo, Bing, and Ask. Sitemaps make indexing of web pages easier for web crawlers, which can retrieve the complete structure of a site and more information through them. The plugin notifies all major search engines every time you create a new post.

    Sitemap Generator - Generates a highly customizable sitemap for your WordPress site. You can choose what to show and what not to show, and you can list the items in the order of your choice. It supports pages, permalinks and multi-level categories.

    SEO Slugs - Generates more search-engine-friendly URLs for your site. Slugs are the filenames assigned to your posts; this plugin removes common words like 'a', 'the', 'in', 'what' and 'you' from the slug that is automatically assigned to your post.

    SEO Post Links - A plugin similar to SEO Slugs: it removes unnecessary keywords from the slug to make it short and SEO friendly, and lets you fix the number of characters in your post slug.

    Automatic SEO Links - With this tool you can create automatic links in your posts, for internal linking or external linking. Just select your words, anchor text, target URL and the nature of the links (dofollow/nofollow), and the plugin will replace the matches found in posts.

    WP Backlinks - A helpful plugin for link exchange: whenever a webmaster submits a link for exchange, the plugin will spider the webmaster's site for a reciprocal link, and if everything checks out, your link will be exchanged.

    SEO Title Tag - Lets you optimize the title tags of your WordPress blog. You can also override a title tag with a custom title; mass editing and title tags for 404 pages are the main features of this plugin.

    404 SEO Plugin - Lets you customize the 404 page of your site, giving a customized error message and links to relevant pages of your site.

    Redirection - A powerful plugin to manage 301 redirections and the logs related to them. You can track 404 errors and keep a log of all redirected URLs, and the plugin can redirect a post automatically when its URL changes.

    AddToAny - Helps your readers share, save, email and bookmark your posts and pages. It supports more than a hundred social bookmarking, networking and sharing sites.

    SEO Friendly Images - Makes the images on your site SEO friendly by updating them with proper TITLE and ALT attributes.

    Robots Meta - Prevents search engines from indexing comments on your posts and your login and admin pages. It also allows you to add tags for individual pages.

    Read the article

  • How to fix some damages from site hack?

    - by Towhid
    My site was hacked. I found the vulnerability, fixed it, and removed the shell scripts. But the hacker had uploaded thousands of web pages to my web server. After I removed those pages I got over 4 thousand "Not Found" pages on my site (all linked from an external free domain and host, which is removed now). Hundreds of keywords had also been added to my site. After 3 weeks I can still see keywords from the removed pages in my Google Webmaster Tools. I had the 1st result on Google search for certain keywords, but now I am on the 3rd page for the same keywords. 50% of my traffic was from Google, which has now dropped to 6%. How can I fix both the "Not Found" pages problem and the new useless keywords? And will that be enough to get me back to the first result on Google? P.S.: 1) Both the vulnerability and the uploaded files are certainly removed. 2) My site is not infected; checked on Google Webmaster Tools and a few other security web scan tools. 3) All files had been uploaded to one directory, so I got something like site.com/hacked/page1.html and site.com/hacked/webpage2.html
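    If the server side allows it, one often-recommended step is to answer those removed URLs with 410 Gone instead of 404, which tells crawlers the pages are permanently removed. A hedged sketch for an ASP.NET site (an assumption; the question doesn't name the platform), using the /hacked/ directory mentioned above:

        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                // Everything the attacker uploaded lived under one directory,
                // so a single prefix check covers all of it.
                if (Request.Path.StartsWith("/hacked/", StringComparison.OrdinalIgnoreCase))
                {
                    Response.StatusCode = 410; // Gone -- a stronger signal than 404
                    Response.End();
                }
            }
        }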

    Read the article

  • Need to sanity-check my .htaccess, especially Limit GET POST line for Google repellent

    - by jose
    I need a sanity check on this .htaccess (from a WordPress site) I inherited from a 5+ month old site. What's the symptom? Google and Bing crawl, but don't index any of the pages. Let me be clear: I'm not mad about "not ranking high." I think something is (accidentally) rejecting search engine indexing. I am not an expert on .htaccess, but one part especially looked funny: the Limit GET POST block. Is it not weird to have both Allow and Deny all, with no parameters? Also, I've ruled out robots.txt, but if I were you I'd want to see it, so here it is:

        User-agent: *
        Crawl-delay: 30

    And here's the more suspect .htaccess:

        # temp redirect wordpress content feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymousblog [R=302,NC,L]
        </IfModule>

        # temp redirect wordpress comment feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^comments/feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymous_comments [R=302,NC,L]
        </IfModule>

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

        <Limit GET POST>
        order deny,allow
        deny from all
        allow from all
        </Limit>

        <Limit PUT DELETE>
        order deny,allow
        deny from all
        </Limit>

        php_value memory_limit 32M

    Adding header by request:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta name="robots" content="noindex,nofollow" />
        <meta name="description" content="buncha junk i've deleted." />
        <meta name="keywords" content="keywords i've deleted" />
        <meta name="viewport" content="width=device-width" />

    Read the article

  • Keyword Research - Major Help in SEO Campaign

    A keyword research service specializes in keyword research, and they know specifically which keywords will and will not work for improving your site traffic. They believe in supplying quality keywords, thus helping to guarantee that the site has original and innovative information on all of its pages.

    Read the article

  • Pages in IE render differently when served through the ASP.NET Development server and Production Server

    - by rajbk
    You see differences in the way IE renders your web application locally on the ASP.NET Development Server compared to your production server. Comparing the responses from both servers, including response headers and CSS, shows no difference. The issue may be caused by a setting in IE. In IE, go to Tools -> Compatibility View Settings. When the checkbox "Display intranet sites in Compatibility View" is turned on, it forces IE8 to display web application content in a way similar to how Internet Explorer 7 handles standards-mode web pages. Since your local web server is considered to be in the intranet zone, IE uses Compatibility View to render your pages. While you could uncheck this setting, or propagate the change to all developers through group policy settings, a different way is described below. To force IE to mimic the behavior of a certain version of IE when rendering pages, you use the meta element to include an "X-UA-Compatible" http-equiv header in your web page, or have it sent as part of the headers by adding it to your web.config file. The values are listed below:

        <meta http-equiv="X-UA-Compatible" content="IE=4">   <!-- IE5 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=7.5"> <!-- IE7 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=100"> <!-- IE8 mode -->
        <meta http-equiv="X-UA-Compatible" content="IE=a">   <!-- IE5 mode -->

    This value can also be set in web.config like so:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <httpProtocol>
              <customHeaders>
                <clear />
                <add name="X-UA-Compatible" value="IE=EmulateIE7" />
              </customHeaders>
            </httpProtocol>
          </system.webServer>
        </configuration>

    The setting can also be added in the IIS metabase, as described here. Similarly, you can do the same in Apache by adding the directive in httpd.conf:

        <Location /store>
        Header set X-UA-Compatible "IE=EmulateIE7"
        </Location>

    Even though it can be done at a site level, I recommend you do it per application to avoid confusing the developers. References: Defining Document Compatibility, Implementing the META Switch on IIS, Implementing the META Switch on Apache
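    The same header can also be emitted from application code rather than configuration; a minimal sketch (the page class is hypothetical, not from the article):

        using System;
        using System.Web.UI;

        public partial class StorePage : Page // hypothetical page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Per-page alternative to the web.config customHeaders entry above.
                Response.AddHeader("X-UA-Compatible", "IE=EmulateIE7");
            }
        }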

    Read the article

  • facebook share and opengraph

    - by hannit cohen
    I'm managing a video blog. The blog contains a main YouTube video on the homepage and several thumbnail images of other videos elsewhere. I have a share button for the main page and the single-post pages, and have added Open Graph tags to specify the image to use. For some reason Facebook ignores my Open Graph image and uses other images it finds in the page... The header looks like this (for the homepage):

        <!-- Facebook Opengraph -->
        <meta property="fb:app_id" content="155967927783206" />
        <meta property="og:url" content="http://mttv.co.il/2010/12/%d7%9e%d7%a4%d7%92%d7%a9-%d7%a1%d7%99%d7%99%d7%a2%d7%95%d7%aa-%d7%92%d7%a0%d7%a0%d7%95%d7%aa-%d7%9c%d7%93%d7%95%d7%a8%d7%aa%d7%99%d7%94%d7%9d-1959-2010-%d7%91%d7%9e%d7%a2%d7%9c%d7%95%d7%aa/"/>
        <meta property="og:title" content="???? ?????? ????? ???????? 1959-2010 ??????" />
        <meta property="og:description" content="???? ????? ?? ?????? ????? ?????? ????????? ???? ?????? 1959 ??? 2010 ?????? ????? ??????? ???' ??? ??? ????? ???? ??????? ???????? ?????? ?&quot;? ???? ??? ?&quot;? ????? ?????? ????? ???? 08.12.10 " />
        <meta property="og:type" content="article" />
        <meta property="og:image" content="http://www.mttv.co.il/wp-content/uploads/2010/12/Gvi91UEjCAw_mid-135x77.jpg" />

    The website address is: http://mttv.co.il Any help will be appreciated.

    Read the article

  • Keyword 101

    Keywords are vital to your website's success and will attract customers. People use keywords to find products or information online. If you use keywords correctly, they can take you to the top of search engines like Google for free, giving you free targeted traffic that is interested in what you have to say or sell.

    Read the article

  • Not Provided count has reached 80%! But what about the remaining 20%?

    - by Rajesh Magar
    I am up to date on all the "not provided" reasons, as Google has encrypted all its searches, but here is the little question banging again and again in my head: if all searches are encrypted with the HTTPS protocol, then how is Google Analytics still able to track some (20%) of the organic keyword details? I mean, there are still some keywords appearing in my organic keywords section. So how does Google Analytics track, or bypass, that HTTPS thing? Thanks in advance!

    Read the article

  • How to properly remove URLs from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2000 or so pages that we want indexed - even though navigating to the diluting pages still works - will Google re-index the site with only those 2000 pages, dropping the superfluous ones? For example, I want to keep roughly 2000 of the following:

        www.mysite.com/some-search-term-1/some-good-keywords
        www.mysite.com/some-search-term-2/some-more-good-keywords

    And remove several thousand of the following that have already been indexed:

        www.mysite.com/some-search-term-xx/some-poor-keywords
        www.mysite.com/some-search-term-xx/some-poor-more-keywords

    These pages are not actually "removed", as navigating to these URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want, say, 2000 to be re-indexed and retained, and the others removed (without having to do this manually). Thanks.
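    One common technique for exactly this situation - pages that must stay navigable but should drop out of the index - is a robots meta tag. A hedged MVC sketch (the attribute name and ViewBag key are illustrative, not from the question); the layout would render <meta name="robots" content="@ViewBag.RobotsMeta" /> whenever the value is set:

        using System.Web.Mvc;

        // Apply to the actions behind the superfluous URLs; crawlers then
        // drop those pages over time while visitors can still load them.
        public class NoIndexAttribute : ActionFilterAttribute
        {
            public override void OnResultExecuting(ResultExecutingContext filterContext)
            {
                filterContext.Controller.ViewBag.RobotsMeta = "noindex,follow";
                base.OnResultExecuting(filterContext);
            }
        }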

    Read the article

  • Ensure Future Browser Compatibility - SharePoint Branding

    - by KunaalKapoor
    HTML and Future Internet Explorer Compatibility with SharePoint

    As new versions of Internet Explorer are released, the way the browser renders HTML can change over time. To address the possibility of changes, Microsoft uses the X-UA-Compatible META tag, which targets HTML markup at a specific version of Internet Explorer. The default SharePoint 2010 master pages force current and future versions of Internet Explorer to render HTML in Internet Explorer 8 mode with the following markup:

        <meta http-equiv="X-UA-Compatible" content="IE=8" />

    The Adventure Works Travel HTML includes this META tag to help ensure future Internet Explorer versions will display the SharePoint HTML properly. For more information about Internet Explorer standards mode, see META Tags and Locking in Future Compatibility.

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to take care of some of the details automatically.

    1. Determine Access

    Why - Even if content does not need to be restricted for security reasons, it is helpful to apply access rights so that the content ends up visible only to the users it relates to. This greatly improves the user experience. For instance, if your team is working on a group project, most of your fellow employees do not need to see the content being worked on for that project.

    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security as well as metadata policies for even the most unorganized users. The default settings at the parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.

    Impact - Users can find information with less effort, as they are only exposed to what they need for their work and can leverage advanced search features that take advantage of the metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next two posts.

    2. Assign Workflow (optional depending on nature of content)

    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.

    How - Oracle's Universal Content Management offers two ways of routing content through workflow without much effort: workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.

    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.

    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project's content. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the management phase of the content lifecycle in our next installment.

    Read the article

  • Implementing ASP.NET 4.0 Page.MetaDescription Property

    Before ASP.NET 4.0, you had to code your meta description tags manually. The meta description tag, though no longer used by major search engines in their ranking algorithms, is still an important factor for increasing website traffic. Bear in mind that searchers coming from search engines (such as Google) will only click on a result if the meta description is relevant to their query. If you want to increase your organic traffic (traffic coming from search engines), then one thing you can easily improve is the meta descriptions. In ASP.NET 4.0, this can be easily implemented using...
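    The property the article refers to can be set from code-behind (it can also be set declaratively via the @ Page directive); a minimal sketch, where the page class and the string values are placeholders:

        using System;
        using System.Web.UI;

        public partial class ProductPage : Page // hypothetical page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Renders <meta name="description" ... /> and <meta name="keywords" ... />
                // into the page head at runtime (new in ASP.NET 4.0).
                MetaDescription = "A concise, query-relevant summary of this page.";
                MetaKeywords = "meta description, asp.net 4.0, seo";
            }
        }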

    Read the article

  • Keyword Generator Tool Gets You Ahead of the Competition

    A keyword generator tool provides ideas that website owners and search engine optimizers use for site and search engine optimization. Key-phrase generators rely on search-query popularity, from introductory keywords through more complex keyword-search management, to drive more traffic to a website. The tool maximizes prospective, potentially high-traffic keywords and integrates them with your site's campaign techniques. A keyword generator tool allows you to manage and add "exact match" and "phrase match" keywords to your lists, lets you create misspellings and combine and reverse keywords, and automatically calculates the ad-group focus score of your keyword lists.

    Read the article

  • Keyword Usage For Maximum SEO

    After analyzing and ranking your keywords for profitability, you now need to know what to do with them. The keywords you choose are only useful if you use them correctly. You don't need to know computer programming or even how the search engines work; just follow these simple tips and you will get the maximum search engine optimization benefit from your keywords.

    Read the article

  • Adding Facebook Open Graph Tags to an MVC Application

    - by amaniar
    If you have any kind of share functionality within your application, it's good practice to add the basic Facebook Open Graph tags to the header of all pages. For an MVC application this can be as simple as adding these tags to the head section of the Layout file:

        <head>
            <title>@ViewBag.Title</title>
            <meta property="og:title" content="@ViewBag.FacebookTitle" />
            <meta property="og:type" content="website"/>
            <meta property="og:url" content="@ViewBag.FacebookUrl"/>
            <meta property="og:image" content="@ViewBag.FacebookImage"/>
            <meta property="og:site_name" content="Site Name"/>
            <meta property="og:description" content="@ViewBag.FacebookDescription"/>
        </head>

    These ViewBag properties can then be populated from any action:

        public ActionResult MyAction()
        {
            ViewBag.FacebookDescription = "My Actions Description";
            ViewBag.FacebookUrl = "My Full Url";
            ViewBag.FacebookTitle = "My Actions Title";
            ViewBag.FacebookImage = "My Actions Social Image";
            ....
        }

    You might want to populate these ViewBag properties with default values when the actions don't populate them. This can be done in two places.

    1. In the Layout itself (check the ViewBag properties and set them if they are empty):

        @{
            ViewBag.FacebookTitle = ViewBag.FacebookTitle ?? "My Default Title";
            ViewBag.FacebookUrl = ViewBag.FacebookUrl ?? HttpContext.Current.Request.RawUrl;
            ViewBag.FacebookImage = ViewBag.FacebookImage ?? "http://www.mysite.com/images/logo_main.png";
            ViewBag.FacebookDescription = ViewBag.FacebookDescription ?? "My Default Description";
        }

    2. Create an action filter and add it to all controllers or your base controller:

        public class FacebookActionFilterAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var viewBag = filterContext.Controller.ViewBag;
                viewBag.FacebookDescription = "My Actions Description";
                viewBag.FacebookUrl = "My Full Url";
                viewBag.FacebookTitle = "My Actions Title";
                viewBag.FacebookImage = "My Actions Social Image";
                base.OnActionExecuting(filterContext);
            }
        }

    Add the attribute to your BaseController:

        [FacebookActionFilter]
        public class HomeController : Controller
        {
            ....
        }

    Read the article

  • How Important is Keyword Research to My Marketing Campaign?

    Engaging in a good deal of keyword research will help you ensure that your website obtains the type of attention you desire. The way the internet is set up these days, everything is done through keywords; if your website content does not contain relevant keywords, then your website may not be able to get the respectable attention you desire. Keywords are commonly defined as a word or phrase that describes the type of product or service you are opting to promote.

    Read the article

  • The Complexity of SEO - Made Easy For You

    The very first step in the search engine optimization process is deciding which keywords you want to optimize your website for - in other words, which keywords you want your website to be ranked for. You want keywords that have a large number of searches but few competing results. The easiest and most effective way to find these is to look into Google's keyword tool.

    Read the article

  • Quick Search Engine Optimization Strategies

    One of the quick strategies you can use to help a search engine like Google or Yahoo find your web site is to change your basic keywords into long-tail keywords. If you are like me, you probably did not even know that keywords had a tail. Well apparently, they have either a short or a long one.

    Read the article
