Search Results

Search found 4781 results on 192 pages for 'seo audit'.


  • What messaging technologies are available on Windows CE for guaranteed message delivery?

    - by Aidanapword
    We are building a Windows CE (6.0 R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond:

    - MSMQ
    - a proprietary solution (what our prototype device is using)
    - AMQP (I have not found any RabbitMQ clients for CE, for example)

    ... are there any others? We will be transporting sensitive data (who isn't?!) over a public network, and large-scale options are required. Anything running on an embedded device will be performance sensitive too.

    Read the article

  • Canonical Link as a Way of Fighting Scrapers?

    - by James D
    Hi, Let's say several external sites are scraping/harvesting your content and posting it as their own. Let's also say that you maintain a single unique/permanent URL for each piece of content, so that content aliasing (on your site) is never an issue. Is there any value from an SEO perspective to including a canonical link in your header anyway, such that when your site is "scraped", the canonical indication is injected into whatever site is stealing your content (assuming they harvest the raw HTML rather than going in through RSS etc.)? I've heard different things about the behavior of cross-site canonical links, from "they're ignored" to "behavior undefined" to "it can't hurt" to "sure that's exactly what canonical is intended for". My impression was that canonical was a good way of dealing with intra-site but not necessarily inter-site aliasing. Thanks~
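
    For reference, the header element in question looks like this (the URL is a hypothetical example):

        <link rel="canonical" href="http://www.example.com/articles/my-unique-article" />

    If a scraper copies the raw HTML wholesale, this tag travels with it and continues to point at the original URL.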

    Read the article

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search-result directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings? Edit: I should add that I'm reading up on the robots.txt protocol, but it relies on co-operation from the web crawler, and I'd also like to deter any data-mining users who will ignore the robots.txt file. I appreciate any help!
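
    For what it's worth, the cooperative side of this is simple; a minimal robots.txt sketch, assuming the results live under a /search/ path (the path is an assumption):

        User-agent: *
        Disallow: /search/

    A meta tag on the page itself is another cooperative signal:

        <meta name="robots" content="noindex" />

    As noted above, neither stops a misbehaving bot; blocking those requires server-side measures such as authentication or rate limiting.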

    Read the article

  • How do I create SEO-friendly URLs in ASP.NET MVC?

    - by blesh
    I'm getting myself acquainted with ASP.NET MVC, and I was trying to accomplish some common tasks I've accomplished in the past with WebForms. One of the most common tasks I need to do is create SEO-friendly URLs, which in the past has meant URL rewriting to build the query string into the directory path, for example:

        www.somesite.com/productid/1234/widget

    rather than:

        www.somesite.com?productid=1234&name=widget

    What method do I use to accomplish this in ASP.NET MVC? I've searched around, and all I've found is this, which either I'm not understanding properly or doesn't really answer my question: http://stackoverflow.com/questions/734583/seo-urls-with-asp-net-mvc
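
    In ASP.NET MVC this is handled by routing rather than URL rewriting. A minimal sketch, registered in Global.asax (the controller and parameter names are assumptions):

        // Maps /productid/1234/widget to ProductsController.Details(1234, "widget")
        routes.MapRoute(
            "Product",                                         // route name
            "productid/{id}/{name}",                           // URL pattern
            new { controller = "Products", action = "Details" }
        );

    with a matching action signature such as public ActionResult Details(int id, string name).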

    Read the article

  • Is a negative text-indent considered cloaking?

    - by John Isaacks
    I am using the negative text-indent technique I learned to show a text-image to the user while hiding the corresponding actual text. This way the user sees the fancy styled text while search engines can still index it. However, I am starting to think this sounds like cloaking, since I am serving different content to the user vs. the spider. That said, I am not using it in a deceitful way, and it seems to be a popular technique. So is it SEO-safe, or is it cloaking? Thanks!
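
    The technique being described is typically written like this (a sketch; the selector, file name and dimensions are assumptions):

        /* the text stays in the markup for search engines, but is pushed off-screen */
        h1#logo {
            text-indent: -9999px;
            background: url('logo.png') no-repeat;
            width: 200px;
            height: 50px;
        }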

    Read the article

  • What is the best way to generate a sitemap?

    - by Zakaria
    Hi everybody, I need to build a sitemap for my website. The URL will be "www.example.com/mysitemap.html". I know that there are tools that automatically generate an XML file containing the reachable URLs, which also improves SEO. So my questions are: How can I build this HTML page from the generated XML? Or am I wrong, and is this kind of HTML page built manually? If not, how do we integrate the XML into the website? Thank you very much. Regards.
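
    For context, the generated XML file follows the sitemaps.org protocol and looks roughly like this (URL and date are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/</loc>
            <lastmod>2010-05-10</lastmod>
          </url>
        </urlset>

    An HTML sitemap page is a separate, human-readable listing; it can be written manually, generated by the same tool, or produced from the XML (for instance by applying an XSL stylesheet), but search engines expect the XML file to be submitted, not the HTML page.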

    Read the article

  • noindex, follow on list views?

    - by Fabrizio
    On one of our client's websites we have lots of list views with links to detail views (imagine a blog with the post overview and the single pages). The detail views don't change, but the list views change when new items come up, and the pages displaying the list view don't contain any other valuable content. So my question is: does it make sense to set meta "noindex, follow" on the list-view pages (and of course "index, follow" on the detail views) to prevent search engines from pointing to the list views when the keyword is found in the title or teaser of a list item? By the time the visitor clicks on a list-view search result it might have changed and the content may not be visible anymore, whereas if he goes directly to the single view he will definitely find what he was searching for. Related question: the start page also contains mainly a list view. Is it a bad idea to have the start page not indexed? Any SEO gurus here? :) Thanks, Fabrizio.
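
    Concretely, the tag under discussion, placed in the head of each list-view page:

        <meta name="robots" content="noindex, follow" />

    This asks engines not to index the list page itself while still following (and crediting) the links to the detail views.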

    Read the article

  • Map a domain to an MVC area

    - by Simon_Weaver
    Anybody got any experience mapping a domain to an MVC area? Here's our situation.

    Old system (still active but will soon redirect to the new store):

    - www.example.com - our main site, where we send traffic
    - store.example.com - our store site, a completely separate site that is indexed in Google

    New system:

    - www.example.com - same site as before
    - www.example.com/store - the new store site, built as an ASP.NET MVC area

    Because the store is a separate domain, Google gives it a separate entry in the search results. I'd like to keep this benefit in future, but I'm wondering whether there is a good way to map a domain (store.example.com) to the MVC area, or if it's just going to be more trouble than it's worth. PS. I'm not trying to keep existing indexing; it's a completely separate store, so that's not possible. I just want to redirect to the corresponding page in the new store. I'm just trying not to lose the benefit of two domains for SEO purposes.
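
    For the redirect side, one hedged option (this assumes IIS 7 with the URL Rewrite module installed; the rule is a sketch, not a tested config):

        <!-- web.config: 301 store.example.com/* to www.example.com/store/* -->
        <rule name="StoreDomainRedirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^store\.example\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/store/{R:1}" redirectType="Permanent" />
        </rule>

    Actually serving the area under the old host name (rather than redirecting) is where it tends to become more trouble than it's worth, since MVC routing and link generation assume a single application root.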

    Read the article

  • Webpage layout order for my webapp - does it matter if the sidebar is programmatically displayed before the main content?

    - by Jack W-H
    OK, that's the worst title I could ever possibly think up, but I'm not too sure how to phrase it! What I mean is: is it inefficient for the browser, search engine optimisation, or any other important factors if my float: right sidebar appears in the markup before the main content div, which is set to float: left? To the user, the main content appears on the left and the sidebar on the right. In the source code it appears like so:

        <div id="sidebar">This is where my sidebar goes</div>
        <div id="content">This is where my content goes</div>

    Will this affect SEO or other factors on my page?
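
    For context, the CSS producing that visual order would presumably be along these lines (a sketch; the widths are assumptions):

        #sidebar { float: right; width: 30%; }
        #content { float: left;  width: 65%; }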

    Read the article

  • CentOS - Configuring Puppet to play nice with SELinux

    - by Mike Purcell
    I am running into an issue every time I attempt to start the puppetmasterd service, for which I receive the following error message:

        root@service1 ~ # -> /etc/init.d/puppetmaster start
        Starting puppetmaster: Could not prepare for execution: Got 1 failure(s) while initializing: change from absent to directory failed: Could not set 'directory on ensure: Permission denied - /etc/puppet/ssl [FAILED]

    Apparently there was a known issue with this scenario as outlined in this bug report; however, the bug report states the issue has been resolved in selinux-policy-3.9.16-29.fc15, while the latest CentOS default upstream version is 3.7.19-155.el6_3.4. So I am trying to figure out the best solution: I can either create a local security policy to allow puppetmasterd the access it needs, or keep researching and install a newer version of selinux-policy outside of the default upstream channel. Anyone have any recommendations? Please don't recommend disabling SELinux...

    ----- Update -----

    Here is the puppet.conf:

        [main]
            # The Puppet log directory.
            # The default value is '$vardir/log'.
            logdir = /var/log/puppet

            # Where Puppet PID files are kept.
            # The default value is '$vardir/run'.
            rundir = /var/run/puppet

            # Where SSL certificates are kept.
            # The default value is '$confdir/ssl'.
            ssldir = $vardir/ssl

        [master]
            certname=puppetmaster.ownij.lan
            dns_alt_names=puppetmaster.ownij.lan

        [agent]
            # The file in which puppetd stores a list of the classes
            # associated with the retrieved configuration. Can be loaded in
            # the separate ``puppet`` executable using the ``--loadclasses``
            # option.
            # The default value is '$confdir/classes.txt'.
            classfile = $vardir/classes.txt

            # Where puppetd caches the local configuration. An
            # extension indicating the cache format is added automatically.
            # The default value is '$confdir/localconfig'.
            localconfig = $vardir/localconfig

            server=puppetmaster.ownij.lan

    And here are the denials per the audit log:

        type=AVC msg=audit(1349751364.985:666): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751364.985:666): arch=c000003e syscall=4 success=no exit=-13 a0=1391420 a1=7fffef09ed10 a2=7fffef09ed10 a3=120c500 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.302:667): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.302:667): arch=c000003e syscall=4 success=no exit=-13 a0=1d18530 a1=7fffef0d04d0 a2=7fffef0d04d0 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.465:668): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.465:668): arch=c000003e syscall=4 success=no exit=-13 a0=1af3930 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751365.467:669): avc: denied { search } for pid=15093 comm="puppetmasterd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751365.467:669): arch=c000003e syscall=4 success=no exit=-13 a0=1b17aa0 a1=7fffef0c5c70 a2=7fffef0c5c70 a3=8 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)
        type=AVC msg=audit(1349751366.401:670): avc: denied { write } for pid=15093 comm="puppetmasterd" name="puppet" dev=dm-0 ino=132035 scontext=unconfined_u:system_r:puppetmaster_t:s0 tcontext=system_u:object_r:puppet_etc_t:s0 tclass=dir
        type=SYSCALL msg=audit(1349751366.401:670): arch=c000003e syscall=83 success=no exit=-13 a0=2d7a400 a1=1f9 a2=2d7a40f a3=7fffef0a6df0 items=0 ppid=15092 pid=15093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=13 comm="puppetmasterd" exe="/usr/bin/ruby" subj=unconfined_u:system_r:puppetmaster_t:s0 key=(null)

    And the audit log if I pass it through audit2allow:

        root@service1 ~ # -> fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -m puppetmasterd

        module puppetmasterd 1.0;

        require {
            type home_root_t;
            type puppetmaster_t;
            type puppet_etc_t;
            type puppet_var_run_t;
            type httpd_sys_content_t;
            class lnk_file { relabelfrom relabelto };
            class file { relabelfrom read getattr open };
            class dir { write read search getattr setattr };
        }

        #============= puppetmaster_t ==============
        allow puppetmaster_t home_root_t:dir { search getattr };
        allow puppetmaster_t httpd_sys_content_t:dir read;
        allow puppetmaster_t httpd_sys_content_t:file { read getattr open };
        #!!!! The source type 'puppetmaster_t' can write to a 'dir' of the following types:
        # puppet_log_t, puppet_var_lib_t, puppet_var_run_t, puppetmaster_tmp_t
        allow puppetmaster_t puppet_etc_t:dir { write setattr };
        allow puppetmaster_t puppet_etc_t:lnk_file { relabelfrom relabelto };
        allow puppetmaster_t puppet_var_run_t:file relabelfrom;
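
    If the local-policy route is taken, the same audit2allow tool can build and package the module shown above; loading it would look something like this (run as root):

        fgrep puppetmasterd /var/log/audit/audit.log | audit2allow -M puppetmasterd
        semodule -i puppetmasterd.pp

    The -M flag writes puppetmasterd.te and a compiled puppetmasterd.pp in the current directory, and semodule -i loads the module.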

    Read the article

  • Is this a "valid" css image replacement technique?

    - by user278457
    I just came up with this; it seems to work in all modern browsers. I just tested it on IE8 (and compatibility mode), Chrome, Safari and Mozilla.

    HTML:

        <img id="my_image" alt="my text" src="images/small_transparent.gif" />

    CSS:

        #my_image {
            background-image: url('images/my_image.png');
            width: 100px;
            height: 100px;
        }

    Pros:

    - image alt text is best practice for accessibility/SEO
    - no extra HTML markup, and the CSS is pretty minimal too
    - gets around the CSS on/images off issue, where "text-indent" techniques hide text from low-bandwidth users

    The biggest disadvantage I can think of is the CSS off/images on situation, because you'll only send a transparent gif. I'd like to know: who uses images without stylesheets? Some kind of mobile phone or something? I'm making some sites for clients in regional Australia (hundreds of km from the nearest city), where many users will be suffering from dial-up connections, and often outdated browsers too, so the "images off" issue is an important consideration. Are there any other side effects of this technique that I haven't considered?

    Read the article

  • How can I explain to a programmer that CSS positioning has many benefits over table based layouts?

    - by Pat
    I have a friend who wishes to work as a freelance web developer but insists that tables are the way forward for layouts. Several points he maintains in favour of tables:

    1. This is what was taught at the beginning of 10 years of programming and computer science degrees.
    2. Large companies use tables to achieve 'technical' things.
    3. It saves time.

    I have coded him some examples of CSS exactly matching table-based layouts (a sketch of that kind of comparison follows below), and provided many links to articles explaining the SEO and accessibility benefits. From the perspective of a client, I have been explaining to him that I wouldn't hire someone using outdated methods as their main layout strategy. As he is my friend and I wish him every success, I believe it is important for him to get the best start when pitching for work. The question again: how can I explain to a programmer that CSS positioning has many benefits over table-based layouts?
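
    By way of illustration, a minimal version of the comparison mentioned above: the same two-column layout as a table and as floated divs (class names and widths are hypothetical):

        <!-- table layout -->
        <table><tr><td>nav</td><td>content</td></tr></table>

        <!-- CSS equivalent -->
        <div class="nav">nav</div>
        <div class="main">content</div>

        /* CSS */
        .nav  { float: left; width: 20%; }
        .main { float: left; width: 80%; }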

    Read the article

  • mod_rewrite, title slugs and .htaccess

    - by chris
    I have been brought in to provide some SEO guidance on a website which has been running since 2005. My problem is that I want to use clean URLs. The code that handles the URL is hidden away in some class file, and with over a few thousand lines of code it's a struggle to rewrite it. So here is what I'm thinking: I have gone through all the products and created a slug for each as a field in the product table. Is it possible to do something like an intermediate file via .htaccess? Something like this (sketched below):

    1. /clean-slug-comes-in/ is requested.
    2. .htaccess catches this and uses slug.php to find the relevant product ID for the slug.
    3. Then product.php?id=(ID found in step 2) is loaded?
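
    A sketch of that intermediate step (file and parameter names are assumptions): the rewrite hands the slug to slug.php, which looks up the product ID and hands off to product.php.

        RewriteEngine On
        # leave real files and directories alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-z0-9-]+)/?$ slug.php?slug=$1 [L,QSA]

    slug.php can then either include product.php after resolving the ID, or issue an internal redirect to product.php?id=..., leaving the existing class file untouched.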

    Read the article

  • 302 vs 301 redirect in this specific case

    - by Binder
    We have a website that displays information in a location-based manner, i.e. it detects the IP of the visiting user and redirects him/her to an appropriate landing page. For example, a user coming from Egypt will be redirected to http://www.mysite.com/egypt/cairo and a user visiting from Dubai will be redirected to http://www.mysite.com/uae/dubai, and so on; we cater to multiple locations in the Middle East. Now, we have been advised by our SEO consultant that we should put a 301 (permanent redirect) on http://www.mysite.com to point to http://www.mysite.com/ksa/riyadh. I would like to know the negative implications this would have on Google indexing or otherwise, as I fundamentally disagree with this suggestion and believe that in a situation like this a 302 redirect would be more appropriate.

    Read the article

  • Non-Latin characters in URLs - is it better to encode them or replace them with their Latin "counterparts"?

    - by Pawel Krakowiak
    We're implementing a blog for a site which supports six different languages, five of which have non-Latin characters in their alphabets. We are not sure whether we should have them percent-encoded, which is what we're doing at the moment:

        Létání s potravinami: Co je dovoleno? becomes l%c3%a9t%c3%a1n%c3%ad-s-potravinami-co-je-dovoleno
        (the browser displays it as létání-s-potravinami-co-je-dovoleno)

    or whether we should replace them with their Latin "counterparts" (similar-looking letters):

        Létání s potravinami: Co je dovoleno? becomes letani-s-potravinami-co-je-dovoleno

    I can't find a definitive answer as to which is better from an SEO perspective, and search engine optimization is very important for us. Which approach would you suggest?
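
    If the transliteration route is chosen, diacritics in Latin-based alphabets can be stripped by Unicode decomposition; a sketch in C# (note this only handles Latin letters with diacritics, as in the Czech example; Cyrillic or Greek would need a real transliteration table):

        using System.Globalization;
        using System.Text;
        using System.Text.RegularExpressions;

        static string Slugify(string title)
        {
            // decompose "é" into "e" + combining accent, then drop the accents
            string formD = title.Normalize(NormalizationForm.FormD);
            var sb = new StringBuilder();
            foreach (char c in formD)
                if (CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
                    sb.Append(c);
            string latin = sb.ToString().Normalize(NormalizationForm.FormC).ToLowerInvariant();
            // collapse everything that isn't a-z/0-9 into single hyphens
            return Regex.Replace(latin, "[^a-z0-9]+", "-").Trim('-');
        }

        // Slugify("Létání s potravinami: Co je dovoleno?")
        //   -> "letani-s-potravinami-co-je-dovoleno"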

    Read the article

  • Dynamic text in the <h1> tag

    - by Ami
    What would be the impact on SEO of changing the text of the <h1> dynamically on the server side each time the web page loads? I'm not talking about changing the whole text, just part of it: for example, the header contains some fixed text (with keywords, of course) and also contains the current date or time, the current number of logged-on users, the count of items currently in stock, or whatever. How would that affect my ranking? Is it bad? Does it make no difference? Thanks.

    Read the article

  • Making dynamic images have static filenames

    - by michaeltk
    My website currently has various links to a PHP script that generates images dynamically. For example, a link may say:

        <img src="/dynamic_images.php?type=pie-chart&color=red" />

    Obviously, this is not great for SEO. I'd like to somehow make the filenames of these links appear static, and use a solution (like mod_rewrite) to ensure that the images can still be created dynamically. I suppose I could have something like:

        <img src="average-profits-in-scuba-diving-industry.png?type=pie-chart&color=red" />

    (and use mod_rewrite to take care of mapping the filename back to dynamic_images.php), but I'm afraid that the search engines would shy away from the query string on the end of the image filename. Any solutions? Thanks in advance.
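
    A mod_rewrite sketch of that idea (the paths and parameter names are assumptions): the descriptive filename is cosmetic, and [QSA] carries the original query string through to the script.

        RewriteEngine On
        # /charts/average-profits-in-scuba-diving-industry.png?type=pie-chart&color=red
        #   -> /dynamic_images.php?slug=average-profits-in-scuba-diving-industry&type=pie-chart&color=red
        RewriteRule ^charts/([a-z0-9-]+)\.png$ /dynamic_images.php?slug=$1 [L,QSA]

    Whether to keep the query string on the public URL at all is the open question; baking type and color into the path itself would avoid it entirely.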

    Read the article

  • For external links on my webpage, should I use a redirector page or just link directly to the external site?

    - by AaronM
    Hello, I'm just wondering whether I should be using a 'redirector'-type page or link directly to the external pages on my site, http://www.onedaysalefinder.co.nz/. Currently I use a redirector page to track which links are being clicked on: it simply takes an ID, looks up the URL in the database, and then does a Response.Redirect(URL). From an SEO point of view, is this a good idea or a bad idea? I understand it can add a few milliseconds to the external page load time while it looks up the actual URL, but I am not too concerned about that, and I get the benefit of tracking the clicks accurately. What are the pros and cons of using a redirector vs. the actual link? Am I worrying about something I don't need to? Thanks
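
    A minimal sketch of the redirector pattern described (the handler and method names are hypothetical, not the site's actual code):

        using System.Web;

        public class OutHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int id = int.Parse(context.Request.QueryString["id"]);
                string url = LookupUrl(id);      // hypothetical DB lookup by ID
                RecordClick(id);                 // hypothetical click logging
                context.Response.Redirect(url);  // issues a 302 by default
            }

            public bool IsReusable { get { return true; } }

            private string LookupUrl(int id) { /* query the database */ return "http://example.com/"; }
            private void RecordClick(int id) { /* increment a counter */ }
        }

    Note that Response.Redirect sends a 302; whether the outbound links should instead get a 301 (or no redirector at all) is exactly the SEO trade-off being asked about.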

    Read the article

  • Lightweight open source CMS - current situation

    - by patrikas
    Hello, it happens that I need to pick a PHP-based open source CMS. I did a small amount of research and found many candidates. CMS Made Simple seems the right choice, but I am not sure of its current state; I know it was widely used some time ago. It needs to be fully compliant with web standards and lightweight (especially the interface), and as simple as possible: basically just style and page-content editing (news, maybe some image gallery) is enough. Content and style have to be separated from each other, since the content will be edited by a non-programmer. One of the main goals is SEO, so I'd like it to have friendly URLs. I think CMSes like Joomla, Drupal and WordPress are too big for this project. Are there any recommendations?

    Read the article

  • How to correctly migrate URLs from a custom ASP.NET solution to WordPress?

    - by Marek
    I have a website built using ASP.NET with ugly URLs like /DisplayContent.aspx?id=789564. I know how to migrate the database, but the WordPress URLs will (naturally) be different. Can I simply write some mapping, or do I have to include a rewrite rule for each subpage (300 pages) in .htaccess? Should I provide a rewrite rule for each existing page that would transform the full old URL into the known new URL, for example:

        /DisplayContent.aspx?id=789798 -> /2010-5-10/Title-Of-The-Post

    Even if I manage to migrate the URLs, the structure of the HTML for the new content will naturally be different. How does this affect SEO? Should I run ASP.NET and WordPress side by side and issue the redirects from the ASP.NET application? What is the most efficient way to handle this kind of URL migration without doing PHP programming?
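
    Since the old URLs carry the page ID in the query string, each mapping needs a query-string condition; a per-page sketch using the example above, where the trailing ? in the target drops the old query string:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^id=789798$
        RewriteRule ^DisplayContent\.aspx$ /2010-5-10/Title-Of-The-Post? [R=301,L]

    With ~300 pages, generating these rules from the database, or writing one generic rule into a small script that looks up the ID-to-slug mapping and issues the 301 itself, avoids hand-writing each one.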

    Read the article

  • Loading external content with jQuery or an iframe?

    - by nailuenlue
    Hiho, there's an existing website that I need to include in another site: a.mysite.com is the existing site, and I need to fetch its content into my www.mysite.com website. As I need to access the content of the iframe, the same-origin policy is a problem here. What I did was configure mod_proxy on Apache to proxy all requests from www.mysite.com/a to a.mysite.com. This works fine, but I'm not sure of the best way to include those pages:

    1. As the content of the iframe is a full-featured site with a top navigation, left navigation, etc., I would need to change the page template to show only the content box in order to integrate that page in the iframe.
    2. I could just load the DIV where the content lies through jQuery.load() and integrate it into my site.

    What is the best way to accomplish such a task, and how bad are both ideas from an SEO point of view?
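
    For reference, the two pieces involved would look roughly like this (the #contentBox selector is an assumption about the markup of a.mysite.com):

        # Apache, already in place per the above
        ProxyPass        /a/ http://a.mysite.com/
        ProxyPassReverse /a/ http://a.mysite.com/

        // Idea 2: pull just the content div through the proxy (same-origin now holds)
        $('#target').load('/a/some-page.html #contentBox');

    One SEO consideration: content injected via jQuery.load() is generally invisible to search engines, whereas the iframe at least exposes a crawlable URL; including the proxied fragment server-side would avoid both problems.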

    Read the article

  • SEO Marketing - How to Promote Your Website and Gain More Traffic?

    Having problems promoting your website? Do you risk everything by trying to put your website on top with a weak SEO marketing strategy? SEO marketing is a very important part of promoting your website and marketing your products. It will help you gain more traffic to your website and increase your page rank. However, the money will be wasted if your website's SEO marketing strategy is weak. Remember that people nowadays use the internet to find information on any website, possibly including yours.

    Read the article

  • ASP.NET MVC GoogleBot Issues

    - by Khalid Abuhakmeh
    I wrote a site using ASP.NET MVC, and although it is not completely SEO optimized at this point, I figured it is a good start. What I'm finding is that when I use Google's Webmaster Tools to fetch my site (to see what a GoogleBot sees), it sees this:

        HTTP/1.1 200 OK
        Cache-Control: public, max-age=1148
        Content-Type: application/xhtml+xml; charset=utf-8
        Expires: Mon, 18 Jan 2010 18:47:35 GMT
        Last-Modified: Mon, 18 Jan 2010 17:07:35 GMT
        Vary: *
        Server: Microsoft-IIS/7.0
        X-AspNetMvc-Version: 2.0
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 18 Jan 2010 18:28:26 GMT
        Content-Length: 254

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title> Index </title>
        </head>
        <body>
        </body>
        </html>

    Obviously this is not what my site looks like. I have no clue where Google is getting that HTML from. Anybody have an answer and a solution? Anybody experience the same issues? Thanks in advance.

    Read the article

  • How important is the website logo on a page?

    - by meo
    I have stopped inserting "img" tags for the logo of the page, because it's not an image that is part of the content; it's a design element, but it's still information I want to have control over. So I just write the title in an "a" element set to display: block and overflow: hidden, and I push the text out with some padding (sketched below). I think that's a good solution for SEO because you keep control over how important the logo should be on a page. But now my dilemma starts: how important is the logo of a page? "A List Apart" puts the logo in an h1 element. But is the logo really that important? On article pages you then have two h1 elements (the logo and the title of the article). Most sites just wrap the logo image in a link, <a><img ...></a>, but I don't like this solution, because I only want to use img for images that are part of the content... It's kind of a philosophical question; I hope you can give me some input or some articles to read about it...
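
    The technique described, as a sketch (the class name, file name and dimensions are assumptions):

        /* the title text in the <a> stays for search engines;
           zero height plus padding pushes it out of the visible box */
        a.logo {
            display: block;
            overflow: hidden;
            width: 200px;
            height: 0;
            padding-top: 60px;
            background: url('logo.png') no-repeat;
        }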

    Read the article
