Search Results

Search found 22992 results on 920 pages for 'custom pages'.

Page 205/920 | < Previous Page | 201 202 203 204 205 206 207 208 209 210 211 212  | Next Page >

  • Disable mobile page redirection for SharePoint 2013

    - by Sahil Malik
    SharePoint, WCF and Azure Trainings: more information SharePoint 2013 (Foundation too) detects requests from mobile devices and automatically changes the URL of the requested non-mobile page to its mobile substitute. This logic is now built into SPRequestModule. The mobile view is pretty damned amazing. The set of pages for mobile access is completely different, and SharePoint has an entirely separate set of controls for the mobile pages. These live in the Microsoft.SharePoint.MobileControls namespace and inherit from the Microsoft ASP.NET controls in the System.Web.UI.MobileControls namespace. These mobile pages can even use mobile Web Part adapters to mimic the behavior of web parts on mobile web part pages. Read full article ....

    Read the article

  • Are multiple domain names and links from the same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possible reasons why.

    The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so it is now down to two (one is a .net, the other a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2.

    It is a Classic ASP site and until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions, and H tags, but it hasn't been long enough yet for Google to index many of these changes.

    I also looked at which pages Google has indexed, and strangely there are some odd pages in the index: a lot of them are actually keyword searches (more like a bunch of random letters than actual words). What I mean is that it is as if someone had typed something into my search box - there are no links to pages like this, and the only way of getting them is to type something into the search box. So I now add a META robots tag with noindex,nofollow whenever I render pages like this.

    Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different, keyword-rich domain name but is on the same server and the same IP address. It has a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so they are identical only in text). I also have a few blog sites which again are on the same server/IP and all carry advertising for the website.

    My questions are:

    1. What should I do with the multiple domains: just use one, or continue with two or more?
    2. Should I add 301 redirects, not just the META canonical tag?
    3. Any idea why Google is indexing my search results pages, and did I do the right thing with the META robots tag?
    4. Is the fake price comparison site likely to be causing problems?
    5. Are all the links to the site from other domain names on the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.

    Read the article

  • SEO - Coaching Newbies

    All search engines use algorithms, and each search engine has its own. An algorithm is the formula the search engine uses to evaluate your web pages. The robots will crawl all the pages on your site, but not all pages will be indexed.

    Read the article

  • Webmaster Tools is throwing out 404 errors on a link not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors where pages on the site refer to another, incorrect URL. For example: URL not found www.plantify.co.uk/shop/=, linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers. I have obviously checked the source and cannot find any references to this link on any page. The only consistent pattern is that the error is only reported on pages with two path sections, i.e. www.plantify.co.uk/shop does not report any error, whilst all pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not show this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • How to include tags in WordPress permalinks

    - by babua
    While using a custom permalink structure for my WordPress URLs, adding a tag to the structure produces errors, but when I add a category it is reflected in the URL as expected. I want the tag to be included in the custom URL structure automatically; how can this be done in WordPress? When I add /%tag%/ to the custom structure field in the WordPress admin, the resulting URLs show a "not found" message. Please help.
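    One possible direction, sketched below: %tag% is not one of WordPress's built-in permalink structure tags, which would explain why those URLs fall through to "not found". A common workaround is to register the tag yourself and substitute the post's first tag slug into generated permalinks. This is only a minimal sketch for a theme's functions.php, assuming each post has at least one tag; the "no-tag" fallback slug is a made-up placeholder, and the permalinks need to be flushed (re-save the permalink settings) afterwards.

        <?php
        // functions.php (sketch): teach WordPress about a %tag% placeholder.

        // Register %tag% as a rewrite tag so /%tag%/%postname%/ can be parsed.
        add_action( 'init', function () {
            add_rewrite_tag( '%tag%', '([^/]+)', 'tag=' );
        } );

        // Replace %tag% in generated permalinks with the post's first tag slug.
        add_filter( 'post_link', function ( $permalink, $post ) {
            if ( false === strpos( $permalink, '%tag%' ) ) {
                return $permalink;
            }
            $tags = get_the_tags( $post->ID );
            if ( $tags && ! is_wp_error( $tags ) ) {
                $first = reset( $tags ); // first tag term of the post
                return str_replace( '%tag%', $first->slug, $permalink );
            }
            return str_replace( '%tag%', 'no-tag', $permalink ); // placeholder fallback
        }, 10, 2 );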

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a minified (.min) file. Seeing that the performance and speed of a page can be improved by reducing the size of the files it loads, we're looking into doing that to our pages as well.

    The problem we run into is that a lot of our normal pages (written in Classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages have their JS both in include files and in the page itself, depending on whether a function is only really used on that page or is shared by many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them, and some functions live in a single page because it doesn't really make sense to move them into one master file.

    What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if it fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and having it included in every page that is hit. However, not every page needs every bit of JS. So would it be worth combining and minifying the files into one master file and including it everywhere, or would it be better to minify the individual files and still include them on a need-to-use basis?
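    For what it's worth, a very rough sketch of the "one combined file" approach is below, written in PHP purely because it is concise. The file list and bundle path are made-up placeholders; a real setup would run each source through a proper minifier first (or use the vendor-supplied .min files, as here) and serve the result gzipped with far-future cache headers.

        <?php
        // build_bundle.php (sketch): concatenate already-minified scripts into one
        // file so production pages make a single JS request. File names are placeholders.
        $sources = array(
            'js/common.min.js',
            'js/ajax.min.js',
            'js/validation.min.js',
        );

        $bundle = '';
        foreach ( $sources as $file ) {
            // A newline between files guards against a missing trailing semicolon.
            $bundle .= file_get_contents( $file ) . "\n";
        }

        file_put_contents( 'js/site.bundle.min.js', $bundle );
        echo 'Wrote js/site.bundle.min.js (' . strlen( $bundle ) . " bytes)\n";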

    Read the article

  • How do private-sales ecommerce sites handle their SEO?

    - by 142857
    On a private-sales ecommerce site, users need to sign up or sign in before they can access the pages of the website, so even if a user tries to navigate directly to a product page, he is redirected to sign in. I am wondering how these sites manage their SEO, as this would imply Google can't crawl those pages either. Or do they completely ignore the SEO benefit of allowing Google to crawl the product and catalogue pages?

    Read the article

  • Canonical tags for separate mobile URLs

    - by DnBase
    I have a Drupal website serving mobile pages from different URLs (starting with /mobile). According to Google's recommendations I should use the canonical tag to map desktop and mobile pages. Right now I do this when I serve the same node (e.g. node/123 and mobile/node/123), but should I also do this for pages that are equivalent yet don't share the same content? For example, do I need to map the desktop and mobile homepages even if they don't have the same content at all?
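    For reference, Google's guidance for separate mobile URLs pairs the two versions: the desktop page declares its mobile equivalent with rel="alternate" (plus a media query), and the mobile page points back to the desktop page with rel="canonical". Below is a minimal, framework-agnostic sketch of emitting that pair; in Drupal you would normally do this from a preprocess hook or the theme layer rather than raw PHP, and the domain and /mobile prefix are placeholders matching the setup described above.

        <?php
        // head_links.php (sketch): emit the desktop/mobile link annotations for
        // separate mobile URLs. Domain and path prefix are placeholders.
        $path      = $_SERVER['REQUEST_URI'];                 // e.g. /node/123 or /mobile/node/123
        $is_mobile = ( strpos( $path, '/mobile/' ) === 0 );
        $desktop   = $is_mobile ? preg_replace( '#^/mobile#', '', $path ) : $path;
        $mobile    = $is_mobile ? $path : '/mobile' . $path;

        if ( $is_mobile ) {
            // Mobile page: canonical points at the desktop equivalent.
            echo '<link rel="canonical" href="https://example.com' . htmlspecialchars( $desktop ) . '">' . "\n";
        } else {
            // Desktop page: advertise the mobile equivalent.
            echo '<link rel="alternate" media="only screen and (max-width: 640px)" href="https://example.com' . htmlspecialchars( $mobile ) . '">' . "\n";
        }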

    Read the article

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question. The most relevant ones seem to be at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already: every major retailer site usually has links to the manufacturer of whatever item it is selling. Go to www.newegg.com and you'll find hundreds of links to the same site, since they sell multiple items from the same brand.

    Our site allows people to list a specific genre of items for sale (not porn - I'm just keeping it generic since I'm not trying to advertise), and on each item listing page we include a link back to the seller's website if they want one. Our SEO guy is saying this is really bad and Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features of our site to boost our ranking - or start jumping through hoops by trying to hide text using JavaScript, etc. - then something is wrong.

    Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, so they will have hundreds of pages that link back to their site. I should also mention that with the bigger clients there will be a handful of pages which may appear to be duplicates, because they will be selling two or three of the same item and the only difference in the page content might be a stock number. The majority of the pages, though, will have unique content.

    So - will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own search engine rankings.

    Read the article

  • Website Remodel Redirects

    - by inKit
    We've recently built a site for a new client who has not moved all the content from their old site into the new one. A lot of the content is also dynamic, with IDs that don't match between the old site and the new one. We have added dynamic redirects for most of the URL patterns we could find among the pages that were 404ing, but there are still a lot of pages that had content, or simply jumbled URLs, that we cannot match up with content pages on the new site. Is it better to redirect these leftover pages to the homepage, or to leave them 404ing?

    Read the article

  • How can I search in transcluded categories?

    - by Wikis
    I want to add functionality to a MediaWiki wiki to search in specific categories (Platform 1, Platform 2, etc.), so I created a template which, based on a certain field, assigns pages to those categories. The template was already included on most of these pages, so now most pages are in either Category:Platform 1 or Category:Platform 2. Then I thought I just needed to add incategory: to the search and I'd be done, as described on the Wikipedia page. But then I reread it and to my horror discovered: "incategory: – using the incategory: parameter returns pages in a given category (as long as the pages are directly categorized, and not transcluded through templates)." Eeeek! Is there any other way to search even in categories added through transcluded templates? Or any other way of resolving this?

    Read the article

  • How to make the most of GWT's "Search queries"?

    - by DisgruntledGoat
    I've been looking at the "Search queries" section in Google Webmaster Tools recently, and it seems like there is a lot of potential there for finding which pages on a site need improvement. I'm trying to figure out exactly what to sort or filter on. Do I look at pages with a low average position? Low impressions but high clicks? Pages that are rising up or falling down the rankings? What is the low-hanging fruit here?

    Read the article

  • How to run WordPress and a Java web app on Tomcat on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based web app running on Tomcat on the same server. When users go to example.com or example.com/public-pages they need to be served by WordPress, but when they go to example.com/private-pages they need to be served by Tomcat.

    I asked this question on Server Fault, where the suggestions were to use a different port, a different IP, or a sub-domain. I want to go with the different-port solution, since it means I only need to buy one SSL certificate. I tried the reverse proxy method by putting the following in my default-ssl.conf:

        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443

            DocumentRoot /var/www
            <Directory /var/www>
                # For WordPress
                Options FollowSymLinks
                AllowOverride All
            </Directory>

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/

            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>

    As you can see, I am using mod_proxy_ajp in Apache2 for this, with Tomcat listening on port 8009 and serving the content. Now when I go to example.com/private-pages I do see the content from Tomcat, but two issues are happening:

    1. All my static resources 404, so none of my images, CSS, or JS get loaded. The browser requests these resources using URLs like example.com/css/*, which clearly won't work: it translates to example.com:80/css/* instead of example.com:8009/css/*, and there are no such resources in the WordPress directory.
    2. If I go to example.com/private-pages/abcd I am somehow kicked back to the WordPress site (which obviously displays a 404 page).

    I can understand why #1 is happening, but I have no clue why #2 is. Regardless, if there is another clean solution for resolving this, I would appreciate y'all's help.

    Read the article

  • Duplicate page content and the Google index

    - by Kit Sunde
    I have static pages with dynamically expanding content that Google is indexing. I also have deep links into virtually duplicate pages which pre-expand the relevant section of content. It seems like Google is ignoring all my specialized pages and not putting them in the index, even after going through Webmaster Tools, crawling, and submitting them to the index manually. I also use the Google API to integrate search on the site, and the deep-linked pages won't show up. Is there a good solution for this?

    Read the article

  • TabCtrl

    Adjustable control with zooming and scrolling tabs, dragging with the mouse, custom drawing, and much more

    Read the article

  • Backlinks: Two domains with the same IP

    - by Michal
    I run several different web pages on different servers (with different IP addresses). These pages link to each other in order to boost the number of backlinks pointing to my pages. I would like to move all those projects to a single virtual host (with a single IP address). My question is: how does Google handle links between different domain names on the same IP address? Is there some penalty for it? Could this lead to a lower page rank?

    Read the article

  • Using the Same Domain to Bury Bad Publicity

    Receiving bad publicity can be a devastating blow to a brand's online reputation, and in order to mitigate the damage, often the best course of action is to try to create enough alternate content to push the negative publicity down to the second, third, or even deeper search result pages. Most people do this by creating a number of different pages on new or alternate domains, but in fact it can be much more effective to try to create pages on the same domain.

    Read the article

  • How can I track visits to ALL of the sub-pages of my website combined together?

    - by realcheesypizza
    Right now I'm using StatCounter and Google Analytics. They are great, but my counts are currently separated. For example: website.com = 1000 visits a day, website.com/about = 50 visits a day, website.com/privacy = 10 visits a day, etc. How can I have a combined count of all of my sub-pages (main page + about page + about 100 other sub-pages)? I can of course manually add them all together, but that's time-consuming because there are many pages. I tried placing a separate tracking code in a PHP include that sits in each of the sub-pages, but it doesn't seem to be working. It seems to require a single URL to create it, and it then only counts the visits from that one URL (e.g. website.com) rather than from all of them. Any help would be appreciated. Hopefully I'm just missing something very obvious. Thank you!
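    For what it's worth, a minimal sketch of the shared-include idea with Google Analytics: one property, one snippet, included at the bottom of every page, so the per-page reports then aggregate every sub-page under that single property (StatCounter works the same way with a single project code on every page). The file name and "UA-XXXXXX-Y" ID are placeholders, and the loader line should be the exact snippet copied from your Analytics admin.

        <?php
        // footer.php (sketch): included at the bottom of every page so a single
        // Analytics property records every sub-page view. UA-XXXXXX-Y is a placeholder.
        ?>
        <script>
          /* paste the standard analytics.js loader from your GA admin here */
          ga('create', 'UA-XXXXXX-Y', 'auto');
          ga('send', 'pageview');
        </script>

    Each page would then simply end with something like <?php include 'footer.php'; ?> before the closing body tag, instead of carrying its own per-URL tracking code.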

    Read the article

  • Exclude PHP from WYSIWYG output in CMS

    - by bytewalls
    I'm writing a basic CMS for one of my sites and have run into an issue where some pages need to dynamically serve PHP and JS, whereas others are plain HTML. I want there to be a setting which allows this for the pages that need it and loads the ACE editor instead of a different WYSIWYG editor. The challenge is that on the pages where I do not explicitly say there will be code, I want to reject any input that contains code. I can set it up to insert a for all pages without JS, but how can I keep PHP code from running?
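    One minimal sketch of the reject-by-default idea, assuming a hypothetical allow_code flag on each page record: when the flag is off, any <?php ... ?> blocks and <script> elements are stripped from the submitted content before it is saved, so plain-HTML pages can never emit live code (and, just as importantly, stored content should only ever be echoed, never passed through eval() or include). The function and field names below are made up for illustration, and a quick regex pass like this is not a complete sanitizer - a whitelist filter such as HTML Purifier is the more robust route.

        <?php
        // sanitize_page_content.php (sketch): names are illustrative, not from the CMS.

        function sanitize_page_content( $content, $allow_code ) {
            if ( $allow_code ) {
                // Pages explicitly flagged for code (edited with ACE) are stored as-is.
                return $content;
            }

            // Drop any PHP blocks so they can never reach an include/eval path.
            $content = preg_replace( '/<\?(php)?.*?(\?>|$)/s', '', $content );

            // Drop <script> elements so plain-HTML pages never emit JavaScript.
            $content = preg_replace( '#<script\b[^>]*>.*?</script>#is', '', $content );

            return $content;
        }

        // Example: content saved from the plain WYSIWYG editor.
        $safe = sanitize_page_content( '<p>Hello</p><?php evil(); ?>', false );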

    Read the article

  • LOB Pointer Indexing Proposal

    - by jchang
    My observations are that IO to LOB pages (and row-overflow pages as well?) is restricted to synchronous IO, which can result in serious problems when these reside on disk drive storage. Even if the storage system is comprised of hundreds of HDDs, the realizable IO performance to LOB pages is that of a single disk, with some improvement in parallel execution plans. The reason for this appears to be that each thread must work its way through a page to find the LOB pointer information, and then generates...(read more)

    Read the article
