Search Results

Search found 14037 results on 562 pages for 'master pages'.

  • Pointing a subdomain to Github Pages

    - by ratage
    I set up an Octopress blog on GitHub Pages at myusername.github.com. I now want blog.myusername.me (which currently has a WordPress blog set up) to point to this Octopress blog, so I followed the instructions here on setting up a custom domain: I ran echo 'blog.myusername.me' >> source/CNAME in my Octopress repository, and then ran rake generate and rake deploy to deploy it to GitHub. I went to Namecheap and added a new CNAME record under my myusername.me domain: "blog - myusername.github.com - CNAME". I waited a couple of hours. However, now when I go to myusername.github.com, it redirects me to blog.myusername.me (which is my old WordPress blog), which seems like the inverse of what I want. (Going to blog.myusername.me directly still shows my WordPress blog.) I checked http://www.whatsmydns.net/#CNAME/blog.myusername.me and it seems like my DNS has propagated (myusername.github.com shows up on the right-hand side). Any ideas what I'm doing wrong?
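    For reference, a minimal sketch of what the two halves of the setup should look like (hostnames follow the question; note that once a CNAME file is in the repository, GitHub Pages is expected to redirect the *.github.com address to the custom domain, so that part of the observed behaviour is normal):
        # source/CNAME in the Octopress repo (one line, bare hostname, no protocol)
        blog.myusername.me

        # DNS record at the registrar (host / type / value)
        blog    CNAME    myusername.github.com.

        # Check what resolvers actually return
        dig +short CNAME blog.myusername.me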

  • HTML 5, Fluid Pages and Google Mobile Index

    - by Bob
    I am currently migrating my site to HTML5 and, at the same time, designing the pages so that they are "fluid" and equally presentable on a mobile or a large screen. I took the fluid approach so as not to have to develop a separate application for mobile devices, and I'm pleasantly surprised with the results, which look equally good on an iPhone and on a large screen. Then I went into the Google Webmaster Tools facility and became aware of the Google Mobile Index. I'm confused now, as HTML5 doesn't seem to be supported by Google mobile indexing. Does this mean that when I go live with my new "pride and joy" HTML5 site, it won't appear in any Google searches on mobile because it's not in the Google Mobile Index?

  • SEO - PageRank on Facebook pages, but pages have no back links to them?

    - by Marco Demaio
    Have a look at these two pages:
    1) http://it-it.facebook.com/jeanchristophe.cataliotti (PageRank 2 from the Google toolbar). Amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=it-it.facebook.com/jeanchristophe.cataliotti&fr=sfp
    2) http://www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0 (PageRank 1 from the Google toolbar). Still amazingly, it has got NO links to it: http://siteexplorer.search.yahoo.com/search?p=www.facebook.com/group.php?gid=18463182878&v=wall&viewas=0&fr=sfp
    How do you explain this? I'm hoping for an explanation that goes beyond just saying that the PR in the Google toolbar is not updated, because that cannot be the reason for this!

  • Google Search not displaying results from sub-pages

    - by nlovric
    I published a new site with some delicate content on September 26, 2012 (UTC), and no results from sub-pages appear in Google Search, only results from the main page. Entering "neven lovric" "cat out of the bag" into Google Search finds the main page. Is this type of behavior normal? I ask because the first site was suspended (my account was locked) by the NameCheap, Inc. Risk Assessment Team, allegedly due to PayPal, Inc. reversing my payment for the extension of the domain registration before I was able to publish any content on it. In 2011, Google, Inc. blocked all results for certain keywords from being displayed to its users in the Arab Republic of Egypt during the demonstrations there. So, considering previous events, such a scenario is not unlikely in this case either.

  • Can I make query strings produce separate pages?

    - by John Smith
    I have a profile page with a URL like so: localhost/profile.php/?username=Bob. I was wondering: if I had a separate <title> which changed according to the username, would they produce separate pages in the Google search results? How do I tell Google to use only the username string, or does it search within the title? On a similar note, how would I create a separate page for each username, like localhost/bob instead of a query string, the way Facebook does? Would that mean making a new file for each user?
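    A minimal sketch of the usual approach, assuming Apache with mod_rewrite (the pattern and script name are illustrative): there is no separate file per user, the rewrite simply maps the pretty URL onto the existing query string.
        # .htaccess in the document root
        RewriteEngine On
        # Don't rewrite requests for real files or directories
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /bob  ->  /profile.php?username=bob  (internal rewrite, URL stays /bob)
        RewriteRule ^([A-Za-z0-9_-]+)/?$ profile.php?username=$1 [L,QSA]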

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. Information on these pages is time sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and do a 301 redirect to the new page. Now, can we still keep the old page by giving it a new URL?
    Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html
    Step 2: example.com/2014-related-information.html is recreated with a new address such as example.com/new-2013-related-information.html
    What we are trying to do is send the user to the fresh page while still keeping a record copy in case someone wants to dig up the old page. Would appreciate help!! Cheers
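    A minimal sketch of the redirect half of that plan, assuming Apache (mod_alias); the archived copy then simply lives at whatever new URL it is republished under:
        # .htaccess -- permanent redirect from the old URL to the current page
        Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html

        # The archived copy is then published as a normal page at, e.g.:
        #   http://example.com/new-2013-related-information.html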

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page. This is with WordPress. I'm using the guide here: http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404. My rule so far looks like this:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]
    This works. It seems to be a combination of the first and second suggestion that worked, since the -U flag did nothing. My question is, out of curiosity, why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
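    A guess, with a sketch (the extra condition is illustrative): %{REQUEST_FILENAME} is the request mapped onto the filesystem, so existing files such as theme stylesheets fail the !-f test and are served normally, whereas %{REQUEST_URI} is just the URL path and is usually not an existing file from the filesystem's point of view, so every request, including /wp-content/.../style.css, gets rewritten to the 404 page. Keeping the filename-based test (and also skipping directories) avoids that:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+) http://example.com/?page_id=254 [R,L]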

  • Google showing meta descriptions from other pages in the SERPs

    - by ojek
    Recently I added some content to my website and submitted sitemap files to Google. Now that Google has indexed those pages, I have discovered that for some of the words and sentences listed in Google that lead to my website, the meta descriptions are somehow mixed up. Here is how it works: after I search Google for a sentence to check my website's ranking, I see the title of page1 in the results, a link to page1, and a description taken from page2. Since my website is a forum, if Google mixes up thread links like this, it leads my users to different material from what they were looking for. Is there anything I can do about it?

  • Can I have a Facebook fan gate between pages of my website?

    - by Onaiza
    I am not sure whether this is possible: can I have a fan gate between two pages of a website, so that before a visitor can access a particular URL it is mandatory to like our fan page on Facebook?
    Scenario: I have a travel website, puneritraveller.com, and for each destination featured on the website I have listed hotels, with a direct link to each hotel's website. For example, I have featured a destination, 'Alibaug', with a hotels page at puneritraveller.com/alibaug-hotels.html; on this page I have placed banner ads for a few hotels, for example Yellow House, which on clicking redirects to www.yellowhousealibag.com. What I want to achieve is: when someone clicks on the banner ad for Yellow House, they should be redirected to a page with a like button for facebook.com/puneritraveller, and once they click the like button they should be redirected to www.yellowhousealibag.com. Is this possible? Please help.

  • User Control not loading based on location

    - by mwright
    I have an ASP.NET MVC solution that uses nested master pages to load content. On the first master page I load a header, then have the content placeholder, and then load a footer. This master page is referenced by another master page which adds some additional information based on whether the user is logged in or not. When I load a page that references these master pages, the header loads, but the footer does not. If I move the footer up above the ContentPlaceHolder, it loads into the page. Any ideas why this might be the case? The code for the master page that contains the footer is as follows:
        <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>
                <asp:ContentPlaceHolder ID="TitleContent" runat="server" />
            </title>
        </head>
        <body>
            <div class="header">
                <% Html.RenderPartial("Header"); %>
            </div>
            <div>
                <asp:ContentPlaceHolder ID="MainContent" runat="server">
                </asp:ContentPlaceHolder>
            </div>
            <div class="footer">
                <% Html.RenderPartial("Footer"); %>
            </div>
        </body>
        </html>

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 but I'm using puppet 2.7.14 and am getting the same issue. I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured), as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be complete overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do:
        puppet master --compile=mynode > catalog.json
        puppet apply --catalog catalog.json
    But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired JSON content. And it uses colored output, so I can't just pipe it through egrep -v '^warning:'
    EDIT: I guess it's not too big a deal to use grep; since puppet 2.7 pretty-prints the actual content and the warnings never start with spaces, piping the output through egrep '^( |{|})' works.
    So my questions are basically:
    1) Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project.
    2) Is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
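    For what it's worth, a minimal masterless sketch (the manifest and module paths are assumptions); applying the manifest directly skips the compile-to-JSON step entirely, and the grep workaround from the question is shown for the catalog route:
        # Run on each node; no puppetmaster involved
        puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp

        # If the compiled-catalog route is still preferred:
        puppet master --compile=mynode | egrep '^( |{|})' > catalog.json
        puppet apply --catalog catalog.json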

  • Seizing naming master from child domain server

    - by meera
    When I try to seize the naming master role from my child domain server, I get the following error:
        fsmo maintenance: seize naming master
        Attempting safe transfer of domain naming FSMO before seizure.
        ldap_modify_sW error 0x34(52 (Unavailable).
        Ldap extended error message is 000020AF: SvcErr: DSID-03210380, problem 5002 (UNAVAILABLE), data 8438
        Win32 error returned is 0x20af(The requested FSMO operation failed. The current FSMO holder could not be contacted.)
        )
        Depending on the error code this may indicate a connection, ldap, or role transfer error.
        Transfer of domain naming FSMO failed, proceeding with seizure ...
        Server "win-fb20ixk90mu" knows about 5 roles
        Schema - CN=NTDS Settings,CN=WIN-3918XHC5STU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Naming Master - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        PDC - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        RID - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com
        Infrastructure - CN=NTDS Settings,CN=WIN-FB20IXK90MU,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=HCL,DC=com

  • Set Error-Pages for all Applications in Tomcat

    - by user38511
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, no JSP or anything dynamic. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried to add the following fragment to the global web.xml (in conf), but no matter what I add under location, it does not show:
        <error-page>
            <error-code>404</error-code>
            <location>/404.html</location>
        </error-page>
    What do I need to do to globally define custom error pages? Thanks!

  • Disabling the Squid Error pages

    - by Nicholas Smith
    I've just started looking at using Squid for a project and can't seem to see an easy way of disabling the Squid error pages (e.g. "Name Error: The domain name does not exist"). We use a custom browser which handles that scenario in its own way, so the Squid error pages are overriding our custom logic. Is it possible to turn them off? I've been through the .conf and I've found where the error pages are stored, but no real options to disable them.
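    Not a complete answer, but a sketch of a commonly suggested workaround, assuming a Squid build that supports the error_directory directive: point Squid at a directory of stripped-down templates (the file names must match the stock ERR_* page names, e.g. ERR_DNS_FAIL, ERR_CONNECT_FAIL) so the proxy returns essentially empty bodies instead of its own pages:
        # squid.conf -- serve minimal/empty error templates instead of the built-in pages
        error_directory /etc/squid/errors/custom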

  • Puppet gives SSL error because master is not running?

    - by Daniel Huger
    I started with two clean machines this time. My master is running 12.04:
        Version: 2.7.11-1ubuntu2
        Depends: ruby1.8, puppetmaster-common (= 2.7.11-1ubuntu2)
    My client is 10.04:
        Version: 2.6.3-0ubuntu1~lucid1
        Depends: puppet-common (= 2.6.3-0ubuntu1~lucid1), ruby1.8
    To set up Puppet I followed this tutorial: http://shapeshed.com/setting-up-puppet-on-ubuntu-10-04/ and, to connect master and client, this one: http://shapeshed.com/connecting-clients-to-a-puppet-master/
    The first time I tried to connect the client to the master it failed with an SSL_connect error, so I did rm -rf /etc/puppet/ssl/ to remove all the keys inside the ssl folders. It looked like it worked... BUT:
        client# puppet agent --server puppet --waitforcert 60 --test
        /usr/lib/ruby/1.8/facter/util/resolution.rb:46: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
        /usr/lib/ruby/1.8/puppet/defaults.rb:67: warning: Insecure world writable dir /etc/condor in PATH, mode 040777
        info: Creating a new SSL key for giab10
        warning: peer certificate won't be verified in this SSL session
        info: Caching certificate for ca
        warning: peer certificate won't be verified in this SSL session
        warning: peer certificate won't be verified in this SSL session
        info: Creating a new SSL certificate request for mybox123
        info: Certificate Request fingerprint (md5): XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
        warning: peer certificate won't be verified in this SSL session
        warning: peer certificate won't be verified in this SSL session
        warning: peer certificate won't be verified in this SSL session
        warning: peer certificate won't be verified in this SSL session
        info: Caching certificate for mybox123
        err: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
        warning: Not using cache on failed catalog
    It cached the certificate but then it couldn't retrieve the catalog. Let me stop here, worrying I would mess something up, and check the master's status:
        master# service puppetmaster status
         * master is not running
    Wow... ??? So:
        master# service puppetmaster start
         * Starting puppet master    [OK]
        master# service puppetmaster status
         * master is not running
    I think time is in sync. Well, we are behind a firewall so the port to sync time is disabled, but I checked with date and the clocks seem okay. What about the master not running? Is that the cause? Any help is appreciated. Thanks!
    /var/lib/puppet/log/masterhttp.log:
        [2012-06-30 00:13:25] INFO WEBrick 1.3.1
        [2012-06-30 00:13:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux]
        [2012-06-30 00:13:25] WARN TCPServer Error: Address already in use - bind(2)
        [2012-06-30 00:19:40] INFO WEBrick 1.3.1
        [2012-06-30 00:19:40] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux]
        [2012-06-30 00:19:40] WARN TCPServer Error: Address already in use - bind(2)
        [2012-06-30 00:28:58] INFO WEBrick 1.3.1
        [2012-06-30 00:28:58] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux]
        [2012-06-30 00:28:58] WARN TCPServer Error: Address already in use - bind(2)
        [2012-06-30 15:31:25] INFO WEBrick 1.3.1
        [2012-06-30 15:31:25] INFO ruby 1.8.7 (2011-06-30) [x86_64-linux]
        [2012-06-30 15:31:25] WARN TCPServer Error: Address already in use - bind(2)
    Process list and a manual restart attempt:
        1 S puppet 5186    1 0 80 0 - 29410 poll_s 15:44 ?     00:00:00 /usr/bin/ruby1.8 /usr/bin/puppet master --masterport=8140
        4 S root   5235 5005 0 80 0 -  2344 pipe_w 15:45 pts/0 00:00:00 grep --color=auto puppet
        kill -9 5186
        puppet master
        service puppetmaster status
         * master is not running
    I always have this error, but I always ignored it: http://pastebin.com/exbpArjv What could it mean? Time sync? A package not installed? Then how could we do puppetca in the first place?
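    Given the repeated "Address already in use - bind(2)" lines in masterhttp.log, one thing worth sketching out is checking what already holds the master's port before starting the service again (standard Ubuntu tooling; the PID is whatever the commands report):
        # Show which process is listening on the puppetmaster port (8140)
        sudo netstat -tlnp | grep 8140
        sudo lsof -i :8140

        # If a stray "puppet master" process owns the port, stop it and let the
        # init script manage it instead
        sudo kill <pid-from-above>
        sudo service puppetmaster restart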

  • How to export a brochure into PDF with each layout as a page?

    - by nhj
    I have created a tri-fold brochure in Mac iWork Pages. If I print the brochure I can fold it in three and everything is fine. But if I export it as a PDF I get two A4-size pages, and this confuses the reader about the order of the panels. I would like to export each layout as a separate page. The 'Layout Break' option is disabled and I don't know how to enable it. Any ideas? Thanks.

  • Setup Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically.
    Scenario: You are browsing through your emails and you notice your friend just sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file.
    What have I done? I have saved files manually by selecting "All Files" in the save dialog and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to complete this task.
    Why am I doing this? To save time (only a few seconds, not the main reason), to set up a simple solution that "converts" a file automatically without having to recall the steps to do it manually (for clients who aren't exactly tech savvy), and so that clients with Windows can access the files.
    IMPORTANT NOTE: I am not trying to save the web page, rather an Apple document equivalent to a Microsoft Word file.
    UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and set all .pages files to open with WinRAR (or any other program that extracts files from a compressed archive). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work of converting the file.

  • Index fragmentation and reorganizing database pages

    - by TiQ
    Say you have a database with heavy index fragmentation. Say this database also has a lot of free space due to frequent deletes in its data file. This free space is not contiguous. If I rebuild all indexes to remove fragmentation and then reorganize the database pages so allocated pages and free pages are contiguous, would this cause further fragmentation in my indexes? I guess the question can be posed as: if it matters, which should I do first, reorganize or rebuild?
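    A minimal sketch of how to test this empirically, assuming SQL Server (the table, index and file names are placeholders): measure fragmentation, run the two operations in either order, and measure again.
        -- Check logical fragmentation before and after each step
        SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                            NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

        -- Compact the data file (moves allocated pages toward the front of the
        -- file; this page movement is what is widely reported to fragment indexes)
        DBCC SHRINKFILE (N'MyDatabase_Data', 1024);   -- target size in MB

        -- Rebuild all indexes on the table (removes the logical fragmentation)
        ALTER INDEX ALL ON dbo.MyTable REBUILD;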

  • Nginx error page using Django master page

    - by user835199
    I am using Python/Django to develop a web app, with nginx and gunicorn as the servers. I need to define nginx error pages (for error codes like 500, 501, etc.), but I want to keep the same layout as the other site pages. For site pages I use the include functionality of Django, but in this case, since Django won't preprocess the page, I need to create a pure HTML page. Is there a way to reuse the master page that I created in Django when creating this error page?
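    A sketch of one common pattern (the paths are assumptions): render the Django template to a static file once, for example with render_to_string() in a small management command run at deploy time, and point nginx at the result:
        # nginx server block -- serve the pre-rendered template for upstream errors
        error_page 500 502 503 504 /500.html;

        location = /500.html {
            root /srv/mysite/error_pages;   # directory holding the rendered file
            internal;                       # only reachable via error_page
        }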

  • Man pages in Linux

    - by Ayos
    I don't seem to have all the man pages that I need. For example, my college computers (running Fedora 14) have man pages for ASCII, all the standard C libraries (stdlib.h, stdio.h) and so on. I wish to "install" these pages; after looking around on the Internet I couldn't really find anything that made sense. How can I get, say, the man page for ASCII? (I know it's not really a command or a daemon or anything like that, but typing man ASCII on the college computer gets me a page with the ASCII value table plus a little more information.) I don't want to keep using the Internet to look up man pages every time I need a function, a function prototype, the ASCII table or something like that.
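    A sketch of the usual fix, assuming the missing pages are the standard Linux programmer's manual set (exact package names vary by distribution and release):
        # Fedora / RHEL-style systems
        sudo yum install man-pages

        # Debian / Ubuntu-style systems
        sudo apt-get install manpages manpages-dev

        # Then, for example:
        man 7 ascii      # the ASCII table page
        man 3 printf     # C library function pages live in section 3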

  • Problem resolving many web pages

    - by Aditya
    I am presently running Ubuntu 12.04 and using Chrome/Firefox along with OpenDNS (I have tried Google Public DNS as well as my ISP's DNS). Suddenly, a lot of websites that I visit frequently don't load anymore. Some of them are imgur, Yahoo, fed-sudoku, Microsoft and the Firefox add-ons page. I am sure there are many more that won't load. I have Windows 7 in dual-boot and there are no problems whatsoever opening these pages on Windows.
    A little history: two weeks back I installed Ubuntu 12.10 and faced this issue right away. I thought something must have gone wrong with the installation, so I removed Ubuntu 12.10 and instead installed Lubuntu 12.10, but the same issue persisted on Lubuntu as well. So I tried opening these web pages in live environments (of Ubuntu 12.10, Lubuntu 12.10 and Ubuntu 12.04.1) from USB. The issue was there for Ubuntu 12.10 and Lubuntu 12.10; however, I was able to access these web pages from Ubuntu 12.04.1, so I installed 12.04.1 on my hard disk. Everything on 12.04 was fine till yesterday, but suddenly these sites don't load anymore. All this while, Windows 7 in dual-boot works flawlessly. Please help me resolve this issue.

  • WordPress page title repeated in SOME pages

    - by cmykrgbb
    I have created a WordPress site and titles were working just fine. Then, some time and a few plugins later, I noticed that on SOME pages the title is repeated twice.
    Example of a wrong page title: Contact - NAME | NAME
    Example of a normal title: Our Services | NAME
    Now, if I go to General Settings and change the title, it changes both, so no improvement. SEO by Yoast has an option to reset page titles, but that just removes all titles, leaving the current URL as the page title, so that's no good either.
    Here is the code I originally had:
        <title><?php wp_title(''); ?><?php if(wp_title('', false)) { echo ' | '; } ?><?php bloginfo('name'); ?></title>
    Here is the code I am using now:
        <title><?php wp_title('|'); ?></title>
    To sum up, I think somewhere in the database there's a wp_title repeated: once using '-' as the separator, and another one (the current one) using '|'. Any help will be most appreciated, thanks!
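    For reference, a minimal sketch of the common pattern of building the whole string in a single wp_title filter in functions.php, with the header template calling only wp_title('') (the function name is a placeholder; note that SEO plugins such as Yoast also hook wp_title, so two separate pieces of code each appending the site name is one plausible cause of the duplication):
        <?php
        // Hypothetical functions.php filter: build the whole title in one place,
        // assuming the header template calls wp_title('') with an empty separator.
        function mytheme_wp_title( $title, $sep ) {
            if ( is_feed() ) {
                return $title;            // leave feed titles alone
            }
            if ( $title ) {
                $title .= ' | ';          // separator between page title and site name
            }
            $title .= get_bloginfo( 'name', 'display' );   // append the site name once
            return $title;
        }
        add_filter( 'wp_title', 'mytheme_wp_title', 10, 2 );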

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms, just in a totally different spot. Both of the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

  • Advertising on personalized pages behind a login

    - by johneth
    I am currently building a web app which requires a user to log in. After they log in, they can see the content they've added to the web app, and things that the web app has done with the content they added. The URL structure won't differentiate different users (e.g. all user's 'homepage' would be example.com/home, not something like example.com/username/home). This is much the same way that Facebook works (all FB user's messages are at facebook.com/messages, for example). This presents a problem with advertising. I know that you can use AdSense behind a login, but as far as I'm aware, that's for things like forums, where everyone sees the same things (which wouldn't be the case in this site). I also know that I could put AdSense on the pages without allowing it to log in, which would produce inferior ads. I'm fairly certain it would be against the Terms of Service to give AdSense a login to a 'dummy' account with typical content, as it would not be seeing the same thing as every other user (which is impossible, as they all see different things). So, my question is: Is there an ad network, or other method, that can serve ads behind a login, maybe based on keywords rather than content?
