Search Results

Search found 11671 results on 467 pages for 'man pages'.

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. The information on these pages is time sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and do a 301 redirect to the new page. Now, can we still keep the old page by giving it a new URL? Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html Step 2: the old content is recreated at a new address such as example.com/new-2013-related-information.html What we are trying to do is send the user to the fresh page while still keeping a copy of record, in case someone wants to go and dig up the old page. Would appreciate help!! Cheers
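
    A minimal .htaccess sketch of the two steps, assuming Apache with mod_alias; the archive address is the hypothetical one from the question:

        # Step 1: permanently redirect the old URL to the fresh page
        Redirect 301 /2013-related-information.html http://example.com/2014-related-information.html
        # Step 2: republish the 2013 content at a new, non-redirected address
        #         (e.g. /new-2013-related-information.html), which the redirect above never touches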

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page. This is with WordPress. I'm using the guide here: http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404. My rule so far looks like this:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]

    This works. It seems to be a combination of the first and second suggestions that worked, since the -U flag did nothing. My question is, out of curiosity, why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
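
    One hedged explanation, not confirmed in the thread: %{REQUEST_FILENAME} is the request mapped to a filesystem path, so !-f stays false for stylesheets that really exist on disk and they are served normally; %{REQUEST_URI} is only the URL path, which practically never exists as a literal file, so with that variable every request - including the CSS files - matches the condition and is redirected to the error page. A minimal sketch that keeps REQUEST_FILENAME and also lets real directories through:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+) http://example.com/?page_id=254 [R,L]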

  • New AIA Product Information Center (PIC) pages

    - by Lionel Dubreuil
    Did You Know? The AIA Support team has spent the last few months reformatting our Product Master Notes into a brand new Product Information Center (PIC) format! The PIC pages contain links to documentation, new and most popular documents, alerts, FAQs, recommended patches and the certification matrix. Our main PIC page lists all the PIP and Foundation Pack PIC notes for easier navigation. Take a look and tell us what you think! You can email [email protected] with your feedback!

  • Google showing meta descriptions from other pages in the SERPs

    - by ojek
    Recently I added some content to my website and submitted a sitemap file to Google. Now that Google has indexed those pages, I discovered that some of the words and sentences listed in Google that lead to my website have their meta descriptions somehow mixed up. Here is how it works: after I put a sentence into Google to check my website's ranking, I can see the title of page1 in the results, a link to page1, and a description from page2. Since my website is a forum, if Google mixes up the links of threads, it leads my users to a different kind of material than they were looking for. Is there anything I can do about it?

  • Can I have a Facebook fangate between pages of my website?

    - by Onaiza
    I am not sure whether this is possible: can I have a fan gate between two pages of a website, so that before a visitor can access a particular URL it would be mandatory to like our fan page on Facebook? Scenario: I have a travel website, puneritraveller.com, and for each destination featured on the website I have listed hotels, with a direct link to each hotel's website. For example, I have featured the destination 'Alibaug' with a hotels page - puneritraveller.com/alibaug-hotels.html - where I have placed banner ads for a few hotels, for example Yellow House, which on clicking redirects to www.yellowhousealibag.com. What I want to achieve is: when someone clicks on the banner ad for Yellow House they should be redirected to a page with a like button for facebook.com/puneritraveller, and once they click on the like button they should be redirected to www.yellowhousealibag.com. Is this possible? Please help

  • JavaOne in Brazil

    - by janice.heiss(at)oracle.com
    JavaOne in Brazil, currently taking place in São Paulo, is one event I'd love to attend. I once heard "father of Java" James Gosling talk about Java developers throughout the world. He observed that there were good developers everywhere. It was not the case, he said, that the really good developers are in one place and the not-so-good developers are in another. He encountered excellent developers everywhere. Then he paused and said that the craziest developers were definitely the Brazilians. As anyone who knows James would realize, this was meant as high praise. He said the Brazilians would work through the night on projects and were very enthusiastic and spontaneous - qualities that Brazilian culture is known for. Brazilian developers are responsible for creating one of the most impressive uses of Java ever - the applications that run the Brazilian health services. Starting from scratch, they created a system that enables an expert doctor in Rio to look at an X-ray of a patient near the Amazon and offer advice. One of the main architects of this was Java Champion Fabiane Nardon, the distinguished Brazilian Java architect and open-source evangelist. As she writes in her blog: "In 2003, I was invited to assemble a team and architect a Public Healthcare Information System for the city of São Paulo, the largest in Latin America, with 14 million inhabitants. The resulting software had 2.5 million lines of code and it was created, from specification to production, in only 10 months. At the time, the software was considered the largest J2EE application in the world and was featured in several articles, such as this one. As a result, we won the Duke's Choice Award in 2005 during JavaOne, the largest development conference in the world. At the time, Sun Microsystems made a short documentary about our work." "In 2007, lightning struck twice and I was again invited to assemble a new team and architect an even larger information system for healthcare. And thus I became CTO and one of the founders of Zilics Healthcare Information Systems." "In 2010, I started to research and work on Cloud Computing technology and became leader of the LSI-TEC Cloud Computing group. LSI-TEC is a research laboratory in the University of São Paulo, one of the best in Brazil. Thus, I became one of the ghost writers behind the popular Cloud Computing Twitter account @the_cloud." You can see and hear Nardon in a four-minute documentary on Java and the Brazilian health care system produced by Sun Microsystems. And you can listen to a September 2010 podcast with Nardon and her fellow Brazilian Java Champion Bruno Souza (known in Brazil as "Java Man") here, at 11:10 minutes into the podcast. Next year, I hope to be reporting from Brazil at JavaOne!

  • Disabling the Squid Error pages

    - by Nicholas Smith
    I've just started looking at using Squid for a project and can't seem to see an easy way of disabling the Squid error pages (e.g. "Name Error: The domain name does not exist"). We use a custom browser which handles that scenario in our own way, so the Squid error pages are overriding our custom logic. Is it possible to turn them off? I've been through the .conf and I've found where the error pages are stored, but no real option to disable them.
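
    A hedged sketch of one workaround (an assumption, not something from the question): Squid does not appear to offer a plain off switch for its error pages, but squid.conf can point error_directory at a set of stripped-down templates, so the custom browser receives near-empty responses instead of Squid's defaults. The path below is hypothetical:

        # squid.conf - serve minimal custom templates instead of the stock error pages
        error_directory /etc/squid/errors-minimal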

  • How to export a brochure into PDF with each layout as a page?

    - by nhj
    I have created a 3-fold brochure in Mac iWork Pages. If I print the brochure I can fold it in three and everything is fine. But if I export it to PDF I get two A4-sized pages, which muddles the page order for the reader; I would like to export each layout as a separate page. The 'Layout Break' option is disabled and I don't know how to enable it. Any ideas? Thanks.

  • Set Error-Pages for all Applications in Tomcat

    - by phisch
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, no JSP or anything dynamic. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried to add the following fragment to the global web.xml (in conf), but no matter what I put under location, it does not show:

        <error-page>
            <error-code>404</error-code>
            <location>/404.html</location>
        </error-page>

    What do I need to do to globally define custom error pages? Thanks!
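
    A hedged note, not taken from the question: the global conf/web.xml is merged into every web application, but the <location> is still resolved inside each individual webapp, so the fragment only takes effect where a matching file actually exists. A minimal sketch of the assumed layout:

        webapps/
            someapp/
                404.html          <-- must exist in the root of every deployed application
                WEB-INF/web.xml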

  • Set up Firefox to save .pages as .zip automatically

    - by Mike Dtrick
    What do I want to do? I would like Firefox to save files with the .pages extension as .zip files automatically. Scenario: You are browsing through your emails and you notice your friend just sent you an email with a file attached (a .pages file in this example). Unfortunately, you have a laptop that runs Windows. Your friend continues to send tons of emails with .pages files attached and you are tired of manually saving the files as .zip files. Ultimately, you would like Firefox to be set up so that the download/file manager recognizes the .pages extension and automatically converts it to a .zip file. What have I done? I have saved files manually by selecting save as "All Files" and setting the extension to .zip. I've gone through Firefox and its documentation and have not found anything on how to complete this task. Why am I doing this? To save time (only a few seconds, not the main reason). I would like to set up a simple solution that "converts" a file automatically without having to recall steps on how to achieve the task manually (for clients who aren't exactly tech savvy). So that clients with Windows can access the files. IMPORTANT NOTE: I am not trying to save the web page, rather an Apple document (the equivalent of a Microsoft Word file). UPDATE: The really easy method would be to save one file, right-click it, choose Properties, and set all .pages files to open with WinRAR (or any other program that extracts files from a compressed folder). For the sake of learning, I am going to "neglect" this method and continue to do some research on Firefox add-ons. I would still like Firefox or the download manager to do the bulk of the work of converting the file.

  • Index fragmentation and reorganizing database pages

    - by TiQ
    Say you have a database with heavy index fragmentation. Say this database also has a lot of free space due to frequent deletes in its data file. This free space is not contiguous. If I rebuild all indexes to remove fragmentation and then reorganize the database pages so allocated pages and free pages are contiguous, would this cause further fragmentation in my indexes? I guess the question can be posed as: if it matters, which should I do first, reorganize or rebuild?

  • Problem resolving many web pages

    - by Aditya
    I am presently running Ubuntu 12.04 and using Chrome/Firefox along with OpenDNS (I have tried Google Public DNS as well as my ISP's DNS). Suddenly, a lot of websites that I visit frequently don't load anymore. Some of them are imgur, Yahoo, fed-sudoku, Microsoft and the Firefox add-ons page. I am sure there are many more that won't load. I have Windows 7 in dual-boot and there are no problems whatsoever in opening these pages on Windows. A little history: 2 weeks back I installed Ubuntu 12.10 and faced this issue right away. I thought something must have gone wrong with the installation, so I removed Ubuntu 12.10 and instead installed Lubuntu 12.10. But the same issue persisted on Lubuntu as well. So I tried opening these webpages in live environments (Ubuntu 12.10, Lubuntu 12.10 and Ubuntu 12.04.1) from USB. The issue was there for Ubuntu 12.10 and Lubuntu 12.10; however, I was able to access these webpages from Ubuntu 12.04.1. So I installed 12.04.1 on my hard disk. Everything on 12.04 was fine till yesterday, but suddenly these sites don't load anymore. All this while, Windows 7 in dual-boot works flawlessly. Please help me resolve this issue.

  • WordPress page title repeated in SOME pages

    - by cmykrgbb
    I have created a WordPress site and the titles were working just fine. Then, some time and several plugins later, I noticed that on SOME pages I get the title repeated twice. Example of a wrong page title: Contact - NAME | NAME Example of a normal title: Our Services | NAME Now, if I go to General Settings and change the title, it changes both - no improvement. SEO by Yoast has an option to reset page titles, but that just removes all titles, leaving the current URL as the page title, so no good either. Here is the code I originally had: <title><?php wp_title(''); ?><?php if(wp_title('', false)) { echo ' | '; } ?><?php bloginfo('name'); ?></title> Here is the code I am using now: <title><?php wp_title('|'); ?></title> To sum up, I think somewhere in the database there's a wp_title repeated: once using '-' as the separator, and another (the current one) using '|'. Any help will be most appreciated, thanks!

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all these links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

  • Advertising on personalized pages behind a login

    - by johneth
    I am currently building a web app which requires a user to log in. After they log in, they can see the content they've added to the web app, and things that the web app has done with the content they added. The URL structure won't differentiate different users (e.g. all users' 'homepage' would be example.com/home, not something like example.com/username/home). This is much the same way that Facebook works (all FB users' messages are at facebook.com/messages, for example). This presents a problem with advertising. I know that you can use AdSense behind a login, but as far as I'm aware, that's for things like forums, where everyone sees the same things (which wouldn't be the case on this site). I also know that I could put AdSense on the pages without allowing it to log in, which would produce inferior ads. I'm fairly certain it would be against the Terms of Service to give AdSense a login to a 'dummy' account with typical content, as it would not be seeing the same thing as every other user (which is impossible, as they all see different things). So, my question is: is there an ad network, or other method, that can serve ads behind a login, maybe based on keywords rather than content?

  • Create previous/next buttons for iframe pages

    - by Resu
    This topic may have lots of code out there, BUT I seem to be looking for a variation that isn't based on history - is that possible? So I have this code:

        <script type="text/javascript">
        var pages=new Array();
        pages[0]="listItem1.html";
        pages[1]="listItem2.html";
        pages[2]="listItem3.html";
        pages[3]="listItem4.html";
        pages[4]="listItem5.html";
        var i=0;
        var end=pages.length;
        end--;
        function changeSrc(operation) {
            if (operation=="next") {
                if (i==end) { document.getElementById('the_iframe').src=pages[end]; i=0; }
                else { document.getElementById('the_iframe').src=pages[i]; i++; }
            }
            if (operation=="back") {
                if (i==0) { document.getElementById('the_iframe').src=pages[0]; i=end; }
                else { document.getElementById('the_iframe').src=pages[i]; i--; }
            }
        }
        </script>
        </head>
        <body>
        <ul id="menu" role="group">
            <li><a href="listItem1.html" target="ifrm" role="treeitem">Welcome</a>
                <ul>
                    <li><a href="listItem2.html" target="ifrm" role="treeitem">Ease of Access Center</a></li>
                </ul>
            </li>
            <li><a href="listItem3.html" target="ifrm">Getting Started</a>
                <ul>
                    <li><a href="listItem4.html" target="ifrm">Considerations</a></li>
                    <li><a href="listItem5.html" target="ifrm">Changing Perspective</a></li>
                </ul>
            </li>
        </ul>
        <iframe id="the_iframe" scrolling="no" src="listItem1.htm" name="ifrm" style="width:540px;"></iframe>
        <input type="button" onClick="changeSrc('back');" value="Back" />
        <input type="button" onClick="changeSrc('next');" value="Next" />

    If I click on the next or back button, it does move somewhere, but... let's say my iframe is showing listItem2, then I click on listItem4 in the menu (there is a tree menu involved), then I want to go to listItem3 and I hit the back button... instead of going to listItem3, it goes to listItem2 (or someplace that is not back a page from 4 to 3). It appears that the buttons are navigating based on history? But I just want a simple forward or backward movement... I don't want my buttons to have this browser-type functionality. If I'm on listItem4 and hit the next button, I want it to go to listItem5. Many thanks for any help!
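
    A minimal sketch of a non-history variation (the show() helper and the onclick wiring are assumptions, not code from the question): keep one index, and let both the buttons and the menu links update it, so Back and Next always move relative to whatever the iframe is currently showing.

        var pages = ["listItem1.html", "listItem2.html", "listItem3.html", "listItem4.html", "listItem5.html"];
        var current = 0;

        function show(index) {
            current = (index + pages.length) % pages.length;   // wraps at both ends
            document.getElementById('the_iframe').src = pages[current];
        }

        function changeSrc(operation) {
            show(operation === "next" ? current + 1 : current - 1);
        }

        // Menu links keep the index in sync with what is displayed, e.g.:
        // <a href="listItem4.html" target="ifrm" onclick="show(3); return true;">Considerations</a>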

  • ASP.NET Asynchronous Pages and when to use them

    - by rajbk
    There have been several articles posted about using asynchronous pages in ASP.NET, but none of them go into detail as to when you should use them. I finally found a great post by Thomas Marquardt that explains the process in depth. He also addresses a key misconception: So, in your ASP.NET application, when should you perform work asynchronously instead of synchronously? Well, only 1 thread per CPU can execute at a time. Did you catch that? A lot of people seem to miss this point... only one thread executes at a time on a CPU. When you have more than this, you pay an expensive penalty--a context switch. However, if a thread is blocked waiting on work... then it makes sense to switch to another thread, one that can execute now. It also makes sense to switch threads if you want work to be done in parallel as opposed to in series, but up until a certain point it actually makes much more sense to execute work in series, again, because of the expensive context switch. Pop quiz: If you have a thread that is doing a lot of computational work and using the CPU heavily, and this takes a while, should you switch to another thread? No! The current thread is efficiently using the CPU, so switching will only incur the cost of a context switch. Ok, well, what if you have a thread that makes an HTTP or SOAP request to another server and takes a long time, should you switch threads? Yes! You can perform the HTTP or SOAP request asynchronously, so that once the "send" has occurred, you can unwind the current thread and not use any threads until there is an I/O completion for the "receive". Between the "send" and the "receive", the remote server is busy, so locally you don't need to be blocking on a thread, but instead make use of the asynchronous APIs provided in the .NET Framework so that you can unwind and be notified upon completion. Again, it only makes sense to switch threads if the benefit from doing so outweighs the cost of the switch. Read more about it in these posts: Performing Asynchronous Work, or Tasks, in ASP.NET Applications - http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx and ASP.NET Thread Usage on IIS 7.0 and 6.0 - http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx PS: I generally do not write posts that simply link to other posts, but I think it is warranted in this case.
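
    A minimal sketch of the pattern the quoted advice describes, assuming the classic ASP.NET 2.0-era asynchronous page APIs (the page name, URL and handler names are illustrative, not from the post). The request thread is returned to the pool after BeginGetResponse and only picked up again on the I/O completion - the "unwind and be notified upon completion" case above.

        // Markup side (hypothetical file): <%@ Page Language="C#" Async="true" Inherits="SlowServicePage" %>
        using System;
        using System.Net;
        using System.Web.UI;

        public partial class SlowServicePage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Register the asynchronous work; ASP.NET invokes Begin/End around the I/O completion.
                RegisterAsyncTask(new PageAsyncTask(BeginCall, EndCall, null, null));
            }

            private IAsyncResult BeginCall(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/slow-service");
                return request.BeginGetResponse(cb, request);   // the request thread unwinds here
            }

            private void EndCall(IAsyncResult result)
            {
                var request = (HttpWebRequest)result.AsyncState;
                using (var response = request.EndGetResponse(result))
                {
                    // consume the response (e.g. render it into a control)
                }
            }
        }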

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions: Is the rate of indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)? Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking. Noteworthy qualities we have in place: page download speed is pretty good (250-500 ms); no errors (no 404 or 500 errors when getting spidered); we use Google Webmaster Tools and log in daily; friendly URLs are in place; I'm afraid to submit sitemaps - some SEO community postings suggest a new site with millions of pages and no PR is suspicious, and there is a Google video of Matt Cutts speaking of a staged on-boarding of large sites in order to avoid increased scrutiny (at approx 2:30 in the video); clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page; anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages; we had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max), but I recently turned it back to "let Google decide", which is what is advised.

  • Applying a Dojo Toolkit (Dijit) theme to ASP.NET pages.

    - by mcoolbeth
    In the code below, I am trying to apply a Dijit theme to the controls in my .aspx page. However, the controls persist in their normal, unthemed appearance. Anybody know why? Master Page:

        <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Main.master.cs" Inherits="WebJournalEntryClient.Main" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>My Web Application</title>
            <link rel="stylesheet" href="dojoroot/dijit/themes/tundra/tundra.css" />
            <script type="text/javascript" src="dojoroot/dojo/dojo.js"/>
            <script type="text/javascript">
                dojo.require("dijit.form.Button");
                dojo.require("dijit.form.TextBox");
                dojo.require("dijit.form.ComboBox");
            </script>
        </head>
        <body class = "tundra">
            <form id="form1" runat="server">
            <div>
                <div>
                    This is potentially space for a header bar.
                </div>
                <table>
                    <tr>
                        <td>
                            Maybe <br />
                            a <br />
                            Side <br />
                            bar.
                        </td>
                        <td>
                            <asp:ContentPlaceHolder ID="CenterPlaceHolder" runat="server"/>
                        </td>
                    </tr>
                </table>
                <div>
                    This is potentially space for a footer bar.
                </div>
            </div>
            </form>
        </body>
        </html>

    Content Page:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Main.Master" AutoEventWireup="true" CodeBehind="LogIn.aspx.cs" Inherits="WebJournalEntryClient.LogIn" %>
        <asp:Content ID="Content" ContentPlaceHolderID="CenterPlaceHolder" runat="server">
            <div>
                User ID: <asp:TextBox ID = "UserName" dojoType="dijit.form.TextBox" runat="server" /><br />
                Password: <asp:TextBox ID = "PassWord" dojoType="dijit.form.TextBox" runat="server" /><br />
                <asp:Button ID="LogInButton" Text="Log In" dojoType="dijit.form.Button" runat="server" />
            </div>
        </asp:Content>
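
    One hedged guess at the cause (an assumption, not something stated in the question): declarative dojoType attributes only become themed Dijit widgets once the Dojo parser runs over the page, and the markup above never triggers it. A minimal sketch of the change, assuming Dojo 1.x's djConfig and dojo.parser:

        <script type="text/javascript" src="dojoroot/dojo/dojo.js"
                djConfig="parseOnLoad: true"></script>
        <script type="text/javascript">
            dojo.require("dojo.parser");   // converts dojoType markup into widgets on load
            dojo.require("dijit.form.Button");
            dojo.require("dijit.form.TextBox");
            dojo.require("dijit.form.ComboBox");
        </script>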

  • SEO Help with Pages Indexed by Google

    - by Joe Majewski
    I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all. Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3 I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google, although it shouldn't be. I understand that it takes some time before Google's results accurately reflect my site's content, but this has been improperly indexed for nearly six months now. Here are the precautions that I have taken: My robots.txt file has a line like this: Disallow: /wow/profile.php* When running the URL through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow command. It did state, however, that a page that doesn't get crawled may still get displayed in the search results if it's being linked to. Thus, I took one more precaution. In the source code I included the following meta data: <meta name="robots" content="noindex,follow" /> I am assuming that follow means to use the page when calculating PageRank, etc., and the noindex tells Google to not display the page in the search results. This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it. This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and I am doing the same procedures to attempt to remove them. Can someone explain how to get pages removed from Google's search results, and possibly some criteria that should help determine what types of pages I don't want indexed? In terms of my WordPress blog, the only pages that I truly want indexed are my articles. Everything else I have tried to block, with little luck from Google. Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google? Thanks!
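
    A hedged aside, not part of the original question: because robots.txt blocks crawling, Googlebot may never fetch profile.php at all and therefore never sees the noindex meta tag; one commonly suggested variant is to drop the Disallow line and send the directive as an HTTP header instead. A minimal .htaccess sketch, assuming mod_headers is enabled:

        <Files "profile.php">
            Header set X-Robots-Tag "noindex, follow"
        </Files>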

  • How can I separate multiple logical pages in a text file I create in Perl?

    - by Micah
    So far, I've been successful in generating output to individual files by opening a file for output as part of the outer loop and closing it after all output is written. I had used a counting variable ($x) and appended .txt onto it to create a filename, and had written it to the same directory as my Perl script. I want to step the code up a bit: prompt for a file name from the user, open that file once and only once, and write my output one "printed letter" per page. Is this possible in plain text? From what I understand, chr(12) is the ASCII form feed character and will get me close to what I want, but is there a better way? Thanks in advance, guys. :)

        sub PersonalizeLetters{
            print "\n\n Beginning finalization of letters...";
            print "\n\n I need a filename to save these letters to.";
            print "\n Filename > ";
            $OutFileName = <stdin>;
            chomp ($OutFileName);
            open(OutFile, ">$OutFileName");
            for ($x=0; $x<$NumRecords; $x++){
                $xIndex = (6 * $x);
                $clTitle = @ClientAoA[$xIndex];
                $clName = @ClientAoA[$xIndex+1];
                #I use this 6x multiplier because my records have 6 elements.
                #For this routine I'm only interested in name and title.
                #Reset OutLetter array.
                #MiddleLetter has other merged fields that aren't specific to who's receiving the letter.
                @OutLetter = @MiddleLetter;
                for ($y=0; $y<=$ifLength; $y++){
                    #Step through line by line and insert the name.
                    $WorkLine = @OutLetter[$y];
                    $WorkLine =~ s/\[ClientTitle\]/$clTitle/;
                    $WorkLine =~ s/\[ClientName\]/$clName/;
                    @OutLetter[$y] = $WorkLine;
                }
                print OutFile "@OutLetter";
                #Will chr(12) work here, or is there something better?
                print OutFile chr(12);
                $StatusX = $x+1;
                print "Writing output $StatusX of $NumRecords... \n\n";
            }
            close(OutFile);
        }

  • Using real fonts in HTML 5 & CSS 3 pages

    - by nikolaosk
    This is going to be the fifth post in a series of posts regarding HTML 5. You can find the other posts here, here, here and here. In this post I will provide a hands-on example of how to use real fonts in HTML 5 pages with the use of CSS 3. Font issues have caused all sorts of problems for web designers on every kind of website. The real problem with fonts for web developers until now was that they were forced to use only a handful of fonts. CSS 3 allows web designers to use more than just the web-safe fonts that are in wide use in most users' operating systems. Some designers (when they wanted to make their site stand out) resorted to various techniques like using images instead of fonts. That solution is not very accessibility-friendly and definitely less SEO-friendly. CSS 3 (through its Fonts module) allows web developers to embed fonts directly in a web page. First we need to define the font and then attach the font to elements. Obviously we have various formats for fonts; some are supported by all modern browsers and some are not. The most common formats are Embedded OpenType (EOT), TrueType (TTF) and OpenType (OTF). I will use the @font-face declaration to define the font used in this page. Before you download fonts (in any format) make sure you have understood all the licensing issues. Please note that all these real fonts will be downloaded to the client's computer.
    A great resource on the web (maybe the best) is http://www.typekit.com/. They have an abundance of web fonts for use; please note that they sell those fonts. Another free (best things in life are free, aren't they?) resource is the http://www.google.com/webfonts website. I have visited the website and downloaded the Aladin web font. When you download any font you like, make sure you read the license first. The Aladin web font is released under the Open Font License (OFL).
    Before I go on with the actual demo I will use http://www.caniuse.com to check the support for web fonts in the latest versions of modern browsers. Please have a look at the picture below. We see that all the latest versions of modern browsers support this feature. To be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML 5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.
    In this hands-on example I will be using Expression Web 4.0. This application is not a free application; you can use any HTML editor you like. You can use Visual Studio 2012 Express edition, which you can download here. I create a simple HTML 5 page. The markup follows and it is very easy to use and understand:

        <!DOCTYPE html>
        <html lang="en">
          <head>
            <title>HTML 5, CSS3 and JQuery</title>
            <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
            <link rel="stylesheet" type="text/css" href="style.css">
          </head>
          <body>
            <div id="header">
              <h1>Learn cutting edge technologies</h1>
              <p>HTML 5, JQuery, CSS3</p>
            </div>
            <div id="main">
              <h2>HTML 5</h2>
              <p>
                HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
              </p>
            </div>
          </body>
        </html>

    Then I create the style.css file:

        @font-face {
            font-family: Aladin;
            src: url('Aladin-Regular.ttf');
        }
        h1 {
            font-family: Aladin, Georgia, serif;
        }

    As you can see, we want to style the h1 tag in our HTML 5 markup. I just use the @font-face rule, specifying the font-family and the source of the web font. Then I just use that name in the font-family property to style the h1 tag. Have a look below to see my page in IE10. Make sure you open this page in all the browsers installed on your machine, and make sure you have downloaded their latest versions. Now we can make our site stand out with web fonts and give it a really unique look and feel. Hope it helps!!!

  • What does the -R flag do for chflags?

    - by ralphthemagician
    I'm not clear on exactly what the -R flag does for chflags. I was wondering if someone might be able to help me. The man page says this: Recurse: Change the file flags of file hierarchies rooted in the files instead of just the files themselves. I don't understand what that means. Can someone tell me what the difference would be between chflags -R hidden and just chflags hidden? There's an online man page here for reference: http://ss64.com/osx/chflags.html
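
    A short sketch of the distinction, assuming macOS's chflags and a hypothetical ~/Projects folder:

        chflags hidden ~/Projects        # flags only the Projects folder itself
        chflags -R hidden ~/Projects     # flags Projects plus every file and subfolder beneath it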

  • Larry Ellison also appears in the movie Iron Man 2

    - by Fekete Zoltán
    Oracle's CEO, Larry Ellison, also appears in the movie Iron Man 2! :) So it's not only Scarlett Johansson's charms we get to admire - Oracle's number one, Larry Ellison, also shows up on screen. Oracle's materials related to the Iron Man 2 film can be found here. The film's trailer can also be viewed there, along with large-format wallpapers. Be a superhero yourself! Become a superhero (indeed, a Superhero :) ) - with Oracle's innovative solutions.
