Search Results

Search found 1919 results on 77 pages for 'semantic markup'.

Page 6/77 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Semantic #region usage

    - by Luca
    What's your opinion on using #region folding based on application semantics, instead of folding by syntax? For example: #region Application Loop, #region User Management, #region This Kind of Stuff, instead of #region Private Routines, #region Public Properties, #region ThisRoutine (yes, I've seen that too!). Following this logic, I've started folding even routine bodies. I'm starting to love the #region directive (and even #pragma region when using C++!).
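    The same semantics-over-syntax grouping can be sketched outside C#; several editors (VS Code, for instance) fold on "# region" comments in Python source. A minimal illustration, with class and method names invented purely for the example:

        class GameServer:
            # region Application Loop
            def run(self):
                """Main loop: poll input, advance the world, repeat."""
                ...

            def tick(self):
                ...
            # endregion

            # region User Management
            def login(self, username, password):
                ...

            def logout(self, username):
                ...
            # endregion

    The folds then read as application-level concerns ("Application Loop", "User Management") rather than as "fields / properties / methods".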

    Read the article

  • Help with Boost Grammar

    - by Decmanc04
    I have been using the following win32 console code to try to parse a B Machine Grammar embedded within C++ using Boost Spirit grammar template. I am a relatively new Boost user. The code compiles, but when I run the .exe file produced by VC++2008, the program partially parses the input file. I believe the problem is with my grammar definition or the functions attached as semantic atctions. The code is given below: // BIFAnalyser.cpp : Defines the entry point for the console application. // // /*============================================================================= Copyright (c) Temitope Jos Onunkun 2010 http://www.dcs.kcl.ac.uk/pg/onun/ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) =============================================================================*/ //////////////////////////////////////////////////////////////////////////// // // // B Machine parser using the Boost "Grammar" and "Semantic Actions". // // // //////////////////////////////////////////////////////////////////////////// #include <boost/spirit/core.hpp> #include <boost/tokenizer.hpp> #include <iostream> #include <string> #include <fstream> #include <vector> #include <utility> /////////////////////////////////////////////////////////////////////////////////////////// using namespace std; using namespace boost::spirit; /////////////////////////////////////////////////////////////////////////////////////////// // // Semantic actions // //////////////////////////////////////////////////////////////////////////// vector<string> strVect; namespace { //semantic action function on individual lexeme void do_noint(char const* str, char const* end) { string s(str, end); if(atoi(str)) { ; } else { strVect.push_back(s); cout << "PUSH(" << s << ')' << endl; } } //semantic action function on addition of lexemes void do_add(char const*, char const*) { cout << "ADD" << endl; for(vector<string>::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) cout << *vi << " "; } //semantic action function on subtraction of lexemes void do_subt(char const*, char const*) { cout << "SUBTRACT" << endl; for(vector<string>::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) cout << *vi << " "; } //semantic action function on multiplication of lexemes void do_mult(char const*, char const*) { cout << "\nMULTIPLY" << endl; for(vector<string>::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) cout << *vi << " "; cout << "\n"; } //semantic action function on division of lexemes void do_div(char const*, char const*) { cout << "\nDIVIDE" << endl; for(vector<string>::iterator vi = strVect.begin(); vi < strVect.end(); ++vi) cout << *vi << " "; } //semantic action function on simple substitution void do_sSubst(char const* str, char const* end) { string s(str, end); //use boost tokenizer to break down tokens typedef boost::tokenizer<boost::char_separator<char> > Tokenizer; boost::char_separator<char> sep("-+/*:=()"); // default char separator Tokenizer tok(s, sep); Tokenizer::iterator tok_iter = tok.begin(); pair<string, string > dependency; //create a pair object for dependencies //save first variable token in simple substitution dependency.first = *tok.begin(); //create a vector object to store all tokens vector<string> dx; // for( ; tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { dx.push_back(*tok_iter ); } vector<string> d_hat; //stores set of dependency pairs string dep; //pairs 
variables as string object for(int unsigned i=1; i < dx.size()-1; i++) { dependency.second = dx.at(i); dep = dependency.first + "|->" + dependency.second + " "; d_hat.push_back(dep); } cout << "PUSH(" << s << ')' << endl; for(int unsigned i=0; i < d_hat.size(); i++) cout <<"\n...\n" << d_hat.at(i) << " "; cout << "\nSIMPLE SUBSTITUTION\n"; } //semantic action function on multiple substitution void do_mSubst(char const* str, char const* end) { string s(str, end); //use boost tokenizer to break down tokens typedef boost::tokenizer<boost::char_separator<char> > Tok; boost::char_separator<char> sep("-+/*:=()"); // default char separator Tok tok(s, sep); Tok::iterator tok_iter = tok.begin(); // string start = *tok.begin(); vector<string> mx; for( ; tok_iter != tok.end(); ++tok_iter) //save all tokens in vector { mx.push_back(*tok_iter ); } mx.push_back("END\n"); //add a marker "end" for(unsigned int i=0; i<mx.size(); i++) { // if(mx.at(i) == "END" || mx.at(i) == "||" ) // break; // else if( mx.at(i) == "||") // do_sSubst(str, end); // else // { // do_sSubst(str, end); // } cout << "\nTokens ... " << mx.at(i) << " "; } cout << "PUSH(" << s << ')' << endl; cout << "MULTIPLE SUBSTITUTION\n"; } } //////////////////////////////////////////////////////////////////////////// // // Simple Substitution Grammar // //////////////////////////////////////////////////////////////////////////// // Simple substitution grammar parser with integer values removed struct Substitution : public grammar<Substitution> { template <typename ScannerT> struct definition { definition(Substitution const& ) { multi_subst = (simple_subst [&do_mSubst] >> +( str_p("||") >> simple_subst [&do_mSubst]) ) ; simple_subst = (Identifier >> str_p(":=") >> expression)[&do_sSubst] ; Identifier = alpha_p >> +alnum_p//[do_noint] ; expression = term >> *( ('+' >> term)[&do_add] | ('-' >> term)[&do_subt] ) ; term = factor >> *( ('*' >> factor)[&do_mult] | ('/' >> factor)[&do_div] ) ; factor = lexeme_d[( (alpha_p >> +alnum_p) | +digit_p)[&do_noint]] | '(' >> expression >> ')' | ('+' >> factor) ; } rule<ScannerT> expression, term, factor, Identifier, simple_subst, multi_subst ; rule<ScannerT> const& start() const { return multi_subst; } }; }; //////////////////////////////////////////////////////////////////////////// // // Main program // //////////////////////////////////////////////////////////////////////////// int main() { cout << "************************************************************\n\n"; cout << "\t\t...Machine Parser...\n\n"; cout << "************************************************************\n\n"; // cout << "Type an expression...or [q or Q] to quit\n\n"; //prompt for file name to be input cout << "Please enter a filename...or [q or Q] to quit:\n\n "; char strFilename[256]; //file name store as a string object cin >> strFilename; ifstream inFile(strFilename); // opens file object for reading //output file for truncated machine (operations only) Substitution elementary_subst; // Simple substitution parser object string str, next; // inFile.open(strFilename); while (inFile >> str) { getline(cin, next); str += next; if (str.empty() || str[0] == 'q' || str[0] == 'Q') break; parse_info<> info = parse(str.c_str(), elementary_subst, space_p); if (info.full) { cout << "\n-------------------------\n"; cout << "Parsing succeeded\n"; cout << "\n-------------------------\n"; } else { cout << "\n-------------------------\n"; cout << "Parsing failed\n"; cout << "stopped at: \": " << info.stop << "\"\n"; cout << 
"\n-------------------------\n"; } } cout << "Please enter a filename...or [q or Q] to quit\n"; cin >> strFilename; return 0; } The contents of the file I tried to parse, which I named "mf7.txt" is given below: debt:=(LoanRequest+outstandingLoan1)*20 || newDebt := loanammount-paidammount The output when I execute the program is: ************************************************************ ...Machine Parser... ************************************************************ Please enter a filename...or [q or Q] to quit: c:\tplat\mf7.txt PUSH(LoanRequest) PUSH(outstandingLoan1) ADD LoanRequest outstandingLoan1 MULTIPLY LoanRequest outstandingLoan1 PUSH(debt:=(LoanRequest+outstandingLoan1)*20) ... debt|->LoanRequest ... debt|->outstandingLoan1 SIMPLE SUBSTITUTION Tokens ... debt Tokens ... LoanRequest Tokens ... outstandingLoan1 Tokens ... 20 Tokens ... END PUSH(debt:=(LoanRequest+outstandingLoan1)*20) MULTIPLE SUBSTITUTION ------------------------- Parsing failedstopped at: ": " ------------------------- My intention is to capture only the variables in the file, which I managed to do up to the "||" string. Clearly, the program is not parsing beyond the "||" string in the input file. I will appreciate assistance to fix the grammar. SOS, please.

    Read the article
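    Not a fix for the Spirit grammar above, but a minimal pure-Python sketch (standard library only) of the extraction those semantic actions are aiming for: split the multiple substitution on "||", then pair each left-hand variable with the non-numeric identifiers on its right-hand side. It can serve as a reference for the "x |-> y" dependency pairs the parser should produce once the grammar handles the second substitution.

        import re

        # Identifiers: a letter followed by letters/digits (so bare numbers like 20 are skipped).
        IDENT = re.compile(r"[A-Za-z][A-Za-z0-9]*")

        def dependencies(machine_text):
            """Return (lhs, rhs_variable) pairs for every simple substitution."""
            pairs = []
            for subst in machine_text.split("||"):
                lhs, sep, rhs = subst.partition(":=")
                if not sep:
                    continue  # not a substitution
                target = lhs.strip()
                for var in IDENT.findall(rhs):
                    pairs.append((target, var))
            return pairs

        text = "debt:=(LoanRequest+outstandingLoan1)*20 || newDebt := loanammount-paidammount"
        for lhs, rhs in dependencies(text):
            print(lhs, "|->", rhs)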

  • PHP & WP: Render Certain Markup Based on True False Condition

    - by rob
    So I'm working on a site where, at the top of certain pages, I'd like to display a static graphic, and on other pages I'd like to display a scrolling banner. So far I've set up the condition as follows:

        <?php
        $regBanner = true;
        $regBannerURL = get_bloginfo('stylesheet_directory'); // grabbing WP site URL
        ?>

    and in my markup:

        <div id="banner">
        <?php
        if ($regBanner) {
            echo "<img src='" . $regBannerURL . "/style/images/main_site/home_page/mock_banner.jpg' />";
        } else {
            echo 'Slider!';
        }
        ?>
        </div><!-- end banner -->

    In my else statement, where I'm echoing 'Slider!', I would like to output the markup for my slider:

        <div id="slider">
          <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/1.jpg" alt="" />
          <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/2.jpg" alt="" />
          <img src="<?php bloginfo('stylesheet_directory') ?>/style/images/main_site/banners/services_banners/3.jpg" alt="" />
          .............
        </div>

    My question is: how can I put the div and all those images into my else echo statement? I'm having trouble escaping the quotes, and my slider markup isn't rendering.

    Read the article

  • What causes markup-controls to be null?

    - by Earlz
    OK, I have a very strange problem. I have a regular UserControl with some controls in the markup, and at Page_Load these controls are still null. I have tried EnsureChildControls. The hierarchy is laid out like this: MasterPage - Page - MyControl1 - MyControl2 - ProblemControl. ProblemControl is where the controls are null. MyControl1 contains MyControl2; MyControl2 is another UserControl which contains ProblemControl in its markup. The MasterPage is nothing special, and the Page contains MyControl1 in its markup. The only oddity is that ProblemControl is created dynamically at Page_Init. Everything works fine until I get to ProblemControl, where none of the controls are being created, even though ProblemControl has the proper things set, such as its Page and Parent properties. I don't see any problems. The source code for all of these (except ProblemControl) is pretty extensive, so I'm hoping someone can give me some troubleshooting tips, or tell me whether they've encountered this before. Also, I can place ProblemControl on another Page and it works fine, so it's something about MyControl1 and/or MyControl2. But we've never had any problems with MyControl1, and MyControl2 doesn't have anything I can see wrong with it (which I've been tediously analyzing for the past few hours). Has anyone else had this same problem? Are there any common things to check for?

    Read the article

  • Natural language processing and semantic

    - by laknath27
    I would like to know how to identify the semantics of user input in NLP. I have built an ontology with relationships between three categories: accommodation, culture, and location. The problem I'm facing is how to redirect the user input to the specific area of the ontology. For example: for the input "trip to Canada", it should be routed to all the categories in my ontology; for the input "culture in Canada", it should be routed only to the culture category. Can you show me the way? Thanks.
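    A minimal sketch of the routing idea: match the input's tokens against per-category keyword sets and fall back to all categories when nothing matches. The keyword lists below are invented for illustration; in practice they would come from the labels and synonyms already stored in the ontology.

        CATEGORY_KEYWORDS = {
            "accommodation": {"hotel", "hostel", "room", "stay", "lodge"},
            "culture": {"culture", "festival", "museum", "tradition", "food"},
            "location": {"place", "city", "region", "map", "where"},
        }

        def route(user_input):
            """Return the ontology categories a piece of free text should be sent to."""
            tokens = set(user_input.lower().split())
            matched = [cat for cat, words in CATEGORY_KEYWORDS.items() if tokens & words]
            # No category keyword found ("trip to Canada") -> fall back to all three.
            return matched or list(CATEGORY_KEYWORDS)

        print(route("trip to Canada"))     # ['accommodation', 'culture', 'location']
        print(route("culture in Canada"))  # ['culture']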

    Read the article

  • Wrap words in tags, keep markup

    - by spacevillain
    For example I have a string with markup (from html node): hello, this is dog "h<em>e<strong>llo, thi</strong>s i</em><strong>s d</strong>og" What is the most correct way to find some words in it (let's say "hello" and "dog"), wrap them in a span (make a highlight) and save all the markup? Desired output is something like this (notice properly closed tags) <span class="highlight">h<em>e<strong>llo</strong></em></span><strong>,</strong> <em><strong>thi</strong>s<em> i</em><strong>s <span class="highlight"><strong>d</strong>og</span> Looks the same as it should: hello, this is dog
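    One common approach, sketched below with the Python standard library: walk the markup, keep a running offset into the tag-free text, find the words there, and wrap each matched text fragment in its own span. Because every span opens and closes inside a single text node, all the original tags stay balanced; a match that crosses the em/strong boundaries simply becomes several adjacent highlight spans, which renders the same as the single outer span shown above.

        import html
        import re
        from html.parser import HTMLParser

        def wrap(data, start, ranges):
            """Wrap the parts of one text node that overlap any match range."""
            out, i, n = [], 0, len(data)
            while i < n:
                pos = start + i
                covering = [b for a, b in ranges if a <= pos < b]
                if covering:
                    end = min(min(covering) - start, n)
                    out.append('<span class="highlight">' + html.escape(data[i:end]) + '</span>')
                    i = end
                else:
                    nxt = min((a - start for a, b in ranges if a - start > i), default=n)
                    out.append(html.escape(data[i:min(nxt, n)]))
                    i = nxt
            return "".join(out)

        class Highlighter(HTMLParser):
            """Re-emits the markup, highlighting the requested words."""

            def __init__(self, words):
                super().__init__(convert_charrefs=True)
                self.words = words
                self.parts = []   # literal tag strings, or ("text", data, plain_offset)
                self.offset = 0   # running offset into the tag-free text

            def handle_starttag(self, tag, attrs):
                self.parts.append(self.get_starttag_text())

            def handle_endtag(self, tag):
                self.parts.append("</%s>" % tag)

            def handle_data(self, data):
                self.parts.append(("text", data, self.offset))
                self.offset += len(data)

            def highlighted(self):
                plain = "".join(p[1] for p in self.parts if isinstance(p, tuple))
                ranges = [m.span() for w in self.words
                          for m in re.finditer(re.escape(w), plain, re.IGNORECASE)]
                return "".join(wrap(p[1], p[2], ranges) if isinstance(p, tuple) else p
                               for p in self.parts)

        h = Highlighter(["hello", "dog"])
        h.feed('h<em>e<strong>llo, thi</strong>s i</em><strong>s d</strong>og')
        print(h.highlighted())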

    Read the article

  • CSS-Friendly Menu adapter that emits the same markup as .NET 4.0

    - by Joe
    For .NET 2.x/3.x there exists a CSS-Friendly Adapter on CodePlex that emits markup for an ASP.NET Menu Control as an ul. The .NET 4.0 Menu control will also emit an ul, but the CSS class names are different from those emitted by the CSS-Friendly Adapter 1.0 on CodePlex. In the interests of having a single version of CSS for .NET 2/3/4 sites, I want to create a version of the CSS-Friendly menu adapter that emits the same markup as the .NET 4.0 Menu control. Before doing so, I thought I'd ask here to see if it's already been done, so I don't reinvent the wheel. Anyone?

    Read the article

  • Use HTML markup into web.config file

    - by stighy
    Hi, I have to display a message on my homepage (default.aspx) that is different for each "installation" of my web app. I would like to avoid making a database call to show this message, so I thought of using web.config to store something like this: <add key="WelcomeString" value="lorem ipsus <b>doloret sit amen</b>" /> But I've noticed I can't use HTML markup in web.config... Is there a better approach? Or is there a way to insert HTML markup into web.config? Thank you again, Stack Overflow gurus... I'm learning a lot from you!

    Read the article

  • Twitter integration and iOS5: semantic and parsing issues

    - by Tyler
    I was using some of Apple's example code to write the Twitter integration for my app. However, I get a whopping number of errors (mostly semantic and parse errors). How can this be solved?

        -(IBAction)TWButton:(id)sender {
            ACAccountStore *accountstore = [[ACAccountStore alloc] init];
            // Make sure to retrieve twitter accounts
            ACAccountType *accountType = [accountstore accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierTwitter];
            [accountstore requestAccessToAccountsWithType:accountType withCompletionHandler:^(BOOL granted, NSError *error) {
                if (granted) [{
                    NSArray *accountsArray = [accountstore accountsWithAccountType:accountType];
                    if ([accountsArray count] > 0) {
                        ACAccount *twitterAccount = [accountsArray objectAtIndex:0];
                        TWRequest *postRequest = [[TWRequest alloc] initWithURL:[NSURL URLWithString:@"http://api.twitter.com/1/statuses/update.json"]
                                                                     parameters:[NSDictionary dictionaryWithObject:[@"Tweeted from iBrowser" forKey:@"status"]
                                                                  requestMethod:TWRequestMethodPOST];
                        [postRequest setAccount:twitterAccount];
                        [postRequest preformRequestWithHandeler:^(NSData *responseData, NSHTTPURLResponse *urlResponse, NSError *error) {
                            NSString *output = [NSString stringWithFormat:@"HTTP response status: %i", [urlResponse statusCode]];
                            [self preformSelectorOnMainThread:@selector(displaytext:) withObject:output waitUntilDone:NO];
                        }];
                    }
                }];
            }

        // Now let's see if we can actually tweet
        -(void)canTweetStatus {
            if ([TWTweetComposeViewController canSendTweet]) {
                self.TWButton.enabled = YES
                self.TWButton.alpha = 1.0f;
            } else {
                self.TWButton.enabled = NO
                self.TWButton.alpha = 0.5f;
            }
        }

    Read the article

  • NLP - Queries using semantic wildcards in full text searching, maybe with Lucene?

    - by Zsolt
    Let's say I have a big corpus (in English or an arbitrary language), and I want to perform some semantic search on it. For example, I have the query:

        "Be careful: [art] armada of [sg] is coming to [do sg]!"

    And the corpus contains the following sentence:

        "Be careful: an armada of alien ships is coming to destroy our planet!"

    As you can see, my query string can contain "semantic placeholders", such as:

        [art] - a placeholder for articles (for example a / an in English)
        [sg], [do sg] - placeholders for NPs and VPs (subjects and predicates)

    I would like to develop a library capable of handling these queries efficiently. I suspect that some kind of POS tagging would be necessary for parsing the text, but because I don't want to fully reimplement an existing full-text search engine to make this work, I'm wondering how I could integrate this behaviour into a search engine like Lucene. I know there are SpanQueries, which can behave similarly in some cases, but as far as I can see, Lucene doesn't do any semantic processing of stored text. Is it possible to implement behaviour like this, or do I have to write my own search engine?
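    Lucene won't do this out of the box, but here is a rough, small-scale sketch of the matching logic the placeholders imply, using NLTK (which needs its 'punkt' and 'averaged_perceptron_tagger' data packages downloaded): literal words must match tokens exactly, [art] matches one determiner, and any other bracketed placeholder greedily swallows tokens up to the next literal. Multi-word placeholders such as [do sg] would need their own tokenization, and a production version would precompute the POS tags at index time (e.g. as Lucene token attributes or payloads) rather than tagging at query time.

        import nltk  # assumes nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

        def _match_at(tagged, parts, i):
            for j, part in enumerate(parts):
                if part == "[art]":                 # exactly one article (Penn Treebank tag DT)
                    if i < len(tagged) and tagged[i][1] == "DT":
                        i += 1
                    else:
                        return False
                elif part.startswith("["):          # other placeholders: swallow tokens up to the next literal
                    if j + 1 == len(parts):
                        return True                 # trailing wildcard matches the rest
                    nxt = parts[j + 1].lower()
                    while i < len(tagged) and tagged[i][0].lower() != nxt:
                        i += 1
                else:                               # literal word
                    if i < len(tagged) and tagged[i][0].lower() == part.lower():
                        i += 1
                    else:
                        return False
            return True

        def matches(query, sentence):
            tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
            parts = query.split()
            return any(_match_at(tagged, parts, i) for i in range(len(tagged)))

        print(matches("[art] armada of [sg] is coming to destroy",
                      "Be careful: an armada of alien ships is coming to destroy our planet!"))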

    Read the article

  • How to create a Semantic Network like wordnet based on Wikipedia?

    - by Forbidden Overseer
    I am an undergraduate student and I have to create a semantic network based on Wikipedia. This semantic network would be similar to WordNet (except that it is based on Wikipedia and is concerned with "streams of text/topics" rather than single words), and I am thinking of using the Wikipedia XML dumps for the purpose. I guess I need to learn XML parsing and "some other things" related to NLP and probably machine learning, but I am in no way sure about anything beyond the XML parsing. Is parsing the XML dump into text a good starting step? Are there alternatives? What would be the steps involved after parsing the XML into text to create a functional semantic network, and what concepts should I learn in order to carry them out? I am not directly asking for book recommendations, but if you have read a book or article that teaches anything related or helpful, please mention it. This may include a reference to existing implementations on the subject. Please correct me if I am wrong somewhere. Thanks!
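    A minimal sketch of the first step, assuming the standard pages-articles dump file and using only the Python standard library: stream the pages out of the XML and connect each article title to the articles it links to via [[...]] wikilinks, which already gives a crude graph to build the semantic network on. The namespace URI varies by dump version and should be checked against the <mediawiki> root element of your file; the file name is an example.

        import bz2
        import re
        import xml.etree.ElementTree as ET
        from collections import defaultdict

        NS = "{http://www.mediawiki.org/xml/export-0.10/}"   # check against your dump
        WIKILINK = re.compile(r"\[\[([^\]\|#]+)")            # link target up to |, ] or #

        def build_graph(dump_path):
            graph = defaultdict(set)                         # title -> set of linked titles
            with bz2.open(dump_path, "rb") as f:
                for _, elem in ET.iterparse(f, events=("end",)):
                    if elem.tag == NS + "page":
                        title = elem.findtext(NS + "title")
                        text = elem.findtext(NS + "revision/" + NS + "text") or ""
                        graph[title].update(m.strip() for m in WIKILINK.findall(text))
                        elem.clear()                         # keep memory flat on multi-GB dumps
            return graph

        # graph = build_graph("enwiki-latest-pages-articles.xml.bz2")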

    Read the article

  • HTML5 Semantics - H1 or H2 for ARTICLE titles in a SECTION

    - by Matt
    It's my understanding (based on this chapter of Dive into HTML5: http://goo.gl/9zliD) that it can be considered semantically appropriate to use H1 tags in multiple areas of the page, as a method of setting the most important title for that particular piece of content. If I'm using this methodology, and I have a SECTION to which I've assigned an H1 of 'Articles', should I use H1 or H2 to define the titles of the ARTICLEs in that SECTION? This is a bit confusing to me, as the article titles are the most important heading for their ARTICLE, but are also 'children' of the SECTION's title. Example code:

        <section class="article-list">
          <header>
            <h1>Articles</h1>
          </header>
          <article>
            <header>
              <h2>Article Title</h2>
              <time datetime="2011-02-01">Today</time>
            </header>
            <p>Article text...</p>
          </article>
          <article>
            <header>
              <h2>Article Title</h2>
              <time datetime="2011-01-31">Yesterday</time>
            </header>
            <p>Article text...</p>
          </article>
          <article>
            <header>
              <h2>Article Title</h2>
              <time datetime="2011-01-30">The Day Before Yesterday</time>
            </header>
            <p>Article text...</p>
          </article>
        </section>

    Read the article

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality - we will never fully move away from unmanaged data (desktops, file servers, emails, etc). If you use AutoVue today you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue actually has - built-in - a great solution for viewing, printing and redlining your data stored on file servers? Using the 'Server protocol' you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a 'click-and-view' interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document and of course real-time collaboration is supported as well. Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case where I want to try out an idea for an application, but I want to avoid investing a lot of effort in coding the UI, workflows, database schema, etc. before I see that it's going to be useful to me (as an example of a potential user). My idea is to stay lightweight and put all the data in text files. The components could be the following:

    - Domain objects are represented by text files or fragments of them
    - Domain objects are grouped by type using directories
    - The files are structured in a format that is both human- and machine-friendly, e.g. YAML
    - A smart text editor (e.g. vim, emacs, RubyMine) is used to edit and navigate those files
    - Color schemes and macros/custom commands of the text editor are used to manipulate those files effectively
    - Scripts (or a lightweight web framework like Sinatra) are used to try out business-logic ideas on top of the data model

    The question is: are there tools or toolkits that support, or can be adapted to, this approach? Any ideas or links to articles and other knowledge sources are also very welcome. And a more specific question: what is the simplest way to build and keep up to date an index over a set of YAML files?
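    A minimal sketch of the "directory of YAML files as a database" idea, assuming PyYAML and a layout where each subdirectory is a type and each .yaml file is one domain object (both are assumptions, not something the post prescribes). The index is just a per-file cache that is refreshed when a file's modification time changes, which covers the "update the index" part for small datasets.

        from pathlib import Path
        import yaml  # PyYAML

        class YamlStore:
            def __init__(self, root):
                self.root = Path(root)
                self._cache = {}              # path -> (mtime, parsed object)

            def load(self, path):
                mtime = path.stat().st_mtime
                cached = self._cache.get(path)
                if cached and cached[0] == mtime:
                    return cached[1]
                obj = yaml.safe_load(path.read_text(encoding="utf-8"))
                self._cache[path] = (mtime, obj)
                return obj

            def all(self, type_name):
                """Yield (id, object) for every file under <root>/<type_name>/."""
                for path in sorted((self.root / type_name).glob("*.yaml")):
                    yield path.stem, self.load(path)

        # store = YamlStore("data")
        # for obj_id, obj in store.all("customers"):
        #     print(obj_id, obj.get("name"))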

    Read the article

  • Explaining the difference between OData & RDF by way of analogy

    - by jamiet
    A couple of months back I wrote a blog post entitled Microsoft, OData and RDF where I gave a high-level view of the OData protocol and how it compares to RDF. I talked about linked data, triples and the like, which may have been somewhat useful, if jargon-heavy. Earlier today Dr Michael Hausenblas (blog | twitter) offered an analogy which I think is probably more useful, and with Michael's permission I'm re-posting it here:

    Imagine a Web (a Web of Documents, if you wish), which is not based on HTML and hyperlinks, but on MS Word documents. The documents are all available on the Internet, so you can download them and consume the content. But after you're done with a certain document that talks about a book, how do you learn more about it? For example, reviews about the book or where you can purchase it? Maybe the original document mentions that there is some more related information on another server. So you'd need to go there and look for the related bit of information yourself. You see? That's what the Web is great at – you just click on a hyperlink and it takes you to the document (or section) you're interested in. All the legwork is taken care of for you through HTML, URIs and HTTP.

    Hm, right, but how is this related to OData? Well, OData feels a bit like the above-mentioned scenario, just concerning data. Of course you – well, actually rather a software program, I guess – can consume it (a single source), but that's it.

    from Oh – it is data on the Web by Michael Hausenblas

    I believe that OData has loads of use cases, but it's important to understand its limitations as well, and I think Michael has done a good job of explaining those limitations.

    @Jamiet

    Read the article

  • I'm learning html and I'm confused as to how href's work. [migrated]

    - by Robolisk
    Okay, so I'm learning HTML right now, and soon CSS. In my HTML I have a section like this for navigation:

        <div id="header">
          <h1>Guild Wars 2 Fanbase</h1>
          <ol id="navigation">
            <li><a href="/">Home</a></li>
            <li><a href="/facts">Facts</a></li>
            <li><a href="/gallery">Gallery</a></li>
            <li><a href="/code">Coding</a>
              <ul>
                <li><a href="/code/line">Lines</a></li>
                <li><a href="/code/comment">Comment Lines</a></li>
              </ul>
            </li>
          </ol>
        </div>

    When I open this .html file, everything is laid out the way I want it to look (the markup, that is). My question is this: when I click a link on this site (this site being this code), I get an error saying the web page is not found, which is to be expected. But how do I set things up so the web pages work together? I'm not sure how to word it correctly. Do I create another .html file in the same directory, so that somehow when I click the link it reads from that second .html file? If you're not sure what I'm asking, just let me know and I'll try to be more specific. Thanks for your help (: Excuse my grammar mistakes; I'm not the world's best at English, but I'm trying my best (:

    Read the article

  • Microdata for Q&A page

    - by Zoltán Kocsán
    Which microdata is best for a Q&A page, or for a list of links to Q&A pages? When searching for "yahoo answers cat" via Google, it displays the result from Yahoo in a very nice way: a list of related questions with the number of answers (screenshot). What microdata/microformat should be used on Q&A pages like Yahoo Answers or Stack Exchange to get similar results in Google?

    Read the article

  • How to number nested ordered lists.

    - by Wes
    Is there any way, through CSS, to style nested ordered lists so they display sub-numbers? The idea is similar to using heading levels; however, what I'd really like to see is the following (note that each of these subsections has text, not just a title - this isn't a real example, just some organisational stuff). I know I can use <h1>-<h6>, but nested lists would be much clearer, would allow different indentation styling, and would be semantically correct. I don't think <h1>-<h6> are correct in many ways, as the heading doesn't apply to the whole section.

        1 Introduction
        1.1 Scope
            Blah Blah Blah Blah Blah Blah
        1.2 Purpose
            Blah Blah Blah Blah Blah Blah
        2 Cars
            Blah Blah Blah Blah Blah Blah
        2.1 Engines
            sub Blah Blah Blah
            sub Blah Blah Blah
        2.2 Wheels
        ...
        ...
        2.10.21 Hub caps
            sub-sub Blah Blah Blah
            sub-sub Blah Blah Blah
        2.10.21.1 Hub cap paint
            sub-sub-sub Blah Blah Blah
            sub-sub-sub Blah Blah Blah
        3 Planes
        3.1 Commercial Airlines
        ...
        ...
        212 Glossary

    Read the article

  • How long for data highlighter mark up to appear in structured data tool?

    - by Max
    I used the Data Highlighter in Webmaster Tools over 3 weeks ago to mark up some local business data, but there is still no structured data being detected in Webmaster Tools. Does anybody have any experience with approximately how long it takes for Google Webmaster Tools to start reporting structured data that has been marked up with their Data Highlighter? I'm asking specifically about reporting in the Webmaster Tools Structured Data section, as opposed to actually appearing in the SERPs.

    Read the article

  • How does hreflang interact with geo targeting?

    - by zakgottlieb
    If I have multiple subfolders that I wish to target at different countries, I'm thinking the ideal setup would be to specify rel="alternate" hreflang with a language AND country code (e.g. en-AU) and ALSO to geotarget that subfolder to the particular country. That way, the pages would show up both in the country-specific results (accessed via Search Tools) because of hreflang, AND in the more generic country results from regular searches, because of geotargeting. Is this correct? P.S. What would happen if you geotargeted a subfolder which had, e.g., a pt-BR hreflang value (i.e. Portuguese-Brazil) to just Portugal?

    Read the article

  • How come Indiegogo links shared on G+ link to their page instead of displaying URL?

    - by Ivan Vucica
    If an Indiegogo link, such as this one, gets shared on G+, their G+ page is displayed in the post in the place where the URL would commonly be displayed. I've tried analyzing the HTML, but came up empty-handed: there's Twitter Cards metadata, there's Open Graph, there's a G+ button -- but I found nothing that links to Indiegogo's G+ page, not even rel="publisher". So how does Indiegogo achieve this?

    Read the article

  • Correct microdata and/or microformats for real estate listings?

    - by Ernests Karlsons
    Given that I am running a real estate rentals listing website, what would be the correct microdata or microformats for the listing pages? There is the usual data: address, photos, price, start date, possible end date, the person who is renting it out, a list of amenities, a description, etc. Are there also microformats/microdata that can be used on the listing summary page (e.g., a page that displays all listings in a particular city)?

    Read the article

  • Lining things up while using columns

    - by Charles
    I have a request that may not be possible. I'd like to line up the elements of a form so that the inputs all start at the same place:

        Name:                           [ ]
        Company:                        [ ]
        Some question with a long name: [ ]

    But my list is (somewhat) long and I would like to show them in multiple columns on screens that are wide enough. Ideally, I'd find a POSH method (table-free is semantically appropriate, I think) that works on a reasonable number of browsers. My current page uses a table. I tried CSS with:

        columns: auto;
        -moz-column-count: auto;
        -moz-column-width: auto;
        -webkit-column-count: auto;
        -webkit-column-width: auto;

    but Firefox (at least) won't break a table across columns.

    Read the article

  • Is there a way to save MS Word document as HTML w/o the ms proprietary stuff?

    - by sequoia mcdowell
    So normally I wouldn't use this feature ("Save as Web Page"), but I have large documents from clients that they just want put on their site as HTML, and formatting it all by hand seems like a waste of time. I have tried "Save as Web Page" in Word 2007, but it produces all sorts of bad stuff. To wit:

        <b style='mso-bidi-font-weight:normal'>
        <span style="mso-spacerun: yes">

    as well as a large block of XML formatting info:

        <!--[if gte mso 9]><xml>
          <o:DocumentProperties>
            <o:Subject> </o:Subject>
            <o:Author> </o:Author>
            <o:Keywords> </o:Keywords>
        ...

    As I said, formatting it all by hand seems like a waste of time, but the way MS Word exports currently just has too much cruft. Is there a way to export an MS Word doc as HTML without all this?
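    Word's own "Web Page, Filtered" save option already drops most of the Office-only markup, but for files that have already been exported, a rough post-processing sketch is below: it strips the conditional-comment XML blocks, the o:/w: namespace tags and the mso-* style declarations. Regex-based cleanup like this is approximate (an HTML parser is safer for anything critical), and the file name and cp1252 encoding are assumptions about typical Word output, not requirements.

        import re

        def clean_word_html(html):
            # <!--[if gte mso 9]> ... <![endif]--> blocks (the XML property dump)
            html = re.sub(r"<!--\[if [^\]]*\]>.*?<!\[endif\]-->", "", html, flags=re.S)
            # Office namespace elements such as <o:p>, <w:WordDocument>
            html = re.sub(r"</?[ovw]:[^>]*>", "", html)
            # mso-* declarations inside style attributes
            html = re.sub(r"mso-[\w-]+\s*:[^;\"']*;?", "", html)
            # style/class attributes left empty after the purge
            html = re.sub(r"\s(?:style|class)\s*=\s*(\"\s*\"|'\s*')", "", html)
            return html

        # cleaned = clean_word_html(open("report.htm", encoding="cp1252").read())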

    Read the article

  • Choices in Architecture, Design, Algorithms, Data Structures for effective RDF Reasoning and Querying in a Big Data Environment [on hold]

    - by user2891213
    As part of my academic project, I would like to know what choices in architecture, design, algorithms and data structures are needed in order to provide effective and efficient RDF reasoning and querying in a Big Data environment. Basically, I want information on the following points:

    - What systems and software give an appropriate architecture?
    - What kind of API layer(s) would we need on top of the Big Data stores to make this possible?
    - The indexing structures we will need.
    - The appropriate algorithms, including algorithms for query planning across Big Data stores.
    - The performance analysis and cost models we will need to justify the design decisions made along the way.

    Can anyone please provide pointers? Thanks, David
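    Not an architecture, but a small-scale starting point: rdflib keeps the triples in memory and answers SPARQL directly, which is useful for validating queries and modelling decisions before committing to a distributed store (Jena/Fuseki, Virtuoso, HBase-backed layouts, etc.). The file name and predicate URI below are invented examples.

        from rdflib import Graph

        g = Graph()
        g.parse("university.ttl", format="turtle")   # example data file

        results = g.query("""
            SELECT ?student ?course WHERE {
                ?student <http://example.org/enrolledIn> ?course .
            }
        """)
        for student, course in results:
            print(student, course)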

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >