Search Results

Search found 7048 results on 282 pages for 'vim tags'.

Page 49/282 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • is there a way to automate changing filenames in <link>, <script> tags

    - by nepsdotin
    When we use an Expires header for text files like JS and CSS, the contents are cached in the browser, so to serve new content we have to change the filenames in the <link> and <script> tags of the HTML whenever we make changes. How can we automate this? I may have a bunch of HTML files in multiple folders and also in subdirectories. There would be a text file filelist.txt:

        OldName              NewName
        oldfile1-ver-1.0.js  oldfile1-ver-2.0.js
        oldfile2-ver-1.0.js  oldfile2-ver-2.0.js
        oldfile3-ver-1.0.js  oldfile3-ver-2.0.js
        oldfile4-ver-1.0.js  oldfile4-ver-2.0.js

    The script should change all the oldfile1-ver-1.0.js references into oldfile1-ver-2.0.js in the HTML and PHP files. I would run this script before I start uploading. Finally, the script could create a list of the files and line numbers where it made the update. The solution can be in Perl/PHP/batch or anything that's nice and elegant.
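    A minimal Python sketch of the kind of script being asked for (the question mentions Perl/PHP/batch; the filelist.txt layout and the .html/.php extensions come from the question, everything else is assumed):

        import os

        # Build the old-name -> new-name map from filelist.txt (header line skipped).
        mapping = {}
        with open("filelist.txt") as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2 and parts[0] != "OldName":
                    mapping[parts[0]] = parts[1]

        # Walk every folder, rewrite .html/.php files in place and report
        # each file and line number where a replacement was made.
        for root, _dirs, files in os.walk("."):
            for name in files:
                if not name.endswith((".html", ".php")):
                    continue
                path = os.path.join(root, name)
                with open(path) as f:
                    lines = f.readlines()
                changed = False
                for i, line in enumerate(lines):
                    for old, new in mapping.items():
                        if old in line:
                            lines[i] = line = line.replace(old, new)
                            changed = True
                            print("%s:%d: %s -> %s" % (path, i + 1, old, new))
                if changed:
                    with open(path, "w") as f:
                        f.writelines(lines)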

    Read the article

  • Using jstl tags in a dynamically created div

    - by George
    I want to be able to show some data based on criteria the user enters in a text field. I can easily take this data, process the form post, and show the data on another page. However, I want to be able to do it all on the same page: they click the button, and a new div shows up with the information. This doesn't seem too complicated, but I want to use JSTL tags to format the data, like:

        <c:forEach items="${model.data}" var="d">
          <tr>
            <td><fmt:formatDate type="date" dateStyle="short" timeStyle="default" value="${d.reportDate}" /></td>
            <td><c:out value="${d.cardType}"/></td>
          </tr>
        </c:forEach>

    If JSTL tags are processed when the page loads, can I use them in this new div? Can I update it via a JavaScript (Prototype) function to display the proper data? Will I be able to do the same thing if they change the criteria and click the submit button again?

    Read the article

  • Custom Django admin URL + changelist view for custom list filter by Tags

    - by Botondus
    In the Django admin I wanted to set up a custom filter by tags (tags are introduced with django-tagging). I made the ModelAdmin for this and it used to work fine, by appending a custom urlconf and modifying the changelist view. It should work with URLs like:

        http://127.0.0.1:8000/admin/reviews/review/only-tagged-vista/

    But now I get an "invalid literal for int() with base 10: 'only-tagged-vista'" error, which means it keeps matching the review edit page instead of the custom filter page, and I cannot figure out why, since it used to work and I can't find what change might have affected this. Any help appreciated. Relevant code:

        class ReviewAdmin(VersionAdmin):

            def changelist_view(self, request, extra_context=None, **kwargs):
                from django.contrib.admin.views.main import ChangeList
                cl = ChangeList(request, self.model, list(self.list_display),
                                self.list_display_links, self.list_filter,
                                self.date_hierarchy, self.search_fields,
                                self.list_select_related, self.list_per_page,
                                self.list_editable, self)
                cl.formset = None
                if extra_context is None:
                    extra_context = {}
                if kwargs.get('only_tagged'):
                    tag = kwargs.get('tag')
                    cl.result_list = cl.result_list.filter(tags__icontains=tag)
                    extra_context['extra_filter'] = "Only tagged %s" % tag
                extra_context['cl'] = cl
                return super(ReviewAdmin, self).changelist_view(request, extra_context=extra_context)

            def get_urls(self):
                from django.conf.urls.defaults import patterns, url
                urls = super(ReviewAdmin, self).get_urls()

                def wrap(view):
                    def wrapper(*args, **kwargs):
                        return self.admin_site.admin_view(view)(*args, **kwargs)
                    return update_wrapper(wrapper, view)

                info = self.model._meta.app_label, self.model._meta.module_name
                my_urls = patterns('',
                    # make edit work from tagged filter list view
                    # redirect to normal edit view
                    url(r'^only-tagged-\w+/(?P<id>.+)/$', redirect_to,
                        {'url': "/admin/"+self.model._meta.app_label+"/"+self.model._meta.module_name+"/%(id)s"}),
                    # tagged filter list view
                    url(r'^only-tagged-(P<tag>\w+)/$',
                        self.admin_site.admin_view(self.changelist_view),
                        {'only_tagged':True}, name="changelist_view"),
                )
                return my_urls + urls

    Edit: Original issue fixed. I now receive 'Cannot filter a query once a slice has been taken.' for the line:

        cl.result_list = cl.result_list.filter(tags__icontains=tag)

    I'm not sure where this result list is sliced before the tag filter is applied.

    Edit2: It's because of the self.list_per_page in the ChangeList declaration. However, I haven't found a proper solution yet. Temp fix:

        if kwargs.get('only_tagged'):
            list_per_page = 1000000
        else:
            list_per_page = self.list_per_page
        cl = ChangeList(request, self.model, list(self.list_display),
                        self.list_display_links, self.list_filter,
                        self.date_hierarchy, self.search_fields,
                        self.list_select_related, list_per_page,
                        self.list_editable, self)
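    The "invalid literal for int()" symptom described above is consistent with the second URL pattern as posted: (P<tag>\w+) lacks the '?' of Python's named-group syntax (?P<tag>...), so the pattern never matches and the request falls through to the regular edit view. A quick standalone check of that assumption:

        import re

        # The posted pattern writes (P<tag>\w+); named groups need (?P<tag>\w+).
        broken = re.compile(r'^only-tagged-(P<tag>\w+)/$')
        fixed = re.compile(r'^only-tagged-(?P<tag>\w+)/$')

        path = 'only-tagged-vista/'
        print(broken.match(path))               # None - never matches
        print(fixed.match(path).group('tag'))   # 'vista'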

    Read the article

  • Add/remove xml tags using a bash script.

    - by sixtyfootersdude
    I have an XML file that I want to configure using a bash script. For example, if I had this XML (confidential info removed):

        <a>
          <b>
            <bb>
              <yyy> Bla </yyy>
            </bb>
          </b>
          <c>
            <cc> Something </cc>
          </c>
          <d> bla </d>
        </a>

    I would like to write a bash script that will remove section <b> (or comment it out) but keep the rest of the XML intact. I am pretty new to the whole scripting thing and was wondering if anyone could give me a hint as to what I should look into. I was thinking that sed could be used, except sed is a line editor. I think it would be easy to remove the <b> tags; however, I am unsure whether sed would be able to remove all the text between the <b> tags. I will also need to write a script to add back the deleted section.
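    The question asks for bash, but as a sketch of the tag-aware alternative to sed, here is a short Python version using only the standard library (the filename config.xml is an assumption):

        import xml.etree.ElementTree as ET

        # Remove the <b> section but leave the rest of the document intact.
        tree = ET.parse("config.xml")
        root = tree.getroot()
        for b in root.findall("b"):
            root.remove(b)
        tree.write("config.xml")

    Keeping a reference to the removed element and re-inserting it later (for example with root.insert(0, b)) would cover the "add the section back" half of the question.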

    Read the article

  • Replacing specific HTML tags using Regex

    - by matthewpe
    Alright, an easy one for you guys. We are using ActiveReports' RichTextBox to display some random bits of HTML code. The HTML tags supported by ActiveReports can be found here: http://www.datadynamics.com/Help/ARNET3/ar3conSupportedHtmlTagsInRichText.html

    An example of what I want to do is replace any match of <div style="text-align:*</div> with <p style="text-align:*</p> in order to use a supported tag for text alignment. I have found the following regex to find the correct match in my HTML input:

        <div style=\"text-align:(.*?)</div>

    However, I can't find a way to keep the text previously contained in the tags after my replacement. Any clue? Is it me, or are regexes generally a PITA? :)

        private static readonly IDictionary<string, string> _replaceMap = new Dictionary<string, string>
        {
            {"<div style=\"text-align:(.*?)</div>", "<p style=\"text-align:(.*?)</p>"}
        };

        public static string FormatHtml(string html)
        {
            foreach(var pair in _replaceMap)
            {
                html = Regex.Replace(html, pair.Key, pair.Value);
            }
            return html;
        }

    Thanks!
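    The missing piece is reusing the captured group in the replacement instead of repeating the pattern; in .NET's Regex.Replace the backreference is written $1. A minimal Python sketch of the same idea:

        import re

        html = '<div style="text-align:center">Hello</div>'
        # \1 re-inserts whatever (.*?) captured; the C# equivalent is "$1".
        fixed = re.sub(r'<div style="text-align:(.*?)</div>',
                       r'<p style="text-align:\1</p>', html)
        print(fixed)   # <p style="text-align:center">Hello</p>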

    Read the article

  • Python web scraping involving HTML tags with attributes

    - by rohanbk
    I'm trying to make a web scraper that will parse a web page of publications and extract the authors. The skeletal structure of the web page is the following:

        <html>
          <body>
            <div id="container">
              <div id="contents">
                <table>
                  <tbody>
                    <tr>
                      <td class="author">####I want whatever is located here ###</td>
                    </tr>
                  </tbody>
                </table>
              </div>
            </div>
          </body>
        </html>

    I've been trying to use BeautifulSoup and lxml thus far to accomplish this task, but I'm not sure how to handle the two div tags and the td tag because they have attributes. In addition to this, I'm not sure whether I should rely more on BeautifulSoup, on lxml, or on a combination of both. What should I do? At the moment, my code looks like this:

        import re
        import urllib2, sys
        import lxml
        from lxml import etree
        from lxml.html.soupparser import fromstring
        from lxml.etree import tostring
        from lxml.cssselect import CSSSelector
        from BeautifulSoup import BeautifulSoup, NavigableString

        address = 'http://www.example.com/'
        html = urllib2.urlopen(address).read()
        soup = BeautifulSoup(html)
        html = soup.prettify()
        html = html.replace('&nbsp', '&#160')
        html = html.replace('&iacute', '&#237')
        root = fromstring(html)

    I realize that a lot of the import statements may be redundant, but I just copied whatever I currently had in my source file.

    EDIT: I suppose that I didn't make this quite clear, but I have multiple tags in the page that I want to scrape.
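    Attributes can be passed straight to BeautifulSoup's findAll as a dict, so the two enclosing divs never need to be handled explicitly. A sketch against the skeleton above (BeautifulSoup 3, matching the question's imports; the author names are made up):

        from BeautifulSoup import BeautifulSoup

        html = """
        <html><body>
          <div id="container">
            <div id="contents">
              <table><tbody><tr>
                <td class="author">Author One</td>
                <td class="author">Author Two</td>
              </tr></tbody></table>
            </div>
          </div>
        </body></html>
        """
        soup = BeautifulSoup(html)

        # Match on tag name plus the class attribute, then join the cell's text nodes.
        for td in soup.findAll('td', {'class': 'author'}):
            print(''.join(td.findAll(text=True)).strip())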

    Read the article

  • XmlSlurper/NekoHTML document fragment parsing - No HTML or BODY tags wanted

    - by Misha Koshelev
    Dear All,

    I am trying to parse the following HTML fragment, and I would like to get the same fragment as output (without HTML and BODY tags). Is this possible? If so, how?

    Thank you, Misha

    P.S. I am reading here: http://nekohtml.sourceforge.net/faq.html#fragments and I believe I have added the correct options below. However, the output is still incorrect :(

    Thank you, Misha

        import groovy.xml.MarkupBuilder
        import groovy.xml.StreamingMarkupBuilder
        import groovy.util.XmlNodePrinter
        import groovy.util.slurpersupport.NodeChild

        def text="""
        <div><h2>Test</h2>
        <div>Hi</div>
        </div>
        """

        // Parse
        def config=new org.cyberneko.html.HTMLConfiguration()
        config.setFeature("http://cyberneko.org/html/features/balance-tags/document-fragment",true)
        def html=new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text)

        // Output
        def printNode(NodeChild node) {
          def writer = new StringWriter()
          writer << new StreamingMarkupBuilder().bind {
            mkp.declareNamespace('':node[0].namespaceURI())
            mkp.yield node
          }
          new XmlNodePrinter().print(new XmlParser().parseText(writer.toString()))
        }

        printNode(html)

    Output:

        <HTML>
          <tag0:HEAD xmlns:tag0="http://www.w3.org/1999/xhtml"/>
          <BODY>
            <DIV>
              <H2> Test </H2>
              <DIV> Hi </DIV>
            </DIV>
          </BODY>
        </HTML>

    Read the article

  • In Python BeautifulSoup How to move tags

    - by JJ
    I have a partially converted XML document in soup, coming from HTML. After some replacement and editing in the soup, the body is essentially:

        <Text...></Text>   # This replaces <a href..> tags but automatically creates the </Text>
        <p class=norm ...</p>
        <p class=norm ...</p>
        <Text...></Text>
        <p class=norm ...</p>

    and so forth. I need to "move" the <p> tags to be children of <Text>, or know how to suppress the </Text>. I want:

        <Text...>
          <p class=norm ...</p>
          <p class=norm ...</p>
        </Text>
        <Text...>
          <p class=norm ...</p>
        </Text>

    I've tried using item.insert and item.append, but I'm thinking there must be a more elegant solution.

        for item in soup.findAll(['p','span']):
            if item.name == 'span' and item.has_key('class') and item['class'] == 'section':
                xBCV = short_2_long(item._getAttrMap().get('value',''))
                if currentnode:
                    pass
                currentnode = Tag(soup,'Text', attrs=[('TypeOf', 'Section'),... ])
                item.replaceWith(currentnode)  # works but creates end tag
            elif item.name == 'p' and item.has_key('class') and item['class'] == 'norm':
                childcdatanode = None
                for ahref in item.findAll('a'):
                    if childcdatanode:
                        pass
                    newlink = filter_hrefs(str(ahref))
                    childcdatanode = Tag(soup, newlink)
                    ahref.replaceWith(childcdatanode)

    Thanks
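    A sketch of the "move" step with the BeautifulSoup 3 API: detach each run of <p> siblings and re-append them to the node that precedes them. The demo markup uses lowercase tag names only because the HTML parser lowercases what it parses; in the question the Text nodes are built with Tag(), so the findAll name would stay 'Text':

        from BeautifulSoup import BeautifulSoup

        # Fold each run of <p class="norm"> siblings into the preceding node.
        soup = BeautifulSoup('<text></text><p class="norm">a</p><p class="norm">b</p>'
                             '<text></text><p class="norm">c</p>')

        for text_tag in soup.findAll('text'):
            sib = text_tag.nextSibling
            while getattr(sib, 'name', None) == 'p':
                nxt = sib.nextSibling
                sib.extract()          # detach the <p> from its old position
                text_tag.append(sib)   # re-attach it as a child of the node
                sib = nxt

        print(soup.prettify())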

    Read the article

  • Preserving SCRIPT tags (and more) in CKEditor

    - by Jonathan Sampson
    Update: I'm thinking the solution to this problem is in CKEDITOR.config.protectedSource(), but my regular-expression experience is proving too juvenile to handle this issue. How would I go about exempting all tags that contain the 'preserved' class from being touched by CKEditor?

    Is it possible to create a block of code within CKEditor that will not be touched by the editor itself, and will be maintained in its intended state until explicitly changed by the user? I've been attempting to input JavaScript variables (bound in script tags) and a Flash movie following, but CKEditor continues to rewrite my pasted code/markup, and in doing so breaks my code. I'm working with the following setup:

        <script type="text/javascript">
          var editor = CKEDITOR.replace("content", {
            height : "500px",
            width : "680px",
            resize_maxWidth : "680px",
            resize_minWidth : "680px",
            toolbar : [
              ['Source','-','Save','Preview'],
              ['Cut','Copy','Paste','PasteText','PasteFromWord','-','Print', 'SpellChecker', 'Scayt'],
              ['Undo','Redo','-','Find','Replace','-','SelectAll','RemoveFormat'],
              ['Bold','Italic','Underline','Strike','-','Subscript','Superscript'],
              ['NumberedList','BulletedList','-','Outdent','Indent','Blockquote'],
              ['JustifyLeft','JustifyCenter','JustifyRight','JustifyBlock'],
              ['Link','Unlink','Anchor'],
              ['Image','Table','HorizontalRule','SpecialChar']
            ]
          });
          CKFinder.SetupCKEditor( editor, "<?php print url::base(); ?>assets/ckfinder" );
        </script>

    UPDATE: I suppose the most ideal solution would be to preserve the contents of any tag that contains class="preserve", enabling much more than the limited exclusives.

    Read the article

  • Actual element tags are not getting captured.

    - by user323719
    I am using the piece of XSL code below to construct a span tag that calls a JavaScript function on mouseover. The input to the JavaScript should be an HTML table. The output from the variable "showContent" gives just the text content, but not the table tags. How can this be resolved?

    XSL:

        <xsl:variable name="aTable" as="element()*">
          <table border="0" cellspacing="0" cellpadding="0">
            <xsl:for-each select="$capturedTags">
              <tr><td><xsl:value-of select="node()" /></td></tr>
            </xsl:for-each>
          </table>
        </xsl:variable>
        <xsl:variable name="start" select='concat("Tip(&#39;", "")'></xsl:variable>
        <xsl:variable name="end" select='concat("&#39;)", "")'></xsl:variable>
        <xsl:variable name="showContent">
          <xsl:value-of select='concat($start,$aTable,$end)'/>
        </xsl:variable>
        <span xmlns="http://www.w3.org/1999/xhtml" onmouseout="{$hideContent}" onmouseover="{$showContent}" id="{$textNodeId}"><xsl:value-of select="$textNode"></xsl:value-of></span>

    Actual output:

        <span onmouseout="UnTip()" onmouseover="Tip('content1')" id="d1t14"is my </span

    Expected output:

        <span onmouseout="UnTip()" onmouseover="Tip('<table><tr><td>content1</td></tr>')" id="d1t14">is my </span>

    What change needs to be made in the above XSL for the table, tr and td tags to get passed through?

    Read the article

  • I need to remove Java Script tags using regular expressions and JRegex

    - by piotr
    I need to remove all the JavaScript tags and the content in between, as well as the style tags, from the HTML code of web pages. So far I've come up with this expression:

        (<[ \r\n\t]script([ \r\n\t]|){1,}([ \r\n\t]|.)?)|(<[ \r\n\t]noscript([ \r\n\t]|){1,}([ \r\n\t]|.)?)|(<[ \r\n\t]style([ \r\n\t]|){1,}([ \r\n\t]|.)?)

    I use the JRegex library to work with regular expressions. When I test it in any regex tester it works just fine, but once I run my program it all crashes down with this error report:

        Exception in thread "Thread-0" java.lang.StackOverflowError
            at java.util.regex.Pattern$BranchConn.match(Unknown Source)
            at java.util.regex.Pattern$BmpCharProperty.match(Unknown Source)
            at java.util.regex.Pattern$Branch.match(Unknown Source)
            at java.util.regex.Pattern$GroupHead.match(Unknown Source)
            at java.util.regex.Pattern$LazyLoop.match(Unknown Source)
            at java.util.regex.Pattern$GroupTail.match(Unknown Source)
            at java.util.regex.Pattern$BranchConn.match(Unknown Source)
            at java.util.regex.Pattern$CharProperty.match(Unknown Source)
            at java.util.regex.Pattern$Branch.match(Unknown Source)
            at java.util.regex.Pattern$GroupHead.match(Unknown Source)
            at java.util.regex.Pattern$LazyLoop.match(Unknown Source)
            ..................................

    and it keeps going forever. If anyone can give me advice on this one, I'll be very grateful.
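    Not a Java/JRegex answer, but a sketch of the usual workaround: match each whole block with one lazy pattern in "dot matches newline" mode instead of the nested ( ... | . ) alternations, which are what typically exhaust the regex engine's stack on large pages. In Python, purely for illustration:

        import re

        # Strip <script>, <noscript> and <style> blocks (tag plus contents)
        # with a single lazy pattern; re.DOTALL lets '.' span newlines.
        block_re = re.compile(r'<(script|noscript|style)\b.*?</\1\s*>',
                              re.IGNORECASE | re.DOTALL)

        html = '<p>keep</p><script type="text/javascript">var x = 1;</script><style>p {}</style><p>also keep</p>'
        print(block_re.sub('', html))   # <p>keep</p><p>also keep</p>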

    Read the article

  • Automatically hyper-link URL's and Email's using C#, whilst leaving bespoke tags in place

    - by marcusstarnes
    I have a site that enables users to post messages to a forum. At present, if a user types a web address or email address and posts it, it's treated the same as any other piece of text. There are tools that enable the user to supply hyperlinked web and email addresses (via some bespoke tags/markup); these are sometimes used, but not always. In addition, a bespoke 'Image' tag can also be used to reference images that are hosted on the web.

    My objective is to cater both for those who use these existing tools to generate hyperlinked addresses and for those who simply type a web or email address in, and to then automatically convert the latter into a hyperlinked address for them (as soon as they submit their post). I've found one or two regular expressions that convert a plain-string web or email address; however, I obviously don't want to perform any manipulation on addresses that are already being handled via the site's bespoke tagging, and that's where I'm stuck: how to EXCLUDE any web or email addresses that are already catered for via the bespoke tagging. I want to leave them as is.

    Here are some examples of bespoke tagging for the variations that need to be left alone:

        [URL=www.msn.com]www.msn.com[/URL]
        [URL=http://www.msn.com]http://www.msn.com[/URL]
        [EMAIL=[email protected]][email protected][/EMAIL]
        [IMG]www.msn.com/images/test.jpg[/IMG]
        [IMG]http://www.msn.com/images/test.jpg[/IMG]

    The following examples, however, would ideally need to be automatically converted into web and email links respectively:

        www.msn.com
        http://www.msn.com
        [email protected]

    Ideally, the 'converted' links would just have the appropriate bespoke tags applied to them, as per the initial examples earlier in this post, so rather than <a href="..." etc. they'd become [URL=http://www.. etc. Unfortunately, we have a LOT of historic data stored with this bespoke tagging throughout, so for now we'd like to retain that rather than implementing an entirely new way of storing our users' posts. Any help would be much appreciated. Thanks.
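    One common trick is to put the existing bespoke tags first in an alternation and decide in a replacement callback whether the match was already tagged; a rough Python illustration (the question is about C#, and the URL/email patterns below are deliberately simplified assumptions):

        import re

        # Existing [URL]/[EMAIL]/[IMG] blocks are matched first and re-emitted
        # untouched; only bare addresses get wrapped in the bespoke [URL] tag.
        TAGGED = r'\[(?P<tag>URL|EMAIL|IMG)=?[^\]]*\].*?\[/(?P=tag)\]'
        BARE_URL = r'(?:http://|www\.)\S+'
        pattern = re.compile('(?P<tagged>%s)|(?P<url>%s)' % (TAGGED, BARE_URL))

        def wrap(match):
            if match.group('tagged'):      # already tagged - leave as is
                return match.group('tagged')
            url = match.group('url')       # bare address - add the bespoke tag
            return '[URL=%s]%s[/URL]' % (url, url)

        text = 'See [URL=http://www.msn.com]http://www.msn.com[/URL] or www.msn.com'
        print(pattern.sub(wrap, text))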

    Read the article

  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1). In other words, I'm looking for a way to allow a group of users to use dangerous HTML tags. There are some ways I can see, but I don't know how to implement them:

        (1) changing safe_html for the pages the editors own;
        (2) allowing those tags on some subtree;
        (3) finding an equivalent of the "static text portlet" that would display in the middle panel.

    We could then use some of the composite products (I stumbled upon Collage and CMFContentPanels) to include the unsafe content on other sites. My site has been ridden by advert bots, so I don't want to remove the filtering altogether. I don't have an easy (no false positives) way of checking which users are bots, so deploying a captcha now wouldn't help either. The question is: how do I implement any of those solutions? (I already asked this on the Plone mailing list without an answer, so I thought I would give it another try here.)

    Read the article

  • Double script tags in Google Analytics tracking code

    - by Tom
    This is more a curiosity question than anything else... Google instructs you to add the Analytics tracking code as follows:

        <script type="text/javascript">
          var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
          document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
          try{
            var pageTracker = _gat._getTracker("UA-xxxxxx-x");
            pageTracker._trackPageview();
          } catch(err) {}
        </script>

    I'm wondering if some JS guru here could tell me why they're separating it into two script tags instead of sticking it all inside one. I know that the top part could be put in the header and the bottom part just before the body tag to ensure the page has loaded before it's tracked, but I'm wondering if there's something more to it. Anyone who'd know that would likely know how to separate the code into two tags anyway. I'm only asking as this is coming from the Goog and is being used by millions of sites... Thanks

    Read the article

  • Anchor tags are blank

    - by ryanday
    I'm having a problem where my anchor tags sometimes aren't displaying their links. This is happening in Mobile Safari on multiple iPhones, and in the iPhone simulator. I'm using jQTouch r147, PhoneGap, and jQuery 1.4.2. I'm generating the data from a database call and adding anchor tags to a list like this:

        for(var i=0;i<data.rows.length;i++) {
          var item = $('<li></li>');
          var name = data.rows.item(i).name;
          var anchor = $('<a href="#lpage">'+name+'</a>');
          item.addClass('arrow');
          // This line always displays the name, even when I can't see
          // the name in the browser
          debug.log('The name: ' + name);
          (function(info) {
            anchor.bind('tap', function(e) {
              debug.log('Touch start ' + info.id);
            });
          })(data.rows.item(i));
          item.append(anchor);
          if( anchor.html() == null ) {
            debug.log('html is blank');
          }
          $('#myUL').append(item);
        }

    Sometimes my list of names shows fine (http://imagebin.org/101462), and sometimes it is just blank (http://imagebin.org/101464). When the list is blank, I see the debug.log() line show me 'html is blank', and I also see the log line show me that the variable 'name' does, in fact, contain a valid name. When I detect anchor.html() == null, I've also tried to .remove() the anchor tag and re-create it, but it always comes back without the name displayed. This happens on the mobile device and in the simulator, but I've never seen it happen in Safari or in Chrome. Has anyone seen something like this? I can't find the cause, and I can't get it to stop. Thank you for any ideas or suggestions!

    Read the article

  • Extract title tags from normal text

    - by pravin
    I am working on a task to extract the title tag from given normal text (it's not an HTML DOM). I have the cases below where I need to extract the title tag(s).

    Case 1:

        <html>
        <head>
        <title>Title of the document</title>
        </head>
        <body>
        The content of the document......
        </body>
        </html>

    Expected: Title of the document

    Case 2:

        <html>
        <head>
        <title>Title of the document</title>
        <title>Continuing title</title>
        </head>
        <body>
        The content of the document......
        </body>
        </html>

    Expected: Title of the document Continuing title

    Case 3 (nested title tags):

        <html>
        <head>
        <title>Title of the document <title>Continuing title</title></title>
        </head>
        <body>
        The content of the document......
        </body>
        </html>

    Expected: Title of the document Continuing title

    I want to extract the title tags using a regular expression in JavaScript, and the regex should work for the cases above. If anyone knows how to do this, please let me know. Thanks in advance.

    Read the article

  • BeautifulSoup: just get inside of a tag, no matter how many enclosing tags there are

    - by AP257
    I'm trying to scrape all the inner HTML from the <p> elements in a web page using BeautifulSoup. There are internal tags, but I don't care: I just want to get the internal text. For example, for:

        <p>Red</p>
        <p><i>Blue</i></p>
        <p>Yellow</p>
        <p>Light <b>green</b></p>

    How can I extract:

        Red
        Blue
        Yellow
        Light green

    Neither .string nor .contents[0] does what I need. Nor does .extract(), because I don't want to have to specify the internal tags in advance; I want to deal with any that may occur. Is there a 'just get the visible HTML' type of method in BeautifulSoup?

    ----UPDATE------

    On advice, trying:

        p_tags = page.findAll('p', text=True)
        for i, p_tag in enumerate(p_tags):
            print str(p_tag)

    But that doesn't help - it just prints out:

        Red
        <i>Blue</i>
        Yellow
        Light <b>green</b>
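    For reference, one way to get just the visible text with the BeautifulSoup 3 API the question is using: collect every text node under each <p> and join them. A minimal sketch:

        from BeautifulSoup import BeautifulSoup

        html = ('<p>Red</p>'
                '<p><i>Blue</i></p>'
                '<p>Yellow</p>'
                '<p>Light <b>green</b></p>')
        soup = BeautifulSoup(html)

        # findAll(text=True) returns every string inside the element,
        # regardless of which inline tags wrap it.
        for p in soup.findAll('p'):
            print(''.join(p.findAll(text=True)))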

    Read the article

  • Stop CDATA tags from being output-escaped when writing to XML in C#

    - by Smallgods
    We're creating a system that outputs some data to an XML schema. Some of the fields in this schema need their formatting preserved, as the end system will parse them into, potentially, a Word doc layout. To do this we're using <![CDATA[Some formatted text]]> tags inside the App.config file, then putting that into an appropriate property field in an xsd.exe-generated class from our schema. Ideally the formatting wouldn't be our problem, but unfortunately that's just how the system is going.

    The App.config section looks as follows:

        <header>
          <![CDATA[Some sample formatted data]]>
        </header>

    The data assignment looks as follows:

        HeaderSection header = ConfigurationManager.GetSection("header") as HeaderSection;
        report.header = "<[CDATA[" + header.Header + "]]>";

    Finally, the XML output is handled as follows:

        xs = new XmlSerializer(typeof(report));
        fs = new FileStream (reportLocation, FileMode.Create);
        xs.Serialize(fs, report);
        fs.Flush();
        fs.Close();

    In theory this should produce, in the final XML, a section that has the information with CDATA tags around it. However, the angle brackets are being converted into &lt; and &gt;. I've looked at ways of disabling output escaping, but so far I can only find references to XSLT sheets. I've also tried @"<[CDATA[" with the strings, but again no luck. Any help would be appreciated!

    Read the article

  • center div tags inside parent div

    - by Senthilnathan
    I have two div tags inside a parent div. I want to display the two divs on the same line and centered. Below is the HTML code:

        <div id="parent">
          <div id="addEditBtn" style="display:inline-block; vertical-align: middle; width:20px; cursor:pointer;" class="ui-state-default ui-corner-all">
            <span class="ui-icon ui-icon-pencil"></span></div>
          <div id="deleteBtn" style="display:inline-block; vertical-align: middle; width:20px; cursor:pointer;" class="ui-state-default ui-corner-all">
            <span class="ui-icon ui-icon-trash"></span></div>
        </div>

    I tried "display:inline-block; vertical-align: middle;" but the divs are getting aligned left. Please help me center the div tags inside the parent div.

    Read the article

  • Parsing XHTML with inline tags

    - by user290796
    Hi, I'm trying to parse an XHTML document using TBXML on the iPhone (although I would be happy to use either libxml2 or NSXMLParser if it would be easier). I need to extract the content of the body as a series of paragraphs and maintain the inline tags, for example: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <title>Title</title> <link rel="stylesheet" href="css/style.css" type="text/css"/> <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=utf-8"/> </head> <body> <div class="body"> <div> <h3>Title</h3> <p>Paragraph with <em>inline</em> tags</p> <img src="image.png" /> </div> </div> </body> </html> I need to extract the paragraph but maintain the <em>inline</em> content with the paragraph, all my testing so far has extracted that as a subelement without me knowing exactly where it fitted in the paragraph. Can anyone suggest a way to do this? Thanks.

    Read the article

  • Using struts 2 with no tags

    - by WannaAsk
    Hi everyone, I wonder if you could guide me on this struts issue - I feel hopeless on this... I am working on migrating our web application to a Struts 2 portlet (I'm using Struts 2.1.8). The thing is, I am forced to avoid struts tags because our application relies on Dijit widgets, which are omitted when using struts tags. So now I should implement the <s:form tag using scriptlets. My question is: how you would suggest I imitate the way struts handles the "action" attribute? Right now it appears I should: Create a regular html form in my jsp. Get/create the ActionConfig object during onsubmit and set on it values which match my desired destination: class+method+result jsp. Execute the action I set in 2. Would you recommend this? If so - is ActionConfig obtained from the session? Can I initiate an action execution from ActionConfig? (Aletrnatively... generating an action url using struts' form.vm file - what do you say?) I would appreciate any help on this - Thanks a lot in advance!!!

    Read the article
