Search Results

Search found 3978 results on 160 pages for 'beginning xpath'.

Page 67 of 160

  • Slow SelectSingleNode

    - by Simon
    I have a simply structured XML file like this:

        <ttest ID="ttest00001" NickName="map00001"/>
        <ttest ID="ttest00002" NickName="map00002"/>
        <ttest ID="ttest00003" NickName="map00003"/>
        <ttest ID="ttest00004" NickName="map00004"/>
        .....

    The file can be around 2.5 MB. In my source code I have a loop that looks up nicknames, and each iteration does something like this:

        nickNameLoopNum = MyXmlDoc.SelectSingleNode("//ttest[@ID='" + testloopNum + "']").Attributes["NickName"].Value;

    This single line costs me 30 to 40 milliseconds. I found some old articles (dating back to 2002) saying that some sort of compiled XPath can help the situation, but that was 5 years ago. I wonder, is there a modern practice to make it faster? (I'm using .NET 3.5)
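
    A lookup that rescans the whole document with "//ttest[@ID=...]" is linear per call. A minimal C# sketch of the usual fix, assuming the file layout above (the file name is illustrative): read the document once, build a Dictionary from ID to NickName, and do constant-time lookups inside the loop.

        using System.Collections.Generic;
        using System.Xml;

        var doc = new XmlDocument();
        doc.Load("tests.xml"); // illustrative file name

        // One pass over the document instead of one XPath scan per lookup.
        var nickNames = new Dictionary<string, string>();
        foreach (XmlNode node in doc.SelectNodes("//ttest"))
        {
            nickNames[node.Attributes["ID"].Value] = node.Attributes["NickName"].Value;
        }

        // Inside the loop: no XPath evaluation at all.
        string nick = nickNames["ttest00002"];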

    Read the article

  • Native Mouse events with Flash and Selenium

    - by Dan at Demand
    I understand that Selenium does not support Flash directly, but I believe I should be able to do some simple testing of Flash by using Selenium's built-in native mouse support and firing mouse up/down events at given coordinates. Is this correct? I can't seem to get it working. I'm trying to test on this page: http://mandy-mania.blogspot.com/2010/04/sneak-peek-of-final-season-of-lost-dvd.html and all I'm trying to do is click on the Flash object so it plays the video. I've tried all sorts of commands: MouseOver, MouseDown, MouseDownAt, MouseUp, MouseUpAt, etc. So I'm wondering whether this theoretically just doesn't work, or whether I'm doing something wrong. The XPath I'm using is //object[@id='player'], although I've tried a number of different combinations. And yes, I've also tried just the straight click command. Any suggestions? Thanks!
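
    For reference, the coordinate-based mouse commands do exist in the Selenium RC Java client; a minimal sketch of the kind of sequence being attempted (host, port, and coordinates are illustrative):

        import com.thoughtworks.selenium.DefaultSelenium;

        DefaultSelenium selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "http://mandy-mania.blogspot.com/");
        selenium.start();
        selenium.open("/2010/04/sneak-peek-of-final-season-of-lost-dvd.html");
        // Press and release inside the flash object's bounding box.
        selenium.mouseDownAt("//object[@id='player']", "100,100");
        selenium.mouseUpAt("//object[@id='player']", "100,100");

    Whether the Flash plugin actually receives these synthesized browser events is browser- and plugin-dependent, which may be the real limitation here rather than the locator.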

    Read the article

  • Find elements based on xsd type with lxml

    - by joet3ch
    I am trying to get a list of elements of a specific xsd type with lxml 2.x, and I can't figure out how to traverse the XSD for specific types. Example schema:

        <xsd:element name="ServerOwner" type="srvrs:string90" minOccurs="0"/>
        <xsd:element name="HostName" type="srvrs:string35" minOccurs="0"/>

    Example XML data:

        <srvrs:ServerOwner>John Doe</srvrs:ServerOwner>
        <srvrs:HostName>box01.example.com</srvrs:HostName>

    The ideal function would look like:

        elements = getElems(xml_doc, 'string90')

        def getElems(xml_doc, xsd_type):
            # xpath or something to find the elements and build a dict
            return elements
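
    A sketch of one way to do it, assuming the schema document is available alongside the instance document (the srvrs namespace URI and file names are illustrative): query the XSD for the names of elements declared with the wanted type, then pull those elements out of the instance.

        from lxml import etree

        XSD_NS = "http://www.w3.org/2001/XMLSchema"
        SRVRS_NS = "http://example.com/srvrs"  # illustrative namespace URI

        def get_elems(xsd_doc, xml_doc, xsd_type):
            # Element names whose declared type has the wanted local name,
            # e.g. type="srvrs:string90" matches xsd_type "string90".
            names = xsd_doc.xpath(
                "//xsd:element[substring-after(@type, ':') = $t]/@name",
                namespaces={"xsd": XSD_NS}, t=xsd_type)
            found = {}
            for name in names:
                found[name] = xml_doc.findall(".//{%s}%s" % (SRVRS_NS, name))
            return found

        xsd_doc = etree.parse("servers.xsd")  # illustrative file names
        xml_doc = etree.parse("servers.xml")
        elements = get_elems(xsd_doc, xml_doc, "string90")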

    Read the article

  • WPF - 'Relational' Data in XAML Using DataContext

    - by Andy T
    Hi, say I have a list of Employee IDs from one data source and a separate data source with a list of Employees, each with their ID, Surname, FirstName, etc. Is it possible in XAML only to get the Employee's name from the second data source and display it next to the ID, using something like this (with the syntax corrected)?

        <TextBlock x:Name="EmployeeID" Text="{Binding ID}"></TextBlock>
        <TextBlock Grid.Column="1" DataContext="{StaticResource EmployeeList[**where ID = {Binding ID}**]}" Text="{Binding Surname}"/>

    I'm thinking back to my days using XML and XSLT with XPath to achieve the kind of thing shown above. Is this kind of thing possible in XAML, or do I need to 'denormalize' the data first in code, into one consolidated list? It seems like it should be possible to do this simple task in XAML only, but I can't quite get my head around how you would switch the DataContext correctly and what the syntax would be. Is it possible, or am I barking up the wrong tree? Thanks, AT
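
    XAML bindings alone cannot express a "where ID = x" join, but a value converter gets close while keeping the lookup out of code-behind. A minimal sketch, assuming a hypothetical Employee class and an EmployeeList collection resource:

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Windows.Data;

        public class EmployeeSurnameConverter : IValueConverter
        {
            // value: the employee ID; parameter: the employee collection resource.
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                var employees = parameter as IEnumerable<Employee>;
                if (employees == null || value == null) return string.Empty;
                var match = employees.FirstOrDefault(e => Equals(e.ID, value));
                return match != null ? match.Surname : string.Empty;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    Used roughly as Text="{Binding ID, Converter={StaticResource EmployeeSurname}, ConverterParameter={StaticResource EmployeeList}}"; note that ConverterParameter accepts a StaticResource but not a Binding.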

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site.

    What is the Report Wizard?

    In today’s world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography, “Any customer can have a car painted any color that he wants so long as it is black.” Today you can buy a car straight from the factory with your choice of several colors and with many options like back-up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special that is not offered by the manufacturer, you will have to find a way to add it later.

    In computer software, we also have “assembly lines” called wizards. A wizard asks you a series of questions, often branching to specific questions based on earlier answers, until you get to the end. Wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results.

    Create a Report with the Report Wizard

    Let’s get started with your first report! Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the project properties (the accompanying screenshot is omitted here): be sure to select Report Server Project Wizard as the project type and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen.

    On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name and make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next.

    On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string text box. Click Next.

    On the Select the Report Type screen, choose Tabular and click Next.

    On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and the report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways. Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure ahead of time is that TotalSales goes in the Details section. Let’s assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Move the fields to the right-hand side by selecting Page >, Group >, or Details > as appropriate, and click Next.

    On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish.

    After clicking Finish, the report and a shared data source will appear in the Solution Explorer, and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year, which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly-line report completed, and you probably already have some ideas on how to improve it.

    Tomorrow’s Post

    Tomorrow’s blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in easy, simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Reporting Services, SSRS

    Read the article

  • XML/XHTML replace content?

    - by Daveo
    I have an XHTML string I want to replace tags in. For example:

        <span tag="x">FOO</span>
        <span tag="y">
          <b>bar</b>
          some random text
          <span>another span</span>
        </span>

    I want to be able to find tag="x" and replace FOO with my own content, and find tag="y" and replace all the inner content with my own content. What is the best way to do this? I am thinking regex is definitely out of the question. Can XPath do this, or is it just for searching? Can it do manipulation too?
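
    XPath itself only selects nodes; the actual mutation happens through whatever DOM API hosts it. A minimal C# sketch of the combination, assuming the markup above sits in a string variable xhtml and is well-formed XML:

        using System.Xml;

        var doc = new XmlDocument();
        doc.LoadXml("<root>" + xhtml + "</root>"); // wrap fragments in a single root

        // XPath finds the node; the DOM property replaces its content.
        var x = doc.SelectSingleNode("//span[@tag='x']");
        if (x != null) x.InnerText = "my replacement text";

        var y = doc.SelectSingleNode("//span[@tag='y']");
        if (y != null) y.InnerXml = "<b>entirely new</b> inner markup";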

    Read the article

  • How to export scrubyt extractor?

    - by robintw
    I've written a scrubyt extractor based on the 'learning' technique - that is, specifying the current text on the page and getting it to work out the XPath expressions itself. However, I now want to export the extractor so that it can be used even when the page has changed. The documentation for scrubyt seems to be all over the place now, but from what I can find I should be able to add the line extractor.export(__FILE__) and it should work. It doesn't - I just get an error saying that export was given the wrong number of arguments (it should take 0). I've tried it without any arguments and it still fails. I would ask on the scrubyt forum, but it seems like no-one's been there for ages! Any ideas what to do here?

    Read the article

  • Sum value of XML attributes using PowerShell 2.0

    - by Yooakim
    I have a directory with XML files and I quickly want to go through them and sum up the value stored in one attribute. I have figured out how to find the nodes I am interested in. For example:

        (Select-Xml -Path .\*.xml -XPath "//*[@Attribute = 'valueImlookingfor']").Count

    This command gives me the count of elements, across all the XML files in the current directory, that have the value "valueImlookingfor" in the "Attribute" attribute. I want to sum up all the values of the attribute, not count them. What would be the best way to do this in PowerShell? I am new to PowerShell, so this may be trivial but unknown to me... All tips are much appreciated :-)
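
    A minimal sketch of the summing version using Measure-Object; the attribute being summed (here "Amount") is a placeholder, since the question doesn't name which attribute holds the numeric value:

        # Select the matching elements, then sum one of their attributes.
        $nodes = Select-Xml -Path .\*.xml -XPath "//*[@Attribute = 'valueImlookingfor']"
        ($nodes |
            ForEach-Object { [double]$_.Node.GetAttribute('Amount') } |
            Measure-Object -Sum).Sum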

    Read the article

  • XSLT workflow with variable number of source files

    - by chiborg
    I have a bunch of XML files with a fixed, country-based naming scheme: report_en.xml, report_de.xml, report_fr.xml, etc. Now I want to write an XSLT style sheet that reads each of these files via the document() XPath function, extracts some values, and generates one XML file with a summary. My question is: how can I iterate over the source files without knowing the exact names of the files I will process? At the moment I'm planning to generate an auxiliary XML file that holds all the file names and use that auxiliary file in my stylesheet to iterate. The file list would be generated with a small PHP or bash script. Are there better alternatives? I am aware of XProc, but investing much time into it is not an option for me at the moment. Maybe someone can post an XProc solution. Preferably the solution includes workflow steps where the reports are downloaded as HTML and tidied up :) I will be using Saxon as my XSLT processor, so if there are Saxon-specific extensions I can use, these would also be OK.
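
    Since Saxon is available, the XPath 2.0 collection() function may remove the need for the auxiliary file list entirely; Saxon's collection URIs accept a ?select file-name filter. A sketch (directory URI and the report element names are illustrative):

        <!-- Iterate over every report_*.xml in the data directory. -->
        <xsl:for-each select="collection('file:///data/?select=report_*.xml')">
          <!-- Recover the country code from the document URI. -->
          <xsl:variable name="lang"
              select="substring-before(substring-after(document-uri(.), 'report_'), '.xml')"/>
          <summary lang="{$lang}">
            <xsl:value-of select="/report/total"/>
          </summary>
        </xsl:for-each>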

    Read the article

  • PHP DOMElement::getElementsByTagName - Anyway to get just the immediate matching children?

    - by rr
    Hi all, is there a way to retrieve only the immediate children found by a call to DOMElement::getElementsByTagName? For example, I have an XML document with a category element. That category element has sub-category elements (which have the same structure), like:

        <category>
          <id>1</id>
          <name>Top Level Category Name</name>
          <subCategory>
            <id>2</id>
            <name>Sub Category Name</name>
          </subCategory>
          ...
        </category>

    If I have a DOMElement representing the top-level category, $topLevelCategoryElement->getElementsByTagName('id') will return a list with the nodes for all 'id' elements, whereas I want just the one from the top level. Any way to do this outside of using XPath?
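
    A non-XPath sketch: walk the element's direct children and keep only the name matches (the helper function name is illustrative):

        // Return only the immediate child elements with the given tag name.
        function getImmediateChildrenByTagName(DOMElement $element, $tagName) {
            $result = array();
            foreach ($element->childNodes as $child) {
                if ($child instanceof DOMElement && $child->tagName === $tagName) {
                    $result[] = $child;
                }
            }
            return $result;
        }

        $ids = getImmediateChildrenByTagName($topLevelCategoryElement, 'id');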

    Read the article

  • How do I update an xml file with msbuild with two namespaces?

    - by c3rin
    The msbuild task below can take one namespace into account, but in the case where I'm updating an mxml (Flex) file that has a mix of namespaces, can I use this task, or another msbuild task, to do the update?

        <XmlUpdate Prefix="fx"
                   Namespace="http://ns.adobe.com/mxml/2009"
                   XmlFileName="myFlexApp.mxml"
                   XPath="//mx:Application/fx:Declarations/fx:String[@id='stringId']"
                   Value="xxxxx" />

    Here is the Flex XML I'm trying to update:

        <mx:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                        xmlns:mx="library://ns.adobe.com/flex/mx"
                        xmlns:s="library://ns.adobe.com/flex/spark">
          <fx:Declarations>
            <fx:String id="stringId">UPDATE_ME</fx:String>
          </fx:Declarations>
        </mx:Application>
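
    If MSBuild 4.0 is available, the built-in XmlPoke task accepts multiple prefix/URI pairs in its Namespaces parameter, which may sidestep the single-namespace limitation of XmlUpdate. A sketch using the values from the question:

        <XmlPoke XmlInputPath="myFlexApp.mxml"
                 Value="xxxxx"
                 Query="//mx:Application/fx:Declarations/fx:String[@id='stringId']"
                 Namespaces="&lt;Namespace Prefix='fx' Uri='http://ns.adobe.com/mxml/2009' /&gt;&lt;Namespace Prefix='mx' Uri='library://ns.adobe.com/flex/mx' /&gt;" />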

    Read the article

  • How can I programmatically test which CSS elements match my XHTML?

    - by Shawn Lauzon
    I have an application which generates XHTML documents which are styled with (mostly) static CSS. I'm currently using XPath and Hamcrest (Java) to verify that the documents are constructed correctly. However, I also need to verify that the correct CSS properties are matched. For example, I would like a test like this: Given XHTML element Foo, verify that the property "text-transform:uppercase" is applied. Ideally, I would like a Java framework that provides this. I've looked a bit at Selenium, but I don't see this type of functionality. Thanks ...
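
    Newer Selenium (the WebDriver API) does expose computed styles via WebElement.getCssValue; a minimal Java sketch, assuming the Foo element carries id "foo" and the document is served from an illustrative local path:

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.firefox.FirefoxDriver;

        WebDriver driver = new FirefoxDriver();
        driver.get("file:///path/to/generated.xhtml"); // illustrative path
        WebElement foo = driver.findElement(By.id("foo"));
        // getCssValue returns the computed style, so it reflects matched CSS rules.
        String transform = foo.getCssValue("text-transform");
        assert "uppercase".equals(transform);
        driver.quit();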

    Read the article

  • Replacing Text Nodes With DOM Nodes

    - by Greg
    Hey, say I have a text node obtained via XPath. How would I replace the text node with a new DOM node? For example, this little patch of code should go through text nodes, and if the text matches something, replace the node with a corresponding image via an img element. I wanted something faster than a global page regex or even an element innerHTML regex. Any help would be appreciated. EDIT: Never mind. I figured it out.
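
    For anyone landing here, a minimal JavaScript sketch of the text-node swap (the XPath, pattern, and image path are illustrative):

        // Find the first matching text node with the browser's XPath API.
        var result = document.evaluate("//p/text()", document, null,
            XPathResult.FIRST_ORDERED_NODE_TYPE, null);
        var textNode = result.singleNodeValue;

        if (textNode && /:smile:/.test(textNode.nodeValue)) {
            var img = document.createElement("img");
            img.src = "smile.png";
            // replaceChild swaps the text node for the new element in place.
            textNode.parentNode.replaceChild(img, textNode);
        }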

    Read the article

  • How to estimate memory need by XPathDocument for a specific xml file

    - by bill seacham
    Is there any way to estimate the memory required to create an XPathDocument instance based on the size of the XML file?

        XPathDocument xdoc = new XPathDocument(xmlfile);

    Is there any way to programmatically stop the creation of the XPathDocument if memory drops to a very low level? Since it loads the entire XML into memory, it would be nice to know ahead of time if the XML is too big. What I have found is that when I create a new XPathDocument from a big XML file, an OutOfMemoryException is never thrown; instead the process slows to a crawl, only 5 MB of memory remains available, and Task Manager reports the application is not responding. This happened with a 266 MB XML file when there was 584 MB of RAM; I was able to load a 150 MB file with no problems in 18 seconds. After loading the XML, I want to run XPath queries using an XPathNavigator and an XPathNodeIterator. I am using .NET 2.0 on XP SP3.
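
    A sketch of a pre-flight guard using MemoryFailPoint (available in .NET 2.0), which fails fast instead of letting the load thrash the machine. The 4x multiplier is a rough assumption, not a documented ratio; XPathDocument's real overhead varies with the markup:

        using System;
        using System.IO;
        using System.Runtime;
        using System.Xml.XPath;

        long fileBytes = new FileInfo(xmlFile).Length;
        int estimatedMb = (int)(fileBytes * 4 / (1024 * 1024)) + 1;

        try
        {
            // Throws InsufficientMemoryException up front if the estimate
            // cannot be satisfied, before any XML is parsed.
            using (new MemoryFailPoint(estimatedMb))
            {
                XPathDocument xdoc = new XPathDocument(xmlFile);
                // ... run XPathNavigator / XPathNodeIterator queries here ...
            }
        }
        catch (InsufficientMemoryException)
        {
            // Fall back to a streaming approach (XmlReader) for oversized files.
        }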

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that the queries perform poorly, and it is then that developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times. The execution times went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed six times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
            "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
            "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";

        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        // ...

        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal". Parameters associated with high-cardinality object attributes should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed first. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values, it was more optimal to use a different query with the attributes in a different order. Ultimately, we ended up with three different optimal variants of the query that were used in the relevant cases:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. This did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones used by the LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index was created as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);

        // Fill composite parameters.
        String SalesCompanyId = xxxx;
        // ...

        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));

        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • Extract part of HTML in C/Objective-C

    - by Dan
    Hi, I need to extract the detail content of a website while preserving all formatting of that division. The section I wish to extract is:

        ...
        <div class="detailContent"><p>
        <P dir=ltr><STRONG>Hinweis</strong>: Auf ...
        </p>
        </div>
        ...

    My current solution is to use the HTML parser from libxml2 together with XPath to find the nodes, and then walk through all the nodes to reconstruct this piece of HTML. This is long and complicated code. I'm just wondering if there is an easier solution to extract part of an HTML document? Thanks.
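
    libxml2 can serialize a matched subtree for you, which may replace the manual reconstruction entirely. A C sketch (error handling omitted; html and len are assumed to hold the page source and its length):

        #include <stdio.h>
        #include <libxml/HTMLparser.h>
        #include <libxml/HTMLtree.h>
        #include <libxml/xpath.h>

        xmlDocPtr doc = htmlReadMemory(html, len, NULL, NULL,
                                       HTML_PARSE_NOERROR | HTML_PARSE_NOWARNING);
        xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
        xmlXPathObjectPtr res = xmlXPathEvalExpression(
            (const xmlChar *)"//div[@class='detailContent']", ctx);

        if (res->nodesetval != NULL && res->nodesetval->nodeNr > 0) {
            /* htmlNodeDump writes the whole subtree, formatting intact. */
            xmlBufferPtr buf = xmlBufferCreate();
            htmlNodeDump(buf, doc, res->nodesetval->nodeTab[0]);
            printf("%s\n", (const char *)xmlBufferContent(buf));
            xmlBufferFree(buf);
        }

        xmlXPathFreeObject(res);
        xmlXPathFreeContext(ctx);
        xmlFreeDoc(doc);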

    Read the article

  • Load XML file into object. Best method?

    - by Cypher
    Hello, we are receiving an XML file from our client. I want to load the data from this file into a class, but am unsure which way to go about it. I have an XSD defining what is expected in the XML file, so I can easily validate it. Can I use the XSD file to load the data into a POCO, using some sort of serialization? The other way I was thinking of was to load the XML into an XmlDocument and use XPath to populate each property in my class. Cheers for any advice.
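
    Yes: the xsd.exe tool that ships with the .NET SDK generates serializable classes from an XSD (xsd.exe client.xsd /classes), after which XmlSerializer fills them directly. A minimal sketch, with "Order" standing in for whatever root type the XSD defines and an illustrative file name:

        using System.IO;
        using System.Xml.Serialization;

        XmlSerializer serializer = new XmlSerializer(typeof(Order));
        using (StreamReader reader = new StreamReader("client.xml"))
        {
            // One call deserializes the whole document into the generated POCO.
            Order order = (Order)serializer.Deserialize(reader);
        }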

    Read the article

  • Can I test for the end of the content of a text/plain file with Selenium or javascript?

    - by fool4jesus
    I have a page that results in a text/plain file being displayed in the browser that looks like this: ... Admin Site Administration 2010-04-21 22:26:34 [email protected] Test Site Bob Smith 2010-04-21 22:27:09 [email protected] Admin Site Administration 2010-04-21 22:29:26 [email protected] I am trying to write a Selenium test against this that verifies the last line of the file has "[email protected]" at the end. How would you do this? I can't depend on the date/time as this is a login report that is constantly getting updated - all I want is to ensure that the last line ends with that email address. And I can't figure out how to do it using Selenium expressions, DOM, or XPath.
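
    A sketch of one approach with Selenium RC's getBodyText(): a text/plain response renders as a single body, so grab the whole text and check its final line. The expected address below is a placeholder, since the real ones are redacted above:

        String body = selenium.getBodyText().trim();
        String lastLine = body.substring(body.lastIndexOf('\n') + 1);
        assertTrue("last login line should be the admin's",
                   lastLine.endsWith("admin@example.com"));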

    Read the article

  • PHP DOMElement, replacing text of a node

    - by waitinforatrain
    I have an HTML node like so:

        <b>Bold text</b>

    A variable $el holds a reference to the text node of that HTML element ("Bold text"), obtained from the XPath expression //b/text(). I want to change the element to:

        <b><span>Bold text</span></b>

    So I tried:

        $span = $doc->createElement('span', "Bold text");
        $el->parentNode->replaceChild($span, $el);

    which fails because parentNode is null. So, as a test, I tried:

        $el->insertBefore($span, $el);

    which throws no errors but produces no change in the output. Any thoughts?
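
    A null parentNode usually means $el belongs to a different DOMDocument than $doc; nodes from one document cannot be re-parented with elements created by another. A sketch that keeps everything on one document (assuming $doc is the document $el actually came from):

        // Query the text node from the same document that creates the <span>.
        $xpath = new DOMXPath($doc);
        foreach ($xpath->query('//b/text()') as $textNode) {
            $span = $doc->createElement('span', $textNode->nodeValue);
            // Replace the text node with the span wrapping the same text.
            $textNode->parentNode->replaceChild($span, $textNode);
        }
        echo $doc->saveHTML();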

    Read the article

  • Preferred data-format for user-data in java applications?

    - by Frederik Wordenskjold
    I'm currently developing a desktop application in Java which stores user data such as bookmarks for FTP servers. When deciding how to save this information, I ended up using XML, simply because I like the way XPath works. I was thinking about JSON too, which seems more lightweight. What is your preferred way to store data in Java desktop applications (in general), and why? What about Java persistence, does that have any advantages worth noting? And how much does the size of the user data matter? It's not always possible (or preferable) to store data in a database, and in my experience XML does not scale well. Let me know what you think!

    Read the article

  • How do I get this div tag using Selenium WebDriver?

    - by user1603518
    <div id="ctl00_ContentHolder_vs_ValidationSummary" class="errorblock"> <p><strong>The following errors were found:</strong></p> <ul><input type="hidden" Name="SummaryErrorCmsIds" Value="E024|E012|E014" /> <li>Please select a title.</li> <li>Please key in your first name.</li> <li>Please key in your last name.</li> </ul> </div> I want to capture the text having value of E024 E012 and E014 and write it in to an Excel file. I tried the following but it doesn't work. string val1 = driver.FindElement(By.XPath("//div[contains(@class, 'errorblock'/ value = 'E024|E012|E014'")).Text; How can I do this?

    Read the article

  • InfoPath browser form submitting dirty fields changed through javascript

    - by Xavier
    I'm trying to submit an InfoPath browser form with fields that have been modified through the Spell Checker included in Sharepoint server. The spell checker checks all the fields in the page and once the user closes the SpellChecker dialog, it changes the textboxes with the new values through javascript. When I click Submit, debug the FormEvents_Submit in the form code behind and try to do a GetNodeValue("XPath to changed field"), it still shows the old values. I realize that this may be a problem of doing postbacks and I'd like to do a full page postback once the SpellChecker is done changing all the textboxes. Any suggestions would be appreciated. Thanks.

    Read the article

  • Ruby execute code in class getting inherited to

    - by AdamB
    I'm trying to be able to have a global exception capture where I can add extra information when an error happens. I have two classes, "crawler" and "amazon". What I want to do is be able to call "crawl", execute a function in amazon, and use the exception handling in the crawl function. Here are the two classes I have:

        require 'mechanize'

        class Crawler
          Mechanize.html_parser = Nokogiri::HTML

          def initialize
            @agent = Mechanize.new
          end

          def crawl
            puts "crawling"
            begin
              # execute code in Amazon class here?
            rescue Exception => e
              puts "Exception: #{e.message}"
              puts "On url: #{@current_url}"
              puts e.backtrace
            end
          end

          def get(url)
            @current_url = url
            @agent.get(url)
          end
        end

        class Amazon < Crawler
          # some code with errors
          def stuff
            page = get("http://www.amazon.com")
            puts page.parser.xpath("//asldkfjasdlkj").first['href']
          end
        end

        a = Amazon.new
        a.crawl

    Is there a way I can call "stuff" inside of "crawl" so I can use that exception handling over the entire stuff function? Is there a better way to accomplish this?
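
    A sketch of one idiomatic answer: have crawl accept a block, so any subclass code runs inside the shared rescue (names as in the question):

        class Crawler
          def crawl
            puts "crawling"
            begin
              # Run whatever the caller wants inside the shared handler.
              yield if block_given?
            rescue Exception => e
              puts "Exception: #{e.message}"
              puts "On url: #{@current_url}"
              puts e.backtrace
            end
          end
        end

        a = Amazon.new
        a.crawl { a.stuff }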

    Read the article

  • Select records from XML column (SQL Server 2005) based on node order

    - by jdoe
    I have a column in a SQL Server 2005 table defined as an XML column. Is there a way to select records from this table based on the order of two nodes in that column? For example, we have the following structure in our XML:

        <item>
          <latitude/>
          <longitude/>
        </item>

    I want to see if there are any records that have latitude/longitude in the opposite order, i.e. <longitude/> then <latitude/>. I've tried some XPath expressions but with no luck.
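
    SQL Server's XQuery subset supports positional predicates and local-name(), so if latitude and longitude are the only children of item, "wrong order" reduces to "the first child is longitude". A T-SQL sketch (table and column names are illustrative):

        -- Rows where <longitude/> comes before <latitude/> inside <item>.
        SELECT *
        FROM dbo.Locations
        WHERE ItemXml.exist('/item/*[1][local-name() = "longitude"]') = 1;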

    Read the article

  • Stealing the contents of another application's tree view

    - by User1
    I have an application with a very large TreeView control in Java. I want to get the contents of the tree control as a list of XPath-like paths (just strings, not a JList) for the leaves only. Here's an example:

        root
        |-Item1
          |-Item1.1
            |-Item1.1.1 (leaf)
          |-Item1.2 (leaf)
        |-Item2
          |-Item2.1 (leaf)

    Would output:

        /Item1/Item1.1/Item1.1.1
        /Item1/Item1.2
        /Item2/Item2.1

    I don't have any source code or anything handy like that. Is there a tool I can use to dig into the window itself and pull out this data? I don't mind if there are a few post-processing steps, because typing it in by hand is my only other option.

    Read the article
