Search Results

Search found 418 results on 17 pages for 'excess fragments'.

Page 10/17

  • velocity: join optional fields with a separator/prefix

    - by SlowStrider
    What would be the most concise/readable way in a Velocity template to join multiple fields with a separator, while leaving out empty or null Strings without adding excess separators? As an example we have a tooltip for appointments that goes like: Appointment ($number) [with $employee] [-] [$remarks] [-] [$roomToVisit] where I used brackets to indicate optional data. When filled in it would normally show as: Appointment (3) with John - ballroom - serve Java coffee. When $remarks is empty but $roomToVisit is not, this becomes: Appointment (3) with John - ballroom. When $remarks is "serve Java coffee" but $roomToVisit is empty we get: Appointment (3) with John - serve Java coffee. When both are empty: Appointment (3) with John. Bonus: also make the field prefix optional, so when only $employee is empty we should get: Appointment (2) serve Java coffee - ballroom. Ideally I would like the Velocity template to look very similar to the first code box. If this is not possible, how would you achieve this with a minimum of distracting code? Similar ideas (the first is much more verbose): Join with intelligent separators; velocity: do something except in last loop iteration.
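
    A sketch of one macro-based approach; Velocity's whitespace handling is notoriously fiddly, so the trailing ## comments that swallow newlines may need tuning, and the variable names simply mirror the question:

        ## Join the non-empty entries of $parts with $sep.
        #macro( joinNonEmpty $sep $parts )
        #set( $out = "" )##
        #foreach( $p in $parts )##
        #if( "$!p" != "" )##
        #if( $out != "" )#set( $out = "$out$sep" )#end##
        #set( $out = "$out$p" )##
        #end##
        #end##
        $out##
        #end
        ## Bonus: fold the "with" prefix into the value, so it vanishes with $employee.
        #set( $emp = "" )
        #if( "$!employee" != "" )#set( $emp = "with $employee" )#end
        Appointment ($number) #joinNonEmpty( " - ", [$emp, "$!remarks", "$!roomToVisit"] )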

    Read the article

  • How to print absolute line number in uncaught exception?

    - by DSblizzard
    When an error occurs, Python prints something like this: Traceback (most recent call last): File "<stdin>", line 2, in <module> File "<stdin>", line 8, in m File "<stdin>", line 5, in exec_st File "<stdin>", line 9, in exec_assign File "<stdin>", line 48, in ref_by_id IndexError: list index out of range where 2, ..., 48 are relative line numbers, which are not very convenient. How do I print absolute line numbers in such error messages? EDIT: Maybe it's a dumb question, but an answer would facilitate development a little. I'm editing text in several files; when done, I press a shortcut which runs Python and copies the contents of the current file to the console. The proposed solution forces me to press extra keystrokes (Ctrl+S, Alt+Tab) and create additional files. I hope I have put it clearly.
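
    One workaround, sketched below: run the saved file through compile() with its real path, so every traceback frame carries absolute line numbers for that file rather than relative <stdin> ones. It does assume the buffer has been saved, which is exactly the Ctrl+S step the asker wants to avoid:

        import sys

        def run_file(path):
            """Execute a script so tracebacks report absolute line numbers in `path`."""
            with open(path) as f:
                source = f.read()
            # compile() stamps the filename onto every code object, so an uncaught
            # exception prints 'File "<path>", line N' with N absolute in the file.
            code = compile(source, path, "exec")
            exec(code, {"__name__": "__main__"})

        run_file(sys.argv[1])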

    Read the article

  • C - Discard the edges of an arbitrary level of a multidimensional array

    - by Medivh
    I have some geographical data, that I'm trying to parse into a usable format. The data is kept in NetCDF files, and is read out as a multidimensional array. My problem comes because the source of the geographical data has a strip of longitude on each side of the grid that overlaps the other side. That is, I have a longitude point of -1 degree, and another of 361 degrees. Unfortunately, I've got time, latitude, and sometimes height as dimensions in this array as well, and I have no way of predicting in advance where each dimension will be in the list (or if it's a three dimensional array, or a four dimensional array). Further complicating the problem, the array can be of floats, doubles or integers, so I have to pass it around as a void. Are there any NetCDF tools that I can use to pre-prepare the files? If not, how would you suggest I go about stripping the excess longitudes?
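
    Absent a preprocessing tool, a stride-based copy handles the arbitrary dimension order. The sketch below assumes row-major (C order) layout, that the longitude dimension's position has already been discovered from the file metadata (e.g. via nc_inq_vardimid), and that the element size is passed in so the void* buffers work for int, float or double alike:

        #include <stddef.h>
        #include <string.h>

        /* Copy `src` (an n-dimensional array flattened in row-major order) into
         * `dst`, dropping the first and last index along dimension `lon_dim`.
         * `dst` must hold outer * (dims[lon_dim] - 2) * inner elements.
         * A sketch only: no bounds or overflow checking. */
        static void strip_edges(void *dst, const void *src,
                                const size_t *dims, int ndims,
                                int lon_dim, size_t elem)
        {
            size_t outer = 1, inner = 1;
            for (int i = 0; i < lon_dim; i++)          outer *= dims[i];
            for (int i = lon_dim + 1; i < ndims; i++)  inner *= dims[i];

            size_t row = inner * elem;   /* bytes per longitude step */
            const char *s = (const char *)src;
            char *d = (char *)dst;

            for (size_t o = 0; o < outer; o++) {
                /* skip longitude 0, copy 1 .. dims[lon_dim]-2, skip the last */
                memcpy(d, s + row, (dims[lon_dim] - 2) * row);
                d += (dims[lon_dim] - 2) * row;
                s += dims[lon_dim] * row;
            }
        }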

    Read the article

  • Inconsistent network switch throughput values

    - by Marcus Hughes
    Quite simply, I have a network switch with SNMP and need to calculate the throughput of the switch port, so I use ifOutOctets. We transfer a file which is 145 MB; if we take the counter value at the start and subtract it from the value at the end, the difference is 158901842. I simply can't get the value to match, or be anything similar to, what the real transfer is. I understand that there may be excess traffic etc., but I just can't get it to be anywhere similar (the server being tested has no other traffic when this is not running). We have tried for a long time and suspect there may be an issue with the recording on the HP switch. Do you have any suggestions, or how should we be calculating it? Thanks a lot in advance. We have an HP ProCurve 1810G on firmware 2.2.
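
    For what it's worth, the numbers may be less inconsistent than they look, and a wrap-aware delta is worth using in any case. A rough sketch (assumptions: 145 MB means 145 * 2^20 bytes, and ifOutOctets is a 32-bit Counter32):

        def octet_delta(start, end, bits=32):
            """Counter difference that survives one wrap of an SNMP Counter32."""
            return (end - start) % (1 << bits)

        payload = 145 * 2**20        # 152,043,520 bytes of file data
        on_wire = 158901842          # measured ifOutOctets delta
        print("%.1f%%" % (100.0 * (on_wire / float(payload) - 1)))
        # ~4.5% overhead, which is roughly what Ethernet + IP + TCP framing
        # (~50-70 header bytes per ~1460-byte segment) plus ACKs and the odd
        # retransmit would add, so the switch counter may well be correct.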

    Read the article

  • Gmail, Yahoo, Facebook, Twitter contacts importer in PHP

    - by Arjun
    Which is the best, cheapest PHP application that I can buy to import Gmail, Yahoo, MSN, Facebook and Twitter contacts from my users' accounts if they wish to invite their friends? I have gone through: http://www.improsys.com/importer.htm http://www.octazen.com/demo.php and http://www.iplussoft.com/product/iplusinvite_pricing Octazen looks awesome but wants in excess of $320 for an all-in solution. I don't want to spend that much. All you PHP programmers out there may have needed to build or integrate a similar app; I need to know which is the best ready-made PHP app for this. Any help would be appreciated and I'll smile with each answer - this should be your biggest incentive to find me something amazing :)

    Read the article

  • C# EnumerateFiles wildcard returning non-matches?

    - by Sam Underhill
    As a simplified example, I am executing the following: IEnumerable<string> files = Directory.EnumerateFiles(path, @"2010*.xml", SearchOption.TopDirectoryOnly).ToList(); In my result set I am getting a few files which do not match the file pattern. According to MSDN, the searchPattern wildcard * means "zero or more characters" and is not a regex. An example is that I am getting a file name back such as "2004_someothername.xml". For information, there are in excess of 25,000 files in the folder. Does anyone have any idea what is going on?
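
    MSDN documents that searchPattern is also matched against each file's 8.3 short name, which is one known source of surprise results; whether or not that explains this particular case, an explicit post-filter makes the match unambiguous. A sketch, with path as in the question:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        // Let EnumerateFiles do the cheap narrowing, then enforce the intended
        // pattern exactly so any 8.3 short-name artifacts are discarded.
        var exact = new Regex(@"^2010.*\.xml$", RegexOptions.IgnoreCase);
        List<string> files = Directory
            .EnumerateFiles(path, "2010*.xml", SearchOption.TopDirectoryOnly)
            .Where(f => exact.IsMatch(Path.GetFileName(f)))
            .ToList();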

    Read the article

  • How do I delete records in a table while keeping certain data?

    - by mathew
    My site has lots of incoming searches, which are stored in a database to show recent queries on my website. Due to the high volume of search queries the database is getting bigger in size, so what I want is to keep only the most recent queries in the database, say 10 records. This keeps the database small and queries will be faster. I am able to store incoming queries in the database but don't know how to restrict or delete the excess/old data from the table. Any help? I am using PHP and MySQL.
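
    One possible pruning statement, sketched against a hypothetical recent_searches table with an auto-increment id column. MySQL will not accept LIMIT directly inside a NOT IN subquery against the same table, hence the extra derived-table wrapper:

        -- Keep the 10 newest rows (highest ids) and delete everything else.
        DELETE FROM recent_searches
        WHERE id NOT IN (
            SELECT id FROM (
                SELECT id FROM recent_searches
                ORDER BY id DESC
                LIMIT 10
            ) AS keep_rows
        );

    Running this after each insert, or on a schedule, keeps the table at a constant size.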

    Read the article

  • django - where to clean extra whitespace from form field inputs?

    - by Westerley
    I've just discovered that Django doesn't automatically strip out extra whitespace from form field inputs, and I think I understand the rationale ('frameworks shouldn't be altering user input'). I think I know how to remove the excess whitespace using python's re: #data = re.sub('\A\s+|\s+\Z', '', data) data = data.strip() data = re.sub('\s+', ' ', data) The question is where should I do this? Presumably this should happen in one of the form's clean stages, but which one? Ideally, I would like to clean all my fields of extra whitespace. If it should be done in the clean_field() method, that would mean I would have to have a lot of clean_field() methods that basically do the same thing, which seems like a lot of repetition. If not the form's cleaning stages, then perhaps in the model that the form is based on? Thanks for your help! W.
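
    If the goal is one rule for every field, a form mixin that post-processes cleaned_data in clean() avoids repeating per-field methods. A minimal sketch, with invented form and field names; note that current Django already strips leading/trailing whitespace on CharField by default (strip=True), though it does not collapse internal runs:

        import re

        from django import forms

        class StripWhitespaceMixin:
            """Collapse internal whitespace and trim every string field."""
            def clean(self):
                cleaned = super().clean()
                for name, value in cleaned.items():
                    if isinstance(value, str):
                        cleaned[name] = re.sub(r'\s+', ' ', value).strip()
                return cleaned

        class AppointmentForm(StripWhitespaceMixin, forms.Form):  # hypothetical
            title = forms.CharField()
            notes = forms.CharField(required=False)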

    Read the article

  • An array of structures in C...

    - by 00010000
    For the life of me I can't figure out the proper syntax for creating an array of structures in C. I tried this: struct foo { int x; int y; } foo[][] = { { { 1, 2 }, { 4, 5 }, { -1, -1 } }, { { 55, 44 } { 100, 200 }, } }; So for example foo[1][0].x == 100, foo[0][1].y == 5, etc. But GCC spits out a lot of errors. If anyone could provide the proper syntax that'd be great. EDIT: Okay, I tried this: struct foo { const char *x; int y; }; struct foo bar[2][] = { { { "A", 1 }, { "B", 2 }, { NULL, -1 }, }, { { "AA", 11 }, { "BB", 22 }, { NULL, -1 }, }, { { "ZZ", 11 }, { "YY", 22 }, { NULL, -1 }, }, { { "XX", 11 }, { "UU", 22 }, { NULL, -1 }, }, }; But GCC gives me "elements of array bar have incomplete type" and "excess elements in array initializer".
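
    For what it's worth, one way the declaration can be written so it compiles: only the leftmost array dimension may be left unspecified in C, so the inner extent has to be spelled out (the first attempt is also missing a comma between { 55, 44 } and { 100, 200 }):

        struct foo {
            const char *x;
            int y;
        };

        /* 4 rows of 3 entries each; only the first dimension is inferred. */
        struct foo bar[][3] = {
            { { "A",  1 },  { "B",  2 },  { NULL, -1 } },
            { { "AA", 11 }, { "BB", 22 }, { NULL, -1 } },
            { { "ZZ", 11 }, { "YY", 22 }, { NULL, -1 } },
            { { "XX", 11 }, { "UU", 22 }, { NULL, -1 } },
        };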

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MATLAB work, and other situations where large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time to add machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • Cool Enhancements Everyone Can Enjoy

    - by Ruth
    With Release 17, we have a few visual and functional enhancements that make using CRM On Demand that much better for us all. I'll mention a few here, but to get the full outline of these upgrades, I recommend taking 10 minutes to view the Release 17 Usability Transfer of Information course. First and foremost, I find the ability to customize your theme (or skin) pretty cool, but I've said that before. Take a look at the Selecting Your Theme and the Themes - Create Your CRM Style blog articles for more information. My next favorite is the resizeable user interface (UI). CRM On Demand will dynamically fit the device and screen resolution you're using, which includes the resizing of fields, field editors and pop-ups. If you have a wide screen like me, you should appreciate that one very much. To make it easier to see that resized UI, the detail pages got a little face lift. New horizontal lines and other subtle changes make those pages easier to read. Also, those things you need to know, like error messages and inline help are highlighted with a little icon to show the message type. You may not think every change to the detail pages are particularly exciting, but I'm sure you'll enjoy the new Head Up Display, which saves you scrolling time by adding links to related information sections. I like that the head up display travels with me as I move up and down the page...it's like a little friend that takes me where I want to go as fast as possible. You may also really like the fact that the copy record feature is now available for all record types from both detail pages and lists. Your company administrator can choose which fields get copied, so you can maximize your efficiency when creating new records. Lists also got a face lift. Alternating colors in rows make it easier to see your data. Also, the Favorite Lists icon is now on the list itself, so you can save your most useful lists with one click. If you've ever tried to create a new list with 10 columns or more, you'll be happy to hear that the maximum number of columns in a list has increased from 9 to 20. This is great news, but doesn't mean you should include the kitchen sink in your list...excess columns can slow list performance. So choose your columns wisely. Again, these are just a few of my favorite things. Let us know what you think about the new usability features. What are your favorite things?

    Read the article

  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but the TouchLocationState.Released is not working. Here's my code: Class variables: bool touching = false; int touchID; Button tempButton; Button is a separate class with a method to switch states when touched. The Update method contains the following code: TouchCollection touchCollection = TouchPanel.GetState(); if (!touching && touchCollection.Count > 0) { touching = true; foreach (TouchLocation location in touchCollection) { for (int i = 0; i < menuButtons.Count; i++) { touchID = location.Id; // store the ID of current touch Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y); // create a point Button button = menuButtons[i]; if (GetMenuEntryHitBounds(button).Contains(touchLocation)) // a method which returns a rectangle. { button.SwitchState(true); // change the button state tempButton = button; // store the pressed button for accessing later } } } } else if (touchCollection.Count == 0) // clears the state of all buttons if no touch is detected { touching = false; for (int i = 0; i < menuButtons.Count; i++) { Button button = menuButtons[i]; button.SwitchState(false); } } menuButtons is a list of buttons on the menu. A separate loop (within the Update method) after the touched variable is true if (touching) { TouchLocation location; TouchLocation prevLocation; if (touchCollection.FindById(touchID, out location)) { if (location.TryGetPreviousLocation(out prevLocation)) { Point point = new Point((int)location.Position.X, (int)location.Position.Y); if (prevLocation.State == TouchLocationState.Pressed && location.State == TouchLocationState.Released) { if (GetMenuEntryHitBounds(tempButton).Contains(point)) // Execute the button action. I removed the excess } } } } The code for switching the button state is working fine but the code where I want to trigger the action is not. location.State == TouchLocationState.Released mostly ends up being false. (Even after I release the touch, it has a value of TouchLocationState.Moved) And what is more irritating is that it sometimes works! I am really confused and stuck for days now. Is this the right way? If yes then where am I going wrong? Or is there some other more effective way to do this? PS: I also posted this question on stack overflow then realized this question is more appropriate in gamedev. Sorry if it counts as being redundant.
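
    One commonly suggested workaround, sketched below with the question's own fields: rather than waiting for a TouchLocationState.Released entry (which does not reliably arrive on every device), treat the frame in which the tracked Id disappears from the collection, or reports Released, as the release event. lastPosition (a Vector2 field) and Activate() are placeholders:

        TouchCollection touches = TouchPanel.GetState();
        TouchLocation tracked;
        bool stillDown = touches.FindById(touchID, out tracked)
                         && tracked.State != TouchLocationState.Released;

        if (touching && !stillDown)
        {
            // The touch ended (Released, or simply gone from the collection):
            // fire the button if the last position seen was inside its bounds.
            Point last = new Point((int)lastPosition.X, (int)lastPosition.Y);
            if (GetMenuEntryHitBounds(tempButton).Contains(last))
                tempButton.Activate();   // hypothetical action method
            touching = false;
        }
        else if (stillDown)
        {
            lastPosition = tracked.Position;   // remember for the release frame
        }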

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set.

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory. To explore how this works I took the data from my previous post and loaded it into a SQL database. Then I converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"
                                          , {"BU_Key", "StartDate", "EndDate"}
                                          , {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query was run step by step in a literal fashion, you would expect it to run two queries against the SQL database doing "SELECT * ..." from both tables. However a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server.
    Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can against the source systems.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository. Symptoms: The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to import it into the B2B repository. The commands are provided below: ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can reach ~30 secs per operation. This could add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime or maintenance window. Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this will be done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction. Results: Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10. Summary: The following diagram summarizes the entire approach and process. Acknowledgements: The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
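
    A sketch of the staging-side loop described above, assuming the CPA property files sit in ./cpas and that a b2bexport target mirrors the import flags; everything beyond the two commands quoted in the note is an assumption:

        #!/bin/sh
        # Import every CPA into the empty staging repository, one at a time.
        for prop in ./cpas/*.properties; do
          ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="$prop" -Dstandard=true
          ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="./soa.zip" -Doverwrite=true
        done
        # One full export now captures all partner entries in a single file,
        # which can be imported into the loaded production repository in one shot.
        ant -f ant-b2b-util.xml b2bexport -Dexportfile="./full-repo.zip"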

    Read the article

  • Entity Framework 4 mapping fragment error when adding new entity scalar

    - by Jason Morse
    I have an Entity Framework 4 model-first design. I created a first draft of my model in the designer and all was well. I compiled, generated the database, etc. Later on I tried to add a string scalar (Nullable = true) to one of my existing entities, and I keep getting this type of error when I compile: Error 3004: Problem in mapping fragments starting at line 569: No mapping specified for properties MyEntity.MyValue in Set MyEntities. An Entity with Key (PK) will not round-trip when: Entity is type [MyEntities.MyEntity] I keep having to manually open the EDMX file and correct the XML whenever I add scalars. Any ideas on what's going on?
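
    When the designer drops the mapping, the missing piece in the MSL (mapping) section of the .edmx is typically a ScalarProperty for the new column. A sketch of what the hand-fix looks like, with names taken from the error message and the column name assumed to match the property:

        <!-- Inside the EntitySetMapping for MyEntities in the MSL section -->
        <EntityTypeMapping TypeName="MyEntities.MyEntity">
          <MappingFragment StoreEntitySet="MyEntity">
            <ScalarProperty Name="Id" ColumnName="Id" />
            <ScalarProperty Name="MyValue" ColumnName="MyValue" />
          </MappingFragment>
        </EntityTypeMapping>

    Since this is a model-first design, the store-side column has to exist as well, which normally means re-running Generate Database from Model after adding the scalar.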

    Read the article

  • Programmatically creating a toolbar in WPF

    - by rwallace
    I'm trying to create a simple toolbar in WPF, but the toolbar shows up with no corresponding buttons on it, just a very thin blank white strip. Any idea what I'm doing wrong, or what the recommended procedure is? Relevant code fragments so far: var tb = new ToolBar(); var b = new Button(); b.Command = comback; Image myImage = new Image(); myImage.Source = new BitmapImage(new Uri("back.png", UriKind.Relative)); b.Content = myImage; tb.Items.Add(b); var p = new DockPanel(); //DockPanel.SetDock(mainmenu, Dock.Top); DockPanel.SetDock(tb, Dock.Top); DockPanel.SetDock(sb, Dock.Bottom); //p.Children.Add(mainmenu); p.Children.Add(tb); p.Children.Add(sb); Content = p;
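
    One likely culprit, sketched below: a relative URI only resolves if back.png is compiled into the assembly as a Resource, and when the BitmapImage fails to load, the Button's content has zero size, leaving the ToolBar a thin strip. An explicit pack URI plus an explicit size makes the failure mode visible:

        // Assumes back.png has Build Action = Resource in this assembly.
        var myImage = new Image
        {
            Source = new BitmapImage(
                new Uri("pack://application:,,,/back.png", UriKind.Absolute)),
            Width = 16,   // explicit size so the button (and the bar) has
            Height = 16   // height even if the bitmap loads late or not at all
        };
        b.Content = myImage;

    Also worth checking: a Button whose Command reports CanExecute == false renders disabled but not invisible, so the command binding is probably not what collapses the bar.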

    Read the article

  • Template engine recommendations

    - by alex
    I'm looking for a template engine. Requirements: Runs on a JVM. Java is good; Jython, JRuby and the like, too... Can be used outside of servlets (unlike JSP) Is flexible with respect to where the templates are stored (JSP and a lot of engines require the templates to be stored in the FS). It should provide a template loading interface which one can implement, or something like that Easy inclusion of parameterized templates - I really like JSP's tag fragments Good docs, nice code, etc., the usual suspects I've looked at JSP - it's nearly perfect except for the servlet and filesystem coupling; StringTemplate - I love the template syntax, but it fails on the filesystem coupling, the documentation is lacking, and template groups and stuff are confusing; GXP, TAL, etc. Ideas, thoughts? Alex

    Read the article

  • Library to parse ERB files

    - by Douglas Sellers
    I am attempting to parse, not evaluate, Rails ERB files in a Hpricot/Nokogiri type manner. The files I am attempting to parse contain HTML fragments intermixed with dynamic content generated using ERB (standard Rails view files). I am looking for a library that will not only parse the surrounding content, much the way that Hpricot or Nokogiri will, but will also treat the ERB symbols (<%, <%= etc.) as though they were html/xml tags. Ideally I would get back a DOM-like structure where the <%, <%= etc. symbols would be included as their own node types. I know that it is possible to hack something together using regular expressions, but I was looking for something a bit more reliable, as I am developing a tool that I need to run on a very large view code base where both the html content and the erb content are important. For example, content such as: blah blah blah <div>My Great Text <%= my_dynamic_expression %></div> would return a tree structure like: root - text_node (blah blah blah) - element (div) - text_node (My Great Text) - erb_node (<%=)
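
    A sketch of one middle ground, under the assumption that regex is acceptable for the delimiters alone: rewrite the ERB markers into invented <erb-*> elements, then let Nokogiri build a single tree holding both kinds of node:

        require 'nokogiri'

        src = File.read(ARGV[0])
        # Only the <% %> delimiters are matched with regexes; the surrounding
        # HTML is still handled by a real parser. <%= must be rewritten first.
        wrapped = src.gsub(/<%=(.*?)%>/m, '<erb-expr>\1</erb-expr>')
                     .gsub(/<%(.*?)%>/m,  '<erb-stmt>\1</erb-stmt>')

        doc = Nokogiri::HTML::DocumentFragment.parse(wrapped)
        doc.traverse do |node|
          next if node.text? && node.text.strip.empty?
          puts "#{node.name}: #{node.text.strip}"
        end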

    Read the article

  • Can JAXB Incrementally Marshall An Object?

    - by Intellectual Tortoise
    I've got a fairly simple, but potentially large, structure to serialize. Basically the structure of the XML will be: <simple_wrapper> <main_object_type> <sub_objects> </main_object_type> ... main_object_type repeats up to 5,000 times </simple_wrapper> The main_object_type can have a significant amount of data. On my first 3,500-record extract, I had to give the JVM way more memory than it should need. So, I'd like to write out to disk after each main_object_type (or a bunch of them). I know that setting Marshaller.JAXB_FRAGMENT would allow it to marshal fragments, but I lose the outer XML document tags and the <simple_wrapper>. Any suggestions?
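
    The usual pattern, sketched below: write the wrapper element yourself with StAX and let a fragment-mode Marshaller emit one main_object_type at a time, so only a single record is ever held in memory. MainObjectType and records are placeholders for the question's generated class and data source:

        import java.io.FileOutputStream;
        import java.util.List;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.Marshaller;
        import javax.xml.namespace.QName;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        class StreamingWriter {
            static void writeAll(List<MainObjectType> records) throws Exception {
                XMLStreamWriter xsw = XMLOutputFactory.newInstance()
                        .createXMLStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
                Marshaller m = JAXBContext.newInstance(MainObjectType.class)
                        .createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE); // no per-record prolog

                xsw.writeStartDocument("UTF-8", "1.0");
                xsw.writeStartElement("simple_wrapper");
                for (MainObjectType record : records) {
                    // Wrap each record so it marshals as <main_object_type>.
                    m.marshal(new JAXBElement<>(new QName("main_object_type"),
                                                MainObjectType.class, record), xsw);
                    xsw.flush();                      // push each record to disk
                }
                xsw.writeEndElement();
                xsw.writeEndDocument();
                xsw.close();
            }
        }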

    Read the article

  • 'Programming by Coincidence' Exercise: Java File Writer

    - by Tapas
    I just read the article Programming by Coincidence. At the end of the page there are exercises: a few code fragments that are cases of "programming by coincidence". But I can't figure out the error in this piece: This code comes from a general-purpose Java tracing suite. The function writes a string to a log file. It passes its unit test, but fails when one of the Web developers uses it. What coincidence does it rely on? public static void debug(String s) throws IOException { FileWriter fw = new FileWriter("debug.log", true); fw.write(s); fw.flush(); fw.close(); } What is wrong with this?
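
    The usual reading: the relative path "debug.log" is resolved against the process's current working directory, which happens to be the project directory under a unit test but can be anywhere (and may be read-only) inside a web container. One way to remove that coincidence, sketched with java.io.tmpdir as the assumed writable location:

        // Resolve the log file explicitly instead of trusting the CWD.
        public static void debug(String s) throws IOException {
            File log = new File(System.getProperty("java.io.tmpdir"), "debug.log");
            try (FileWriter fw = new FileWriter(log, true)) {  // append mode
                fw.write(s);
            }  // try-with-resources flushes and closes even if write() throws
        }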

    Read the article

  • Proper snowball analyzer configuration when using Grails Searchable Plugin

    - by Wirsbro
    To improve stemming we want to switch from the default analyzer to snowball; however, we are having a lot of difficulty with the proper settings and would appreciate any help. Environment: - Sun's Java 1.6.16 - Grails 1.2.2 - Searchable Plug-In 0.5.5 Config.groovy: Have tried both settings: compassSettings = ['compass.engine.analyzer.stemmed.type': 'snowball', 'compass.engine.analyzer.stemmed.name': 'English'] compassSettings = ['compass.engine.analyzer.snowball.type': 'snowball', 'compass.engine.analyzer.snowball.name': 'English', 'compass.engine.analyzer.search.type': 'snowball', 'compass.engine.analyzer.search.name': 'English'] Search.groovy - The Invocation: def searchResult = searchableService.search(params.q, withHighlighter: { highlighter, index, sr -> if (!sr.highlights) { sr.highlights = [] } try { sr.highlights[index] = highlighter.fragments("content")[0..2].join(" ") } catch (IndexOutOfBoundsException ex) { sr.highlights[index] = highlighter.fragment("content") } }) def suggestion = searchableService.suggestQuery(params.q) if (suggestion != params.q) { searchResult.suggestedQuery = suggestion }

    Read the article

  • Can Nokogiri use a SAX parser to parse an HTML fragment?

    - by .yahoo.co.jpaqwsykcj3aulh3h1k0cy6nzs3isj
    I have this code. class MyParser < Nokogiri::XML::SAX::Document def characters(string) LOG.debug("characters #{string}") end def start_element(name, attrs = []) LOG.debug("start_element #{name}") end def end_element(name) LOG.debug("end_element #{name}") end end parser = Nokogiri::HTML::SAX::Parser.new(MyParser.new) parser.parse(File.new($*[0], 'rb')) Run on an HTML fragment like this, <h1>Hello</h1> <p>Hi.</p> the output shows that only the first element is processed: start_element h1 characters Hello end_element h1 If I wrap the fragment in html and body tags, the whole input is parsed. Is there a way to use a SAX style parser on HTML fragments?
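
    A sketch of the pragmatic route the question already hints at: wrap the fragment so libxml2 sees a complete document, then filter the synthetic html/body events out in the handler (puts stands in for the question's LOG):

        require 'nokogiri'

        SYNTHETIC = %w[html body]

        class MyParser < Nokogiri::XML::SAX::Document
          def characters(string)
            puts "characters #{string}"
          end

          def start_element(name, attrs = [])
            puts "start_element #{name}" unless SYNTHETIC.include?(name)
          end

          def end_element(name)
            puts "end_element #{name}" unless SYNTHETIC.include?(name)
          end
        end

        fragment = File.read(ARGV[0])
        parser = Nokogiri::HTML::SAX::Parser.new(MyParser.new)
        parser.parse("<html><body>#{fragment}</body></html>")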

    Read the article

  • How to use custom drawing in QGraphicsViews in PyQt?

    - by DSblizzard
    I need to view a QGraphicsScene in 2 QGraphicsViews, with the condition that they have different scale factors for the items in the scene. The closest function I found is drawItems(), but as far as I can understand, it must be called manually. How do I repaint the views automatically? I have these two code fragments in the program: class TGraphicsView(QGraphicsView): def __init__(self, parent = None): print("__init__") QGraphicsView.__init__(self, parent) def drawItems(self, Painter, ItemCount, Items, StyleOptions): print("drawItems") Brush = QBrush(Qt.red, Qt.SolidPattern) Painter.setBrush(Brush) Painter.drawEllipse(0, 0, 100, 100) ... Mw.gvNavigation = TGraphicsView(Mw) # Mw - main window Mw.gvNavigation.setGeometry(0, 0, Size1, Size1) Mw.gvNavigation.setScene(Mw.Scene) Mw.gvNavigation.setSceneRect(0, 0, Size2, Size2) Mw.gvNavigation.show() __init__ works, Mw.gvNavigation is displayed and there are Mw.Scene items in it, but drawItems() isn't called.
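
    For different per-view scale factors, the usual approach avoids drawItems() altogether: give each view its own transform with QGraphicsView.scale() and let the scene schedule repaints in every attached view. (As an aside, Qt 4.6+ reportedly only calls QGraphicsView.drawItems() when the view's IndirectPainting optimization flag is set.) A minimal sketch, assuming PyQt4:

        from PyQt4.QtGui import QApplication, QGraphicsScene, QGraphicsView

        app = QApplication([])

        scene = QGraphicsScene(0, 0, 400, 400)
        scene.addEllipse(0, 0, 100, 100)   # any scene change repaints both views

        main_view = QGraphicsView(scene)   # 1:1 rendering
        nav_view = QGraphicsView(scene)    # same scene, quarter scale
        nav_view.scale(0.25, 0.25)

        main_view.show()
        nav_view.show()
        app.exec_()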

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something? The negative aspects I'm aware of include: additional rendering time; additional assets required on uncached page load; an additional layer of complexity; noticeably less developer experience than with server-side template techniques. The benefits I perceive include: distributed template composition (offloaded to the client); caching of common template fragments offloaded to the client; logical separation of document structure and data; a well-documented web standard supported by all modern browsers. Finally, although I know it's impossible to predict the future, I am curious to hear opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article

  • Launching Vim via Lua

    - by Keith Pimmel
    I'm writing a simple little Lua command-line app that will build a static website. I'm storing my fragments in a SQLite database. Retrieving the data from the db is straightforward, as is saving it; my question comes from editing the data. Is there an elegant way to pipe the data from Lua to Vim? Can Vim edit a memory buffer and return it? I was planning on launching the editor via os.execute('vim'), but only after grabbing a temporary file handle and dumping the database output into that. I would like to avoid touching the filesystem that way, but that is my contingency plan.
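
    Vim has no channel for editing another process's in-memory buffer, so the temp-file contingency plan is effectively the standard answer. A sketch of the round trip:

        -- Write the fragment to a temp file, block on vim, read it back.
        local function edit_in_vim(text)
          local path = os.tmpname()            -- POSIX tmp path, no spaces
          local f = assert(io.open(path, "w"))
          f:write(text)
          f:close()

          os.execute("vim " .. path)           -- returns when vim exits

          f = assert(io.open(path, "r"))
          local edited = f:read("*a")
          f:close()
          os.remove(path)
          return edited
        end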

    Read the article
