Search Results

Search found 9156 results on 367 pages for 'partial match'.


  • Reading non-English HTML pages with C#

    - by Gal Miller
    I am trying to find a string in Hebrew in a website. The reading code is attached. Afterward I try to read the file using a StreamReader, but I can't match strings in other languages. What am I supposed to do?

        // used on each read operation
        byte[] buf = new byte[8192];

        // prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

        // execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

        // we will read data via the response stream
        Stream resStream = response.GetResponseStream();

        string tempString = null;
        int count = 0;

        FileStream fileDump = new FileStream(@"c:\dump.txt", FileMode.Create);
        do
        {
            count = resStream.Read(buf, 0, buf.Length);
            fileDump.Write(buf, 0, buf.Length);
        } while (count > 0); // any more data to read?
        fileDump.Close();
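    Two things stand out here, sketched below as a hedged fix rather than a confirmed answer: the loop writes buf.Length bytes on every pass instead of count, which corrupts the tail of the dump, and the bytes are never decoded with the page's charset. The windows-1255 fallback and the sample Hebrew word are assumptions; the server's declared charset (response.CharacterSet) should win when present.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class HebrewPageReader
        {
            static void Main()
            {
                HttpWebRequest request =
                    (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (Stream resStream = response.GetResponseStream())
                {
                    // Prefer the charset the server declares; fall back to UTF-8.
                    Encoding enc;
                    try { enc = Encoding.GetEncoding(response.CharacterSet); }
                    catch { enc = Encoding.UTF8; }

                    // Decoding via StreamReader makes the result a normal .NET
                    // string, so Hebrew can be matched directly.
                    using (StreamReader reader = new StreamReader(resStream, enc))
                    {
                        string page = reader.ReadToEnd();
                        Console.WriteLine(page.Contains("שלום")); // example search term
                    }
                }
            }
        }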


  • Can I join two tables whereby the joined table is sorted by a certain column?

    - by Ferdy
    I'm not much of a database guru so I need some help on a query I'm working on. In my photo community project I want to richly visualize tags by not only showing the tag name and counter (# of images inside them), but also showing a thumb of the most popular image inside the tag (most karma). The table setup is as follows:

    - Image table: holds basic image metadata; the karma field is the important one here
    - Imagefile table: holds multiple entries per image, one for each format
    - Tag table: holds tag definitions
    - Tag_map table: maps tags to images

    In my usual trial-and-error query authoring I have come this far:

        SELECT *
        FROM (SELECT tag.name, tag.id, COUNT(tag_map.tag_id) AS cnt
              FROM tag
              INNER JOIN tag_map ON (tag.id = tag_map.tag_id)
              INNER JOIN image ON tag_map.image_id = image.id
              INNER JOIN imagefile ON image.id = imagefile.image_id
              WHERE imagefile.type = 'smallthumb'
              GROUP BY tag.name
              ORDER BY cnt DESC) AS T1
        WHERE cnt > 0
        ORDER BY cnt DESC

    [column clause of inner query snipped for the sake of simplicity]

    This query gives me somewhat what I need. The outer query makes sure that only tags with at least 1 image are returned. The inner query returns the tag details, such as its name, count (# of images) and the thumb. In addition, I can sort the inner query as I want (by most images, alphabetically, most recent, etc). So far so good. The problem however is that this query does not match the most popular image (most karma) of the tag; it seems to always take the most recent one in the tag. How can I make sure that the most popular image is matched with the tag?


  • SQL Server INSERT ... SELECT Statement won't parse

    - by Jim Barnett
    I am getting the following error message with SQL Server 2005:

        Msg 120, Level 15, State 1, Procedure usp_AttributeActivitiesForDateRange, Line 18
        The select list for the INSERT statement contains fewer items than the insert list.
        The number of SELECT values must match the number of INSERT columns.

    I have copied and pasted the select list and insert list into Excel and verified there are the same number of items in each list. Both tables have an additional primary key field which is not listed in either the insert statement or the select list. I am not sure if that is relevant, but suspect it may be. Here is the source for my stored procedure:

        CREATE PROCEDURE [dbo].[usp_AttributeActivitiesForDateRange]
        (
            @dtmFrom DATETIME,
            @dtmTo DATETIME
        )
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @dtmToWithTime DATETIME
            SET @dtmToWithTime = DATEADD(hh, 23, DATEADD(mi, 59, DATEADD(s, 59, @dtmTo)));

            -- Get uncontested DC activities
            INSERT INTO AttributedDoubleClickActivities
                ([Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto],
                 [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID],
                 [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword],
                 [Local-User-ID], [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue],
                 [Transaction-ID], [Other-Data], Ordinal, [Click-Time], [Event-ID])
            SELECT [Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto],
                   [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID],
                   [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword],
                   [Local-User-ID] [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue],
                   [Transaction-ID], [Other-Data], REPLACE(Ordinal, '?', '') AS Ordinal,
                   [Click-Time], [Event-ID]
            FROM Activity_Reports
            WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
              AND REPLACE(Ordinal, '?', '') IN
                  (SELECT REPLACE(Ordinal, '?', '') FROM Activity_Reports
                   WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
                   EXCEPT
                   SELECT CONVERT(VARCHAR, TripID) FROM VisualSciencesActivities
                   WHERE [Time] BETWEEN @dtmFrom AND @dtmTo);
        END
        GO


  • PHP/regex: get the contents between a set of double quotes

    - by Davy Arnold
    Update to my question: my overall goal is to split the string into 4 parts that I can access later:

        value = " result of the html inside the first and last " "

    Here is an example of what I'm trying to do:

        // My string (this is dynamic and will change, this is just an example)
        $string = 'value="<p>Some text</p> <a href="#">linky</a>"';

        // Run the match and spit out the results
        preg_match_all('/([^"]*)(?:\s*=\s*(\042|\047))([^"]*)/is', $string, $results);

        // Here is the array I want to end up with
        Array
        (
            [0] => Array ( [0] => value="<p>Some text</p><a href="#">linky</a>" )
            [1] => Array ( [0] => value )
            [2] => Array ( [0] => " )
            [3] => Array ( [0] => <p>Some text</p><a href="#">linky</a> )
        )

    Basically the double quotes on the link are causing me some trouble, so my first thought was to do [^"]$ or something to have it just run until the last double quote, but that isn't getting me anywhere. Another idea I had was to process the string in PHP to strip out any inner quotes, but I'm not sure how to go about this either. Hopefully I'm being clear; it is pretty late and I've been at this far too long!


  • Creating one row of information in Excel using a unique value

    - by user1426513
    This is my first post. I am currently working on a project at work which requires that I work with several different worksheets in order to create one mail master worksheet, as it were, in order to do a mail merge. The worksheet contains information regarding different purchases, and each purchaser is identified with their own ID number. Below is an example of what my spreadsheet looks like now (however I do have more columns):

        ID  Salutation      Address | ID  Name            Donation | ID  Name            Tickets
        9   Mr. John Doe    123     | 12  Ms. Jane Smith  100.00   | 12  Ms. Jane Smith  300.00
        12  Ms. Jane Smith  456     | 22  Mr. Mike Man    500.00   | 84  Ms. Jo Smith    300.00

    What I would like to do is somehow sort my data so that everything with the same unique identifier (ID) lines up on the same row. For example, for ID 12 all the information for Jane Smith would show up in one row, matched by her ID number, and ID 22 will match up with 22, etc. When I merged all of my spreadsheets together I sorted them by ID number, but my problem is that not everyone who made a donation bought a ticket, and some people just bought tickets and nothing else, so sorting alone doesn't work. Hopefully this makes sense. Thanks in advance.


  • Using LINQ on an ObservableCollection with GroupBy and Sum aggregate

    - by Mark Oates
    I have the following block of code which works fine:

        var boughtItemsToday = (from DBControl.MoneySpent bought in BoughtItemDB.BoughtItems
                                select bought);
        BoughtItems = new ObservableCollection<DBControl.MoneySpent>(boughtItemsToday);

    It returns data from my MoneySpent table, which includes ItemCategory, ItemAmount and ItemDateTime. I want to change it to group by ItemCategory and ItemAmount so I can see where I am spending most of my money, so I created a GroupBy query and ended up with this:

        var finalQuery = boughtItemsToday.AsQueryable().GroupBy(category => category.ItemCategory);
        BoughtItems = new ObservableCollection<DBControl.MoneySpent>(finalQuery);

    Which gives me two errors:

        Error 1: The best overloaded method match for 'System.Collections.ObjectModel.ObservableCollection.ObservableCollection(System.Collections.Generic.List)' has some invalid arguments
        Error 2: Argument 1: cannot convert from 'System.Linq.IQueryable' to 'System.Collections.Generic.List'

    And this is where I'm stuck! How can I use the GroupBy and Sum aggregate functions to get a list of my categories and the associated spend in one LINQ query? Any help/suggestions gratefully received. Mark
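    The compiler errors are saying that a sequence of groupings cannot be stored in an ObservableCollection<MoneySpent>; the groups need to be projected into a result type of their own. A minimal sketch continuing from the code above, assuming ItemAmount is a decimal (CategorySpend is a made-up name, not part of the original code):

        // Hypothetical result type: category name plus total spend.
        public class CategorySpend
        {
            public string Category { get; set; }
            public decimal Total { get; set; }
        }

        // Group the rows by category and sum the amounts within each group.
        var spendByCategory = boughtItemsToday
            .GroupBy(item => item.ItemCategory)
            .Select(g => new CategorySpend
            {
                Category = g.Key,
                Total = g.Sum(item => item.ItemAmount)
            })
            .OrderByDescending(s => s.Total);

        // Bind to a collection typed to the result, not to MoneySpent.
        var categoryTotals = new ObservableCollection<CategorySpend>(spendByCategory);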


  • Dealing with expired authentication for a partially filled form?

    - by aaronls
    I have a large webform and would like to prompt the user to log in if their session expires, or have them log in when they submit the form. Having them log in at submit time creates a lot of challenges, because they get redirected to the login page and the postback data for the original form submission is lost. So I'm thinking about how to prompt them to log in asynchronously when the session expires: they stay on the original form page, a panel appears telling them the session has expired and they need to log in, the login is submitted asynchronously, the login panel disappears, and the user is still on the original partially filled form and can submit it.

    Is this easily doable using the existing ASP.NET Membership controls? When they submit the form, will I need to worry about the session key? That is, I am wondering whether the session key the form submits will be the original one from before the session expired, which won't match the new one generated after logging in again asynchronously (I still do not understand the details of how ASP.NET tracks authentication/session IDs).

    Edit: Yes, I am actually concerned about authentication expiration. The user must be authenticated for the submitted data to be considered valid.


  • Rackspace Cloud rewrite jpg causes Session reset

    - by willoller
    This may be the .NET version of this question. I have an image script with the following:

        ...
        Response.WriteFile(filename);
        Response.End();

    I am rewriting .jpg files using the following rewrite rule in web.config:

        <rule name="Image Redirect" stopProcessing="true">
          <match url="^product-images/(.*).jpg" />
          <conditions>
            <add input="{REQUEST_URI}" pattern="\.(jp?g|JP?G)$" />
          </conditions>
          <action type="Redirect" redirectType="SeeOther"
                  url="/product-images/ProductImage.aspx?path=product-images/{tolower:{R:1}}.jpg" />
        </rule>

    It basically just rewrites the image path into a query parameter. The problem is that (intermittently, of course) Mosso returns a new ASP session cookie, which breaks the whole world. Directly accessing a static .jpg file does not cause this problem. Directly accessing the image script does not cause it either. Only rewriting a .jpg file to the .aspx script causes the session loss.

    Things I have tried (from the Rackspace doc "How can I bypass the cache?"): I added Private cacheability to the image script itself:

        Response.Cache.SetCacheability(HttpCacheability.Private);

    I tried adding these cache-disabling nodes to web.config:

        <staticContent>
          <clientCache cacheControlMode="DisableCache" />
        </staticContent>

    and

        <httpProtocol>
          <customHeaders>
            <add name="Cache-Control private" value="Cache-Control private" />
          </customHeaders>
        </httpProtocol>

    The solution I need: the browser cache cannot be disabled. This means potential solutions involving Cache.SetNoStore() or HttpCacheability.NoCache will not work.


  • How to syntax-highlight XML in CDATA elements in Vim?

    - by Jim Hurne
    Vim's syntax highlighting for XML/XSL is great, except it turns off all syntax highlighting in CDATA regions. Is there a way to turn syntax highlighting on in CDATA regions? At work, we have a lot of XSL code embedded within other XML documents. It would be great if I could get all of the goodness of XML editing for the embedded XSL code as well, without having to temporarily remove the CDATA tags or copy the CDATA content into a temporary file. Example:

        <root>
          <someTag><![CDATA[
            <xsl:template match="/">
              <!-- XSL content here -->
            </xsl:template>
          ]]>
          </someTag>
        </root>

    Note that the name of the tag containing the content (someTag in the example) could be anything. We also sometimes embed Javascript inside CDATA regions, and again, it would be nice to turn on Javascript syntax highlighting for those regions. Again, the tag the data is embedded in is usually arbitrary and can be anything.


  • How to use setTimeout / .delay() to wait for typing between characters

    - by Darcy
    Hi all, I am creating a simple listbox filter that takes the user input and returns the matching results in a listbox via javascript/jquery (roughly 5000+ items in the listbox). Here is the code snippet:

        var Listbox1 = $('#Listbox1');
        var commands = document.getElementById('DatabaseCommandsHidden'); // using js for speed

        $('#CommandsFilter').bind('keyup', function() {
            Listbox1.children().remove();
            for (var i = 0; i < commands.options.length; i++) {
                if (commands.options[i].text.toLowerCase().match($(this).val().toLowerCase())) {
                    Listbox1.append($('<option></option>').val(i).html(commands.options[i].text));
                }
            }
        });

    This works pretty well, but slows down somewhat when the first or second character is being typed, since so many items match. I thought a solution would be to add a delay to the textbox that prevents the 'keyup' handler from running until the user stops typing. The problem is, I'm not sure how to do that, or if it's even a good idea or not. Any suggestions/help is greatly appreciated.


  • JavaScript .splice() not working correctly

    - by adardesign
    I am setting a cookie for each navigation container that is clicked on. It sets an array that is joined, and the joined value becomes the cookie value. If a container is clicked again, it is removed from the array. It is somehow buggy: it only splices after clicking on other elements, and then it behaves weird. Thanks much.

        var navLinkToOpen;
        var setNavCookie = function(value) {
            var isSet = false;
            var checkCookies = checkNavCookie();
            setCookieHelper = checkCookies ? checkCookies.split(",") : [];
            console.log("value passed", value);
            for (i in setCookieHelper) {
                if (value == setCookieHelper[i]) {
                    setCookieHelper.splice(value, 1);
                    isSet = true;
                }
            }
            if (!isSet) {
                setCookieHelper.push(value);
            }
            setCookieHelper.join(",");
            document.cookie = "navLinkToOpen" + "=" + setCookieHelper;
        }

        var checkNavCookie = function() {
            var allCookies = document.cookie.split(';');
            for (i = 0; i < allCookies.length; i++) {
                temp = allCookies[i].split("=");
                if (temp[0].match("navLinkToOpen")) {
                    var getValue = temp[1];
                }
            }
            return getValue || false;
        }

        $(document).ready(function() {
            $("#LeftNav li").has("b").addClass("navHeader").not(":first").siblings("li").hide();
            $(".navHeader").click(function() {
                $(this).toggleClass("collapsed").nextUntil("li:has('b')").slideToggle(300);
                setNavCookie($('.navHeader').index($(this)));
                return false;
            });
            console.log("init", document.cookie);
            var testCookies = checkNavCookie();
            if (testCookies) {
                finalArrayValue = testCookies.split(",");
                for (i in finalArrayValue) {
                    $(".navHeader").eq(finalArrayValue[i]).toggleClass("collapsed").nextUntil(".navHeader").slideToggle(0);
                }
            }
        });


  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and here at Stack Overflow looked through the related questions and searched, but I'm not finding what I'm looking for. I've also searched the documentation on DNN. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN.

    We use SVN for all our source control and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means when we do web sites in Visual Studio, we use file-based web sites rather than setting them up in the local IIS. It just makes things easier for us. However, with DNN it appears that even if you get the source code, it is expecting to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server. This is to enable backups in addition to our normal source control. (This was a management decision.) So that means that we need to change the virtual web app when we make the move.

    Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit - added: I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done" as answers. I'm always open to hearing "It can't be done the way you want. You need to change your procedures to match how it works" if necessary. I guess if you've got experience trying this and just couldn't get it to work, I can learn from your experience that way as well, but some detail would be good.


  • Have to click twice to submit the form in IE8

    - by phil
    A very peculiar bug in a simple HTML form: after changing an option, the button has to be clicked twice to submit the form. The button is focused after the first click, but the form is not submitted. It behaves this way only in IE8 and works fine in Chrome and Firefox. Pay attention to the 'g^' right before the <select>: it has to be a letter or number followed by a symbol to trigger this bug. For example, 'a#', 'f$' and '3(' all create the same bug. Otherwise it works fine. By the way, if you don't change the option and click the button right away, there is no bug. Very strange, huh?

        <form method="post" action="match.php">
            g^
            <select>
                <option>Select</option>
                <option>English</option>
                <option>French</option>
            </select>
            <input type="submit" value="Go" />
        </form>


  • A "smart" (forgiving) date parser?

    - by jdmuys
    I have to migrate a very large dataset from one system to another. One of the "source" columns contains a date but is really a string with no constraint, while the destination system mandates a date in the format yyyy-mm-dd. Many, but not all, of the source dates are formatted as yyyymmdd. So to coerce them to the expected format, I do (in Perl):

        return "$1-$2-$3" if ($val =~ /(\d{4})[-\/]*(\d{2})[-\/]*(\d{2})/);

    The problem arises when the source dates move away from the generic yyyymmdd. The goal is to salvage as many dates as possible before giving up. Example source strings include:

        21/3/1998, March 2004, 2001, 3/4/97

    I can try to match as many of the examples I can find with a succession of regular expressions such as the one above. But is there something smarter to do? Am I not reinventing the wheel? Is there a library somewhere doing something similar? I couldn't find anything relevant googling "forgiving date parser". (Any language is OK.)
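    Since the asker says any language is OK: a minimal sketch of the format-cascade approach in C#, trying strict formats first and only then falling back to the framework's lenient parser. The format list is an assumption to extend against the real data, and note that partial dates like "March 2004" get coerced to the first of the month, which may or may not be acceptable for the migration.

        using System;
        using System.Globalization;

        static class ForgivingDateParser
        {
            // Strict formats tried first; order matters when inputs are ambiguous.
            static readonly string[] Formats =
            {
                "yyyyMMdd", "yyyy-MM-dd", "dd/MM/yyyy", "d/M/yyyy",
                "d/M/yy", "MMMM yyyy", "yyyy"
            };

            public static string TryNormalize(string raw)
            {
                raw = raw.Trim();

                if (DateTime.TryParseExact(raw, Formats, CultureInfo.InvariantCulture,
                                           DateTimeStyles.None, out DateTime dt))
                    return dt.ToString("yyyy-MM-dd");

                // Last resort: the framework's lenient, "forgiving" parser.
                if (DateTime.TryParse(raw, CultureInfo.InvariantCulture,
                                      DateTimeStyles.None, out dt))
                    return dt.ToString("yyyy-MM-dd");

                return null; // give up; leave this row for manual review
            }
        }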


  • XSLT: Add namespace to root element

    - by Ingrid
    I need to change namespaces in the root element as follows. Input document:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <foo xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:ns2="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    Desired output:

        <foo audience="external"
             xsi:schemaLocation="urn:isbn:1-931666-22-9 http://www.loc.gov/ead/ead.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:xlink="http://www.w3.org/1999/xlink"
             xmlns="urn:isbn:1-931666-22-9">

    I was trying to do it as I copy over the whole document, before I give any other transformation instructions, but the following doesn't work:

        <xsl:template match="* | processing-instruction() | comment()">
          <xsl:copy copy-namespaces="no">
            <xsl:for-each select=".">
              <xsl:attribute name="audience" select="'external'"/>
              <xsl:namespace name="xlink" select="'http://www.w3.org/1999/xlink'"/>
            </xsl:for-each>
            <xsl:apply-templates/>
            <xsl:copy-of select="@*"/>
            <xsl:apply-templates/>
          </xsl:copy>
        </xsl:template>

    Thanks for any advice!


  • How can I get the rank of rows relative to total number of rows based on a field?

    - by Arms
    I have a scores table that has two fields: user_id and score. I'm fetching specific rows that match a list of user_id's. How can I determine a rank for each row relative to the total number of rows, based on score? The rows in the result set are not necessarily sequential (the scores will vary widely from one row to the next). I'm not sure if this matters, but user_id is a unique field.

    Edit (@Greelmo): I'm already ordering the rows. If I fetch 15 rows, I don't want the rank to be 1-15. I need it to be the position of that row compared against the entire table by the score property. So if I have 200 rows, one row's rank may be 3 and another may be 179 (these are arbitrary numbers, for example only).

    Edit 2: I'm having some luck with this query, but I actually want to avoid ties:

        SELECT s.score, s.created_at, u.name, u.location, u.icon_id, u.photo,
               (SELECT COUNT(*) + 1 FROM scores WHERE score > s.score) AS rank
        FROM scores s
        LEFT JOIN users u ON u.uID = s.user_id
        ORDER BY s.score DESC, s.created_at DESC
        LIMIT 15

    If two or more rows have the same score, I want the latest one (or earliest - I don't care) to be ranked higher. I tried modifying the subquery with AND id > s.id, but that ended up giving me an unexpected result set and different ties.


  • C# Breakpoint Weirdness

    - by Dan
    In my program I've got two data files, A and B. The data in A is static and the data in B refers back to the data in A. In order to make sure the data in B is invalidated when A is changed, I keep an identifier for each of the links, which is a long byte string identifying the data. I get this string by using BitConverter on some of the important properties.

    My problem is that this scheme isn't working. I save the identifiers initially, and when I reload (with the exact same data in A) the identifiers don't match anymore. It seems the BitConverter gives different results when I go to save. The really weird thing about it is: if I place a breakpoint in the save code, I can see the identifier it's writing to the file is fine, and the next load works. If I don't place a breakpoint and, say, print the identifiers to the console instead, they're totally different. It's as if, when my program is running at full speed, the CPU messes up some instructions. This isn't the first time something like this has happened to me; I've seen it in other projects. What gives? Has anyone ever experienced this kind of debugging weirdness? I can't explain how stopping the program or not stopping it can change the output. Also, it's not a hardware problem, because this happens on my laptop as well.
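    Symptoms like these (correct under a breakpoint, wrong at full speed) usually point at a race condition or at identifiers built from inputs that are not actually deterministic, rather than at the CPU. One way to rule out the identifier itself, sketched below as an assumption rather than a diagnosis: serialize the important properties with explicit, culture-invariant formatting and hash the result, so the bytes depend only on the values. The Item type and its properties are hypothetical stand-ins.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical record standing in for "the important properties" in file A.
        class Item
        {
            public string Name;
            public double Weight;
            public int Version;
        }

        static class StableId
        {
            public static string Compute(Item item)
            {
                // Invariant-culture, round-trip formatting: the same values always
                // produce the same canonical string, on any machine, at any speed.
                string canonical = string.Format(
                    System.Globalization.CultureInfo.InvariantCulture,
                    "{0}|{1:R}|{2}", item.Name, item.Weight, item.Version);

                using (var sha = SHA256.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return Convert.ToHexString(hash); // .NET 5+; use BitConverter.ToString on older runtimes
                }
            }
        }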


  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
    I am working on converting a large number of spreadsheets to use a new third-party data access library (converting from third-party library #1 to third-party library #2). FYI: a call to a UDF (user-defined function) is placed in a cell, and when that is refreshed, it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table using library #2, which can affect formulas on the sheet if data is being read from the pivot table without using GetPivotData.

    So I have ~100 of these very complicated (20+ worksheets per workbook) spreadsheets that I have to convert, and run in parallel for a period of time, to see if the output using the new data access library matches the old library. Is there some clever approach to this, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas that come to mind (see the sketch below):

    1. Create a Validator workbook that has the same number of worksheets, and simply do a Workbook1!Worksheet1!A1 - Workbook2!Worksheet3!A1 for every possible cell on each sheet.
    2. Roughly the equivalent of #1, but traverse the cells in the two books using VBA, and log any cells that do not match.

    I don't particularly like either idea; can anyone think of something better than this, maybe some third-party utility I could buy?
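    A hedged sketch of idea #2, in C# via Excel interop rather than VBA (the asker's stack is unspecified; the file paths and the assumption that sheets pair up by index are illustrative only). It walks the used range of each sheet pair and logs any differing cells:

        using System;
        using Excel = Microsoft.Office.Interop.Excel;

        class WorkbookDiff
        {
            static void Main()
            {
                var app = new Excel.Application { Visible = false };
                // Assumed file paths for the two parallel-running workbooks.
                Excel.Workbook oldBook = app.Workbooks.Open(@"C:\reports\old.xlsx");
                Excel.Workbook newBook = app.Workbooks.Open(@"C:\reports\new.xlsx");

                try
                {
                    for (int s = 1; s <= oldBook.Worksheets.Count; s++)
                    {
                        Excel.Worksheet a = (Excel.Worksheet)oldBook.Worksheets[s];
                        Excel.Worksheet b = (Excel.Worksheet)newBook.Worksheets[s];

                        Excel.Range used = a.UsedRange;
                        for (int r = 1; r <= used.Rows.Count; r++)
                        {
                            for (int c = 1; c <= used.Columns.Count; c++)
                            {
                                object va = ((Excel.Range)a.Cells[r, c]).Value2;
                                object vb = ((Excel.Range)b.Cells[r, c]).Value2;
                                if (!Equals(va, vb))
                                    Console.WriteLine($"{a.Name}!R{r}C{c}: '{va}' vs '{vb}'");
                            }
                        }
                    }
                }
                finally
                {
                    oldBook.Close(false);
                    newBook.Close(false);
                    app.Quit();
                }
            }
        }

    For 20+ sheets per workbook, reading each sheet's UsedRange.Value2 into an object[,] and comparing in memory would be much faster than this cell-by-cell interop loop.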


  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns.

    Question: What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better.

    Criteria: Here are some of my expectations and self-imposed rules:

    - Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed.
    - Existing data in the table must be preserved, so the table cannot simply be dropped and recreated.
    - In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values.

    Example: Here is a sample table (because visual examples help!):

        id  sensor_name  sensor_status  x1    x2    x3    x4
        1   na019        OK             0.01  0.21  1.41  1.22

    Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
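    A minimal sketch of the iterate-and-alter approach described above, assuming SQL Server and a required-columns list defined in code (the x5 column and its type come from the example; everything else is an assumption, not the asker's implementation):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class SchemaUpdater
        {
            // Columns this version of the program requires, with their SQL types.
            static readonly Dictionary<string, string> Required = new Dictionary<string, string>
            {
                { "x5", "FLOAT NULL" }
            };

            public static void EnsureColumns(string connectionString, string table)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    foreach (var col in Required)
                    {
                        // Check the catalog for the column...
                        var check = new SqlCommand(
                            @"SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
                              WHERE TABLE_NAME = @t AND COLUMN_NAME = @c", conn);
                        check.Parameters.AddWithValue("@t", table);
                        check.Parameters.AddWithValue("@c", col.Key);

                        if ((int)check.ExecuteScalar() == 0)
                        {
                            // ...and add it if missing. Names come from our own list,
                            // never from user input, so string-building is safe here.
                            var alter = new SqlCommand(
                                $"ALTER TABLE [{table}] ADD [{col.Key}] {col.Value}", conn);
                            alter.ExecuteNonQuery();
                        }
                    }
                }
            }
        }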


  • When and why can sprintf fail?

    - by Srekel
    I'm using swprintf to build a string into a buffer (using a loop, among other things):

        const int MaxStringLengthPerCharacter = 10 + 1;
        wchar_t* pTmp = pBuffer;
        for (size_t i = 0; i < nNumPlayers; ++i)
        {
            const int nPlayerId = GetPlayer(i);
            const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, TEXT("%d,"), nPlayerId);
            assert(nWritten >= 0);
            pTmp += nWritten;
        }
        *pTaskPlayers = '\0';

    If during testing the assert never hits, can I be sure that it will never hit in live code? That is, do I need to check whether nWritten < 0 and handle that, or can I safely assume that there won't be a problem? Under which circumstances can it return -1? The documentation more or less just states "if the function fails". In one place I've read that it will fail if it can't match the arguments (i.e. the formatting string to the varargs), but that doesn't worry me. I'm also not worried about buffer overrun in this case - I know the buffer is big enough.


  • FileMaker XSL: select column by name

    - by Kevin Sylvestre
    I am looking to export from Filemaker using column names (instead of positions). Currently I export with the following XSL stylesheet, which selects columns by position:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:fm="http://www.filemaker.com/fmpxmlresult"
            exclude-result-prefixes="fm">
          <xsl:output method="xml" version="1.0" encoding="utf-8" indent="yes"/>
          <xsl:template match="/">
            <people>
              <xsl:for-each select="fm:FMPXMLRESULT/fm:RESULTSET/fm:ROW">
                <person>
                  <name>
                    <xsl:value-of select="fm:COL[01]/fm:DATA"/>
                  </name>
                  <location>
                    <xsl:value-of select="fm:COL[02]/fm:DATA"/>
                  </location>
                </person>
              </xsl:for-each>
            </people>
          </xsl:template>
        </xsl:stylesheet>

    Any ideas? Thanks.


  • Parsing HTTP - Bytes.length != String.length

    - by hotzen
    Hello, I consume HTTP via nio.SocketChannel, so I get chunks of data as Array[Byte]. I want to put these chunks into a parser and continue parsing after each chunk has been put. HTTP itself seems to use an ISO 8859 charset, but the payload/body may be arbitrarily encoded: if the HTTP Content-Length specifies X bytes, the UTF-8-decoded body may have far fewer characters (one character may be represented in UTF-8 by 2 bytes, etc.). So what is a good parsing strategy to honor an explicitly specified Content-Length and/or a Transfer-Encoding: chunked, which specifies a chunk length to be honored?

    - Append each data chunk to a mutable.ArrayBuffer[Byte], search for CRLF in the bytes, decode everything from 0 until CRLF to String and match with regular expressions like StatusRegex, HeaderRegex, etc.?
    - Decode each data chunk with the proper charset (e.g. iso8859, utf8, etc.) and add it to a StringBuilder. With this solution I am not able to honor any Content-Length or chunk size - but do I have to care about that?
    - Any other solution...?
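    The usual strategy is the first one: stay in bytes until the header block ends, treat Content-Length as a byte count, and only decode the body once all of its bytes have arrived. A minimal sketch of that split in C# rather than the asker's Scala (the UTF-8 body assumption and the omission of chunked transfer are simplifications):

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Text.RegularExpressions;

        class HttpAccumulator
        {
            readonly List<byte> buffer = new List<byte>();

            public void Append(byte[] chunk, int count)
            {
                for (int i = 0; i < count; i++) buffer.Add(chunk[i]);
            }

            // Returns the decoded body once headers + Content-Length bytes have arrived.
            public string TryGetBody()
            {
                byte[] bytes = buffer.ToArray();
                int headerEnd = IndexOf(bytes, Encoding.ASCII.GetBytes("\r\n\r\n"));
                if (headerEnd < 0) return null; // header block not complete yet

                // The header block is safe to decode as ISO-8859-1.
                string headers = Encoding.GetEncoding("ISO-8859-1").GetString(bytes, 0, headerEnd);
                Match m = Regex.Match(headers, @"Content-Length:\s*(\d+)", RegexOptions.IgnoreCase);
                if (!m.Success) return null; // chunked transfer etc. omitted in this sketch

                // Content-Length counts bytes, so compare byte counts,
                // never the length of a decoded string.
                int bodyStart = headerEnd + 4;
                int contentLength = int.Parse(m.Groups[1].Value);
                if (bytes.Length - bodyStart < contentLength) return null; // body incomplete

                // Only now decode the body, ideally with the charset from
                // Content-Type; UTF-8 is an assumption here.
                return Encoding.UTF8.GetString(bytes, bodyStart, contentLength);
            }

            static int IndexOf(byte[] haystack, byte[] needle)
            {
                for (int i = 0; i <= haystack.Length - needle.Length; i++)
                {
                    int j = 0;
                    while (j < needle.Length && haystack[i + j] == needle[j]) j++;
                    if (j == needle.Length) return i;
                }
                return -1;
            }
        }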


  • Stack overflow problem when transforming with a custom XSLT

    - by Flynn1179
    I've got a system that allows the user the option of providing their own XSLT to apply to some data that's been retrieved, as a means of specifying how that data should be presented. However, if the user includes code in the XSLT equivalent to:

        <xsl:template match="/">
          <xsl:element name="data">
            <xsl:apply-templates select="." />
          </xsl:element>
        </xsl:template>

    this causes .NET to recurse infinitely trying to process it, and produces a stack overflow error. I need to be able to trap this before it crashes the app, as the data that's been retrieved is occasionally quite time-consuming to obtain, and the data is lost when this happens. Is there any way of doing this? I know it's theoretically possible to identify any occurrences of xsl:apply-templates with "." in the select attribute, but this isn't the only way an infinite recursion could happen. I need a way of generically trapping it.
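    Since .NET 2.0, a StackOverflowException terminates the process and cannot be caught, so trapping it in-process is off the table. A common mitigation is to isolate the untrusted transform in a disposable worker process so a runaway stylesheet only kills the worker, not the app holding the expensive data. A hedged sketch (XsltWorker.exe is a hypothetical helper that loads the stylesheet, runs XslCompiledTransform, and writes the result to outputPath - not part of any library):

        using System;
        using System.Diagnostics;

        static class SafeTransformRunner
        {
            public static bool TryTransform(string xmlPath, string xsltPath,
                                            string outputPath, TimeSpan timeout)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "XsltWorker.exe", // hypothetical helper executable
                    Arguments = $"\"{xmlPath}\" \"{xsltPath}\" \"{outputPath}\"",
                    UseShellExecute = false
                };

                using (Process worker = Process.Start(psi))
                {
                    // A timeout also guards against infinite loops that never
                    // overflow the stack at all.
                    if (!worker.WaitForExit((int)timeout.TotalMilliseconds))
                    {
                        worker.Kill();
                        return false;
                    }
                    return worker.ExitCode == 0; // a crashed worker exits nonzero
                }
            }
        }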


  • Prolog - infinite rule

    - by Tom
    I have the following rules:

        % Signature: natural_number(N)/1
        % Purpose: N is a natural number.
        natural_number(0).
        natural_number(s(X)) :- natural_number(X).

        ackermann(0, N, s(N)).                 % rule 1
        ackermann(s(M), 0, Result) :-          % rule 2
            ackermann(M, s(0), Result).
        ackermann(s(M), s(N), Result) :-       % rule 3
            ackermann(M, Result1, Result),
            ackermann(s(M), N, Result1).

    The query is: ackermann(M, N, s(s(0))). Now, as I understood it, in the third computation we get an infinite search (a failure branch). I checked it, and I got a finite search (a failure branch). I'll explain: in the first, we get a substitution of M=0, N=s(0) (rule 1 - success!). In the second, we get a substitution of M=s(0), N=0 (rule 2 - success!). But what now? I try to match M=s(s(0)), N=0, but that gives a finite search - a failure branch. Why doesn't the compiler write "fail"? Thank you.


  • Best way to dynamically get column names from Oracle tables

    - by MNC
    Hi, we are using an extractor application that exports data from the database to CSV files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we are using nulls to match the number of columns.

    Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically so that changes in the table structure do not require changes in the code?

    My concern is the condition that decides which table to query. The condition is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more condition: some columns need to be excluded, and these columns are different for each table.

    I am trying to solve the problem only for dynamically generating the list of columns. But my manager told me to make a solution on the conceptual level rather than just fixing this. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table names and excluded columns? One way is storing them in the database. Are there any other ways? If yes, what is the best? I have to give at least a couple of ideas before finalizing. Thanks.
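    One general shape for the column-list half of this, sketched in C#: keep (condition, table, excluded columns) in a mapping table, and build the projection from Oracle's data dictionary at run time. The use of the ODP.NET managed driver and the idea of an exclusion set passed in from the mapping table are assumptions, not the asker's design:

        using System.Collections.Generic;
        using Oracle.ManagedDataAccess.Client; // ODP.NET managed driver (assumed available)

        static class ColumnListBuilder
        {
            // Builds "col1, col2, ..." for a table, skipping the excluded columns,
            // so the extractor's SELECT survives projection changes.
            public static string Build(OracleConnection conn, string table,
                                       ISet<string> excluded)
            {
                var cols = new List<string>();
                using (var cmd = new OracleCommand(
                    @"SELECT column_name FROM all_tab_columns
                      WHERE table_name = :t ORDER BY column_id", conn))
                {
                    cmd.Parameters.Add(new OracleParameter("t", table.ToUpperInvariant()));
                    using (OracleDataReader rdr = cmd.ExecuteReader())
                        while (rdr.Read())
                        {
                            string name = rdr.GetString(0);
                            if (!excluded.Contains(name)) cols.Add(name);
                        }
                }
                return string.Join(", ", cols);
            }
        }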

