Search Results

Search found 3646 results on 146 pages for 'escape sequence'.

Page 59/146 | < Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Problem: writing parameter values to data driven MSTEST output

    - by Shubh
    Hi, I am trying to extract information about the parameter variants used in an MSTEST data-driven test case from the .trx file. Currently, for data-driven tests, I get the output of the same test case with different inputs as a sequence of tags, but there is no information about the values of the variants.

    Example: suppose we have a [data-driven] TestMethod1() and the data rows contain variables a and b. There are two variations: a=1, b=2, for which the test passes, and a=3, b=4, for which the test fails. If we could output in the trx file that it was a=1, b=2 which passed and a=3, b=4 which failed, the output would be meaningful:

    - Better information about test case runs from the output file alone (without any dependencies).
    - Investigating a test failure without rerunning the whole set.
    - If the data rows change in the data source (now a=1, b=2 pass and a=5, b=6 fail), it is easy to see that the errors are different, even though the pass/fail sequence is still the same (row 0 passes, row 1 fails, but row 1 is now different).

    Has any of you gone through a similar problem? What did you follow? I tried to put the parameter value information in the Description attribute of TestMethod, but it didn't work. Any other methods you think could work? Thanks, Shubhankar
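    One workaround worth sketching here (not from the post): MSTest exposes the current data row through TestContext, and anything written with TestContext.WriteLine is captured per iteration and shows up alongside each row's pass/fail result in the .trx. The data source details below are placeholders.

        using System;
        using System.Data;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class DataDrivenTests
        {
            // MSTest assigns the context, including the current DataRow, before each iteration.
            public TestContext TestContext { get; set; }

            [DataSource("System.Data.SqlClient",
                "Data Source=.;Initial Catalog=TestData;Integrated Security=True",   // placeholder
                "Rows", DataAccessMethod.Sequential)]
            [TestMethod]
            public void TestMethod1()
            {
                int a = Convert.ToInt32(TestContext.DataRow["a"]);
                int b = Convert.ToInt32(TestContext.DataRow["b"]);

                // Logged per data row, so the trx shows which values passed or failed.
                TestContext.WriteLine("Row values: a={0}, b={1}", a, b);

                Assert.IsTrue(a < b);   // placeholder assertion
            }
        }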

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches.

    Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                       <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                            <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency for processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                        <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                             <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence                <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                              <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
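    Since the poster mentions the .NET Parallel Extensions as an analogue, here is a minimal C# sketch of the second approach (parallel over messages, sequential over handlers); the Message type and handler table are made-up stand-ins, not from the question.

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class Message
        {
            public string Type;      // hypothetical message-type key
            public string Payload;
        }

        static class Dispatcher
        {
            // Handler table: message type -> ordered list of handlers (read-only during dispatch).
            static readonly Dictionary<string, List<Action<Message>>> Handlers =
                new Dictionary<string, List<Action<Message>>>
                {
                    { "ping", new List<Action<Message>> { m => Console.WriteLine("got " + m.Payload) } }
                };

            public static void Dispatch(IEnumerable<Message> messages)
            {
                // Messages run in parallel; the handlers for one message run sequentially,
                // so each message stays local to one worker. Ordering across messages is lost.
                Parallel.ForEach(messages, message =>
                {
                    List<Action<Message>> list;
                    if (Handlers.TryGetValue(message.Type, out list))
                        foreach (var handler in list)    // sequential per message
                            handler(message);
                });
            }
        }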

    Read the article

  • myXmlDataDoc.DataSet.ReadXml problem

    - by alex
    Hi: I am using

        myXmlDataDoc.DataSet.ReadXml(xml_file_name, XmlReadMode.InferSchema);

    to populate the tables within the dataset, which was created by reading in an XML schema using:

        myStreamReader = new StreamReader(xsd_file_name);
        myXmlDataDoc.DataSet.ReadXmlSchema(myStreamReader);

    The problem I am facing is when it comes to reading the XML tag: the ReadXml function puts the element in a different table, and everything in the tables is 0s. Here is the printout of the data tables:

        TableName: test_data
        100 2 1
        TableName: parameters
        1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 9 0 10 0 11 0 12 0

    In my xsd file, I represent an array using this:

        <xs:element name="test_data">
          <xs:complexType>
            <xs:complexContent>
              <xs:extension base="test_base">
                <xs:sequence>
                  <xs:element name="a" type="xs:unsignedShort"/>
                  <xs:element name="b" type="xs:unsignedShort"/>
                  <xs:element name="c" type="xs:unsignedInt"/>
                  <xs:element name="parameters" minOccurs="0" maxOccurs="12" type="xs:unsignedInt"/>
                </xs:sequence>
              </xs:extension>
            </xs:complexContent>
          </xs:complexType>

    And here is my xml file: 100 2 1 1 2 3 4 5 6 7 8 9 10 11 12

    I am expecting that after the ReadXml call, "parameters" is part of the test_data table:

        TableName: test_data
        100 2 1 1 2 3 4 5 6 7 8 9 10 11 12

    Does anyone know what's wrong? Thanks
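    A side note that may help diagnose this (not from the post): when ReadXml infers a repeating child element into its own table, it normally links it to the parent with an auto-generated DataRelation and a hidden key column. A small sketch for inspecting that, with hypothetical file names:

        using System;
        using System.Data;

        class InspectDataSet
        {
            static void Main()
            {
                var ds = new DataSet();
                ds.ReadXmlSchema("test.xsd");                      // hypothetical schema file
                ds.ReadXml("test.xml", XmlReadMode.InferSchema);   // hypothetical document

                // Show how the inferred tables are related.
                foreach (DataRelation rel in ds.Relations)
                    Console.WriteLine("{0} -> {1} via {2}",
                        rel.ParentTable.TableName, rel.ChildTable.TableName, rel.RelationName);

                // Reassemble the parameter values that belong to each test_data row.
                DataTable parent = ds.Tables["test_data"];
                foreach (DataRow row in parent.Rows)
                    foreach (DataRelation rel in parent.ChildRelations)
                        foreach (DataRow child in row.GetChildRows(rel))
                            Console.WriteLine("parameter: " + child[0]);
            }
        }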

    Read the article

  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs XML and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu).

    Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor were selected such that most numbers output were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably?

    My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest possible number of 1's, but often led to very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple.

    Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
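    For concreteness (not from the post), the GCD-of-the-sequence approach described above can be sketched as below; it still has the "single bad apple" weakness, since one off-grid value such as 33 collapses the factor to 1, so some outlier filtering would have to sit in front of it.

        using System;
        using System.Linq;

        static class UnitScale
        {
            static long Gcd(long a, long b)
            {
                while (b != 0) { long t = a % b; a = b; b = t; }
                return a;
            }

            // Conversion factor = GCD of all pixel offsets, so every value divides evenly.
            static long GcdOfAll(long[] pixelValues)
            {
                return pixelValues.Where(v => v != 0).Aggregate(Gcd);
            }

            static void Main()
            {
                long[] offsets = { 32, 64, 96, 256 };    // hypothetical Tiled pixel values
                long factor = GcdOfAll(offsets);         // 32 here
                foreach (long px in offsets)
                    Console.WriteLine("{0}px -> {1}gu", px, (double)px / factor);
            }
        }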

    Read the article

  • Validating and filling default values in XML based on XSD in Python

    - by PoltoS
    I have an XML like

        <a>
          <b/>
          <b c="2"/>
        </a>

    I have my XSD:

        <xs:element name="a">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="b" maxOccurs="unbounded">
                <xs:attribute name="c" default="1"/>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    I want to use my XSD to validate my original XML and fill in all default values:

        <a>
          <b c="1"/>
          <b c="2"/>
        </a>

    How do I get this in Python? With validation itself there is no problem (e.g. XMLSchema). The problem is the default values.

    Read the article

  • Latex - Apply an operation to every character in a string

    - by hroest
    Hi, I am using LaTeX and I have a problem concerning string manipulation. I want an operation applied to every character of a string; specifically, I want to replace every character "x" with "\discretionary{}{}{}x". I want to do this because I have a long string (DNA) which I want to be able to break at any point without hyphenation. Thus I would like to have a command called "myDNA" that will do this for me, instead of inserting \discretionary{}{}{} manually after every character. Is this possible? I have looked around the web and there wasn't much helpful information on this topic (at least not any I could understand), and I hoped that you could help.

    --edit To clarify: what I want to see in the finished document is something like this:

        the dna sequence is CTAAAGAAAACAGGACGATTAGATGAGCTTGAGAAAGCCATCACCACTCA
        AATACTAAATGTGTTACCATACCAAGCACTTGCTCTGAAATTTGGGGACTGAGTACACCAAATACGATAG
        ATCAGTGGGATACAACAGGCCTTTACAGCTTCTCTGAACAAACCAGGTCTCTTGATGGTCGTCTCCAGGT
        ATCCCATCGAAAAGGATTGCCACATGTTATATATTGCCGATTATGGCGCTGGCCTGATCTTCACAGTCAT
        CATGAACTCAAGGCAATTGAAAACTGCGAATATGCTTTTAATCTTAAAAAGGATGAAGTATGTGTAAACC
        CTTACCACTATCAGAGAGTTGAGACACCAGTTTTGCCTCCAGTATTAGTGCCCCGACACACCGAGATCCT
        AACAGAACTTCCGCCTCTGGATGACTATACTCACTCCATTCCAGAAAACACTAACTTCCCAGCAGGAATT

    just plain line breaks, without any hyphens. The DNA sequence will be one long string without any spaces or anything, but it can break at any point. This is why my idea was to insert a "\discretionary{}{}{}" after every character, so that it can break at any point without inserting any hyphens.

    Read the article

  • How to make write operation idempotent?

    - by Morgan Cheng
    I'm reading the article about the recently released Gizzard sharding framework by Twitter (http://engineering.twitter.com/2010/04/introducing-gizzard-framework-for.html). It mentions that all write operations must be idempotent to ensure high reliability. According to Wikipedia, "Idempotent operations are operations that can be applied multiple times without changing the result." But, IMHO, in Gizzard's case, idempotent write operations should be operations whose sequence doesn't matter.

    Now, my question is: how do you make a write operation idempotent? The only thing I can imagine is to have a version number attached to each write. For example, in a blog system, each blog must have a $blog_id and $content. At the application level, we always write blog content like this:

        write($blog_id, $content, $version);

    The $version is determined to be unique at the application level. So, if the application first tries to set one blog to "Hello world" and then wants it to be "Goodbye", the writes are idempotent. We have these two write operations:

        write($blog_id, "Hello world", 1);
        write($blog_id, "Goodbye", 2);

    These two operations are supposed to change two different records in the DB. So, no matter how many times and in what sequence these two operations are executed, the results are the same. This is just my understanding. Please correct me if I'm wrong.
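    To make the versioned-write idea above concrete, here is a small sketch (not from the article) where each (id, version) pair identifies one logical write, so replaying a write any number of times, in any order, leaves the store in the same final state. The in-memory dictionary stands in for the database.

        using System;
        using System.Collections.Generic;

        class VersionedBlogStore
        {
            // (blog id, version) -> content; reapplying the same write is a no-op.
            readonly Dictionary<Tuple<long, long>, string> rows =
                new Dictionary<Tuple<long, long>, string>();

            public void Write(long blogId, string content, long version)
            {
                rows[Tuple.Create(blogId, version)] = content;   // same key, same value: idempotent
            }

            // The "current" content is the one with the highest version, regardless of
            // how often or in what order the writes were applied.
            public string Read(long blogId)
            {
                string latest = null;
                long best = long.MinValue;
                foreach (var kv in rows)
                    if (kv.Key.Item1 == blogId && kv.Key.Item2 > best)
                    {
                        best = kv.Key.Item2;
                        latest = kv.Value;
                    }
                return latest;
            }
        }

        // Usage:
        //   var store = new VersionedBlogStore();
        //   store.Write(42, "Hello world", 1);
        //   store.Write(42, "Goodbye", 2);
        //   store.Write(42, "Hello world", 1);   // replay, harmless
        //   Console.WriteLine(store.Read(42));   // "Goodbye"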

    Read the article

  • Schema for element with Attributes and Child nodes

    - by Matthew
    I am trying to write an XSD schema for an element that has a custom type, extending a base type with additional attributes. I am running into trouble getting the syntax right.

        <xs:element name="graphs">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="graph" minOccurs="1" maxOccurs="unbounded" type="graphType">
                <!-- child elements -->
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

        <xs:complexType name="graphType">
          <xs:simpleContent>
            <xs:extension base="xs:string">
              <xs:attribute name="title" type="xs:string"/>
              <xs:attribute name="type" type="xs:string"/>
            </xs:extension>
          </xs:simpleContent>
        </xs:complexType>

    I thought this would be something very common, but having read many tutorials and forums, I can't seem to find an answer that works for me.

    Read the article

  • git changes modification time of files

    - by tanascius
    In the GitFaq I can read that Git sets the current time as the timestamp on every file it modifies, but only those. However, I tried this command sequence (EDIT: added complete command sequence):

        $ git init test && cd test
        Initialized empty Git repository in d:/test/.git/

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch filea fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git add .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git commit -m "first commit"
        [master (root-commit) fcaf171] first commit
         0 files changed, 0 insertions(+), 0 deletions(-)
         create mode 100644 filea
         create mode 100644 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l > filea

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch fileb -t 200912301000

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 1
        -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git status -a
        warning: LF will be replaced by CRLF in filea
        # On branch master
        warning: LF will be replaced by CRLF in filea
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   filea
        #

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git checkout .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 0
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect the timestamp to be unchanged. Are my commands causing a problem? Maybe it is possible to do something like a git checkout . --modified instead? I am using git version 1.6.5.1.1367.gcd48 under mingw32/windows xp.

    Read the article

  • Open closed principle problem

    - by Marcus
    Hi, I'm trying to apply OCP to a code snippet I have that in its current state is really smelly, but I feel I'm not getting all the way to the end. Current code:

        public abstract class SomeObject {}
        public class SpecificObject1 : SomeObject {}
        public class SpecificObject2 : SomeObject {}

        // Smelly code
        public class Model
        {
            public void Store(SomeObject someObject)
            {
                if (someObject is SpecificObject1) {}
                else if (someObject is SpecificObject2) {}
            }
        }

    That is really ugly; my new approach looks like this:

        // Not so smelly code
        public class Model
        {
            public void Store(SomeObject someObject)
            {
                throw new Exception("Not allowed!");
            }
            public void Store(SpecificObject1 someObject) {}
            public void Store(SpecificObject2 someObject) {}
        }

    When a new SomeObject type comes along I must implement how that specific object is stored, which breaks OCP because I need to alter the Model class. Moving the store logic into SomeObject also feels wrong, because then I would violate SRP (?): in this case SomeObject is almost like a DTO, and its responsibility is not to know how to store itself. If a new implementation of SomeObject comes along whose Store implementation is missing, I will get a runtime error due to the exception in the Store method of the Model class, which also feels like a code smell. This is because calling code will be in the form of IEnumerable<SomeObject> sequence; I will not know the specific types of the sequence objects. I can't seem to grasp the OCP concept. Does anyone have any concrete examples or links that are a bit more than just some Car/Fruit example?
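    One common way (not from the post) to keep Model closed for modification is to dispatch on runtime type through a registry of per-type storers, so adding a new SomeObject subtype means registering a new storer rather than editing Model. A minimal sketch reusing the poster's class names; note it still fails at runtime for an unregistered type, which is the trade-off the poster already identified.

        using System;
        using System.Collections.Generic;

        public abstract class SomeObject {}
        public class SpecificObject1 : SomeObject {}
        public class SpecificObject2 : SomeObject {}

        public class Model
        {
            // New SomeObject types register a storer here; Model itself never changes.
            readonly Dictionary<Type, Action<SomeObject>> storers =
                new Dictionary<Type, Action<SomeObject>>();

            public void Register<T>(Action<T> storer) where T : SomeObject
            {
                storers[typeof(T)] = o => storer((T)o);
            }

            public void Store(SomeObject someObject)
            {
                Action<SomeObject> storer;
                if (!storers.TryGetValue(someObject.GetType(), out storer))
                    throw new InvalidOperationException(
                        "No storer registered for " + someObject.GetType().Name);
                storer(someObject);
            }
        }

        // Usage:
        //   var model = new Model();
        //   model.Register<SpecificObject1>(o => Console.WriteLine("store 1"));
        //   model.Register<SpecificObject2>(o => Console.WriteLine("store 2"));
        //   model.Store(new SpecificObject1());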

    Read the article

  • Reading a C/C++ data structure in C# from a byte array

    - by Chris Miller
    What would be the best way to fill a C# struct from a byte[] array where the data came from a C/C++ struct? The C struct would look something like this (my C is very rusty):

        typedef OldStuff {
            CHAR Name[8];
            UInt32 User;
            CHAR Location[8];
            UInt32 TimeStamp;
            UInt32 Sequence;
            CHAR Tracking[16];
            CHAR Filler[12];
        }

    And it would fill something like this:

        [StructLayout(LayoutKind.Explicit, Size = 56, Pack = 1)]
        public struct NewStuff
        {
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            [FieldOffset(0)]
            public string Name;

            [MarshalAs(UnmanagedType.U4)]
            [FieldOffset(8)]
            public uint User;

            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            [FieldOffset(12)]
            public string Location;

            [MarshalAs(UnmanagedType.U4)]
            [FieldOffset(20)]
            public uint TimeStamp;

            [MarshalAs(UnmanagedType.U4)]
            [FieldOffset(24)]
            public uint Sequence;

            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)]
            [FieldOffset(28)]
            public string Tracking;
        }

    What is the best way to copy OldStuff to NewStuff if OldStuff was passed as a byte[] array? I'm currently doing something like the following, but it feels kind of clunky:

        GCHandle handle;
        NewStuff MyStuff;
        int BufferSize = Marshal.SizeOf(typeof(NewStuff));
        byte[] buff = new byte[BufferSize];
        Array.Copy(SomeByteArray, 0, buff, 0, BufferSize);
        handle = GCHandle.Alloc(buff, GCHandleType.Pinned);
        MyStuff = (NewStuff)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(NewStuff));
        handle.Free();

    Is there a better way to accomplish this?
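    A common tidy-up (not from the question) is to wrap the pin/convert/free dance in a generic helper: the same mechanism, just reusable and exception-safe.

        using System;
        using System.Runtime.InteropServices;

        static class StructReader
        {
            // Reinterpret the first Marshal.SizeOf(T) bytes of the array as a T.
            public static T ToStructure<T>(byte[] bytes) where T : struct
            {
                int size = Marshal.SizeOf(typeof(T));
                if (bytes == null || bytes.Length < size)
                    throw new ArgumentException("byte array too short for " + typeof(T).Name);

                GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
                try
                {
                    return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
                }
                finally
                {
                    handle.Free();   // always unpin, even if PtrToStructure throws
                }
            }
        }

        // Usage:
        //   NewStuff stuff = StructReader.ToStructure<NewStuff>(someByteArray);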

    Read the article

  • Sum of even fibonacci numbers

    - by user300484
    This is a Project Euler problem. If you don't want to see candidate solutions, don't look here.

    Hello all! I'm developing an application that will find the sum of all even terms of the Fibonacci sequence whose terms do not exceed 4,000,000. There is something wrong in my code but I cannot find the problem, since it makes sense to me. Can you please help me?

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    long[] arr = new long[1000000];
                    long i = 2;
                    arr[i - 2] = 1;
                    arr[i - 1] = 2;
                    long n = arr[i];
                    long s = 0;
                    for (i = 2; n <= 4000000; i++)
                    {
                        arr[i] = arr[(i - 1)] + arr[(i - 2)];
                    }
                    for (long f = 0; f <= arr.Length - 1; f++)
                    {
                        if (arr[f] % 2 == 0)
                            s += arr[f];
                    }
                    Console.Write(s);
                    Console.Read();
                }
            }
        }
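    For comparison (not part of the question), a minimal two-variable version of the same computation, summing the even Fibonacci terms that do not exceed four million, needs no array at all:

        using System;

        class EvenFibSum
        {
            static void Main()
            {
                long a = 1, b = 2;   // the first two terms from the problem statement
                long sum = 0;
                while (a <= 4000000)
                {
                    if (a % 2 == 0)
                        sum += a;
                    long next = a + b;
                    a = b;
                    b = next;
                }
                Console.WriteLine(sum);   // 4613732
            }
        }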

    Read the article

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings and, inside that loop, over a smaller set of strings. Due to the size, this method takes a while, so to speed it up I'm trying to delete a string from the smaller set once it no longer needs to be used. Below is my current code:

        Ms::Fasta.foreach(@database) do |entry|
          all.each do |set|
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete(set)
              end
            end
          end
        end

    The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know how to fix this is to do this:

        Ms::Fasta.foreach(@database) do |entry|
          i = 0
          while i < all.length
            set = all[i]
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete_at(i)
                i -= 1
              end
            end
            i += 1
          end
        end

    This feels kind of sloppy to me. Is there a better way to do this? Thanks.

    Read the article

  • Is there a general-purpose printf-ish routine defined in any C standard

    - by supercat
    In many C libraries, there is a printf-style routine which is something like the following:

        int __vgprintf(void *info,
                       void (*print_function)(void *, char),
                       const char *format,
                       va_list params);

    which will format the supplied string and call print_function with the passed-in info value and each character in sequence. A function like fprintf will pass __vgprintf the passed-in file parameter and a pointer to a function which casts its void* to a FILE* and outputs the passed-in character to that file. A function like snprintf will create a struct holding a char* and a length, and pass the address of that struct to a function which outputs each character in sequence, space permitting.

    Is there any standard for such a function, which could be used if, e.g., one wanted a function to output an arbitrary format to a TCP port? A common approach is to allocate a buffer one hopes is big enough, use snprintf to put the data there, and then output the data from the buffer. It would seem cleaner, though, if there were a standard way to specify that the print formatter should call a user-supplied routine with each character.

    Read the article

  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves: "QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications."

    For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPUs on the system. There are plenty of other functions in QtConcurrent, like filter(), filteredReduced(), etc. - the standard CompSci map/reduce functions and the like.

    I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework as Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do).

    I'm looking for a generic C++ framework that provides the same/similar high-level primitives that QtConcurrent does. AFAIK boost has nothing like this (I may be wrong though); boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with its Parallel Extensions, so I know this isn't a Qt-only idea. What do you suggest I use?

    Read the article

  • inconsistent setTexture behavior in cocos2d on iPhone after using CCAnimate/CCAnimation

    - by chillid
    Hi, I have a character that goes through multiple states. The state changes are reflected by means of a sprite image (texture) change. The state change is triggered by a user tapping on the sprite. This works consistently and quite well. I then added an animation during State0. Now, when the user taps, setTexture gets executed to change the texture to reflect State1, however some of the time (unpredictably) it does not change the texture. The code flows as below:

        // 1. Create the animation sequence
        CGRect frame1Rect = CGRectMake(0, 32, 32, 32);
        CGRect frame2Rect = CGRectMake(32, 32, 32, 32);
        CCTexture2D* texWithAnimation = [[CCTextureCache sharedTextureCache]
            addImage:@"Frames0_1_thinkNthickoutline32x32.png"];
        id anim = [[[CCAnimation alloc] initWithName:@"Sports" delay:1/25.0] autorelease];
        [anim addFrame:[CCSpriteFrame frameWithTexture:texWithAnimation rect:frame1Rect offset:ccp(0,0)]];
        [anim addFrame:[CCSpriteFrame frameWithTexture:texWithAnimation rect:frame2Rect offset:ccp(0,0)]];

        // Make the animation sequence repeat forever
        id myAction = [CCAnimate actionWithAnimation:anim restoreOriginalFrame:NO];

        // 2. Run the animation:
        sports = [[CCRepeatForever alloc] init];
        [sports initWithAction:myAction];
        [self.sprite runAction:sports];

        // 3. Stop action on state change and change texture:
        NSLog(@"Stopping action");
        [sprite stopAction:sports];
        NSLog(@"Changing texture for kCJSports");
        [self setTexture:[[CCTextureCache sharedTextureCache] addImage:@"SportsOpen.png"]];
        [self setTextureRect:CGRectMake(0, 0, 32, 64)];
        NSLog(@"Changed texture for kCJSports");

    Note that all the NSLog lines get logged, and the texture RECT changes, but the image/texture changes only some of the time; it fails for around 10-30% of taps. Locking/threading/timing issue somewhere? My app (game) is single threaded and I only use addImage and not the Async version. Any help much appreciated.

    Read the article

  • Locking database edit by key name

    - by Will Glass
    I need to prevent simultaneous edits to a database field. Users are executing a push operation on a structured data field, so I want to sequence the operations, not simply ignore one edit and take the second. Essentially I want to do

        synchronized(key name)
        {
            push value onto the database field
        }

    and set up the synchronized item so that only one operation on "key name" will occur at a time. (Note: I'm simplifying; it's not always a simple push.)

    A crude way to do this would be a global synchronization, but that bottlenecks the entire app. All I need to do is sequence two simultaneous writes with the same key, which is a rare but annoying occurrence.

    This is a web-based Java app, written with Spring (and using JPA/MySQL). The operation is triggered by a user web service call (the root cause is when a user sends two simultaneous HTTP requests with the same key). I've glanced through the Doug Lea/Josh Bloch/et al Concurrency in Action, but don't see an obvious solution. Still, this seems simple enough; I feel there must be an elegant way to do this.

    Read the article

  • algorithm to find longest non-overlapping sequences

    - by msalvadores
    I am trying to find the best way to solve the following problem, where "best" means least complex.

    As input, take a list of tuples (start, length) such as:

        [(0,5), (0,1), (1,9), (5,5), (5,7), (10,1)]

    Each element represents a sequence by its start and length; for example (5,7) is equivalent to the sequence (5,6,7,8,9,10,11), a list of 7 elements starting at 5. One can assume that the tuples are sorted by the start element.

    The output should be a non-overlapping combination of tuples that represents the longest continuous sequence(s). That is, a solution is a subset of ranges with no overlaps and no gaps that is as long as possible; there could be more than one. For the given input the solution is:

        [(0,5), (5,7)]   equivalent to (0,1,2,3,4,5,6,7,8,9,10,11)

    Is backtracking the best approach to solve this problem? I'm interested in any different approaches that people could suggest. Also, if anyone knows a formal reference for this problem, or for a similar one, I'd like to get references. BTW - this is not homework.

    Edit: just to avoid some mistakes, here is another example of the expected behaviour. For an input like

        [(0,1), (1,7), (3,20), (8,5)]

    the right answer is [(3,20)], equivalent to (3,4,...,22) with length 20. Some of the answers received would give [(0,1),(1,7),(8,5)], equivalent to (0,1,2,...,11,12), as the right answer. But this last answer is not correct, because it is shorter than [(3,20)].
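    One non-backtracking possibility (not from the post) is a simple dynamic program over the tuples sorted by start: the best chain ending with a tuple is its own length plus the best chain that ends exactly where it starts. A minimal sketch under that reading of "no overlaps and no gaps"; recovering the actual tuple subset, rather than just its total length, only needs a parallel dictionary of back-pointers.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class LongestChain
        {
            // Returns the maximum total length of a gap-free, overlap-free chain of ranges.
            static long Solve(List<Tuple<long, long>> ranges)   // (start, length) pairs
            {
                // bestEndingAt[p] = longest chain whose covered interval ends exactly at p
                var bestEndingAt = new Dictionary<long, long>();
                long best = 0;

                foreach (var r in ranges.OrderBy(t => t.Item1))
                {
                    long start = r.Item1, len = r.Item2, end = r.Item1 + r.Item2;

                    long prefix;
                    bestEndingAt.TryGetValue(start, out prefix);   // 0 if no chain ends here

                    long candidate = prefix + len;
                    long current;
                    if (!bestEndingAt.TryGetValue(end, out current) || candidate > current)
                        bestEndingAt[end] = candidate;

                    best = Math.Max(best, candidate);
                }
                return best;
            }

            static void Main()
            {
                var input = new List<Tuple<long, long>>
                {
                    Tuple.Create(0L, 5L), Tuple.Create(0L, 1L), Tuple.Create(1L, 9L),
                    Tuple.Create(5L, 5L), Tuple.Create(5L, 7L), Tuple.Create(10L, 1L)
                };
                Console.WriteLine(Solve(input));   // 12, i.e. the chain (0,5)+(5,7)
            }
        }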

    Read the article

  • crash when using stl vector at instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class that implements the TBB task interface - not currently multithreading though). My problem is that two ways of accessing a vector cause quite different behaviour: one works and the other causes the entire program to bomb out quite spectacularly (this is a plugin, and normally a crash would be caught by the host - but this one takes out the host program as well! As I said, quite spectacular).

        void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const // NOTE: const method
        {
            BOOST_FOREACH(int accessMode, m_props->GetAccessModes())
            {
                // get a const reference to the appropriate vector from a member variable
                // map<int, vector<double>> m_rowTotalsByAccessMode;
                const vector<double>& rowTotalsForAccessMode =
                    m_rowTotalsByAccessMode.find(accessMode)->second;

                if (origin != 129) continue; // Additional debug constraint: I know that the vector
                                             // only has one non-zero element, at index 129
                m_job->Write("size: " + ToString(rowTotalsForAccessMode.size()));
                try
                {
                    // check for early return... i.e. nothing to do for this origin
                    if (!rowTotalsForAccessMode[origin]) continue;     // <- this works
                    if (!rowTotalsForAccessMode.at(origin)) continue;  // <- this crashes
                }
                catch (...)
                {
                    m_job->Write("Caught an exception"); // but it's not an exception
                }

                // do some other stuff
            }
        }

    I hate not putting in well-defined questions, but at the moment my best phrasing is: "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8, and my implementation of vector has

        const_reference operator[](size_type _Pos) const
        {   // subscript nonmutable sequence
        #if _HAS_ITERATOR_DEBUGGING
            if (size() <= _Pos)
            {
                _DEBUG_ERROR("vector subscript out of range");
                _SCL_SECURE_OUT_OF_RANGE;
            }
        #endif /* _HAS_ITERATOR_DEBUGGING */
            _SCL_SECURE_VALIDATE_RANGE(_Pos < size());
            return (*(_Myfirst + _Pos));
        }

    (Iterator debugging is off - I'm pretty sure) and

        const_reference at(size_type _Pos) const
        {   // subscript nonmutable sequence with checking
            if (size() <= _Pos)
                _Xran();
            return (*(begin() + _Pos));
        }

    So the only difference I can see is that at calls begin instead of simply using _Myfirst - but how could that possibly be causing such a huge difference in behaviour?

    Read the article

  • How to ignore the validation of unknown tags?

    - by infant programmer
    One more challenge to XSD's capabilities: I am being sent XML files by my clients which may contain 0 or more undefined or (call them) unexpected tags, possibly anywhere in the hierarchy. They are redundant tags for me, so I have to ignore their presence, but alongside them there is a set of tags which does need to be validated. This is a sample XML:

        <root>
            <undefined_1>one</undefined_1>
            <undefined_2>two</undefined_2>
            <node>to_be_validated</node>
            <undefined_3>two</undefined_3>
            <undefined_4>two</undefined_4>
        </root>

    And the XSD I tried with:

        <xs:element name="root" type="root"></xs:element>
        <xs:complexType name="root">
            <xs:sequence>
                <xs:any maxOccurs="2" minOccurs="0"/>
                <xs:element name="node" type="xs:string"/>
                <xs:any maxOccurs="2" minOccurs="0"/>
            </xs:sequence>
        </xs:complexType>

    XSD doesn't allow this, for certain reasons. The example above is just a sample; the real XML comes with a complex hierarchy of tags. Kindly let me know if you can find a way around it. By the way, the alternative solution is to insert an XSL transformation before the validation process. I am avoiding it because I would need to change the .NET code which triggers the validation process, which is supported at the least by my company.

    Read the article

  • Creating a 'flexible' XML schema

    - by Fiona Holder
    I need to create a schema for an XML file that is pretty flexible. It has to meet the following requirements:

    1. Validate some elements that we require to be present and know the exact structure of
    2. Validate some elements that are optional and we know the exact structure of
    3. Allow any other elements
    4. Allow them in any order

    Quick example XML:

        <person>
          <age></age>
          <lastname></lastname>
          <height></height>
        </person>

    My attempt at an XSD:

        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="person">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="firstname" minOccurs="0" type="xs:string"/>
                <xs:element name="lastname" type="xs:string"/>
                <xs:any processContents="lax" minOccurs="0" maxOccurs="unbounded" />
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    Now my XSD satisfies requirements 1 and 3. It would not be a valid schema, however, if both firstname and lastname were optional, so it doesn't satisfy requirement 2, and the order is fixed, which fails requirement 4.

    Now all I need is something to validate my XML. I'm open to suggestions on any way of doing this, either programmatically in .NET 3.5, another type of schema, etc. Can anyone think of a solution to satisfy all 4 requirements?
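    On the ".NET 3.5 programmatically" side (not from the post), validation is usually driven through an XmlReader with a ValidationEventHandler, which lets you decide per problem whether it is fatal, for example logging and ignoring the warnings that lax wildcard content produces. The file names below are placeholders.

        using System;
        using System.Xml;
        using System.Xml.Schema;

        class Validate
        {
            static void Main()
            {
                var settings = new XmlReaderSettings();
                settings.ValidationType = ValidationType.Schema;
                settings.Schemas.Add(null, "person.xsd");          // hypothetical schema file
                settings.ValidationFlags |= XmlSchemaValidationFlags.ReportValidationWarnings;

                // Errors are fatal; warnings (e.g. from lax wildcards) are just logged.
                settings.ValidationEventHandler += (sender, e) =>
                {
                    if (e.Severity == XmlSeverityType.Error)
                        throw e.Exception;
                    Console.WriteLine("warning: " + e.Message);
                };

                using (var reader = XmlReader.Create("person.xml", settings))   // hypothetical document
                {
                    while (reader.Read()) { }   // reading to the end performs the validation
                }
                Console.WriteLine("valid");
            }
        }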

    Read the article

  • How do I correctly reference georss: point in my xsd?

    - by Chris Hinch
    I am putting together an XSD schema to describe an existing GeoRSS feed, but I am stumbling trying to use the external georss.xsd to validate an element of type georss:point. I've reduced the problem to the smallest components thusly:

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <this>
          <apoint>45.256 -71.92</apoint>
        </this>

    XSD:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:georss="http://www.georss.org/georss">
          <xs:import namespace="http://www.georss.org/georss"
                     schemaLocation="http://georss.org/xml/1.1/georss.xsd"/>
          <xs:element name="this">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="apoint" type="georss:point"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    If I make apoint type "xs:string" instead of "georss:point", the XML validates happily against the XSD, but as soon as I reference an imported type (georss:point), my XML validator (Notepad++ | XML Tools) "cannot parse the schema". What am I doing wrong?

    Read the article

  • How to clean up my code

    - by simion
    Being new to this, I really am trying to learn how to keep code as simple as possible while doing the job it's supposed to. The question I have done is from Project Euler; it says:

    "Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... Find the sum of all the even-valued terms in the sequence which do not exceed four million."

    Here is my code below. I was wondering what the best way of simplifying this would be; for a start, removing all of the .get(list.size() - 1) ... stuff would be good, if possible, but I don't really know how to. Thanks

        public long fibb() {
            ArrayList<Integer> list = new ArrayList<Integer>();
            list.add(1);
            list.add(2);
            while((list.get(list.size() - 1) + (list.get(list.size() - 2)) < 4000000)){
                list.add((list.get(list.size()-1)) + (list.get(list.size() - 2)));
            }
            long value = 0;
            for(int i = 0; i < list.size(); i++){
                if(list.get(i) % 2 == 0){
                    value += list.get(i);
                }
            }
            return value;
        }

    Read the article

  • How to make restrictions on XML Schema Complex type?

    - by chobo2
    Hi, I am reading the tutorials on w3schools (http://www.w3schools.com/schema/schema_complex.asp), but they don't seem to mention how you can add restrictions to complex types. For instance, I have this schema:

        <xs:element name="employee">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="firstname" type="xs:string"/>
              <xs:element name="lastname" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    Now I want to make sure the firstname is no more than 10 characters long. How do I do this? I tried to put in a simple type for the firstname, but it says I can't do that since I am using a complex type. So how do I put restrictions like that in the file, so that the people I give the schema to don't try to make the firstname 100 characters?

    Read the article

  • Ordering the results of a Hibernate Criteria query by using information of the child entities of the

    - by pkainulainen
    I have got two entities, Person and Book. Only one instance of a specific book is stored in the system (when a book is added, the application checks whether that book already exists before adding a new row to the database). The relevant source code of the entities can be found below:

        @Entity
        @Table(name="persons")
        @SequenceGenerator(name="id_sequence", sequenceName="hibernate_sequence")
        public class Person extends BaseModel {

            @Id
            @Column(name = "id")
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "id_sequence")
            private Long id = null;

            @ManyToMany(targetEntity=Book.class)
            @JoinTable(name="persons_books",
                       joinColumns = @JoinColumn(name="person_id"),
                       inverseJoinColumns = @JoinColumn(name="book_id"))
            private List<Book> ownedBooks = new ArrayList<Book>();
        }

        @Entity
        @Table(name="books")
        @SequenceGenerator(name="id_sequence", sequenceName="hibernate_sequence")
        public class Book extends BaseModel {

            @Id
            @Column(name = "id")
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "id_sequence")
            private Long id = null;

            @Column(name="name")
            private String name = null;
        }

    My problem is that I want to find persons who own some of the books owned by a specific person. The returned list of persons should be ordered using the following logic: the person owning the most of those books should be first in the list, the second person in the list does not own as many of the books as the first person but owns more than the third, and so on. The code of the method performing this query is added below:

        @Override
        public List<Person> searchPersonsWithSimilarBooks(Long[] bookIds) {
            Criteria similarPersonCriteria = this.getSession().createCriteria(Person.class);
            similarPersonCriteria.add(Restrictions.in("ownedBooks.id", bookIds));
            //How to set the ordering?
            similarPersonCriteria.addOrder(null);
            return similarPersonCriteria.list();
        }

    My question is: can this be done using Hibernate? And if so, how? I know I could implement a Comparator, but I would prefer using Hibernate to solve this problem.

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >