Search Results

Search found 9273 results on 371 pages for 'complex strings'.

  • Writing language converter in ANTLR

    - by Stefan
    I'm writing a converter between some dialects of the same programming language. I've found a grammar on the net - it's complex and handles all the cases. Now I'm trying to write the appropriate actions. Most of the input is just going to be rewritten to the output. What I need to do is parse function calls, do my magic (rename the function, reorder arguments, etc.) and write them out. I'm using AST output. When I come across a function call, I build a custom object structure (from classes defined in my target language), call the appropriate function, and end up with a string that represents the transformed function call that I want to get. The problem is: what am I supposed to do with that string? I'd like to replace the .text attribute of the enclosing rule, but setText() is only available on lexer rules and the rule's .text attribute is read-only. How can I solve this?

        program
            : statement_list { output = $statement_list.text; }
            ;

        //...

        statement
            : expression_statement
            // ...
            ;

        expression_statement
            : function_call
            // ...
            ;

        function_call
            : ID '('
              {
                  /* build the object, assign name */
                  Function function = new Function();
                  //...
              }
              ( arg1 = expression { /* add first parameter */ }
                ( ',' arg2 = expression { /* add the rest of the parameters */ } )*
              )?
              ')'
              {
                  /* convert the function call */
                  string converted = Tools.Convert(function);
                  // $setText(converted);            // doesn't work
                  // $functionCall.text = converted; // doesn't work
              }
            ;

  • django powering multiple shops from one code base on a single domain

    - by imanc
    Hey, I am new to django and python and am trying to figure out how to modify an existing app to run multiple shops through a single domain. Django's sites middleware seems inappropriate in this particular case because it manages different domains, not sites run through the same domain, e.g.:

        domain.com/uk
        domain.com/us
        domain.com/es

    Each site will need translated content - and minor template changes. The solution needs to be flexible enough to allow for easy modification of templates. The forms will also need to vary a bit, e.g. minor variances in fields and validation for each country-specific shop. I am thinking along the lines of the following as a solution and would love some feedback from experienced django-ers. In short: the same codebase, but separate country-specific urls files, separate templates and a separate database:

    - Create a middleware class that does IP localisation, determines the country based on the URL and creates a database connection, e.g. /au/ will point to the au-specific database and so on.
    - In the root urls.py, have routes that point to a separate country-specific routing file, e.g.

        (r'^au/', include('urls_au')),
        (r'^es/', include('urls_es')),

    - Use a single template directory, but in that directory have a localised directory structure, e.g. /base.html and /uk/base.html, and write a custom template loader that looks for local templates first (or have a separate directory for each shop and set the template directory path in middleware).
    - Use the django internationalisation to manage translation strings throughout.
    - Handle slight variances in forms and models (e.g. ZA has an ID field, France has 'door code' and 'floor' etc.). I am unsure how to handle these variations, but I suspect the tables and models will contain all fields but allow nulls. The forms will be modified slightly for each shop.

    Anyway, I am keen to get feedback on the best way to go about achieving this multi-site solution. It seems like it would work, but feels a bit "hackish" and I wonder if there's a more elegant way of getting this solution to work. Thanks, imanc
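
    For what it's worth, the URL-based country detection from the first bullet can be sketched as an old-style Django middleware class. This is only a minimal sketch, not a tested implementation: the class name, the shop list and the fallback value are all made up for illustration, and the per-country database switching is deliberately left out.

        # Hypothetical middleware: derive the shop from the URL prefix and
        # stash it on the request for later use by views/template loaders.
        SUPPORTED_SHOPS = ('uk', 'us', 'es', 'au')  # assumption, not from the question

        class ShopMiddleware(object):
            def process_request(self, request):
                prefix = request.path.lstrip('/').split('/', 1)[0]
                # Fall back to a default shop when the prefix is unknown.
                request.shop = prefix if prefix in SUPPORTED_SHOPS else 'uk'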

  • Saving "heavy" figure to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100000+) and want to save it into a PDF file. With the zbuffer or painters renderer I get a very large file that opens slowly (over 4 MB) - all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels. That file size is about 150 KB. Try this simplified code, for example:

        x = linspace(1,10,100000);
        y = sin(x) + randn(size(x));
        plot(x,y,'.')
        set(gcf,'Renderer','zbuffer')
        print -dpdf -r300 testpdf_zb
        set(gcf,'Renderer','painters')
        print -dpdf -r300 testpdf_pa
        set(gcf,'Renderer','opengl')
        print -dpdf -r300 testpdf_op

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X - it looks like OpenGL is not supported, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for,
                # print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries into hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. PS: the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks to everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
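
    The working solution swaps the list for a dict, which turns each membership test from a linear scan into a hash lookup - with thousands of stored IDs checked once per line of a 1 GB file, that is the difference that matters. A set expresses the same idea more directly; this is a sketch under the same column-layout assumptions as the question (Python 2, like the original):

        import csv

        # Build the ID set once: set membership is O(1), unlike list membership.
        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)

        # Stream the big file through csv.reader instead of splitting by hand.
        for row in csv.reader(open(options.hashes, "rb")):
            if row[0] in stored_ids:
                print "%s,%s" % (row[2], row[4])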

  • Bioperl, equivalent of IO::ScalarArray for array of Seq objects?

    - by Ryan Thompson
    In Perl, we have IO::ScalarArray for treating the elements of an array like the lines of a file. In BioPerl, we have Bio::SeqIO, which can produce a filehandle that reads and writes Bio::Seq objects instead of strings representing lines of text. I would like to do a combination of the two: I would like to obtain a handle that reads successive Bio::Seq objects from an array of such objects. Is there any way to do this? Would it be trivial for me to implement a module that does this? My reason for wanting this is that I would like to be able to write a subroutine that accepts either a Bio::SeqIO handle or an array of Bio::Seq objects, and I'd like to avoid writing separate loops based on what kind of input I get. Perhaps the following would be better than writing my own IO module?

        sub process_sequences {
            my $input = $_[0];
            # read either from array of Bio::Seq or from Bio::SeqIO
            my $nextseq;
            if (ref $input eq 'ARRAY') {
                my $pos = 0;
                $nextseq = sub { return $input->[$pos++] if $pos < @$input };
            }
            else {
                $nextseq = sub { $input->getline() };
            }
            while (my $seq = $nextseq->()) {
                do_cool_stuff_with($seq);
            }
        }

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - full text search: searches all address book contents and messages, anything else being a plus
    - better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.)
    - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone)
    - no multitasking capabilities worth mentioning, and in general no e.g. weekly software updates that would make the phone much more usable (even if they had to be done over USB rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is - and it's not just one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled?

  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a HEX-coded value, rather than just running mysql_real_escape_string on the binary string and using that.

        // What you supposedly should do
        $hex = bin2hex($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";

        // Rather than
        $bin = mysql_real_escape_string($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";

    It is supposedly for performance reasons - something to do with how MySQL handles large strings vs. how it handles HEX-coded values. However, I am having a hard time confirming this. All my tests indicate the exact opposite: the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1 and Win7 x64, using a fairly simple insert loop.) For instance, I graphed the private memory usage of the mysqld process while the test code was running. Does anybody have any explanations or resources that would clarify this? Thanks.

  • m2crypto aes-256-cbc not working against openssl-encoded files

    - by Gary
        $ echo 'this is text' > text.1
        $ openssl enc -aes-256-cbc -a -k "thisisapassword" -in text.1 -out text.enc
        $ openssl enc -d -aes-256-cbc -a -k "thisisapassword" -in text.enc -out text.2
        $ cat text.2
        this is text

    I can do this with openssl. Now, how do I do the same in m2crypto? The documentation is lacking on this, and I looked at the svn test cases - still nothing there. I found one sample, http://passingcuriosity.com/2009/aes-encryption-in-python-with-m2crypto/ (changed to aes_256_cbc), and it will encrypt/decrypt its own strings, but it cannot decrypt anything made with openssl, and anything it encrypts isn't decryptable with openssl. I need to be able to encrypt/decrypt with aes-256-cbc, as we have many files already encrypted with this, and we have many other systems in place that also handle the aes-256-cbc output just fine. We use passphrases only, with no IV, so setting the IV to \0 * 16 makes sense, but I'm not sure if this is also part of the problem. Does anyone have any working samples of doing AES-256 that are compatible between m2crypto and openssl? I will also be trying some additional libraries to see if they work any better.
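
    A note on why \0 * 16 alone won't line up with the commands above: with -k, openssl derives both the key and the IV from the passphrase plus a random 8-byte salt (stored after a "Salted__" header in the output), using its EVP_BytesToKey scheme with MD5. Below is a decryption sketch that reproduces that derivation and hands the result to M2Crypto. It assumes openssl's default behaviour of this era and is untested here, so treat it as a starting point rather than a definitive recipe; the function name is made up.

        import base64
        from hashlib import md5
        from M2Crypto import EVP

        def openssl_decrypt(b64_text, password):
            """Decrypt output of: openssl enc -aes-256-cbc -a -k <password>"""
            data = base64.b64decode(b64_text)
            assert data[:8] == "Salted__"   # openssl's salted-format header
            salt, ciphertext = data[8:16], data[16:]
            # EVP_BytesToKey with MD5, one iteration: chain digests until
            # there is enough material for a 32-byte key and a 16-byte IV.
            digest, material = "", ""
            while len(material) < 48:
                digest = md5(digest + password + salt).digest()
                material += digest
            key, iv = material[:32], material[32:48]
            cipher = EVP.Cipher(alg="aes_256_cbc", key=key, iv=iv, op=0)  # op=0: decrypt
            return cipher.update(ciphertext) + cipher.final()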

  • How to make a mapped field inherited from a superclass transient in JPA?

    - by Russ Hayward
    I have a legacy schema that cannot be changed. I am using a base class for the common features, and it contains an embedded object. There is a field that is normally mapped in the embedded object that needs to be in the persistence id for only one (of many) subclasses. I have made a new id class that includes it, but then I get the error that the field is mapped twice. Here is some example code, much simplified to maintain the sanity of the reader:

        @MappedSuperclass
        class BaseClass {
            @Embedded
            private Data data;
        }

        @Entity
        class SubClass extends BaseClass {
            @EmbeddedId
            private SubClassId id;
        }

        @Embeddable
        class Data {
            private int location;
            private String name;
        }

        @Embeddable
        class SubClassId {
            private int thingy;
            private int location;
        }

    I have tried @AttributeOverride, but I can only get it to rename the field. I have also tried to set the field to updatable = false, insertable = false, but this did not seem to work when used in the @AttributeOverride annotation. See the answer below for the solution to this issue. I realise I could change the base class, but I really do not want to split up the embedded object to separate the shared field, as it would make the surrounding code more complex and require some ugly wrapping code. I could also redesign the whole system for this corner case, but I would really rather not. I am using Hibernate as my JPA provider.

  • Can I use the auto-generated Linq-to-SQL entity classes in 'disconnected' mode?

    - by Gary McGill
    Suppose I have an automatically-generated Employee class based on the Employees table in my database. Now suppose that I want to pass employee data to a ShowAges method that will print out name and age for a list of employees. I'll retrieve the data for a given set of employees via a LINQ query, which will return me a set of Employee instances. I can then pass the Employee instances to the ShowAges method, which can access the Name and Age fields to get the data it needs.

    However, because my Employees table has relationships with various other tables in my database, my Employee class also has a Department field, a Manager field, etc. that provide access to related records in those other tables. If the ShowAges method were to invoke any of those, it would cause lots more data to be fetched from the database, on demand. I want to be sure that the ShowAges method only uses the data I have already fetched for it, but I really don't want to have to go to the trouble of defining a new class which replicates the Employee class but has fewer methods. (In my real-world scenario, the class would have to be considerably more complex than the Employee class described here; it would have several 'joined' classes that do need to be populated, and others that don't.)

    Is there a way to 'switch off' or 'disconnect' the Employee instances so that an attempt to access any property or related object that's not already populated will raise an exception? If not, then since this must be a common requirement, is there an already-established pattern for doing this sort of thing?

  • Drawing a relative line in C#

    - by icemanind
    Guys, I know this is going to turn out to be a simple answer, but I can't seem to figure it out. I have a C# WinForms application that I am trying to build, and I am trying to draw a white line 60 pixels above the bottom of the form. I am using this code:

        private void MainForm_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawLine(Pens.White, 10, this.Height - 60, 505, this.Height - 60);
        }

    Simple enough; however, no line is drawn. After some debugging, I figured out that it IS drawing the line, but it is drawing it outside my form. If I change the -60 to -175, then I can see it at the bottom of my form. This would solve my problem, except that as my form's height changes, the line draws closer and closer to the bottom of the form until, eventually, it's off the form again. What am I doing wrong? Am I using the wrong graphics unit? Or is there a more complex calculation I need to do to determine 60 pixels from the bottom of my form?

  • Conditional macro expansion

    - by Dave DeLong
    Heads up: this is a weird question. I've got some really useful macros that I like to use to simplify some logging. For example, I can do:

        Log(@"My message with arguments: %@, %@, %@", @"arg1", @"arg2", @"arg3");

    and that will get expanded into a more complex method invocation that includes things like self, _cmd, __FILE__, __LINE__, etc., so that I can easily track where things are getting logged. This works great. Now I'd like to expand my macros to work not only with Objective-C methods, but with general C functions. The problem is the self and _cmd portions of the macro expansion: these two parameters don't exist in C functions. Ideally, I'd like to be able to use this same set of macros within C functions, but I'm running into problems. When I use (for example) my Log() macro, I get compiler warnings about self and _cmd being undeclared (which makes total sense). My first thought was to do something like the following (in my macro):

        if (thisFunctionIsACFunction) {
            DoLogging(nil, nil, format, ##__VA_ARGS__);
        } else {
            DoLogging(self, _cmd, format, ##__VA_ARGS__);
        }

    This still produces compiler warnings, since the entire if() statement is substituted in place of the macro, resulting in errors with the self and _cmd keywords (even though they will never be executed during function execution). My next thought was to do something like this (in my macro):

        if (thisFunctionIsACFunction) {
            #define SELF nil
            #define CMD nil
        } else {
            #define SELF self
            #define CMD _cmd
        }
        DoLogging(SELF, CMD, format, ##__VA_ARGS__);

    That doesn't work, unfortunately. I get "error: '#' is not followed by a macro parameter" on my first #define. My other thought was to create a second set of macros, specifically for use in C functions. This reeks of a bad code smell, and I really don't want to do this. Is there some way I can use the same set of macros from within both Objective-C methods and C functions, and only reference self and _cmd if the macro is in an Objective-C method?

  • Performance issue on Android's MapView Navigation-App

    - by poeschlorn
    Hey guys, I've got a question about making a navigation app faster and more stable. The basic layer of my app is a simple MapView, covered with several overlays (two markers for start and destination and one for the route). My idea is to implement a thread to display the route, so that the app won't hang up during the calculation of a more complex route (like it does right now). After implementing the thread, no updates are shown any more. Maybe you can help me with a short glance at the excerpt of my code below:

        private class MyLocationListener implements LocationListener {

            @Override
            public void onLocationChanged(Location loc) {
                posUser = new GeoPoint((int) (loc.getLatitude() * 1E6),
                                       (int) (loc.getLongitude() * 1E6));

                new Thread() {
                    public void run() {
                        mapView.invalidate();

                        // Erase old overlays
                        mapView.getOverlays().clear();

                        // Draw updated overlay elements and adjust basic map settings
                        updateText();
                        if (firstRefresh) {
                            adjustMap();
                            firstRefresh = false;
                        }
                        getAndPaintRoute();
                        drawMarkers();
                    }
                };
            }
        }

    Some features have been summarized into methods like drawMarkers() or updateText(); they don't need any more attention. ;-)

  • How to convert non-Latin encoded text into UTF-8, or make encodings coexist on the same page?

    - by Yallaa
    Good day. I have a script that scrapes the title/description of remote pages and prints those values into a corresponding charset=UTF-8 encoded page. Here is the problem: whenever a remote page is encoded with a non-Latin character encoding (Arabic, Russian, Chinese, Japanese, etc.), the imported values print as garbled text. I've tried passing those values through either the iconv or mb_convert_encoding converters, but without much success. Then I tried detecting the remote encoding first and changing my presentation page's encoding to the remote one instead of the current UTF-8, which works OK for the imported values, but the other existing UTF-8 content of that language on the page gets garbled instead.

    Example: I try to import those values from a Russian windows-1251 page into my UTF-8 encoded page, which has mixed English/Russian content. I change the imported non-UTF-8 string into UTF-8 using either iconv or mb_convert_encoding. I tried:

        $RemoteValue = iconv($RemoteEncoding, 'UTF-8', $RemoteValue);

    or

        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", $RemoteEncoding);

    or

        $RemoteValue = mb_convert_encoding($RemoteValue, "UTF-8", "auto");

    without success. If I detect that the remote page is windows-1251 encoded and I change my presentation page (which already has UTF-8 encoded mixed-language content) to match the remote page, then the Japanese UTF-8 content on the existing page gets garbled.

    Can two differently encoded strings coexist on the same page (e.g. UTF-8 and windows-1251)? Am I using the converters correctly? Any hints as to why they don't work? Is there a better way to do this? Thank you for your help.

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files. It works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order?

    Context: I'm working with and on a particle physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML, and there are no facilities for setting or varying them based on environment variables; in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data files are static, the XML isn't, so I've written a script for fixing the paths - but with the attribute rearrangement, diffs between the local and master versions are harder to read than necessary. This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this:

        tree = elementtree.ElementTree.parse(inputfile)
        i = tree.getiterator()
        for e in i:
            e.text = filter(e.text)
        tree.write(outputfile)

    Reasonable or dumb?

    Related links:

    - How can I get the order of an element attribute list using Python xml.sax?
    - Preserve order of attributes when modifying with minidom
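
    For contrast with the xml.sax link above: the stdlib's expat bindings can at least report attributes in document order via the ordered_attributes flag, which hands the start-element handler a flat [name, value, name, value, ...] list instead of a dict. This is only a sketch of that flag, not a switch for ElementTree's writer (I don't know of one there):

        import xml.parsers.expat

        def start_element(name, attrs):
            # With ordered_attributes set, attrs is a flat list in document order.
            print name, attrs

        p = xml.parsers.expat.ParserCreate()
        p.ordered_attributes = 1
        p.StartElementHandler = start_element
        p.Parse('<config b="1" a="2" />', 1)  # prints: config [u'b', u'1', u'a', u'2']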

  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the Pearson product correlation via map/reduce/finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. For a simple query like

        mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' })

    I get this to work. But my filter criteria are a little bit more complicated: I have one set of preferences which needs to have at least one common element, and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate it into either query params (as supported by mongoid) or a javascript function. All my attempts to solve this failed, since the code is either ignored or raises an error. I did a couple of tests. A regular find like

        User.where(:name.in => ['Arno', 'Bernd', 'Claudia'])

    works and returns

        #<Mongoid::Criteria:0x00000101f0ea40
         @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}},
         @options={}, @klass=User, @documents=[]>

    Trying the same with mapreduce

        User.collection.mapreduce(mapper, reducer,
          :finalize => finalizer,
          :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] })

    fails with `serialize': keys must be strings or symbols (TypeError) in bson-1.1.5. The intermediate query parameter looks like this:

        :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]}

    and at least @operator looks a bit weird to me. I'm also uncertain if the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems, on Mac OS. What am I doing wrong? Thanks in advance.

  • Only last row in method execute (Objective-C)

    - by Emil
    First I want to apologize for my lack of knowledge in programming in general. I am in the process of learning and have discovered that this site is a huge help for me in doing so. I have created a program with three UITextFields (thePlace, theVerb, theNumber) and two UITextViews, where one of the text views (theOutput) takes the text from the other text view (theTemplate) and replaces some strings with the text entered in the text fields. When I click a button, it triggers the method createStory listed below. This works fine, with one exception: in the output only the string 'number' gets replaced with the text from its text field. However, if I change the order in the method to replace 'place', 'number', 'verb', only the verb is changed, not the number. I'm sure this is some sort of easy fix, but I can't find it. Can some of you help me with my troubleshooting?

        - (IBAction)createStory:(id)sender {
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<place>"
                                                                         withString:thePlace.text];
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<verb>"
                                                                         withString:theVerb.text];
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<number>"
                                                                         withString:theNumber.text];
        }

    Many thanks // Emil

  • Search implementation dilemma: full text vs. plain SQL

    - by Ethan
    I have a MySQL/Rails app that needs search. Here's some info about the data:

    - Users search within their own data only, so searches are narrowed down by user_id to begin with.
    - Each user will have up to about five thousand records (they accumulate over time).
    - I wrote out a typical user's records to a text file; the file size is 2.9 MB.
    - Search has to cover two columns: title and body. title is a varchar(255) column; body is of column type text.
    - This will be lightly used. If I averaged a few searches per second, that would be surprising.
    - It's running on a 500 MB CentOS 5 VPS machine.
    - I don't want relevance ranking or any kind of fuzziness. Searches should be for exact strings and reliably return all records containing the string. Simple date order, newest to oldest.
    - I'm using the InnoDB table type.

    I'm looking at plain SQL search (through the searchlogic gem) or full text search using Sphinx and the Thinking Sphinx gem. Sphinx is very fast and Thinking Sphinx is cool, but it adds complexity: a daemon to maintain and cron jobs to maintain the index. Can I get away with plain SQL search for a small-scale app?

  • Is there some advantage to filling a stack with nils and interpreting the "top" as the last non-nil value?

    - by dwilbank
    While working on a rubymonk exercise, I am asked to implement a stack with a hard size limit. It should return nil if I try to push too many values, or if I try to pop an empty stack. My solution is below, followed by their solution. Mine passes every test I can give it in my IDE, while it fails rubymonk's test. But that isn't my question. The question is: why did they choose to fill the stack with nils instead of letting it shrink and grow like mine does? It just makes their code more complex. Here's my solution:

        class Stack
          def initialize(size)
            @max = size
            @store = Array.new
          end

          def pop
            empty? ? nil : @store.pop
          end

          def push(element)
            return nil if full?
            @store.push(element)
          end

          def size
            @store.size
          end

          def look
            @store.last
          end

          private

          def full?
            @store.size == @max
          end

          def empty?
            @store.size == 0
          end
        end

    and here is the accepted answer:

        class Stack
          def initialize(size)
            @size = size
            @store = Array.new(@size)
            @top = -1
          end

          def pop
            if empty?
              nil
            else
              popped = @store[@top]
              @store[@top] = nil
              @top = @top.pred
              popped
            end
          end

          def push(element)
            if full? or element.nil?
              nil
            else
              @top = @top.succ
              @store[@top] = element
              self
            end
          end

          def size
            @size
          end

          def look
            @store[@top]
          end

          private

          def full?
            @top == (@size - 1)
          end

          def empty?
            @top == -1
          end
        end

  • Why is my test XML failing with a very simple XSD schema?

    - by JSteve
    Hi all, I am a bit of a novice with XML Schema. I would be grateful if somebody could help me understand why my XML is not being validated against the schema. Here is my schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.example.org/testSchema"
                   xmlns="http://www.example.org/testSchema">
          <xs:element name="Employee">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Name">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="FirstName" />
                      <xs:element name="LastName" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    Here is my test XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <Employee xmlns="http://www.example.org/testSchema">
          <Name>
            <FirstName>John</FirstName>
            <LastName>Smith</LastName>
          </Name>
        </Employee>

    I am getting the following error from the Eclipse XML editor/validator:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'Name'. One of '{Name}' is expected.

    I cannot understand what is wrong with this schema or my XML.

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my NHibernate mapping and can't find a solution by searching on Stack Overflow, Google or the documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    Units are assigned to a damage, and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The "ends" field of the old record and the "starts" field of the new record are set to the current timestamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select *
        from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database; I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map this so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), each with a Bag containing all the units attached at that time (again starts <= wanted time <= ends). Is this possible within the mapping? Sorry if this is a stupid question, but I am pretty new to NHibernate and have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias

  • Best strategy for moving data between physical tiers in ASP.net

    - by Pete Lunenfeld
    Building a new ASP.net application, and planning to separate the DB, 'service' tier and Web/UI tier onto separate physical layers. What is the best/easiest strategy for moving serialized objects between the service tier and the UI tier?

    I was considering serializing POCOs into JSON, using simple ASP.net pages to serve the middle tier. Meaning that the UI/Web tier will request data from a (hidden to the outside user) web server that will return a JSON string. This kind of JSON 'emitter' seems easily testable. It also seems easily compressible for efficiently moving data over the WAN between tiers.

    I know that some folks use .asmx web services for this kind of task, but this seems to carry excess overhead with SOAP, and the payload is not as human-readable (testable) as POCOs serialized as JSON. Others are using more complex technology like WCF, which we have never used. Does anyone have advice for choosing a method for moving data/objects between the data (DB) tier and the web (UI) tier over the WAN using .NET technologies? Thanks!!!

  • Scale image to fit text boxes around borders

    - by nispio
    I have a plot in MATLAB where the image size may vary, and so may the length of the text boxes along the top and left borders. I dynamically determine the strings that go in these text boxes and then create them with:

        [M,N] = size(img);
        imagesc((1:N)-0.5, (1:M)-0.5, img > 0.5);
        axis image; grid on;
        colormap([1 1 1; 0.5 0.5 0.5]);
        set(gca,'XColor','k','YColor','k','TickDir','out')
        set(gca,'XTick',1:N,'XTickLabel',cell(1,N))
        set(gca,'YTick',1:N,'YTickLabel',cell(1,N))

        for iter = 1:M
            text(-0.5, iter-0.5, sprintf(strL, br{iter,:}), ...
                'FontSize',16, ...
                'HorizontalAlignment','right', ...
                'VerticalAlignment','middle', ...
                'Interpreter','latex' );
        end

        for iter = 1:N
            text(iter-0.5, -0.5, {bc{:,iter}}, ...
                'FontSize',16, ...
                'HorizontalAlignment','center', ...
                'VerticalAlignment','bottom', ...
                'Interpreter','latex' );
        end

    where br and bc are cell arrays containing the appropriate numbers for the labels. The problem is that most of the time the text gets clipped by the edges of the figure. I am using this as a workaround:

        set(gca,'Position',[0.25 0.25 0.5 0.5]);

    As you can see, I am simply adding a larger border around the plot so that there is more room for the text. While this scaling works for one zoom level, if I maximize my plot window I get way too much empty space, and if I shrink my plot window I get clipping again. Is there a more intelligent way to add these labels, using the minimum amount of space while making sure the text does not get clipped?

  • Treetop basic parsing and regular expression usage

    - by ucint
    I'm developing a script using the Ruby Treetop library and am having issues working with its syntax for regexes. First off, many regular expressions that work in other settings don't work the same in Treetop. This is my grammar (myline.treetop):

        grammar MyLine
          rule line
            string whitespace condition
          end

          rule string
            [\S]*
          end

          rule whitespace
            [\s]*
          end

          rule condition
            "new" / "old" / "used"
          end
        end

    This is my usage (usage.rb):

        require 'rubygems'
        require 'treetop'
        require 'polyglot'
        require 'myline'

        parser = MyLineParser.new
        p parser.parse("randomstring new")

    This should find the word "new" for sure, and it does! Now I want to extend it so that it can find "new" if the input string becomes "randomstring anotherstring new yetanother andanother", possibly with any number of strings followed by whitespace (tab included) before and after the regex for rule condition. In other words, if I pass it any sentence with the word "new" etc. in it, it should be able to match it. So let's say I change my grammar to:

        rule line
          string whitespace condition whitespace string
        end

    Then it should be able to find a match for:

        p parser.parse("randomstring new anotherstring")

    So what do I have to do to allow the "string whitespace" part to be repeated before and after condition? If I try to write this:

        rule line
          (string whitespace)* condition (whitespace string)*
        end

    it goes into an infinite loop. If I replace the above () with [], it returns nil. In general, regexes return a match when I use patterns like the above, but Treetop's don't. Does anyone have any tips/pointers on how to go about this? Plus, since there isn't much documentation for Treetop and the examples are either too trivial or too complex, does anyone know of a more thorough documentation/guide for Treetop?

  • Using Pylint with Django

    - by rcreswick
    I would very much like to integrate pylint into the build process for my Python projects, but I have run into one show-stopper: one of the error types that I find extremely useful - E1101: %s %r has no %r member - constantly reports errors when using common Django fields, for example:

        E1101:125:get_user_tags: Class 'Tag' has no 'objects' member

    which is caused by this code:

        def get_user_tags(username):
            """
            Gets all the tags that username has used.
            Returns a query set.
            """
            return Tag.objects.filter(  ## This line triggers the error.
                tagownership__users__username__exact=username).distinct()

        # Here is the Tag class; models.Model is provided by Django:
        class Tag(models.Model):
            """
            Model for user-defined strings that help categorize Events
            on a per-user basis.
            """
            name = models.CharField(max_length=500, null=False, unique=True)

            def __unicode__(self):
                return self.name

    How can I tune pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to Python, so I may very well have overlooked something.)

    Edit: The only way I've found to tell pylint not to warn about these is by blocking all errors of the type (E1101), which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to the specifics :)

    See here for a summary of the problems I've had with pychecker and pyflakes: they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code, not the source it was loading/invoking.)
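
    One narrower stopgap than disabling E1101 project-wide is suppressing it only on the lines that touch Django's generated attributes, via pylint's inline pragma. A sketch (note the pragma spelling varies by pylint version: newer releases use "disable", older ones "disable-msg"):

        def get_user_tags(username):
            """Gets all the tags that username has used. Returns a query set."""
            # Suppress the no-member warning for this statement only.
            return Tag.objects.filter(  # pylint: disable=E1101
                tagownership__users__username__exact=username).distinct()

    This keeps the check alive everywhere else instead of losing it globally, though it does have to be repeated at every call site that uses a model manager.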
