Search Results

Search found 4214 results on 169 pages for 'binary serializer'.

  • Will JSON replace XML as a data format?

    - by 13ren
    When I first saw XML, I thought it was basically a representation of trees. Then I thought: the important thing isn't that it's a particularly good representation of trees, but that it's one that everyone agrees on. Just like ASCII. And once established, it's hard to displace due to network effects. A new alternative would have to be much better (maybe 10 times better) to displace it. Of course, ASCII has been (mostly) replaced by Unicode, for internationalization. According to Google Trends, XML has a 43x lead, but is declining, while JSON grows.

    Will JSON replace XML as a data format? (edited) For which tasks? For which programmers/industries?

    NOTES:

    - S-expressions (from Lisp) are another representation of trees, but one that has not gained mainstream adoption. There are many, many other proposals, such as YAML and Protocol Buffers (for binary formats).
    - I can see JSON dominating the space of communicating with client-side AJAX (AJAJ?), and this could possibly spread back into other systems transitively.
    - XML, being based on SGML, is better than JSON as a document format. I'm interested in XML as a data format.
    - XML has an established ecosystem that JSON lacks, especially ways of defining formats (XML Schema) and transforming them (XSLT). XML also has many other standards, especially for web services, but their weight and complexity can arguably count against XML and make people want a fresh start (similar to "web services" beginning as a fresh start over CORBA).

  • py2app, pyObjc & macports compilation errors

    - by Neewok
    Hi, I'm currently writing a small Python app that embeds CherryPy and Django, packaged with py2app. It worked well until I tried to include PyObjC in my project, since my app needed a small GUI (which consists of a small icon in the top menu bar plus a drop-down menu). I can run my Python script without any problem (I'm using Python 2.6 with MacPorts), but I can't launch the application bundle generated by py2app. A dialog box appears with the following message:

        ImportError: dlopen(/Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so, 2): no suitable image found.
        Did find: /Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so: mach-o, but wrong architecture

    I did a quick:

        sudo port -u install py26-pyobjc +universal

    but for some reason MacPorts tries to build OpenSSL, whose compilation fails each time. The problem seems to be related to zlib; this is what appears in the logs:

        :info:build ld: warning: in /opt/local/lib/libz.dylib, file is not of required architecture

    ...and here is the output of file /opt/local/lib/libz.dylib:

        /opt/local/lib/libz.dylib: Mach-O universal binary with 2 architectures
        /opt/local/lib/libz.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
        /opt/local/lib/libz.dylib (for architecture i386): Mach-O dynamically linked shared library i386

    Nothing looks wrong to me. I'm a bit stuck here. I don't even understand what OpenSSL has to do with PyObjC, but it looks like I can't get anywhere if I don't manage to compile it. MacPorts really sucks sometimes :/

    EDIT: I managed to fix the MacPorts issue, but not the py2app one. If I understand it correctly, py2app tries to create a 32-bit app, while the Core Foundation files on Snow Leopard are built for 64-bit architectures. Damn. Either I build this on Leopard, or I find a way to create a 64-bit app with py2app, which would then be Snow Leopard only.

  • Is this a good approach to address double-base64-encoding?

    - by Freiheit
    My software understands attachments, like PNGs attached to user records. These attachments are usually sent in from outside sources as a Base64-encoded string. The database stores whatever data it is given, Base64-encoded or not. When I serve up the attachment for download I do this:

        if (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    There is a potential for data that is double-encoded. For instance, the sender of a message had Base64-encoded data, then encoded it again when building the message to send to me. I think the following code would address that circumstance:

        while (Base64.isBase64(data)) {
            data = Base64.decodeBase64(data);
        }

    So if data is encoded multiple times, it would be decoded until it's in its 'raw' state and then served up for download. Is this approach an acceptable way to address that problem? Ideally some sort of checking could happen at the edge when I receive attachment data, but that will take more time. This looping seems to be a faster way to do it. The 'Base64' library is Apache Commons Codec: http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Base64.html. I trust it to properly identify Base64-encoded data.
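    A minimal sketch of the looping idea in Python (the original uses Apache Commons Codec in Java; the helper names here are hypothetical):

        import base64
        import binascii

        def is_base64(data: bytes) -> bool:
            """Heuristic: True if data decodes cleanly as Base64."""
            try:
                base64.b64decode(data, validate=True)
                return True
            except (binascii.Error, ValueError):
                return False

        def fully_decode(data: bytes) -> bytes:
            """Decode repeatedly until the payload no longer looks Base64-encoded."""
            while data and is_base64(data):
                data = base64.b64decode(data, validate=True)
            return data

    One caveat with the loop: a raw payload can coincidentally be valid Base64, in which case the loop strips one layer too many. A safer stopping rule is to halt as soon as the bytes match an expected magic number (e.g. a PNG's \x89PNG header), rather than decoding for as long as decoding remains possible.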

  • Loading a RSA private key from memory using libxmlsec

    - by ereOn
    Hello, I'm currently using libxmlsec in my C++ software and I'm trying to load an RSA private key from memory. To do this, I searched through the API and found this function. It takes binary data, a size, a format string, and several PEM-callback-related parameters. When I call the function, it just gets stuck, uses 100% of the CPU time and never returns. Quite annoying, because I have no way of finding out what is wrong. Here is my code:

        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatBinary,
            NULL,
            NULL,
            NULL
        );

    data is a const char* pointing to the raw bytes of my RSA key (obtained with i2d_RSAPrivateKey() from OpenSSL) and datalen is the size of data. My test private key doesn't have a passphrase, so I decided not to use the callbacks for the time being. Has someone already done something similar? Do you guys see anything that I could change or test to make progress on this problem? I only discovered the library yesterday, so I might be missing something obvious here; I just can't see it. Thank you very much for your help.

  • Having trouble with extension methods for byte arrays

    - by Dave
    I'm working with a device that sends back an image, and when I request an image, there is some undocumented information that comes before the image data. I was only able to realize this by looking through the binary data and identifying the image header information inside. I've been able to make everything work fine by writing a method that takes a byte[] and returns another byte[] with all of this preamble "stuff" removed. However, what I really want is an extension method so I can write

        image_buffer.RemoveUpToByteArray(new byte[] { 0x42, 0x4D });

    instead of

        byte[] new_buffer = RemoveUpToByteArray(image_buffer, new byte[] { 0x42, 0x4D });

    I first tried to write it like everywhere else I've seen online:

        public static class MyExtensionMethods
        {
            public static void RemoveUpToByteArray(this byte[] buffer, byte[] header)
            {
                ...
            }
        }

    but then I get an error complaining that there isn't an extension method where the first parameter is a System.Array. Weird, everyone else seems to do it this way, but okay:

        public static class MyExtensionMethods
        {
            public static void RemoveUpToByteArray(this Array buffer, byte[] header)
            {
                ...
            }
        }

    Great, that is accepted now, but it still doesn't compile, because Array is an abstract class and my existing code that runs after calling RemoveUpToByteArray used to work on byte arrays. I could rewrite my subsequent code to work with Array, but I am curious: what am I doing wrong that prevents me from just using byte[] as the first parameter in my extension method?

  • Is there a GUI that I can use to create XML documents based on my schema?

    - by David Conlisk
    Hi all, I want to create a simple graphical user interface to allow non-technical users to create an XML file without having to manually edit the XML source. Ideally I'd like a drag-and-drop interface, but failing that, anything really. The contents of the XML file are similar to an encoded flow chart of a binary tree, so maybe something like Visio with a save-as-XML option? Here's a quick sample of the required XML output:

        <?xml version="1.0" encoding="utf-8"?>
        <steps>
          <step id="1" type="prompt">
            <prompt>Welcome.</prompt>
            <next>1.1</next>
          </step>
          <step id="1.1" type="question">
            <prompt>Do you have what you need?</prompt>
            <yes>1.2</yes>
            <no>1.1.1</no>
          </step>
          ...
        </steps>

    Are there any existing tools out there that you can recommend for this purpose? Ideally open source or with a free personal license, but I'm interested in hearing about all options. Thanks, David

  • c#: how to read parts of a file? (DICOM)

    - by Xaisoft
    I would like to read a DICOM file in C#. I don't want to do anything fancy; for now I would just like to know how to read in the elements, but first I would actually like to know how to read the header to see if it is a valid DICOM file. The file consists of binary data elements. The first 128 bytes are unused (set to zero), followed by the string 'DICM'. This is followed by header information, which is organized into groups. A sample DICOM header:

        First 128 bytes: unused
        Followed by the characters 'D','I','C','M'
        Followed by extra header information such as:
        0002,0000, File Meta Elements Group Length: 132
        0002,0001, File Meta Info Version: 256
        0002,0010, Transfer Syntax UID: 1.2.840.10008.1.2.1
        0008,0000, Identifying Group Length: 152
        0008,0060, Modality: MR
        0008,0070, Manufacturer: MRIcro

    In the above example, the header is organized into groups. Group 0002 (hex) is the file meta information group, which contains three elements: one defines the group length, one stores the file version, and the third stores the transfer syntax.

    Questions:
    1. How do I read the header and verify that it is a DICOM file by checking for the 'D','I','C','M' characters after the 128-byte preamble?
    2. How do I continue to parse the file, reading the other parts of the data?
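    A minimal sketch of the preamble check (in Python for brevity, though the question is about C#): skip the 128-byte preamble and compare the next four bytes against the ASCII marker 'DICM'.

        def is_dicom(path: str) -> bool:
            """True if the file carries the DICM marker after the 128-byte preamble."""
            with open(path, "rb") as f:
                preamble = f.read(128)   # unused preamble, typically all zeros
                magic = f.read(4)        # b"DICM" for a valid DICOM file
            return len(preamble) == 128 and magic == b"DICM"

    After the marker, each data element starts with a (group, element) tag of two 16-bit little-endian integers followed by a length, so the rest of the file can be walked element by element; the exact layout of each element depends on the transfer syntax announced in tag 0002,0010.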

  • Why does perl crash with "*** glibc detected *** perl: munmap_chunk(): invalid pointer"?

    - by sid_com
        #!/usr/bin/env perl
        use warnings;
        use strict;
        use 5.012;
        use XML::LibXML::Reader;

        my $reader = XML::LibXML::Reader->new( location => 'http://www.heise.de/' )
            or die $!;
        while ( $reader->read ) {
            say $reader->name;
        }

    At the end of the output from this script I get these error messages:

        *** glibc detected *** perl: munmap_chunk(): invalid pointer: 0x0000000000b362e0 ***
        ======= Backtrace: =========
        /lib64/libc.so.6[0x7fb84952fc76]
        ...
        ======= Memory map: ========
        00400000-0053d000 r-xp 00000000 08:01 182002 /usr/local/bin/perl
        ...

    Is this due to a bug? Output of perl -V:

        Summary of my perl5 (revision 5 version 12 subversion 0) configuration:
          Platform:
            osname=linux, osvers=2.6.31.12-0.2-desktop, archname=x86_64-linux
            uname='linux linux1 2.6.31.12-0.2-desktop #1 smp preempt 2010-03-16 21:25:39 +0100 x86_64 x86_64 x86_64 gnulinux '
            config_args='-Dnoextensions=ODBM_File'
            hint=recommended, useposix=true, d_sigaction=define
            useithreads=undef, usemultiplicity=undef
            useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
            use64bitint=define, use64bitall=define, uselongdouble=undef
            usemymalloc=n, bincompat5005=undef
          Compiler:
            cc='cc', ccflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
            optimize='-O2',
            cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
            ccversion='', gccversion='4.4.1 [gcc-4_4-branch revision 150839]', gccosandvers=''
            intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
            d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
            ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
            alignbytes=8, prototype=define
          Linker and Libraries:
            ld='cc', ldflags=' -fstack-protector -L/usr/local/lib'
            libpth=/usr/local/lib /lib /usr/lib /lib64 /usr/lib64 /usr/local/lib64
            libs=-lnsl -ldl -lm -lcrypt -lutil -lc
            perllibs=-lnsl -ldl -lm -lcrypt -lutil -lc
            libc=/lib/libc-2.10.1.so, so=so, useshrplib=false, libperl=libperl.a
            gnulibc_version='2.10.1'
          Dynamic Linking:
            dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
            cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector'

        Characteristics of this binary (from libperl):
          Compile-time options: PERL_DONT_CREATE_GVSV PERL_MALLOC_WRAP USE_64_BIT_ALL
            USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO USE_PERL_ATOF
          Built under linux
          Compiled at Apr 15 2010 13:25:46
          @INC:
            /usr/local/lib/perl5/site_perl/5.12.0/x86_64-linux
            /usr/local/lib/perl5/site_perl/5.12.0
            /usr/local/lib/perl5/5.12.0/x86_64-linux
            /usr/local/lib/perl5/5.12.0
            .

  • Symfony/Propel NestedSet left/right ID corruption/adjustment

    - by Mike Crowe
    Hi folks, I have a nested set application that seems to be getting corrupted. Here's what I'm seeing: we're using nested sets for a binary tree (any node can have 2 children). It appears to be working fine, but some event causes a discrepancy. For instance, the result of getNumberOfDescendants() for the root node will slowly increase as this event happens. However, displaying the tree works fine, as does inserting (apparently). Has anybody seen anything like this before? For instance, my repair program shows these as the repairs it makes:

        User pxxxxx left 0=>0, right 145=>129
        User axxxxx left 1=>1, right 124=>106
        User mxxxxx left 119=>117, right 120=>118
        User fxxxxx left 125=>107, right 144=>128
        User fxxxxx left 126=>108, right 131=>113
        User rxxxxx left 127=>109, right 128=>110
        User mxxxxx left 129=>111, right 130=>112
        User mxxxxx left 132=>114, right 143=>127
        User cxxxxx left 133=>115, right 142=>126
        User gxxxxx left 134=>116, right 137=>121
        User mxxxxx left 135=>119, right 136=>120
        User jxxxxx left 138=>122, right 141=>125
        User axxxxx left 139=>123, right 140=>124

    I thought at first it happened when I deleted a user, but it has since occurred without that event. Does anybody know of a cause that might generate this? I've tested ad nauseam on my local machine, but I can't duplicate it. I do have an issue where my production box is PHP 5.2.0, whereas my test device is 5.2.10. Could that be an issue? TIA, Mike

  • Average of two strings in alphabetical order

    - by Bemmu
    Suppose you take the strings 'a' and 'z' and list all the strings that come between them in alphabetical order: ['a','b','c' ... 'x','y','z']. Take the midpoint of this list and you find 'm'. So this is kind of like taking an average of those two strings. You could extend it to strings with more than one character; for example, the midpoint between 'aa' and 'zz' would be found in the middle of the list ['aa', 'ab', 'ac' ... 'zx', 'zy', 'zz']. Might there be a Python method somewhere that does this? If not, even knowing the name of the algorithm would help. I began making my own routine that simply goes through both strings and finds the midpoint of the first differing letter, which seemed to work great in that the 'aa' and 'az' midpoint was 'am', but then it fails on the 'cat', 'doggie' midpoint, which it thinks is 'c'. Rather than invent a method I thought it better to ask. I tried Googling for "binary search string midpoint" etc., but without knowing the name of what I am trying to do here I had little luck.
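    One way to handle the 'cat'/'doggie' case (a sketch, not a standard library routine): treat each string as a fixed number of base-26 digits, average the two resulting integers, and convert back. Padding with one extra digit keeps the midpoint from colliding with either endpoint in most cases.

        from string import ascii_lowercase as ALPHABET

        def to_int(s, length):
            """Interpret s, right-padded with 'a', as base-26 digits."""
            s = s.ljust(length, ALPHABET[0])
            n = 0
            for ch in s:
                n = n * 26 + ALPHABET.index(ch)
            return n

        def from_int(n, length):
            digits = []
            for _ in range(length):
                n, d = divmod(n, 26)
                digits.append(ALPHABET[d])
            return ''.join(reversed(digits))

        def midpoint(a, b):
            length = max(len(a), len(b)) + 1   # one spare digit of precision
            mid = (to_int(a, length) + to_int(b, length)) // 2
            return from_int(mid, length).rstrip(ALPHABET[0])

        print(midpoint('aa', 'az'))        # amn (lexicographically between the two)
        print(midpoint('cat', 'doggie'))   # a string strictly between 'cat' and 'doggie'

    One caveat: with trailing-'a' padding there is no string strictly between, say, 'x' and 'xa', so a production version needs to handle that degenerate pair explicitly.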

  • Trying to use authlogic-connect as a plugin in place of a gem - Server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    When I run bundle install, it runs fine. When I try to start the server I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get when using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
          [--skip-gemfile]            # Don't create a Gemfile
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
          [--dev]                     # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
          [--edge]                    # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
          The 'rails new' command creates a new Rails application with a default
          directory structure and configuration at the path you specify.

        Example:
          rails new ~/Code/Ruby/weblog

          This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
          See the README in the newly created application to get going.

  • Integrating twitpic OAuth for iPhone.

    - by asadqamber
    How can I integrate the TwitPic API with OAuth for posting an image from iPhone? Any help or tutorial? Currently I am doing:

        NSURL *twitpicURL = [NSURL URLWithString:@"http://api.twitpic.com/2/upload.format"];
        theRequest = [NSMutableURLRequest requestWithURL:twitpicURL];
        [theRequest setHTTPMethod:@"POST"];

        // Set the params
        NSString *message = theMessage;
        [theRequest addValue:@"http://api.twitter.com/" forHTTPHeaderField:@"OAuth realm"];
        [theRequest addValue:TWITPIC_API_KEY forHTTPHeaderField:@"oauth_consumer_key"];
        [theRequest addValue:@"HMAC-SHA1" forHTTPHeaderField:@"oauth_signature_method"];
        [theRequest addValue:USER_OAUTH_TOKEN forHTTPHeaderField:@"oauth_token"];
        [theRequest addValue:USER_OAUTH_SECRET forHTTPHeaderField:@"oauth_secret"];
        [theRequest addValue:@"1272325550" forHTTPHeaderField:@"oauth_timestamp"];
        [theRequest addValue:nil forHTTPHeaderField:@"oauth_nonce"];
        [theRequest addValue:@"1.0" forHTTPHeaderField:@"oauth_version"];
        [theRequest addValue:nil forHTTPHeaderField:@"oauth_signature"];

        NSMutableData *postBody = [NSMutableData data];
        [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"source\"\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithFormat:@"lighttable"] dataUsingEncoding:NSUTF8StringEncoding]];

        // Message
        [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message] dataUsingEncoding:NSUTF8StringEncoding]];

        // Media
        [postBody appendData:[[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"doc_twitpic_image.jpg"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[[NSString stringWithFormat:@"Content-Type: image/jpg\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];

        // Data as JPEG
        [postBody appendData:[[NSString stringWithFormat:@"Content-Transfer-Encoding: binary\r\n\r\n"] dataUsingEncoding:NSUTF8StringEncoding]];
        [postBody appendData:[NSData dataWithData:image]];

        [theRequest setHTTPBody:postBody];
        [theRequest setValue:[NSString stringWithFormat:@"%d", [postBody length]] forHTTPHeaderField:@"Content-Length"];
        theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

  • Understanding floating point problems

    - by Maxim Gershkovich
    Could someone here please help me understand how to determine when floating point limitations will cause errors in your calculations? For example, the following code:

        CalculateTotalTax = function (TaxRate, TaxFreePrice) {
            return ((parseFloat(TaxFreePrice) / 100) * parseFloat(TaxRate)).toFixed(4);
        };

    I have been unable to input any two values that cause an incorrect result for this method. If I remove the toFixed(4) I can in fact see where the calculations start to lose accuracy (somewhere around the 6th decimal place). Having said that, my understanding of floats is that even small numbers can sometimes fail to be represented, or have I misunderstood: can 4 decimal places (for example) always be represented accurately? MSDN explains floats as such:

        This means they cannot hold an exact representation of any quantity that is
        not a binary fraction (of the form k / (2 ^ n) where k and n are integers)

    I assume this applies to all floats, including those used in JavaScript. Fundamentally, my question boils down to this: how can one determine if any specific method will be vulnerable to errors in floating point operations, at what precision those errors will materialize, and what inputs will be required to produce those errors? Hopefully what I am asking makes sense.
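    A quick way to see the k / (2 ^ n) rule in action (shown in Python for illustration; JavaScript numbers behave the same way, since both are IEEE 754 doubles): 0.25 is a binary fraction and round-trips exactly, while 0.1 does not.

        from fractions import Fraction

        # 0.25 = 1/4 = k/(2^n) with k=1, n=2: exactly representable
        print(Fraction(0.25))        # 1/4

        # 0.1 is not a binary fraction; the stored double is only close to 1/10
        print(Fraction(0.1))         # 3602879701896397/36028797018963968

        # Which is why accumulated arithmetic drifts:
        print(0.1 + 0.2 == 0.3)      # False
        print(f"{0.1 + 0.2:.20f}")   # 0.30000000000000004441...

    So four decimal places cannot in general be represented exactly; toFixed(4) only hides the drift by rounding at output time, which works until the accumulated error grows large enough to cross a rounding boundary.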

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs).

    There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved.

    I imagine that this is a fairly standard problem in AI (having a cheap heuristic for what expensive queries to make), but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
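    One standard framing of the exploitation side (a sketch, under the assumption that per-category hit probabilities for each site can be estimated cheaply): greedily pick the site whose answer maximizes expected information gain over the current posterior on candidate users, as in a game of twenty questions.

        import math

        def entropy(posterior):
            """Shannon entropy of a {category: probability} distribution."""
            return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

        def expected_gain(posterior, hit_prob):
            """Expected entropy reduction from testing one site.
            hit_prob maps each category to P(hit | category) for this site."""
            p_hit = sum(posterior[c] * hit_prob.get(c, 0.0) for c in posterior)
            gain = entropy(posterior)
            for outcome, p_outcome in (("hit", p_hit), ("miss", 1 - p_hit)):
                if p_outcome <= 0:
                    continue
                cond = {}
                for c, p in posterior.items():
                    likelihood = hit_prob.get(c, 0.0)
                    if outcome == "miss":
                        likelihood = 1 - likelihood
                    cond[c] = p * likelihood / p_outcome   # Bayes update per outcome
                gain -= p_outcome * entropy(cond)
            return gain

        def best_site(posterior, sites):
            """sites: {site_id: {category: hit probability}}."""
            return max(sites, key=lambda s: expected_gain(posterior, sites[s]))

    An epsilon-greedy or upper-confidence adjustment on top of this handles the exploration side: occasionally query sites whose hit probabilities are still poorly estimated.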

  • How to include multiple XML files in a single XML file for deserialization by XmlSerializer in .NET

    - by harrydev
    Hi, is it possible to use the XmlSerializer in .NET to load an XML file which includes other XML files? And how? The goal is to share XML state easily between two "parent" XML files, e.g. AB and BC below. Example:

        using System;
        using System.IO;
        using System.Xml.Serialization;

        namespace XmlSerializerMultipleFilesTest
        {
            [Serializable]
            public class A { public int Value { get; set; } }

            [Serializable]
            public class B { public double Value { get; set; } }

            [Serializable]
            public class C { public string Value { get; set; } }

            [Serializable]
            public class AB
            {
                public A A { get; set; }
                public B B { get; set; }
            }

            [Serializable]
            public class BC
            {
                public B B { get; set; }
                public C C { get; set; }
            }

            class Program
            {
                public static void Serialize<T>(T data, string filePath)
                {
                    using (var writer = new StreamWriter(filePath))
                    {
                        var xmlSerializer = new XmlSerializer(typeof(T));
                        xmlSerializer.Serialize(writer, data);
                    }
                }

                public static T Deserialize<T>(string filePath)
                {
                    using (var reader = new StreamReader(filePath))
                    {
                        var xmlSerializer = new XmlSerializer(typeof(T));
                        return (T)xmlSerializer.Deserialize(reader);
                    }
                }

                static void Main(string[] args)
                {
                    const string fileNameA = @"A.xml";
                    const string fileNameB = @"B.xml";
                    const string fileNameC = @"C.xml";
                    const string fileNameAB = @"AB.xml";
                    const string fileNameBC = @"BC.xml";

                    var a = new A() { Value = 42 };
                    var b = new B() { Value = Math.PI };
                    var c = new C() { Value = "Something rotten" };

                    Serialize(a, fileNameA);
                    Serialize(b, fileNameB);
                    Serialize(c, fileNameC);

                    // How can AB and BC be deserialized from single
                    // files which include two of the A, B or C files?
                    // Ideally using something like:
                    var ab = Deserialize<AB>(fileNameAB);
                    var bc = Deserialize<BC>(fileNameBC);
                    // That is, so that the A, B, C XML file
                    // contents are shared across these two.
                }
            }
        }

    Thus, the A, B, C files contain the following. A.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <A xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>42</Value>
        </A>

    B.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <B xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>3.1415926535897931</Value>
        </B>

    C.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <C xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Value>Something rotten</Value>
        </C>

    And then the "parent" XML files would contain an XML include of some sort (I have not been able to find anything like this), such as AB.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <AB xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <A include="A.xml"/>
          <B include="B.xml"/>
        </AB>

    BC.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <BC xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <B include="B.xml"/>
          <C include="C.xml"/>
        </BC>

    Of course, I guess this can be solved by implementing IXmlSerializable for AB and BC, but I was hoping there was an easier solution, or a generic solution with which the classes themselves only need the [Serializable] attribute and nothing else. That is, the split into multiple files is XML-only and handled by XmlSerializer itself or a custom generic serializer on top of it. I know this should be somewhat possible with app.config (as in http://stackoverflow.com/questions/480538/use-xml-includes-or-config-references-in-app-config-to-include-other-config-files), but I would prefer a solution based on XmlSerializer. Thanks.
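    The standard mechanism for this kind of include is XInclude rather than a custom include attribute. As a sketch of the idea (shown in Python with lxml, which processes XInclude out of the box; on the .NET side the equivalent would be an XInclude-aware reader such as the Mvp.Xml XIncludingReader, or a manual pre-processing pass, before handing the merged document to XmlSerializer):

        from lxml import etree

        # AB.xml, using the standard XInclude namespace:
        # <AB xmlns:xi="http://www.w3.org/2001/XInclude">
        #   <xi:include href="A.xml"/>
        #   <xi:include href="B.xml"/>
        # </AB>

        tree = etree.parse("AB.xml")
        tree.xinclude()    # splices the root elements of A.xml and B.xml in place
        print(etree.tostring(tree, pretty_print=True).decode())

    Because each included file's root element (<A>, <B>) replaces its xi:include element, the merged document has exactly the shape the AB serializer expects.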

  • mysql match against russian

    - by Devenv
    Hey, trying to solve this for a very long time now... SELECT MATCH(name) AGAINST('????????') (Russian text) doesn't work, but SELECT MATCH(name) AGAINST('abraxas') (English) works perfectly. I know it's something with the character set, but I tried all kinds of settings and it didn't work. For now it's latin1. LIKE works.

    This is the charset-related part of SHOW VARIABLES:

        character_set_client - latin1
        character_set_connection - latin1
        character_set_database - latin1
        character_set_filesystem - binary
        character_set_results - latin1
        character_set_server - latin1
        character_set_system - utf8
        character_sets_dir - /usr/share/mysql/charsets/
        collation_connection - latin1_swedish_ci
        collation_database - latin1_swedish_ci
        collation_server - latin1_swedish_ci

    Chunk of /etc/my.cnf:

        default-character-set=latin1
        skip-character-set-client-handshake

    Chunk of the dump:

        /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
        /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
        /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
        /*!40101 SET NAMES utf8 */;
        DROP TABLE IF EXISTS `scenes_raw`;
        /*!40101 SET @saved_cs_client = @@character_set_client */;
        /*!40101 SET character_set_client = utf8 */;
        CREATE TABLE `scenes_raw` (
          `scene_name` varchar(40) DEFAULT NULL,
          ...blabla...
        ) ENGINE=MyISAM AUTO_INCREMENT=901 DEFAULT CHARSET=utf8;

    (I did tests without skip-character-set-client-handshake too.)

    Output of SHOW TABLE STATUS WHERE Name = 'scenes_raw'\G:

        Name: scenes_raw
        Engine: MyISAM
        Version: 10
        Row_format: Dynamic
        Index_length: 23552
        Collation: utf8_general_ci
        Checksum: NULL
        Create_options:

  • Exporting static data in a DLL

    - by Gayan
    I have a DLL which contains a class with static members. I use __declspec(dllexport) in order to make use of this class's methods. But when I link it to another project and try to compile it, I get "unresolved external symbol" errors for the static data. For example, in the DLL, Test.h:

        class __declspec(dllexport) Test {
        protected:
            static int d;
        public:
            static void m() {}
        };

    In the DLL, Test.cpp:

        #include "Test.h"
        int Test::d;

    In the application which uses Test, I call m(). I also tried using __declspec(dllexport) for each method separately, but I still get the same link errors for the static members. If I check the DLL (the .lib) using dumpbin, I can see that the symbols have been exported. For instance, the app gives the following error at link time:

        1>Main.obj : error LNK2001: unresolved external symbol "protected: static int CalcEngine::i_MatrixRow" (?i_MatrixRow@CalcEngine@@1HA)

    But the dumpbin of the .lib contains:

        Version      : 0
        Machine      : 14C (x86)
        TimeDateStamp: 4BA3611A Fri Mar 19 17:03:46 2010
        SizeOfData   : 0000002C
        DLL name     : CalcEngine.dll
        Symbol name  : ?i_MatrixRow@CalcEngine@@1HA (protected: static int CalcEngine::i_MatrixRow)
        Type         : data
        Name type    : name
        Hint         : 31
        Name         : ?i_MatrixRow@CalcEngine@@1HA

    I can't figure out how to solve this. What am I doing wrong? How can I get past these errors?

    P.S. The code was originally developed for Linux and the .so/binary combination works without a problem.

  • Difficulties using custom control - RichTextEditor

    - by Chris
    I am working on a school project that uses ASP.NET. I found this TextEditor control (http://blogs.msdn.com/kirti/archive/2007/11/10/rich-text-editor-is-here.aspx) that I am trying to include, but it isn't working. The error I am getting is:

        Error Rendering Control - TextEditor. An unhandled exception has occurred.
        Index was out of range. Must be non-negative and less than the size of the collection.
        Parameter name: index.

    I see this error when I go to the Design part of the editor. I just don't understand this error at all, and I am a little bit confused as there is no parameter called index. :( What I have done is reference the binary in my project and then, on the page where I am trying to use it, register its namespace and assembly with this line:

        <%@ Register Assembly="RichTextEditor" Namespace="AjaxControls" TagPrefix="rtt" %>

    I then go ahead and try to add the control to the page with this line of code:

        <rtt:richtexteditor ID="TextEditor" Theme="Blue" runat="server" />

    Any help would be much appreciated. I haven't done anything like adding a custom control before.

  • How do I build BugTrap?

    - by magnifico
    I am trying to build the IntelleSoft BugTrap source using Visual Studio 2008. I downloaded and unzipped the BugTrap source and the zlib source, navigated down to ./BugTrap/Win32/BugTrap and opened BugTrap.sln (as suggested by the author here). I used Build > Build Solution and the build failed with a compiler error:

        fatal error C1083: Cannot open include file: 'zip.h': No such file or directory

    I opened the project properties and added the path to the zlib-vc/zlib/include folder to the list of "Additional Include Directories" and tried to build again. The second build attempt failed with a linker error:

        fatal error LNK1104: cannot open file 'zlibSD.lib'

    I opened the zlib project and built the source. The zlib build succeeded. However, the bin directory does not contain a zlibSD.lib; the closest file in name is zlibMSD.lib. This poster on CodeProject seemed to have the same problem I did, but there is no resolution posted. Hopefully someone out there has experience building this project and can point me in the right direction. I've played with the binary distribution and it seems really slick.

  • Detecting Asymptotes in a Graph

    - by nasufara
    I am creating a graphing calculator in Java as a project for my programming class. There are two main components to this calculator: the graph itself, which draws the line(s), and the equation evaluator, which takes in an equation as a String and... well, evaluates it.

    To create the line, I create a Path2D.Double instance and loop through the points on the line. To do this, I calculate as many points as the graph is wide (e.g. if the graph itself is 500px wide, I calculate 500 points), and then scale them to the window of the graph. Now, this works perfectly for almost any line. However, it does not when dealing with asymptotes. If, when calculating points, the graph encounters a domain error (such as 1/0), it closes the shape in the Path2D.Double instance and starts a new line, so that the line looks mathematically correct.

    However, because of the way it scales, sometimes it is rendered correctly and sometimes it isn't. When it isn't, the actual asymptotic line is shown, because within those 500 points the calculator skipped over x = 2.0 in the equation 1 / (x-2), and only evaluated x = 1.98 and x = 2.04, which are perfectly valid in that equation. (In that case, I had increased the window on the left and right by one unit each.)

    My question is: is there a way to deal with asymptotes using this method of scaling so that the resulting line looks mathematically correct? I have thought of implementing a binary-search-like method where, if one calculated point is wildly far away from the previous point, the calculator searches between those points for a domain error, but I had trouble figuring out how to make it work in practice. A sketch of that idea follows below. Thank you for any help you may give!
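    One way to flesh out that binary-search idea (a sketch in Python for brevity; f stands for the parsed equation): when two consecutive samples jump wildly with a sign change, bisect between them. The half whose endpoint values differ the most is the half containing the pole, so the discontinuity can be localized to arbitrary precision and the path split exactly there.

        def find_break(f, x1, x2, depth=40):
            """Bisect [x1, x2] to localize a pole of f, assuming one lies inside."""
            for _ in range(depth):
                mid = (x1 + x2) / 2
                try:
                    fm = f(mid)
                except ZeroDivisionError:
                    return mid                    # hit the domain error exactly
                # the half containing the pole shows the larger endpoint jump
                if abs(fm - f(x1)) > abs(f(x2) - fm):
                    x2 = mid
                else:
                    x1 = mid
            return (x1 + x2) / 2

        f = lambda x: 1 / (x - 2)
        print(find_break(f, 1.98, 2.04))          # ~2.0

    Forty halvings of a pixel-wide interval pin the asymptote far below pixel resolution, so the two path segments can be ended and restarted at the correct x.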

  • How to sort a list so that managers are always ahead of their subordinates

    - by James Black
    I am working on a project using Groovy, and I would like to take an array of employees and order it so that no manager follows their subordinates in the array. The reason is that I need to add people to a database, and I would prefer not to do it in two passes. So, I basically have:

        <employees>
          <employee>
            <employeeid>12</employeeid>
            <manager>3</manager>
          </employee>
          <employee>
            <employeeid>1</employeeid>
            <manager></manager>
          </employee>
          <employee>
            <employeeid>3</employeeid>
            <manager>1</manager>
          </employee>
        </employees>

    So, it should be sorted as:

        employeeid = 1
        employeeid = 3
        employeeid = 12

    The first person should have null for a manager. I am thinking about a binary tree representation, but I expect it will be very unbalanced, and I am not certain of the best way to do this properly in Groovy. Is there a way to do this that isn't going to involve nested loops?
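    What this ordering asks for is a topological sort of the management hierarchy: start from the employees with no manager and walk downward, so every manager is emitted before any of their subordinates. A sketch (in Python rather than Groovy, with a hypothetical list of (employee_id, manager_id) pairs standing in for the parsed XML):

        from collections import defaultdict, deque

        def managers_first(pairs):
            """pairs: (employee_id, manager_id or None) tuples; returns ids
            ordered so that no manager follows any of their subordinates."""
            children = defaultdict(list)
            roots = []
            for emp, mgr in pairs:
                if mgr is None:
                    roots.append(emp)
                else:
                    children[mgr].append(emp)
            order, queue = [], deque(roots)
            while queue:                      # breadth-first walk from the top
                emp = queue.popleft()
                order.append(emp)
                queue.extend(children[emp])
            return order

        print(managers_first([(12, 3), (1, None), (3, 1)]))   # [1, 3, 12]

    This is linear in the number of employees, needs no balanced tree, and lets the database insert happen in one pass.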

  • reconstructing a tree from its preorder and postorder lists.

    - by NomeN
    Consider the situation where you have two lists of nodes, of which all you know is that one is a representation of a preorder traversal of some tree and the other a representation of a postorder traversal of the same tree. I believe it is possible to reconstruct the tree exactly from these two lists, and I think I have an algorithm to do it, but have not proven it. As this will be a part of a masters project I need to be absolutely certain that it is possible and correct (mathematically proven). However it will not be the focus of the project, so I was wondering if there is a source out there (i.e. a paper or book) I could quote for the proof. (Maybe in TAOCP? Does anybody know the section, possibly?) In short, I need a proven algorithm in a quotable resource that reconstructs a tree from its pre- and postorder traversals.

    Note: The tree in question will probably not be binary, or balanced, or anything that would make it too easy.
    Note 2: Using only the preorder or the postorder list would be even better, but I do not think it is possible.
    Note 3: A node can have any number of children.
    Note 4: I only care about the order of siblings. Left or right does not matter when there is only one child.
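    For what it's worth, the usual recursive reconstruction for ordered trees with distinct labels works like this (a sketch, not a citable proof): the first preorder item is the root; each child subtree then starts at the next unconsumed preorder item, and its extent is found by locating that item (the child's root) in the postorder list.

        def rebuild(pre, post):
            """Reconstruct an ordered tree from preorder and postorder lists of
            distinct labels; returns nested (label, [children]) tuples."""
            if not pre:
                return None
            root = pre[0]
            assert root == post[-1], "not traversals of the same tree"
            children = []
            i, j = 1, 0            # next unconsumed preorder / postorder index
            while i < len(pre):
                # the child's subtree ends where its root sits in postorder
                end = post.index(pre[i], j) + 1
                size = end - j
                children.append(rebuild(pre[i:i + size], post[j:end]))
                i += size
                j = end
            return (root, children)

        pre  = ['a', 'b', 'c', 'd', 'e']   # a has children b, e; b has children c, d
        post = ['c', 'd', 'b', 'e', 'a']
        print(rebuild(pre, post))   # ('a', [('b', [('c', []), ('d', [])]), ('e', [])])

    For unrestricted ordered trees this reconstruction is unique; the classic ambiguity only appears in binary trees, where preorder plus postorder cannot distinguish a lone left child from a lone right child, which is exactly the case Note 4 waives.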

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example:

        cubes = map (\x -> x*x*x) [1..]
        is_cube n = n == (head $ dropWhile (<n) cubes)

    It is much faster than calculating the cube root, but it has complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already-generated array (not a list, for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should).

    Or I could use a hash function (like mod). But I think it would be much more memory-consuming to have several lists (or a list of lists), and it won't lower the complexity of the lookup (still O(n^(1/3))), only the coefficient. I thought about a kind of tree, but without any clever ideas (sadly I've never studied CS). I think the fact that all integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
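    For comparison, here is the O(log n) lookup the poster describes, sketched in Python (the same shape works in Haskell with an array or an IntSet in place of the growing list): binary-search a precomputed sorted cache of cubes, extending the cache on demand.

        from bisect import bisect_left

        _cubes = [1]                       # ascending cache of k^3 for k = 1, 2, ...

        def is_cube(n: int) -> bool:
            """True if n >= 1 is a perfect cube; binary search over a growing cache."""
            while _cubes[-1] < n:          # extend the cache until it covers n
                k = len(_cubes) + 1
                _cubes.append(k * k * k)
            i = bisect_left(_cubes, n)
            return _cubes[i] == n

        print([m for m in range(1, 70) if is_cube(m)])   # [1, 8, 27, 64]

    Extending the cache costs O(n^(1/3)) amortized over all queries, but each individual lookup afterwards is O(log n), which is the improvement over scanning with dropWhile.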

  • Producing CCITT compressed TIFF from CGImage

    - by Brian Postow
    I have a CGImage (Core Graphics, C/C++). It's grayscale. Well, originally it was B/W, but the CGImage may be RGB. That shouldn't matter. I want to create a CCITT Group 4 TIFF.

    I can create an LZW TIFF (grayscale or color) by creating a destination with the correct dictionary and adding the image to it. No problem. However, there doesn't seem to be an equivalent kCGImagePropertyTIFFCompression value to represent CCITT-4. It should be 4, but that produces uncompressed output.

    I have a manual CCITT compression routine, so if I can get the binary (1 bit per pixel) data, I'm set. But I can't seem to get 1 bpp data out of a CGImage. I have code that is supposed to put the CGImage into a CGBitmapContext and then give me the data, but it seems to be giving me all black.

    I've asked a couple of questions today trying to get at this, but let me just ask the question I really want answered and see if someone can answer it. There's got to be a way to do this. I've got to be missing something dumb. What is it?
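    For reference, the whole pipeline (threshold to 1 bit per pixel, then Group 4 compression) is a few lines in Python with Pillow, which can serve as a sanity check on the output the Core Graphics code should produce; the input filename and threshold here are illustrative:

        from PIL import Image

        img = Image.open("input.png").convert("L")                     # grayscale
        bw = img.point(lambda p: 255 if p >= 128 else 0).convert("1")  # 1 bpp, fixed threshold
        bw.save("output.tif", compression="group4")                    # CCITT Group 4 TIFF

    On the Core Graphics side, the likely missing piece is that a 1-bit bitmap context is not among the supported CGBitmapContext formats, so the usual workaround is to render into an 8-bit grayscale context and then pack each row down to 1 bpp manually (thresholding as above) before feeding the bits to the CCITT routine.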
