Search Results

Search found 5180 results on 208 pages for 'binary serialization'.

Page 22/208 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Is it possible to De-Serialize a new Derived class using Old Binary?

    - by Anand
    In my project I have a class which I serialize in binary format to disk. Due to a new requirement I need to create a new class which is derived from the original class, e.g.:

        [Serializable]
        public class Sample
        {
            String someString;
            int someInt;

            public Sample() { }

            public Sample(String _someString, int _someInt)
            {
                this.someInt = _someInt;
                this.someString = _someString;
            }

            public String SomeString { get { return someString; } }
            public int SomeInt { get { return someInt; } }
        }

        [Serializable]
        public class DerivedSample : Sample
        {
            String someMoreString;
            int someMoreInt;

            public DerivedSample() : base() { }

            public DerivedSample(String _someString, int _someInt, String _someMoreString, int _someMoreInt)
                : base(_someString, _someInt)
            {
                this.someMoreString = _someMoreString;
                this.someMoreInt = _someMoreInt;
            }

            public String SomeMoreString { get { return someMoreString; } }
            public int SomeMoreInt { get { return someMoreInt; } }
        }

    When I deserialize an old file which contains only a Sample object, it works fine in the current assembly, so backward compatibility is there. But when I try to deserialize a file which contains a DerivedSample object using the previous version of the assembly, the application crashes, which means forward compatibility needs to be taken care of. Is it possible to, say, read only the base-class part of the object from the new version of the file?
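
    For illustration only (this is not from the question): the hook BinaryFormatter exposes for this kind of type redirection is a custom SerializationBinder registered on the old side, which can map the serialized DerivedSample type name back to Sample. A minimal sketch follows; the binder and loader names are made up, and whether the formatter then tolerates the extra derived fields in the stream has to be tested. If it refuses, the base class may need to take control of its own serialization via ISerializable.

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        // Hypothetical binder for the OLD assembly: a type name it does not know
        // (e.g. "DerivedSample" written by the new assembly) is mapped back to Sample.
        public sealed class FallbackToSampleBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                if (typeName.EndsWith("DerivedSample"))
                    return typeof(Sample);   // read only the base-class part
                return Type.GetType(String.Format("{0}, {1}", typeName, assemblyName));
            }
        }

        public static class LegacyLoader
        {
            public static Sample Load(string path)
            {
                using (FileStream stream = File.OpenRead(path))
                {
                    BinaryFormatter formatter = new BinaryFormatter();
                    formatter.Binder = new FallbackToSampleBinder();
                    return (Sample)formatter.Deserialize(stream);
                }
            }
        }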

    Read the article

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and I'm learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially: "seek" to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, "seek" to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.

    Read the article

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. by overriding openFile()), since that would necessitate a complicated temp-file scheme to cope with concurrent content-provider accesses while the game is running. I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this? It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? That class has 30 abstract methods. Do I read the file, put it in a SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.

    Read the article

  • How to create a Binary Tree from a General Tree?

    - by mno4k
    I have to solve the following constructor for a BinaryTree class in Java:

        BinaryTree(GeneralTree<T> aTree)

    This method should create a BinaryTree (bt) from a General Tree (gt) as follows: every vertex from gt will be represented as a leaf in bt. If gt is a leaf, then bt will be a leaf with the same value as gt. If gt is not a leaf, then bt will be constructed as an empty root, a left subtree (lt) and a right subtree (lr). lt is a strict binary tree created from the oldest subtree of gt (the left-most subtree), and lr is a strict binary tree created from gt without its left-most subtree. The first part is trivial enough, but the second one is giving me some trouble. I've gotten this far:

        public BinaryTree(GeneralTree<T> aTree) {
            if (aTree.isLeaf()) {
                root = new BinaryNode<T>(aTree.getRootData());
            } else {
                root = new BinaryNode<T>(null); // empty root
                LinkedList<GeneralTree<T>> childs = aTree.getChilds(); // children of the GT are implemented as a LinkedList of subtrees
                childs.begin(); // start iterating through the list
                BinaryTree<T> lt = new BinaryTree<T>(childs.element(0)); // first element = left-most child
                this.addLeftChild(lt);
                aTree.DeleteChild(childs.element(0));
                BinaryTree<T> lr = new BinaryTree<T>(aTree);
                this.addRightChild(lr);
            }
        }

    Is this the right way? If not, can you think of a better way to solve this? Thank you!

    Read the article

  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a HEX-coded value, rather than just use mysql_real_escape_string on the binary string and use that.

        // What you supposedly should do
        $hex = bin2hex($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";

        // Rather than
        $bin = mysql_real_escape_string($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";

    It is supposedly for performance reasons, something to do with how MySQL handles large strings vs. how it handles HEX-coded values. However, I am having a hard time confirming this. All my tests indicate the exact opposite: the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1, Win7 x64, using a fairly simple insert loop.) For instance, I graphed the private memory usage of the mysqld process while the test code was running (the graph is not reproduced here). Does anybody have any explanations or resources that would clarify this? Thanks.

    Read the article

  • How much is modern programming still tied to underlying digital logic?

    - by New Talk
    First of all: I've got no academic background. I'm working primarily with Java and Spring, and I'm also fond of web programming and relational databases. I hope I'm using the right terms and that this vague question makes some sense. Today the following question came to my mind: how much is modern programming still tied to the underlying digital logic? By modern programming I mean concepts like OOP, AOP, Java 7, AJAX, … I hope you get the idea. Do they no longer need the digital logic with which computers work internally? Or is binary logic still ubiquitous when programming this way? If the inner workings of a computer changed overnight, would it matter, or are my programming techniques already that abstract? P.S.: By digital logic I mean the physical representation of everything "inside" the computer as zeroes and ones. (Changed "binary" to "digital".)

    Read the article

  • Count function on tree structure (non-binary)

    - by Spevy
    I am implementing a tree data structure in C#, based largely on Dan Vanderboom's generic implementation. I am now considering how to handle a Count property, which Dan does not implement. The obvious and easy way would be a recursive call which traverses the tree, happily adding up nodes (or iteratively traversing the tree with a Queue and counting nodes, if you prefer). It just seems expensive. (I also may want to lazy-load some of my nodes down the road.) I could maintain a count at the root node: all children would traverse up to and/or hold a reference to the root, and update an internally settable count property on changes. This would push the iteration problem to whenever I want to break off a branch or clear all children below a given node. Generally less expensive, and it puts the heavy lifting in what I think will be less frequently called functions. It seems a little brute force, and that usually means exception cases I haven't thought of yet, or bugs if you prefer. Does anyone have an example of an implementation which keeps a count for an unbalanced and/or non-binary tree structure, rather than counting on the fly? Don't worry about the lazy loading or the language; I am sure I can adjust the example to fit my specific needs. EDIT: I am curious about an example, rather than instructions or discussion. I know this is not technically difficult...
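
    For illustration (not part of the question, and not Dan Vanderboom's code; all names are made up): a minimal sketch of the root-maintained count idea, where every node knows its parent and structural changes bubble a size delta upward instead of re-counting the whole tree.

        using System.Collections.Generic;

        public class CountedTreeNode<T>
        {
            private readonly List<CountedTreeNode<T>> children = new List<CountedTreeNode<T>>();

            public T Value;
            public CountedTreeNode<T> Parent { get; private set; }

            // Number of nodes in the subtree rooted here, including this node.
            public int Count { get; private set; }

            public CountedTreeNode(T value)
            {
                Value = value;
                Count = 1;
            }

            public void AddChild(CountedTreeNode<T> child)
            {
                child.Parent = this;
                children.Add(child);
                PropagateDelta(child.Count);   // the attached subtree may already contain nodes
            }

            public void RemoveChild(CountedTreeNode<T> child)
            {
                if (children.Remove(child))
                {
                    child.Parent = null;
                    PropagateDelta(-child.Count);
                }
            }

            // Walk up to the root, adjusting every ancestor's count by the same delta.
            private void PropagateDelta(int delta)
            {
                for (CountedTreeNode<T> node = this; node != null; node = node.Parent)
                    node.Count += delta;
            }
        }

    Breaking off a branch then costs one walk up to the root rather than a full traversal, which matches the trade-off described in the question.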

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to a file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading such a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution simply to "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.

    Read the article

  • How to control Time zone formatting in System.Xml.Serialization or during application execution?

    - by Beal
    I'm developing a C# .NET application that is executing on a system located in the Central Time Zone. The application gets information from a third party using an API they provide. I have used the WSDL to produce the code that my application accesses the API with. Their reporting API allows you to define a start date and end date for the report; these are C# DateTime fields and xsd:dateTime on the wire. When I set the start and end dates and allow the API to create the SOAP messages, the dates don't always include a time zone unless I set the date fields using the ToLocalTime method; however, that method will create the DateTime fields in the Central Time Zone (CST), and I need to have them created in the Pacific Time Zone (PST). If I set my machine's time zone to PST all is good... but of course that causes other time issues. What methods can I use to control the formatting of the DateTime? Alternatively, is there an application setting that can be set in C# that allows time-zone control?
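
    For illustration (not from the question; the helper name is made up, and this assumes .NET 3.5 or later, where TimeZoneInfo is available): the usual way to pin a conversion to a specific zone, independent of the machine's local zone, is TimeZoneInfo.ConvertTimeFromUtc. Note that the result has DateTimeKind.Unspecified, and how the serializer writes the offset into the SOAP message still depends on the DateTime's Kind.

        using System;

        public static class ReportTimeHelper
        {
            // "Pacific Standard Time" is the Windows time-zone id; it differs on other platforms.
            private static readonly TimeZoneInfo Pacific =
                TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");

            // Convert a local (CST) or UTC DateTime to the equivalent Pacific wall-clock time.
            public static DateTime ToPacific(DateTime value)
            {
                DateTime utc = value.Kind == DateTimeKind.Utc ? value : value.ToUniversalTime();
                return TimeZoneInfo.ConvertTimeFromUtc(utc, Pacific);
            }
        }

    The converted value would then be assigned back to the proxy's start/end DateTime fields before the call.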

    Read the article

  • How do I make a serialization class for this?

    - by chobo2
    I have something like this (sorry for the bad names):

        <root xmlns="http://www.domain.com"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.Domain.com Schema.xsd">
          <product></product>
          <SomeHighLevelElement>
            <anotherElment>
              <lowestElement>
              </lowestElement>
            </anotherElment>
          </SomeHighLevelElement>
        </root>

    And I have something like this for my class:

        public class MyClass
        {
            public MyClass()
            {
                ListWrapper = new List<UserInfo>();
            }

            public string product { get; set; }
            public List<SomeHighLevelElement> ListWrapper { get; set; }
        }

        public class SomeHighLevelElement
        {
            public string lowestElement { get; set; }
        }

    But I don't know how to write the code for the "anotherElment" element; I'm not sure if I have to make another wrapper around it. Edit: I now get an error with my actual xml file. I have this in my tag:

        xmlns="http://www.domain.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.Domain.com Schema.xsd

    It throws an exception on the root line saying there was an error with this stuff. So I don't know if it is mad at the schemaLocation (since I am using localhost right now) or what.
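
    For illustration (not from the question; this assumes XmlSerializer is being used, and the class names other than the XML element names are guesses): each level of nesting usually gets its own class, with the XML names mapped via attributes. If the nested elements still fail to bind, the namespace may need to be repeated on the [XmlElement] attributes.

        using System.Collections.Generic;
        using System.Xml.Serialization;

        [XmlRoot("root", Namespace = "http://www.domain.com")]
        public class RootElement
        {
            public RootElement()
            {
                HighLevelElements = new List<HighLevelElement>();
            }

            [XmlElement("product")]
            public string Product { get; set; }

            // Each <SomeHighLevelElement> under <root> becomes one item in this list.
            [XmlElement("SomeHighLevelElement")]
            public List<HighLevelElement> HighLevelElements { get; set; }
        }

        public class HighLevelElement
        {
            // <anotherElment> is its own element, so it gets its own wrapper class.
            [XmlElement("anotherElment")]
            public AnotherElement Another { get; set; }
        }

        public class AnotherElement
        {
            [XmlElement("lowestElement")]
            public string LowestElement { get; set; }
        }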

    Read the article

  • How can I intercept an exception that occurred during serialization in WCF?

    - by bonomo
    I have a legitimate data object with all of the data contract / data member attributes. For some reason the WCF service crashes after the operation has completed and the result is passed as a return value. I believe it has something to do with WCF not being able to serialize that result properly. The test client doesn't say anything specific:

        The underlying connection was closed: The connection was closed unexpectedly.

        Server stack trace:
           at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
           at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
           at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at IFacade.PickSecurities(String pattern, Int32 atMost)
           at FacadeClient.PickSecurities(String pattern, Int32 atMost)

        Inner Exception:
        The underlying connection was closed: The connection was closed unexpectedly.
           at System.Net.HttpWebRequest.GetResponse()
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)

    I am in control of creating the instance of the service, using a customized service host factory. I know I can set up trace listeners and check the logs, but it's a lot of hassle, so I would rather handle the failure explicitly on the server at the time it happens. How can I intercept that exception programmatically and return an appropriate fault message?
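
    For illustration (not from the question; the class names are made up): the standard server-side hook for logging unhandled exceptions and turning them into faults is IErrorHandler, attached from a service behavior or from the custom service host. Whether a failure that happens while serializing the reply actually reaches this handler should be verified for the binding in use; this is only a sketch.

        using System;
        using System.Collections.ObjectModel;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;

        // Logs the server-side exception and replaces the broken reply with a plain fault.
        public class SerializationErrorHandler : IErrorHandler
        {
            public bool HandleError(Exception error)
            {
                Console.Error.WriteLine(error);   // log to a real sink in practice
                return true;                      // true = the exception has been handled
            }

            public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
            {
                FaultException faultException =
                    new FaultException("The service failed to serialize the response.");
                MessageFault messageFault = faultException.CreateMessageFault();
                fault = Message.CreateMessage(version, messageFault, faultException.Action);
            }
        }

        // Behavior that can be added to the host (e.g. in the custom ServiceHostFactory).
        public class ErrorHandlerBehavior : IServiceBehavior
        {
            public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase,
                Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { }

            public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
            {
                foreach (ChannelDispatcherBase dispatcherBase in serviceHostBase.ChannelDispatchers)
                {
                    ChannelDispatcher dispatcher = dispatcherBase as ChannelDispatcher;
                    if (dispatcher != null)
                        dispatcher.ErrorHandlers.Add(new SerializationErrorHandler());
                }
            }

            public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { }
        }

    The behavior would typically be registered with serviceHost.Description.Behaviors.Add(new ErrorHandlerBehavior()) before the host is opened.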

    Read the article

  • How do I duplicate a Box2d simulation, mid-simulation?

    - by Whyte
    I want to serialize the state mid-game, send it over the network to an identical computer (same CPU, same OS, same binary), load it there, and have the two games run in tandem doing the exact same simulation, without one of them drifting off and going haywire. In short: I want pop-in, pop-out networking support on my HIGHLY physics-intensive game, where sending object coordinates every few seconds is impossible, due to having thousands of objects and many clients. I tried this with Box2D, and saving an object's location/velocity/etc. wasn't enough... there's internal state that's not accessible through any public methods. My current workaround is to force EVERY client to save its entire world state and reload it from scratch whenever a new player connects... but this is obviously bad practice, because it hangs the game for everyone whenever someone new connects. However, it works, with zero desynchronization. So, does anyone know of any other techniques that can help me? Or should I just kiss my project goodbye?

    Read the article

  • How to write a recursive function that returns a linked list of nodes, when given a binary tree of nodes?

    - by Jian Lin
    I was once asked this in an interview: how do you write a recursive function that returns a linked list of nodes when given a binary tree of nodes (i.e., flattening the data)? For some reason, I tend to need more than 3 to 5 minutes to solve any recursive problem; 15 to 20 minutes is more like it. How could we attack this problem? Is there a very systematic way of reaching a solution, so that such problems can be solved in a 3-to-5-minute time frame?
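
    For illustration (not from the question, which names no language; this is a C# sketch with made-up type names): an in-order walk that appends every visited node to a linked list.

        using System.Collections.Generic;

        public class TreeNode<T>
        {
            public T Value;
            public TreeNode<T> Left;
            public TreeNode<T> Right;
        }

        public static class TreeFlattener
        {
            public static LinkedList<TreeNode<T>> Flatten<T>(TreeNode<T> root)
            {
                LinkedList<TreeNode<T>> result = new LinkedList<TreeNode<T>>();
                Append(root, result);
                return result;
            }

            private static void Append<T>(TreeNode<T> node, LinkedList<TreeNode<T>> list)
            {
                if (node == null) return;   // base case: an empty subtree contributes nothing
                Append(node.Left, list);    // flatten the left subtree
                list.AddLast(node);         // then the node itself (in-order)
                Append(node.Right, list);   // then the right subtree
            }
        }

    The systematic part is the usual recursion recipe: state the base case (the empty tree), assume the function already works for the two smaller subtrees, and combine the results.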

    Read the article

  • Is there any sense in performing a binary AND with a number where all bits are set to 1?

    - by n535
    Greetings, everybody. I have seen examples of such operations so many times that I began to think I am getting something wrong with binary arithmetic. Is there any sense in performing the following?

        byte value = someAnotherByteValue & 0xFF;

    I don't really understand this, because it does not change anything anyway. Thanks for the help. P.S. I tried to search for information both elsewhere and here, but unsuccessfully.
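
    For illustration (not from the question, whose language isn't stated; this is a C# sketch): with an unsigned byte the mask is indeed a no-op, but it matters as soon as the value is signed or wider than 8 bits.

        using System;

        class MaskDemo
        {
            static void Main()
            {
                byte b = 0xFF;
                Console.WriteLine(b & 0xFF);      // 255 - a no-op for an unsigned byte

                sbyte signed = -1;                // bit pattern 0xFF
                Console.WriteLine(signed & 0xFF); // 255, not -1: the mask strips the sign extension

                int wide = 0x1234;
                Console.WriteLine(wide & 0xFF);   // 52 (0x34): keeps only the low-order byte
            }
        }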

    Read the article

  • Open source C program with few requirements and at least 2 MB binary size

    - by max
    Hi, I'm developing an operating system and I need a test program (a function of any kind) to test certain internal features. I cannot find any appropriate program to do this job; probably one of you knows one. The program should be open source, written in C with very little user-library usage (only file I/O, pthreads, stdio, stdlib preferred), and must have a binary size of at least 2 MB. Thanks for any suggestions.

    Read the article

  • DataContract Known Types - passing array

    - by Erup
    I'm having problems passing a generic List through a WCF operation; in this case, it is a List of int. Example 4 is described here in MSDN. Note that the MSDN sample includes this comment:

        // This will serialize and deserialize successfully because the generic List is equivalent to int[], which was added to known types.

    Here is the DataContract:

        [DataContract]
        [KnownType(typeof(int[]))]
        [KnownType(typeof(object[]))]
        public class AccountData
        {
            [DataMember]
            public object accNumber1;
            [DataMember]
            public object accNumber2;
            [DataMember]
            public object accNumber3;
            [DataMember]
            public object accNumber4;
        }

    On the client side, I'm calling the operation like this:

        DataTransfer.Service.AccountData data = new DataTransfer.Service.AccountData()
        {
            accNumber1 = 100,
            accNumber2 = new int[100],
            accNumber3 = new List<int>(),
            accNumber4 = new ArrayList()
        };
        cService.AddAccounts(data);

    Also, here are the decorations on the generated AccountData object (WCF proxy):

        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")]
        [System.Runtime.Serialization.DataContractAttribute(Name="AccountData", Namespace="http://schemas.datacontract.org/2004/07/DataTransfer.Service")]
        [System.SerializableAttribute()]
        [System.Runtime.Serialization.KnownTypeAttribute(typeof(DataTransfer.Client.CustomerServiceReference.PurchaseOrder))]
        [System.Runtime.Serialization.KnownTypeAttribute(typeof(DataTransfer.Client.CustomerServiceReference.Customer))]
        [System.Runtime.Serialization.KnownTypeAttribute(typeof(int[]))]
        [System.Runtime.Serialization.KnownTypeAttribute(typeof(object[]))]
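
    For illustration (not from the question; the harness class is made up and reuses the service-side AccountData contract above): exercising the data contract with DataContractSerializer outside WCF often surfaces known-type problems as a direct SerializationException instead of the proxy's vague closed-connection error.

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.Xml;

        class KnownTypeCheck
        {
            static void Main()
            {
                AccountData data = new AccountData
                {
                    accNumber1 = 100,
                    accNumber2 = new int[100],
                    accNumber3 = new List<int>(),   // same data contract as int[]
                    accNumber4 = new ArrayList()    // covered by the object[] known type
                };

                DataContractSerializer serializer = new DataContractSerializer(typeof(AccountData));
                using (XmlWriter writer = XmlWriter.Create(Console.Out))
                {
                    serializer.WriteObject(writer, data);   // throws here if a known type is missing
                }
            }
        }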

    Read the article
