Search Results

Search found 4214 results on 169 pages for 'binary serializer'.

  • difference between DataContract attribute and Serializable attribute in .net

    - by samar
    I am trying to create a deep clone of an object using the following method:

        public static T DeepClone<T>(this T target)
        {
            using (MemoryStream stream = new MemoryStream())
            {
                BinaryFormatter formatter = new BinaryFormatter();
                formatter.Serialize(stream, target);
                stream.Position = 0;
                return (T)formatter.Deserialize(stream);
            }
        }

    This method requires a serializable object, i.e. an instance of a class that carries the "Serializable" attribute. I have a class that carries the "DataContract" attribute instead, and the method does not work with it. I understand that "DataContract" is also tied to serialization, but apparently in a different way than "Serializable". Can anyone please explain the difference between the two? Also, please let me know if there is a single attribute that does the work of both "DataContract" and "Serializable", or a different way of creating a deep clone. Please help!
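
    One hedged sketch of an alternative (the extension-method name here is made up, not from the question): the DataContractSerializer can deep-clone a type that carries [DataContract] without requiring [Serializable], assuming every member to copy is marked [DataMember]:

        // Sketch: deep clone via DataContractSerializer instead of BinaryFormatter.
        // Assumes T carries [DataContract] and its members carry [DataMember].
        using System.IO;
        using System.Runtime.Serialization;

        public static class CloneExtensions
        {
            public static T DeepCloneDataContract<T>(this T target)
            {
                using (MemoryStream stream = new MemoryStream())
                {
                    var serializer = new DataContractSerializer(typeof(T));
                    serializer.WriteObject(stream, target);  // write the object graph
                    stream.Position = 0;                     // rewind before reading back
                    return (T)serializer.ReadObject(stream); // materialize the copy
                }
            }
        }

    Note that, unlike BinaryFormatter, the DataContractSerializer only round-trips members that are part of the data contract, so private state outside [DataMember] members is not copied.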

  • Advantages/Disadvantages of different implementations for Comparing Objects using .NET

    - by Kevin Crowell
    This question involves 2 different implementations of essentially the same code. First, using a delegate to create a Comparison method that can be used as a parameter when sorting a collection of objects:

        class Foo
        {
            public static Comparison<Foo> BarComparison = delegate(Foo foo1, Foo foo2)
            {
                return foo1.Bar.CompareTo(foo2.Bar);
            };
        }

    I use the above when I want to have a way of sorting a collection of Foo objects that differs from what my CompareTo function offers. For example:

        List<Foo> fooList = new List<Foo>();
        fooList.Sort(Foo.BarComparison);

    Second, using IComparer:

        public class BarComparer : IComparer<Foo>
        {
            public int Compare(Foo foo1, Foo foo2)
            {
                return foo1.Bar.CompareTo(foo2.Bar);
            }
        }

    I use the above when I want to do a binary search for a Foo object in a collection of Foo objects. For example:

        BarComparer comparer = new BarComparer();
        List<Foo> fooList = new List<Foo>();
        Foo foo = new Foo();
        int index = fooList.BinarySearch(foo, comparer);

    My questions are: What are the advantages and disadvantages of each of these implementations? What are some more ways to take advantage of each of these implementations? Is there a way to combine these implementations in such a way that I do not need to duplicate the code? Can I achieve both a binary search and an alternative collection sort using only 1 of these implementations?
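
    One way to avoid duplicating the comparison logic, sketched here as a guess rather than anything from the thread: wrap the Comparison<Foo> delegate in a small IComparer<Foo> adapter (the ComparisonComparer name below is made up), so a single comparison definition serves both Sort and BinarySearch:

        // Sketch: adapt a Comparison<T> delegate to the IComparer<T> interface.
        using System;
        using System.Collections.Generic;

        public class ComparisonComparer<T> : IComparer<T>
        {
            private readonly Comparison<T> comparison;

            public ComparisonComparer(Comparison<T> comparison)
            {
                this.comparison = comparison;
            }

            public int Compare(T x, T y)
            {
                return comparison(x, y); // the delegate does the actual comparison
            }
        }

    With that adapter, fooList.Sort(Foo.BarComparison) and fooList.BinarySearch(foo, new ComparisonComparer<Foo>(Foo.BarComparison)) share one comparison definition. (On .NET 4.5 and later, the built-in Comparer<T>.Create factory does the same job.)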

  • DeSerialization doesn't work though i Implement GetObjectData method and Constructor

    - by Punit Singhi
    Hi, I have a static generic dictionary in a class. As static members cannot be serialized, I have implemented the ISerializable interface and the GetObjectData method to serialize it, and a constructor which accepts SerializationInfo and StreamingContext to deserialize the dictionary. Now when I try to serialize and deserialize, the dictionary always comes back with 1 entry (though I added 2 entries). Please find the pseudo-code below:

        [Serializable]
        public class MyClass : ISerializable
        {
            internal static Dictionary<long, string> dict = new Dictionary<long, string>();

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("static.dic", MyClass.dict, typeof(Dictionary<long, string>));
            }

            public MyClass(SerializationInfo info, StreamingContext context)
            {
                MyClass.dict = (Dictionary<long, string>)info.GetValue("static.dic", typeof(Dictionary<long, string>));
            }

            public void Add()
            {
                dict.Add(22, "12");
            }

            public MyClass()
            {
                dict.Add(21, "11");
            }
        }

        public class Program
        {
            static MyClass myClass = new MyClass();

            public static void Main()
            {
                myClass.Add();

                FileStream fileStream = new FileStream("test.binary", FileMode.Create);
                IFormatter bf = new BinaryFormatter();
                bf.Serialize(fileStream, myClass);
                fileStream.Close();

                fileStream = new FileStream("test.binary", FileMode.Open);
                bf = new BinaryFormatter();
                myClass = (MyClass)bf.Deserialize(fileStream);
            }
        }

  • What to pass to UserType, BlobType.setPreparedStatement session parameter

    - by dlots
    http://blog.xebia.com/2009/11/09/understanding-and-writing-hibernate-user-types/

    I am attempting to define a custom serialization UserType that mimics the XStreamUserType referenced and provided here: http://code.google.com/p/aphillips/source/browse/commons-hibernate-usertype/trunk/src/main/java/com/qrmedia/commons/persistence/hibernate/usertype/XStreamableUserType.java

    My serializer outputs a byte array that should presumably be written to a Blob. I was going to do:

        public class CustomSerUserType extends DirtyCheckableUserType {
            protected SerA ser = F.g(SerA.class);

            public Class<Object> returnedClass() { return Object.class; }

            public int[] sqlTypes() { return new int[] { Types.BLOB }; }

            public Object nullSafeGet(ResultSet resultSet, String[] names, Object owner)
                    throws HibernateException, SQLException {
                if()
            }

            public void nullSafeSet(PreparedStatement preparedStatement, Object value, int index)
                    throws HibernateException, SQLException {
                BlobType.nullSafeSet(preparedStatement, ser.ser(value), index);
            }
        }

    Unfortunately, the BlobType.nullSafeSet method requires the session. So how does one define a UserType that gets access to the session from a servlet request? EDIT: There is a discussion of the issue here, and it doesn't appear there is a solution: Best way to implement a Hibernate UserType after deprecations?

  • Changing what a property is serialized as

    - by slugster
    I think I already know the answer to this, but I cannot find anything that states it definitively, hence my question; I want to make sure I am not missing a trick. Using the DataContractSerializer or the XmlSerializer, is there any way to change what a public property is serialized as? I have a property that is an Enum, and I would like it to be serialized as an int, so that its value is sent across the wire instead of a text representation of its value. Is it possible to do this using attributes, or will I have to write my own serializer? Thanks :)
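
    A common workaround, sketched here with made-up type and property names rather than anything from the question: hide the enum property from the serializer and expose an int "shadow" property in its place, so the numeric value is what goes over the wire:

        // Sketch: serialize an enum-typed property as its numeric value
        // via an int shadow property (XmlSerializer variant).
        using System.Xml.Serialization;

        public enum Status { Unknown = 0, Active = 1, Closed = 2 } // hypothetical enum

        public class Order
        {
            [XmlIgnore]
            public Status Status { get; set; }

            // Serialized in place of the enum; round-trips the numeric value.
            [XmlElement("Status")]
            public int StatusAsInt
            {
                get { return (int)Status; }
                set { Status = (Status)value; }
            }
        }

    For the DataContractSerializer, the analogous trick is [IgnoreDataMember] on the enum property and [DataMember(Name = "Status")] on the int property.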

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain-text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
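
    Since the question invites an example in any language, here is a hedged C# sketch of the client side of (2). It assumes the dtype is float64 and that sender and receiver are both little-endian; numpy's tostring() emits native byte order, so fixing a byte-order convention (for example an explicit '<f8' dtype) is part of answering (1):

        // Sketch: decode a base64-encoded numpy float64 buffer into a double[].
        using System;

        class NumpyDecodeDemo
        {
            static void Main()
            {
                // Stand-in for the payload produced by base64.b64encode(x.tostring()):
                double[] original = { 1.0, 2.5, -3.75 };
                byte[] bytes = new byte[original.Length * sizeof(double)];
                Buffer.BlockCopy(original, 0, bytes, 0, bytes.Length);
                string encoded = Convert.ToBase64String(bytes);

                // Client side: base64-decode, then reinterpret the raw bytes as float64.
                byte[] raw = Convert.FromBase64String(encoded);
                double[] values = new double[raw.Length / sizeof(double)];
                Buffer.BlockCopy(raw, 0, values, 0, raw.Length);

                Console.WriteLine("decoded {0} values; first = {1}", values.Length, values[0]);
            }
        }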

  • Why is execution of code loaded from an external file not halted by the OS?

    - by menjaraz
    I've harnessed a project released on the internet a long time ago. Here come the details, all irrelevant things being stripped off for the sake of concision and clarity. A binary file whose content is described below:

        HEX DUMP:
        55 89 E5 83 EC 08 C7 45 FC 00 00 00 00 8B 45 FC 3B 45 10 72 02 EB 19 8B 45 FC 8B 55 0C 01 C2 8B
        45 FC 03 45 08 8A 00 88 02 8D 45 FC FF 00 EB DD C6 45 FA 00 83 7D 10 01 76 6C 80 7D FA 00 74 02
        EB 64 C6 45 FA 01 C7 45 FC 00 00 00 00 8B 45 10 48 39 45 FC 72 02 EB E2 8B 45 FC 8B 4D 0C 01 C1
        8B 45 FC 03 45 0C 8D 50 01 8A 01 3A 02 73 30 8B 45 FC 03 45 0C 8A 00 88 45 FB 8B 45 FC 8B 55 0C
        01 C2 8B 45 FC 03 45 0C 40 8A 00 88 02 8B 45 FC 03 45 0C 8D 50 01 8A 45 FB 88 02 C6 45 FA 00 8D
        45 FC FF 00 EB A7 C9 C2 0C 00 90 90 90 90 90 90

    is loaded into memory and executed using the following method snippet:

        var
          MySrcArray, MyDestArray: array [1 .. 15] of Byte;
          // ...
          MyBuffer: Pointer;
          TheProc: procedure;
          SortIt: procedure(ASrc, ADest: Pointer; ASize: LongWord); stdcall;
        begin
          // Initialization of MySrcArray with random Bytes and display here ...
          // Loading of the binary file into MyBuffer using merely **GetMem** here ...
          @SortIt := MyBuffer;
          try
            SortIt(@MySrcArray, @MyDestArray, 15);
            // Display of MyDestArray (the outcome of the processing!)
          except
            // Invalid code error handling
          end;
          // Cleaning code here ...
        end;

    It works like a charm on my box. My question: how come it works without using VirtualAlloc and/or VirtualProtect?

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm, I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

    - Implement Binary File Reader
    - Implement Binary File Writer
    - Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo-code it is essentially code anyway, so the time investment is negated. So my question is this: Am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

  • how can a Win32 App plugin load its DLL in its own directory

    - by Jean-Denis Muys
    My code is a plugin for a specific application, written in C++ using Visual Studio 8. It uses two DLLs from an external provider. Unfortunately, my plugin fails to start because the DLLs are not found (I put them in the same directory as the plugin itself). When I manually move or copy the DLLs to the host application directory, the plugin loads fine. This moving was deemed unacceptably cumbersome for the end user, and I am looking for a way for my plugin to load its DLLs transparently. What can I do? Relevant details:

    - The host application's plugins are located in a directory mandated by the host application. That directory is not in the DLL search path and I don't control it.
    - The plugin is itself packaged as a subdirectory of the plugin directory, holding the plugin code itself, but also any resources associated with the plugin (eg images, configuration files…). I control what's inside that subdirectory, called a "bundle", but not where it's located.
    - The common plugin installation idiom for that application is for the end user to copy the plugin bundle to the plugin directory.
    - This plugin is a port from the Macintosh version of the plugin. On the Mac there is no issue, because each binary contains its own dynamic library search path, which I set as I needed to for my plugin binary. Setting that on the Mac simply involves a project setting in the Xcode IDE. This is why I would hope for something similar in Visual Studio, but I could not find anything relevant; Visual Studio's help was anything but, and neither was Google.

    A possible workaround would be for my code to explicitly tell Windows where to find the DLLs, but I don't know how, and in any case, since my code is not even started, it hasn't had the opportunity to do so. As a Mac developer, I realize that I may be asking for something very elementary. If such is the case, I apologize, but I have run out of hair to pull out.

  • C# async callback on disposed form

    - by Rodney Burton
    Quick question: One of the forms in my winform app (C#) makes an async call to a WCF service to get some data. If the form happens to close before the callback happens, it crashes with an error about accessing a disposed object. What's the correct way to check/handle this situation? The error happens on the Invoke call to the method to update my form, but I can't drill down to the inner exception because it says the code has been optimized. The code:

        public void RequestUserPhoto(int userID)
        {
            WCF.Service.BeginGetUserPhoto(userID, new AsyncCallback(GetUserPhotoCB), userID);
        }

        public void GetUserPhotoCB(IAsyncResult result)
        {
            var photo = WCF.Service.EndGetUserPhoto(result);
            int userID = (int)result.AsyncState;
            UpdateUserPhoto(userID, photo);
        }

        public delegate void UpdateUserPhotoDelegate(int userID, Binary photo);

        public void UpdateUserPhoto(int userID, Binary photo)
        {
            if (InvokeRequired)
            {
                var d = new UpdateUserPhotoDelegate(UpdateUserPhoto);
                Invoke(d, new object[] { userID, photo });
            }
            else
            {
                if (photo != null)
                {
                    var ms = new MemoryStream(photo.ToArray());
                    var bmp = new System.Drawing.Bitmap(ms);
                    if (userID == theForm.AuthUserID)
                    {
                        pbMyPhoto.BackgroundImage = bmp;
                    }
                    else
                    {
                        pbPhoto.BackgroundImage = bmp;
                    }
                }
            }
        }
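
    A hedged sketch of one common guard (not necessarily the thread's accepted fix): check the form's lifetime flags before marshalling to the UI thread, and tolerate the unavoidable race by catching ObjectDisposedException:

        // Sketch: guard the cross-thread UI update against the form closing first.
        public void UpdateUserPhoto(int userID, Binary photo)
        {
            if (IsDisposed || Disposing || !IsHandleCreated)
            {
                return; // the form is gone (or going); drop the late callback
            }
            try
            {
                if (InvokeRequired)
                {
                    Invoke(new UpdateUserPhotoDelegate(UpdateUserPhoto),
                           new object[] { userID, photo });
                    return;
                }
                // ... apply the photo to the controls as before ...
            }
            catch (ObjectDisposedException)
            {
                // The form was disposed between the check and the Invoke; ignore.
            }
        }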

  • C++ destructors causing crashes

    - by larsonator
    OK, so I have a somewhat intricate program that simulates the uni system of students, units, and students enrolling in units. Students are stored in a binary search tree; units are stored in a standard list. Each Student has a list of Unit pointers, to store which units he/she is enrolled in, and each Unit has a list of Student pointers, to store the students enrolled in that unit. The unit collection (storing units in a list) is made a static variable where the main function is, as is the binary search tree of students. When it's finally time to close the program, I call the destructors of each, but at some stage during the destructors on the unit side I get:

        Unhandled exception at 0x002e4200 in ClassAllocation.exe: 0xC0000005: Access violation reading location 0x00000000.

    UnitCollection destructor:

        UnitCol::~UnitCol()
        {
            list<Unit>::iterator itr;
            for (itr = UnitCollection.begin(); itr != UnitCollection.end();)
            {
                UnitCollection.pop_front();
                itr = UnitCollection.begin();
            }
        }

    Unit destructor:

        Unit::~Unit() { }

    Now I have the same sort of problem on the student side of things. BST destructors:

        void StudentCol::Destructor(const BTreeNode* r)
        {
            if (r != 0)
            {
                Destructor(r->getLChild());
                Destructor(r->getRChild());
                delete r;
            }
        }

        StudentCol::~StudentCol()
        {
            Destructor(root);
        }

    Student destructor:

        Student::~Student() { }

    Any help would be greatly appreciated.

  • UTF-8 HTML and CSS files with BOM (and how to remove the BOM with Python)

    - by Cameron
    First, some background: I'm developing a web application using Python. All of my (text) files are currently stored in UTF-8 with the BOM. This includes all my HTML templates and CSS files. These resources are stored as binary data (BOM and all) in my DB. When I retrieve the templates from the DB, I decode them using template.decode('utf-8'). When the HTML arrives in the browser, the BOM is present at the beginning of the HTTP response body. This generates a very interesting error in Chrome:

        Extra <html> encountered. Migrating attributes back to the original <html> element and ignoring the tag.

    Chrome seems to generate an <html> tag automatically when it sees the BOM and mistakes it for content, making the real <html> tag an error. So, using Python, what is the best way to remove the BOM from my UTF-8 encoded templates (if it exists -- I can't guarantee this in the future)? For other text-based files like CSS, will major browsers correctly interpret (or ignore) the BOM? They are being sent as plain binary data without .decode('utf-8'). Note: I am using Python 2.5. Thanks!

  • Family Tree: myheritage.com

    - by Nitesh Panchal
    Hello, the other day I accidentally visited the site myheritage.com. I was just wondering how they must have built it. Can anybody tell me what their database design might be, and, if possible, an algorithm we could use to generate such a tree? Generating a simple binary tree is very easy using recursion. But if you look at the site (if you have time, please make an account on it and add a few nodes to get a feel for it), when you add a son to a father, the mother is automatically added (if you don't add her explicitly). The mother's family tree is also generated side by side, and many other fancy things happen. In a simple binary tree we have a root node and then many nodes below it; thus we cannot show a wife and a husband in the tree and then draw a line from the wife and husband to a child. In your spare time, can anybody discuss what its database design might be and the recursive algorithm we could follow to generate it? I hope I am not asking too much from you :).
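
    One hedged way to frame the data model (a sketch with invented names, not how myheritage.com actually works): a family tree is really a graph in which every person points to at most two parents and any number of children and spouses, so rendering it is a graph traversal rather than a binary tree recursion:

        // Sketch: a person node in a family graph (not a binary tree).
        using System.Collections.Generic;

        public class Person
        {
            public string Name;
            public Person Mother;   // at most two known parents...
            public Person Father;
            public List<Person> Spouses = new List<Person>();
            public List<Person> Children = new List<Person>(); // ...any number of children

            // Mirrors the site's behavior: adding a child with an unknown other
            // parent creates a placeholder. (Assumes this node is the father,
            // purely for brevity of the sketch.)
            public Person AddChild(string name, Person otherParent)
            {
                otherParent = otherParent ?? new Person { Name = "Unknown" };
                var child = new Person { Name = name, Father = this, Mother = otherParent };
                Children.Add(child);
                otherParent.Children.Add(child);
                return child;
            }
        }

    In a relational design this maps naturally to a persons table with nullable mother_id and father_id foreign keys back into the same table, which makes the recursive generation a matter of following those keys in both directions.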

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table with a words dictionary in my language (Latvian):

        CREATE TABLE words (
          value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    And let's say it has 3 words inside:

        INSERT INTO words (value) VALUES ('teja');
        INSERT INTO words (value) VALUES ('vejš');
        INSERT INTO words (value) VALUES ('feja');

    What I want to do is find all words that are exactly 4 characters long, where the second character is 'e' and the third character is 'j'. For me it feels that the correct query would be:

        SELECT * FROM words WHERE value LIKE '_ej_';

    But the problem with this query is that it returns not 2 entries ('teja', 'vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY addition possible for LIKE:

        SELECT * FROM words WHERE value LIKE BINARY '_ej_';

    But this also does not return 2 entries ('teja', 'vejš') but only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII chars? So the question: what MySQL query would return my exact two words ('teja', 'vejš')? Thank you in advance.

  • Division by zero: Undefined Behavior or Implementation Defined in C and/or C++?

    - by SiegeX
    Regarding division by zero, the standards say:

        C99 6.5.5p5 - The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.

        C++03 5.6.4 - The binary / operator yields the quotient, and the binary % operator yields the remainder from the division of the first expression by the second. If the second operand of / or % is zero the behavior is undefined.

    If we were to take the above paragraphs at face value, the answer is clearly Undefined Behavior for both languages. However, if we look further down in the C99 standard, we see the following paragraph, which appears to be contradictory(1):

        C99 7.12p4 - The macro INFINITY expands to a constant expression of type float representing positive or unsigned infinity, if available;

    Do the standards have some sort of golden rule where Undefined Behavior cannot be superseded by a (potentially) contradictory statement? Barring that, I don't think it's unreasonable to conclude that if your implementation defines the INFINITY macro, division by zero is defined to be such. However, if your implementation does not define such a macro, the behavior is Undefined. I'm curious what the consensus is on this matter for each of the two languages. Would the answer change if we were talking about integer division (int i = 1 / 0) versus floating point division (float i = 1.0 / 0.0)?

    Note (1): The C++03 standard talks about the <cmath> library, which includes the INFINITY macro.

  • Working with EnumSet class in GWT

    - by zenmonkey
    I am having trouble using EnumSet on the client side. I get this runtime error message:

        java.util.EnumSet.EnumSetImpl is not default instantiable (it must have a zero-argument constructor or no constructors at all) and has no custom serializer.

    Is this a known issue? Here is what I am doing (basically a hello-world app). Service:

        String echo(EnumSet<Names> name) throws IllegalArgumentException;

    Client:

        echoServ.echo(EnumSet.of(Names.JOHN), new AsyncCallback<String>() {
            .......
        });

    Shared enum class:

        enum Names { JOHN, NUMAN, OBAMA }

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird". The Linux/MySQL-5.1.22 system shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    While the FreeBSD system shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or at the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
          `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When I do "INSERT INTO test VALUES ('æøå')" on the master from a latin1 terminal, this becomes the following on the slave, when I select it from a latin1-based terminal:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    On a UTF-8-based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that it is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and do "INSERT INTO test VALUES ('æøå')" again on the master with a utf-8 terminal, on the slave with utf-8 I get:

        +------------+
        | test       |
        +------------+
        | æøà       |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it in latin1 while they still exist. What can I do to ensure that the replication reads latin1, treats it as latin1 and writes it on the slave as latin1? Cheers, Nik

  • Python: Create nested dictionary from list of paths

    - by sberry2A
    I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples, with more complicated paths than Obj.part):

        [ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ]

    where Obj goes from 1 to 1000 and part from 0 to 2000. These "keys" all have a dictionary of specs associated with them which acts as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN. For example, Obj4.part500 might have this spec:

        {'size':32, 'offset':128, 'type':'int'}

    which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128. So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this:

        data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} },
                 'ObjK' : {'part1':{spec}, 'partN':{spec} } }

    To do this I am currently doing two things. First, I am using a dotdict class to be able to use dot notation for dictionary get/set. That class looks like this:

        class dotdict(dict):
            def __getattr__(self, attr):
                return self.get(attr, None)
            __setattr__ = dict.__setitem__
            __delattr__ = dict.__delitem__

    The method for creating the nested "dotdict"s looks like this:

        def addPath(self, spec, parts, base):
            if len(parts) > 1:
                item = base.setdefault(parts[0], dotdict())
                self.addPath(spec, parts[1:], item)
            else:
                item = base.setdefault(parts[0], spec)
            return base

    Then I just do something like:

        self.lookup = dotdict()
        for path, spec in paths:
            self.addPath(spec, path.split("."), self.lookup)

    So, in the end, self.lookup.Obj4.part500 points to the spec. Is there a better (more pythonic) way to do this?

  • Java Regex for matching hexadecimal numbers in a file

    - by Ranman
    So I'm reading in a file (like java program < trace.dat) which looks something like this: 58 68 58 68 40 c 40 48 FA (if I'm lucky), but more often it has several whitespace characters before and after each line. These are hexadecimal addresses that I'm parsing, and I basically need to make sure that I can read each line using a scanner, buffered reader, whatever, and then convert the hexadecimal to an integer. This is what I have so far:

        Scanner scanner = new Scanner(System.in);
        int address;
        String binary;
        Pattern pattern = Pattern.compile("^\\s*[0-9A-Fa-f]*\\s*$", Pattern.CASE_INSENSITIVE);
        while (scanner.hasNextLine()) {
            address = Integer.parseInt(scanner.next(pattern), 16);
            binary = Integer.toBinaryString(address);
            // Do lots of other stuff here
        }
        // DO MORE STUFF HERE...

    I've traced all my errors to parsing input and such, so I guess I'm just trying to figure out what regex or approach I need to get this working the way I want.

  • How to serialize only some properties in .NET?

    - by Beta033
    This is for a web project, so I have several classes that inherit from Web.UI. I only want to serialize very particular properties (basically, only local properties). I'm aware of the XmlIgnore attribute that can be placed on a property to make the serializer skip it, but this won't work in my context, since that would require modifying a bunch of stuff that I really don't want to modify (and probably can't). So how do I tell the XML serializer to ignore everything except X and Y, or tell it to serialize just X and Y? I could just create my own XML in a StringBuilder or something, and if that's the only way, so be it. However, I'm looking for a method that will employ the built-in XML stuff. Thanks
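
    One built-in mechanism worth sketching (hedged; the MyControl type and the property names below are made up): XmlAttributeOverrides lets you mark properties as ignored when constructing the XmlSerializer, without touching the classes that declare them:

        // Sketch: suppress unwanted properties at serializer-construction time
        // via XmlAttributeOverrides, leaving the class declarations unmodified.
        using System.Xml.Serialization;

        var overrides = new XmlAttributeOverrides();
        var ignore = new XmlAttributes { XmlIgnore = true };

        // Hypothetical property names; register each override against the
        // type that declares the property.
        overrides.Add(typeof(MyControl), "ViewStateData", ignore);
        overrides.Add(typeof(MyControl), "Theme", ignore);

        var serializer = new XmlSerializer(typeof(MyControl), overrides);

    Whether this is practical for Web.UI-derived classes depends on the usual XmlSerializer constraints (public parameterless constructor, public read/write properties), so treat it as a starting point rather than a guaranteed fit.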

  • Changing default compiler in Linux, using SCons

    - by ereOn
    On my Linux platform, I have several versions of gcc. Under /usr/bin I have:

        gcc34
        gcc44
        gcc

    Here is some output:

        $ gcc --version
        gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48)
        $ gcc44 --version
        gcc44 (GCC) 4.4.0 20090514 (Red Hat 4.4.0-6)

    I need to use the 4.4 version of gcc; however, the default seems to be the 4.1 one. Is there a way to replace /usr/bin/gcc and make gcc44 the default compiler without using a symlink to /usr/bin/gcc44? The reason why I can't use a symlink is that my code will have to be shipped in an RPM package built with mock. mock creates a minimal Linux installation from scratch and just installs the specified dependencies before compiling my code in it. I cannot customize this "minimal installation". Ideally, the perfect solution would be to install an official RPM package that replaces gcc with gcc44 as the default compiler. Is there such a package? Is this even possible/good?

    Additional information: I have to use SCons (a make alternative) and it doesn't let me specify the binary to use for gcc. I will also accept any answer that tells me how to specify the gcc binary in my SConstruct file.

  • Concatenating a string and byte array into unmanaged memory

    - by Scott Chamberlain
    This is a followup to my last question. I now have a byte[] of values for my bitmap image. Eventually I will be passing to the print spooler a string of the format:

        String.Format("GW{0},{1},{2},{3},", X, Y, stride, _Bitmap.Height) + my binary data

    I am using the SendBytesToPrinter command from here. Here is my code so far to send it to the printer:

        public static bool SendStringPlusByteBlockToPrinter(string szPrinterName, string szString, byte[] bytes)
        {
            IntPtr pBytes;
            Int32 dwCount;

            // How many characters are in the string?
            dwCount = szString.Length;

            // Assume that the printer is expecting ANSI text, and then convert
            // the string to ANSI text.
            pBytes = Marshal.StringToCoTaskMemAnsi(szString);
            pBytes = Marshal.ReAllocCoTaskMem(pBytes, szString.Length + bytes.Length);
            Marshal.Copy(bytes, 0, SOMETHING GOES HERE, bytes.Length); // this is the problem line

            // Send the converted ANSI string + the concatenated bytes to the printer.
            SendBytesToPrinter(szPrinterName, pBytes, dwCount);
            Marshal.FreeCoTaskMem(pBytes);
            return true;
        }

    My issue is that I do not know how to make my data get appended onto the end of the string. Any help would be greatly appreciated, and if I am doing this totally wrong I am fine with going an entirely different way (for example, somehow getting the binary data concatenated onto the string before the move to unmanaged space). P.S. As a second question: will ReAllocCoTaskMem move the data that is sitting in it before the call to the new location?
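
    A hedged sketch of the missing pieces, assuming a single-byte ANSI encoding (so character count equals byte count): the destination pointer is the base pointer advanced past the string, and the byte count handed to the spooler has to cover both parts. As for the P.S., ReAllocCoTaskMem has realloc semantics: the existing contents are preserved up to the smaller of the old and new sizes, though the block may move, which is why the returned pointer must be used:

        // Sketch: copy the byte block immediately after the ANSI string bytes.
        IntPtr dest = new IntPtr(pBytes.ToInt64() + szString.Length);
        Marshal.Copy(bytes, 0, dest, bytes.Length);

        // The count must cover the string AND the appended byte block:
        dwCount = szString.Length + bytes.Length;
        SendBytesToPrinter(szPrinterName, pBytes, dwCount);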

  • not able to run c/cpp execs in eclipse cdt

    - by user1658323
    I installed Eclipse and then CDT on an Ubuntu system recently and was trying to make my first runnable C/C++ project. I installed g++ also, and then created the first executable C++ 'Hello World' project; some files are created. Then some issues:

    1) Even though Build Automatically is selected, I have to go to the project and do a Build Project to build it manually, and I have to do this every time I make a change.

    2) After building manually, there are some new folders created with Binaries and Debug files, and I can see g++ commands being executed in the console. The project binary is output both to the Debug and Binaries folders, but I am not able to run these through the green play button or any other way in Eclipse. Even Run Configurations is not showing any option for C/C++ projects. I can go to a terminal and run the binary myself through ./, but I want to be able to run and debug this through Eclipse. Please help in fixing this problem, as I really love Eclipse and have some C/C++ assignments coming soon.

    Console info on doing a manual project build:

        **** Build of configuration Debug for project qwe ****
        make all
        Building file: ../src/qwe.cpp
        Invoking: GCC C++ Compiler
        g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/qwe.d" -MT"src/qwe.d" -o "src/qwe.o" "../src/qwe.cpp"
        Finished building: ../src/qwe.cpp
        Building target: qwe
        Invoking: GCC C++ Linker
        g++ -o "qwe" ./src/qwe.o
        Finished building target: qwe
        Build Finished
