Search Results

Search found 5377 results on 216 pages for 'explicit cast operator'.

Page 183/216

  • Fastest way to convert a list of doubles to a unique list of integers?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto-increment column as a key, it uses a column of decimals to preserve order (presumably so it's not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table into something more sane, I need to figure out how to rekey it without breaking everything.

    What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast back to doubles for rekeying). For example, the input {1.00, 2.00, 2.50, 2.60, 3.00} would give the output {1, 2, 3, 4, 5}. Since this is a database, I also need to be able to update the rows nicely:

        UPDATE table SET `key`='3.00' WHERE `key`='2.50';

    Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size(). This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified.

    I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure out anything that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used in the table.
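
    For what it's worth, here is a minimal C++ sketch of the rank-assignment idea, generating the UPDATE statements offline. The table and column names, the two-pass negative-key trick, and the assumption that all existing keys are positive are mine, purely for illustration:

        #include <cstddef>
        #include <iostream>
        #include <vector>

        int main() {
            // Current decimal keys, assumed fetched in key order and assumed positive.
            std::vector<double> oldKeys = {1.00, 2.00, 2.50, 2.60, 3.00};

            // The new key for position i is its 1-based rank.  To dodge the collision
            // described above (setting 2.50 to 3.00 while 3.00 is still in use), move
            // each row to a temporary negative key first, then to its final key.
            for (std::size_t i = 0; i < oldKeys.size(); ++i)
                std::cout << "UPDATE table SET `key`='" << -double(i + 1)
                          << "' WHERE `key`='" << oldKeys[i] << "';\n";
            for (std::size_t i = 0; i < oldKeys.size(); ++i)
                std::cout << "UPDATE table SET `key`='" << i + 1
                          << "' WHERE `key`='" << -double(i + 1) << "';\n";
        }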

    Read the article

  • Trouble with custom WPF Panel-derived class

    - by chaiguy
    I'm trying to write a custom Panel class for WPF by overriding MeasureOverride and ArrangeOverride, but while it's mostly working, I'm experiencing one strange problem I can't explain. After I call Arrange on my child items in ArrangeOverride, having figured out what their sizes should be, they aren't sizing to the size I give them; they appear to be sizing to the size passed to their Measure method inside MeasureOverride. Am I missing something in how this system is supposed to work? My understanding is that calling Measure simply causes the child to evaluate its DesiredSize based on the supplied availableSize, and shouldn't affect its actual final size.

    Here is my full code. (The panel, by the way, is intended to arrange children in the most space-efficient manner, giving less space to rows that don't need it and splitting the remaining space up evenly among the rest. It currently only supports vertical orientation, but I plan on adding horizontal once I get it working properly.)

        protected override Size MeasureOverride( Size availableSize )
        {
            foreach ( UIElement child in Children )
                child.Measure( availableSize );
            return availableSize;
        }

        protected override System.Windows.Size ArrangeOverride( System.Windows.Size finalSize )
        {
            double extraSpace = 0.0;
            var sortedChildren = Children.Cast<UIElement>().OrderBy<UIElement, double>(
                new Func<UIElement, double>( delegate( UIElement child ) { return child.DesiredSize.Height; } ) );

            double remainingSpace = finalSize.Height;
            double normalSpace = 0.0;
            int remainingChildren = Children.Count;

            foreach ( UIElement child in sortedChildren )
            {
                normalSpace = remainingSpace / remainingChildren;
                if ( child.DesiredSize.Height < normalSpace ) // if == there would be no point continuing, as there would be no remaining space
                    remainingSpace -= child.DesiredSize.Height;
                else
                {
                    remainingSpace = 0;
                    break;
                }
                remainingChildren--;
            }

            extraSpace = remainingSpace / Children.Count;

            double offset = 0.0;
            foreach ( UIElement child in Children )
            {
                //child.Measure( new Size( finalSize.Width, normalSpace ) );
                double value = Math.Min( child.DesiredSize.Height, normalSpace ) + extraSpace;
                child.Arrange( new Rect( 0, offset, finalSize.Width, value ) );
                offset += value;
            }

            return finalSize;
        }

    Read the article

  • Oracle ODBC x64 - getting 0 when selecting a number(9) column

    - by MatsL
    I'm having a really weird problem with a third-party web service that uses an ODBC connection to Oracle 10.2.0.3.0. I've written a .NET client that generates the same SQL as the web service so I can find out what's going on. The web service is hosted by IIS 6 running in x64 mode, so we use the Oracle x64 client; the Oracle client version is 10.2.0.1.0. I have a table that looks like this (I've removed some columns and names):

        SQL> describe tablename;
        Name                                      Null?    Type
        ----------------------------------------- -------- ----------------------------
        KOD                                                VARCHAR2(30)
        ORDNING                                            NUMBER(5)
        AVGIFT                                             NUMBER(9)

    I then issue the following statement in SQL*Plus:

        SELECT KOD as kod, AVGIFT as riskPoang
        FROM tablename
        WHERE upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    And I get the following result:

        KOD                            RISKPOANG
        ------------------------------ ----------
        Hög risk                               55
        Mellan risk                            35
        Låg risk                               15
        Mycket låg risk                         5

    But when I execute the exact same SQL through ODBC, using the same DSN on the same machine, I get this:

        Values
        Kod: Hög risk           RiskPoäng: 0
        Kod: Mellan risk        RiskPoäng: 0
        Kod: Låg risk           RiskPoäng: 0
        Kod: Mycket låg risk    RiskPoäng: 0

    If I first cast the number to varchar and then back again to number, like this:

        SELECT KOD as kod, to_number(to_char(AVGIFT, '99'), '9999999999') as riskPoang
        FROM tablename
        WHERE upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP'
        ORDER BY ORDNING

    I get the correct result:

        Values
        Kod: Hög risk           RiskPoäng: 55
        Kod: Mellan risk        RiskPoäng: 35
        Kod: Låg risk           RiskPoäng: 15
        Kod: Mycket låg risk    RiskPoäng: 5

    Has anyone else experienced this? It's incredibly annoying, and I'm completely stuck and not sure what to do next. We have a third-party web service that uses these tables, so I must get the original SQL statement to work since I can't modify its code. Any pointers are greatly appreciated! :-)

    Best regards,
    Mats

    Read the article

  • R: manipulating data.frames containing strings and booleans.

    - by Mike Dewar
    Hello. I have a data.frame in R; it's called p. Each element in the data.frame is either TRUE or FALSE. My variable p has, say, m rows and n columns, and for every row there is strictly one TRUE element. It also has column names, which are strings. What I would like to do is the following:

    1. For every TRUE I see in a row of p, replace it with the name of the corresponding column.
    2. Then collapse the data.frame, which now contains FALSEs and column names, into a single vector, which will have m elements.

    I would like to do this in an R-thonic manner, so as to continue my enlightenment in R and contribute to a world without for-loops. I can do step 1 using the following for loop:

        for (i in seq(length(colnames(p)))) {
            p[p[,i]==TRUE,i]=colnames(p)[i]
        }

    but there's no beauty here, and I have totally subscribed to the for-loops-in-R-are-probably-wrong mentality. Maybe "wrong" is too strong, but they're certainly not great. I don't really know how to do step 2. I kind of hoped that the sum of a string and FALSE would return the string, but it doesn't. I kind of hoped I could use an OR operator of some kind, but I can't quite figure that out (Python responds to False or 'bob' with 'bob'). Hence, yet again, I appeal to you beautiful Rstats people for help!

    Read the article

  • Need feedback on two member functions of a Table class in C++

    - by George
        int Table::addPlayer(Player player, int position)
        {
            deque<Player>::iterator it = playerList.begin() + position;
            deque<Player>::iterator itStart = playerList.begin() + position;

            while (*it != "(empty seat)") {
                it++;
                if (it == playerList.end()) {
                    it = playerList.begin();
                }
                if (it == itStart) {
                    cout << "Table full" << endl;
                    return -1;
                }
            }

            //TODO overload Player assignment, << operator
            *it = player;
            cout << "Player " << player << " sits at position " << it - playerList.begin() << endl;
            return it - playerList.begin();
        }

        int Table::removePlayer(Player player)
        {
            deque<Player>::iterator it = playerList.begin();

            //TODO Do I need to overload != in Player?
            while (*it != player) {
                it++;
                if (it == playerList.end()) {
                    cout << "Player " << player << " not found" << endl;
                    return -1;
                }
            }

            *it = "(empty seat)";
            cout << "Player " << player << " stands up from position " << it - playerList.begin() << endl;
            return it - playerList.begin();
        }

    I would like some feedback on these two member functions of a Table class for a Texas Hold 'Em Poker simulation. Any feedback on syntax, efficiency, or even common practices would be much appreciated.

    Read the article

  • Casting to specify unknown object type?

    - by fuzzygoat
    In the following code I have a view object that is an instance of UIScrollView. If I run the code below, I get warnings saying that "UIView might not respond to -setContentSize", etc.

        UIImage *image = [UIImage imageNamed:@"Snowy_UK.jpg"];
        imageView = [[UIImageView alloc] initWithImage:image];
        [[self view] addSubview:imageView];
        [[self view] setContentSize:[image size]];
        [[self view] setMaximumZoomScale:2.0];
        [[self view] setMinimumZoomScale: [[self view] bounds].size.width / [image size].width];

    I have checked the type of the object, and [self view] is indeed a UIScrollView. I am guessing that this is just the compiler making a bad guess as to the type, and that the solution is simply to cast the object to the correct type manually. Am I getting this right?

        UIScrollView *scrollView = (UIScrollView *)[self view];
        UIImage *image = [UIImage imageNamed:@"Snowy_UK.jpg"];
        imageView = [[UIImageView alloc] initWithImage:image];
        [[self view] addSubview:imageView];
        [scrollView setContentSize:[image size]];
        [scrollView setMaximumZoomScale:2.0];
        [scrollView setMinimumZoomScale: [scrollView bounds].size.width / [image size].width];

    Cheers,
    Gary

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all,

    I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled upon this quote, which refers to the C/C++ language in general, not just the Mac platform:

        "With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert the result back to char."

    So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
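
    As a side note, the promotion rule the quote describes can be observed directly in C++; a tiny, compiler-agnostic sketch (whether the round-trip actually costs anything in practice is exactly the question being asked):

        #include <iostream>
        #include <type_traits>

        int main() {
            char a = 100, b = 27;

            // In C and C++, operands narrower than int are promoted to int before
            // arithmetic, so the type of (a + b) is int, not char.
            auto sum = a + b;
            static_assert(std::is_same<decltype(sum), int>::value,
                          "char + char yields an int after integer promotion");

            // Storing the result back into a char is a narrowing conversion.
            char c = static_cast<char>(sum);

            std::cout << "sizeof(char)=" << sizeof(char)
                      << " sizeof(int)=" << sizeof(int)
                      << " result=" << static_cast<int>(c) << '\n';
        }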

    Read the article

  • Scala and Java BigDecimal

    - by geejay
    I want to switch from Java to a scripting language for the math-based modules in my app. This is due to the readability and functional limitations of mathy Java. For example, in Java I have this:

        BigDecimal x = new BigDecimal("1.1");
        BigDecimal y = new BigDecimal("1.1");
        BigDecimal z = x.multiply(y.exp(new BigDecimal("2")));

    As you can see, without BigDecimal operator overloading, simple formulas get complicated real quick. With doubles this looks fine, but I need the precision. I was hoping that in Scala I could do this:

        var x = 1.1;
        var y = 0.1;
        print(x + y);

    and by default I would get decimal-like behaviour; alas, Scala doesn't use decimal calculation by default. Then I do this in Scala:

        var x = BigDecimal(1.1);
        var y = BigDecimal(0.1);
        println(x + y);

    And I still get an imprecise result. Is there something I am not doing right in Scala? Maybe I should use Groovy to maximise readability (it uses decimals by default)?

    Read the article

  • C++ Beginner Delete Question

    - by Pooch
    Hi all, this is my first year learning C++, so bear with me. I am attempting to dynamically allocate memory on the heap and then delete the allocated memory. Below is the code that is giving me a hard time:

        // String.cpp
        #include "String.h"

        String::String() {}

        String::String(char* source)
        {
            this->Size = this->GetSize(source);
            this->CharArray = new char[this->Size + 1];
            int i = 0;
            for (; i < this->Size; i++)
                this->CharArray[i] = source[i];
            this->CharArray[i] = '\0';
        }

        int String::GetSize(const char * source)
        {
            int i = 0;
            for (; source[i] != '\0'; i++);
            return i;
        }

        String::~String()
        {
            delete[] this->CharArray;
        }

    Here is the error I get at runtime when the destructor tries to delete the CharArray:

        0xC0000005: Access violation reading location 0xccccccc0.

    And here is the last call on the stack:

        msvcr100d.dll!operator delete(void * pUserData)  Line 52 + 0x3 bytes  C++

    I am fairly certain the error exists within this piece of code, but I will provide any other information needed. Oh yeah, I'm using VS 2010 on XP. Thanks for any and all help!
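
    For context, 0xCCCCCCCC is the fill pattern MSVC debug builds use for uninitialized memory, which suggests the default constructor is leaving CharArray uninitialized before the destructor deletes it. A minimal sketch of a guarded version (my own illustration, not the poster's code):

        // Illustrative only: a default constructor that leaves CharArray pointing
        // nowhere means the destructor's delete[] runs on garbage.  Initializing
        // the members in every constructor avoids that.
        class String {
        public:
            String() : Size(0), CharArray(0) {}      // deleting a null pointer is a no-op

            explicit String(const char* source)
                : Size(GetSize(source)), CharArray(new char[Size + 1]) {
                for (int i = 0; i < Size; ++i)
                    CharArray[i] = source[i];
                CharArray[Size] = '\0';
            }

            ~String() { delete[] CharArray; }

        private:
            static int GetSize(const char* source) {
                int i = 0;
                while (source[i] != '\0') ++i;
                return i;
            }

            int   Size;        // declared before CharArray so it is initialized first
            char* CharArray;
        };

    If String objects are ever copied, the compiler-generated copy would also lead to a double delete, so the usual rule-of-three members (copy constructor and copy assignment) would be the next thing to add.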

    Read the article

  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the time consumer, as it can happen billions of times per run. So I thought I'd learn enough C++ to write it and then incorporate it into Python using Cython or ctypes. I've done some searching on Stack Overflow and learned a lot. This code compiles, but leaves the pointers dangling. I tried 'this_node = new Node(...)'; it didn't seem to work, and I'm not at all sure how I'd delete all the references, as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way?

        class Node
        {
            public:
                char *cargo;
                int depth;
                Node *left;
                Node *right;
        };

        Node make_tree(int depth)
        {
            depth--;
            if(depth <= 0)
            {
                Node tthis_node("value", depth, NULL, NULL);
                return tthis_node;
            }
            else
            {
                Node this_node("operator", depth, &make_tree(depth), &make_tree(depth));
                return this_node;
            }
        };
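
    A rough sketch of the heap-allocating version hinted at with 'this_node = new Node(...)', using a constructor plus a destructor that releases the children so only the root ever has to be deleted; this is my own illustration of the common C++ pattern, not the poster's code:

        #include <cstddef>

        class Node {
        public:
            Node(const char* cargo_, int depth_, Node* left_, Node* right_)
                : cargo(cargo_), depth(depth_), left(left_), right(right_) {}

            // Deleting the root recursively deletes the whole tree,
            // so there is only one delete to remember.
            ~Node() { delete left; delete right; }   // delete on NULL is a no-op

            const char* cargo;
            int depth;
            Node* left;
            Node* right;
        };

        Node* make_tree(int depth) {
            depth--;
            if (depth <= 0)
                return new Node("value", depth, NULL, NULL);
            return new Node("operator", depth, make_tree(depth), make_tree(depth));
        }

        int main() {
            Node* root = make_tree(5);
            delete root;   // frees every node via the recursive destructor
        }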

    Read the article

  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord test case that mimics my problem. I have a People table with one attribute being a date. I create a view over that table, adding one column which is just that date plus 20 minutes:

        #!/usr/bin/env ruby
        %w|pp rubygems active_record irb active_support date|.each {|lib| require lib}

        ActiveRecord::Base.establish_connection(
          :adapter => "sqlite3",
          :database => "test.db"
        )

        ActiveRecord::Schema.define do
          create_table :people, :force => true do |t|
            t.column :name, :string
            t.column :born_at, :datetime
          end

          execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;"
        end

        class Person < ActiveRecord::Base
          validates_presence_of :name
        end

        class Clown < ActiveRecord::Base
        end

        Person.create(:name => "John", :born_at => DateTime.now)

        pp Person.all.first.born_at.class
        pp Clown.all.first.born_at.class
        pp Clown.all.first.twenty_after_born_at.class

    The problem is that the output is

        Time
        Time
        String

    when I expect the new datetime attribute of the view to also be a Time or DateTime in the Ruby world. Any ideas? I also tried:

        create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p;

    with the same result.

    Read the article

  • InvalidCastException when getting Text from a Label referenced by a dynamically built String, Fix?

    - by Chris
    .NET Version: 3.5

    OK, I receive an error (System.InvalidCastException was unhandled. Message="Unable to cast object of type 'System.Windows.Forms.Control[]' to type 'System.Windows.Forms.Label'.") when trying to get Text from a Label referenced by a dynamically built string. Here's my situation: I have an array of 250 labels named l1 - l250. What I want to do is loop through them using this while statement:

        int c = 1;
        while (c < 251)
        {
            string k = "l" + c.ToString(); // dynamic name of Control (Label)
            object ka = Controls.Find(k, true);
            string ct = ((Label)ka).Text;  // << Error occurs here
            build = build + ct;
            c++;
        }

    and get the text value of each to build a string named build. I don't get any build errors, just this while debugging. While debugging I can go down and view my local variables. Looking through these, I can view the contents of object ka; it does contain the correct Text value of the correct Label I want to "access". I just don't understand how to get there. The text value is listed under "[0]", which is the only subcategory for "ka".

    Read the article

  • invasive vs non-invasive ref-counted pointers in C++

    - by anon
    For the past few years, I've generally accepted that if I am going to use ref-counted smart pointers, invasive smart pointers are the way to go. However, I'm starting to like non-invasive smart pointers due to the following:

    1. I only use smart pointers (so no Foo* lying around, only Ptr<Foo>).
    2. I'm starting to build custom allocators for each class (so Foo would overload operator new).

    Now, if Foo has a list of all Ptr<Foo> (as it easily can with non-invasive smart pointers), then I can avoid memory fragmentation issues, since class Foo can move the objects around (and just update the corresponding Ptr<Foo>). The only reason Foo moving objects around is easier with non-invasive smart pointers than with invasive smart pointers is this:

    1. With non-invasive smart pointers, there is only one pointer that points to each Foo.
    2. With invasive smart pointers, I have no idea how many objects point to each Foo.

    Now, the only cost of non-invasive smart pointers... is the double indirection. [Perhaps this screws up the caches.] Does anyone have a good study of how expensive this extra layer of indirection is?
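
    For illustration only, a bare-bones sketch of the non-invasive layout under discussion: each handle points at a small shared node, which in turn points at the object, so relocating the object means updating a single pointer. Every name here is my own invention:

        #include <cstddef>

        // Bare-bones non-invasive ref-counted handle: Ptr<T> -> Node -> T.
        // Because every Ptr<T> goes through the single Node, the object can be
        // moved by updating Node::object alone.
        template <typename T>
        class Ptr {
            struct Node {
                T*          object;
                std::size_t refs;
            };
            Node* node;

        public:
            explicit Ptr(T* p) : node(new Node{p, 1}) {}
            Ptr(const Ptr& other) : node(other.node) { ++node->refs; }
            Ptr& operator=(const Ptr&) = delete;   // kept minimal for the sketch
            ~Ptr() {
                if (--node->refs == 0) { delete node->object; delete node; }
            }

            // First indirection hits the node, second hits the object.
            T* operator->() const { return node->object; }

            // Hook a compacting allocator could call after relocating the object.
            void relocate(T* newAddress) { node->object = newAddress; }
        };

        struct Foo { int value = 42; };

        int main() {
            Ptr<Foo> a(new Foo);
            Ptr<Foo> b(a);                 // both handles share one node
            return a->value - b->value;    // 0
        }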

    Read the article

  • MUD (game) design concept question about timed events.

    - by mudder
    I'm trying my hand at building a MUD (multiplayer interactive-fiction game). I'm in the design/conceptualizing phase and I've run into a problem that I can't come up with a solution for. I'm hoping some more experienced programmers will have some advice. Here's the problem as best I can explain it.

    When the player decides to perform an action, he sends a command to the server. The server then processes the command, determines whether or not the action can be performed, and either does it or responds with a reason why it could not be done. One reason that an action might fail is that the player is busy doing something else. For instance, if a player is mid-fight and has just swung a massive broadsword, it might take 3 seconds before he can repeat this action. If the player attempts to swing again too soon, the game will respond indicating that he must wait x seconds before doing that. Now, this I can probably design without much trouble.

    The problem I'm having is how I can replicate this behavior for AI creatures. All of the events that are being performed by the server ON ITS OWN, i.e. not as an immediate reaction to something a player has done, will have to be time-sensitive. Some evil monster has cast a spell on you but must wait 30 seconds before doing it again... I think I'll probably be adding all these events to some kind of event queue, but how can I make that event queue time-sensitive?
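
    One common shape for such a queue is a priority queue ordered by the time each event becomes due, drained from the server's main loop. A rough C++ sketch with entirely made-up names:

        #include <chrono>
        #include <functional>
        #include <queue>
        #include <vector>

        using Clock = std::chrono::steady_clock;

        struct TimedEvent {
            Clock::time_point due;                 // when the event may fire
            std::function<void()> action;          // what to do when it fires
            bool operator>(const TimedEvent& o) const { return due > o.due; }
        };

        class EventQueue {
            std::priority_queue<TimedEvent, std::vector<TimedEvent>,
                                std::greater<TimedEvent>> events;  // earliest due time on top
        public:
            void schedule(Clock::duration delay, std::function<void()> action) {
                events.push({Clock::now() + delay, std::move(action)});
            }

            // Called from the server's main loop: fire everything whose time has come.
            void pump() {
                while (!events.empty() && events.top().due <= Clock::now()) {
                    TimedEvent e = events.top();
                    events.pop();
                    e.action();
                }
            }
        };

    When the monster casts its spell, its next allowed cast would be scheduled 30 seconds out, and pump() fires whatever has come due on each server tick.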

    Read the article

  • Creating a Function in SQL Server that takes a Phone Number as a parameter and returns a Random Number

    - by Emer
    Hi guys, I am hoping someone can help me here, as Google is not being as forthcoming as I would have liked. I am relatively new to SQL Server, so this is the first function I have set myself to write.

    The outline of the function is that it takes a phone number varchar(15) as a parameter and checks that this number is a proper number, i.e. it is 8 digits long and contains only numbers. The main character I am trying to avoid is '+'. Good number = 12345678; bad number = +12345678. Once the number is checked, I would like to produce a random number for each phone number that is passed in.

    I have looked at substrings, the LIKE operator, RAND(), LEFT() and RIGHT() in order to search through the number and then produce a random number. I understand that RAND() will produce the same random number unless alterations are done to it, but right now it is about actually getting some working code. Any hints on this would be great, or even pointers towards some more documentation. I have read books online and they haven't helped me; maybe I am not looking in the right places. Here is a snippet of code I was working on with RAND():

        declare @Phone Varchar (15)
        declare @Counter Varchar (1)
        declare @NewNumber Varchar(15)

        set @Phone = '12345678'
        set @Counter = len(@Phone)

        while @Counter > 0
        begin
            select case
                when @Phone like '%[0-9]%' then cast(rand()*100000000 as int)
                else 'Bad Number'
            end
            set @counter = @counter - 1
        end
        return

    Thanks for the help in advance,
    Emer

    Read the article

  • In a class with no virtual methods or superclass, is it safe to assume (address of first member variable) == (address of the object)?

    - by Jeremy Friesner
    Hi all, I made a private API that assumes that the address of the first member object in the class will be the same as the class's this pointer... that way the member object can trivially derive a pointer to the object that it is a member of, without having to store a pointer explicitly.

    Given that I am willing to make sure that the container class won't inherit from any superclass, won't have any virtual methods, and that the member object that does this trick will be the first member object declared, will that assumption hold valid for any C++ compiler, or do I need to use the offsetof() operator (or similar) to guarantee correctness? To put it another way, the code below does what I expect under g++, but will it work everywhere?

        class MyContainer
        {
        public:
            MyContainer() {}
            ~MyContainer() {}   // non-virtual dtor

        private:
            class MyContained
            {
            public:
                MyContained() {}
                ~MyContained() {}

                // Given that the only place Contained objects are declared is m_contained
                // (below), will this work as expected on any C++ compiler?
                MyContainer * GetPointerToMyContainer()
                {
                    return reinterpret_cast<MyContainer *>(this);
                }
            };

            MyContained m_contained;  // MUST BE FIRST MEMBER ITEM DECLARED IN MyContainer
            int m_foo;    // other member items may be declared after m_contained
            float m_bar;
        };
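
    One way to make that assumption fail loudly rather than silently, in the spirit of the offsetof() suggestion, is a compile-time check (illustrative only, not part of the original API): a standard-layout object can be reinterpret_cast to a pointer to its first non-static data member, which is exactly what the trick relies on, so asserting standard-layout-ness guards the prerequisites.

        #include <type_traits>

        class MyContainer {
        public:
            MyContainer() : m_foo(0), m_bar(0.0f) {}
        private:
            class MyContained {};      // stand-in for the real helper class
            MyContained m_contained;   // must stay the first declared member
            int   m_foo;
            float m_bar;
        };

        // Breaks the build if someone later adds a virtual function, a base
        // class, or otherwise destroys the standard-layout property.
        static_assert(std::is_standard_layout<MyContainer>::value,
                      "MyContainer must remain standard-layout");

        int main() {}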

    Read the article

  • Objective-C subclass and base class casting

    - by ryanjm.mp
    I'm going to create a base class that implements very similar functions for all of its subclasses. This was answered in a different question. But what I need to know now is if/how I can make various functions (in the base class) return the subclass object, both for a given function and for a function call inside it. (I'm working with Core Data, by the way.)

    As a function within the base class (this is from the class that is going to become my subclass):

        +(Structure *)fetchStructureByID:(NSNumber *)structureID inContext:(NSManagedObjectContext *)managedObjectContext {...}

    And as a function call within a given function:

        Structure *newStructure = [Structure fetchStructureByID:[currentDictionary objectForKey:@"myId"]
                                                      inContext:managedObjectContext];

    Structure is one of my subclasses, so I need to rewrite both of these so that they are "generic" and can be applied to other subclasses (whoever is calling the function). How do I do that?

    Update: I just realized that in the second part there are actually two issues. You can't change [Structure fetch...] to [self fetch...] because it is a class method, not an instance method. How do I get around that too?

    Read the article

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that performs syntactical analysis on inputs of many statements. Now I have a particular problem. For a given input the compiler already produces the right output, showing which statement is used: a constant, an operator, or a jmp instruction with the label it jumps to. Now I have to handle if statements: when an if statement comes in, the command before the else must be emitted first; then, because the command after the else must not run when the if condition holds, a jmp to the end must be emitted; only after that jmp is the second command emitted. I'll show it in an example, maybe you will understand what I mean:

        Input         adr.  Output
        if(x==0)       10   if(x==0)
          Wait 5       20   WAIT 5
        else           30   JMP 50
          Wait 1       40   WAIT 1
        end            50   END

    I have an idea: maybe I can do it with a special if rule like

        IF exp jmp_stmt_end stmt_seq END

    so that when the if statement is given in the input, the compiler recognizes the end of the statement and, like the jmp_stmt in my compiler (you can download the files from http://bitbucket.org/matrix/changed-tiny), only jumps to the end. I hope you understand my problem. Thanks.

    Read the article

  • Concise C# code for gathering several properties with a non-null value into a collection?

    - by stakx
    A fairly basic problem for a change. Given a class such as this:

        public class X
        {
            public T A;
            public T B;
            public T C;
            ...
            // (other fields, properties, and methods are not of interest here)
        }

    I am looking for a concise way to code a method that will return all A, B, C, ... that are not null in an enumerable collection. (Assume that declaring these fields as an array is not an option.)

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            // ?
        }

    The obvious implementation of this method would be:

        public IEnumerable<T> GetAllNonNullAs(this X x)
        {
            var resultSet = new List<T>();
            if (x.A != null) resultSet.Add(x.A);
            if (x.B != null) resultSet.Add(x.B);
            if (x.C != null) resultSet.Add(x.C);
            ...
            return resultSet;
        }

    What's bothering me here in particular is that the code looks verbose and repetitive, and that I don't know the initial List capacity in advance. It's my hope that there is a more clever way, probably something involving the ?? operator? Any ideas?

    Read the article

  • Function with parameter type that has a copy-constructor with non-const ref chosen?

    - by Johannes Schaub - litb
    Some time ago I was confused by the following behavior of some code when I wanted to write an is_callable<F, Args...> trait. Overload resolution won't call functions accepting arguments by non-const ref, right? Why doesn't it reject f(Test) in the following, given that the constructor wants a Test&? I expected it to take f(int)!

        struct Test {
          Test() { }

          // I want Test not to be copyable from rvalues!
          Test(Test&) { }

          // But it's convertible to int
          operator int() { return 0; }
        };

        void f(int) { }
        void f(Test) { }

        struct WorksFine { };
        struct Slurper { Slurper(WorksFine&) { } };
        struct Eater { Eater(WorksFine) { } };

        void g(Slurper) { }
        void g(Eater) { }  // chooses this, as expected

        int main() {
          // Error, why?
          f(Test());

          // But this works, why?
          g(WorksFine());
        }

    The error message is:

        m.cpp: In function 'int main()':
        m.cpp:33:11: error: no matching function for call to 'Test::Test(Test)'
        m.cpp:5:3: note: candidates are: Test::Test(Test&)
        m.cpp:2:3: note:                 Test::Test()
        m.cpp:33:11: error:   initializing argument 1 of 'void f(Test)'

    Can you please explain why one works but the other doesn't?

    Read the article

  • DRYing out implementation of ICloneable in several classes

    - by Sarah Vessels
    I have several different classes that I want to be cloneable: GenericRow, GenericRows, ParticularRow, and ParticularRows. There is the following class hierarchy: GenericRow is the parent of ParticularRow, and GenericRows is the parent of ParticularRows. Each class implements ICloneable because I want to be able to create deep copies of instances of each. I find myself writing the exact same code for Clone() in each class:

        object ICloneable.Clone()
        {
            object clone;
            using (var stream = new MemoryStream())
            {
                var formatter = new BinaryFormatter();
                // Serialize this object
                formatter.Serialize(stream, this);
                stream.Position = 0;
                // Deserialize to another object
                clone = formatter.Deserialize(stream);
            }
            return clone;
        }

    I then provide a convenience wrapper method, for example in GenericRows:

        public GenericRows Clone()
        {
            return (GenericRows)((ICloneable)this).Clone();
        }

    I am fine with the convenience wrapper methods looking about the same in each class because it's very little code, and it does differ from class to class by return type, cast, etc. However, ICloneable.Clone() is identical in all four classes. Can I abstract this somehow so it is only defined in one place? My concern was that if I made some utility class/object extension method, it would not correctly make a deep copy of the particular instance I want copied. Is this a good idea anyway?

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing, I've compared using a for loop with increment variables against a list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. Each method takes the same amount of time. Am I doing something wrong, or is there a better approach?

    Here is a greatly simplified and shortened data structure:

        list = [
            {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'},
            {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'},
            {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
            {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'},
            {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'}
        ]

    Here is the for loop version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True
        # keep a separate count for the subset that also have key2 True
        key1 = key2 = 0
        for dictionary in list:
            if dictionary["key1"]:
                key1 += 1
                if dictionary["key2"]:
                    key2 += 1
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above:

        Counts: key1: 3, subset key2: 2

    Here is the other, perhaps more Pythonic, version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True
        # keep a separate count for the subset that also have key2 True
        from operator import itemgetter

        KEY1 = 0
        KEY2 = 1
        getentries = itemgetter("key1", "key2")
        entries = map(getentries, list)
        key1 = len([x for x in entries if x[KEY1]])
        key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above (same as before):

        Counts: key1: 3, subset key2: 2

    I'm a tiny bit surprised these take the same amount of time, and I wonder if there's something faster. I'm sure I'm overlooking something simple.

    One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist, I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.

    Read the article

  • Reading Unicode files line by line C++

    - by Roger Nelson
    What is the correct way to read Unicode files line by line in C++?

    I am trying to read a file saved as Unicode (LE) by Windows Notepad. Suppose the file contains simply the characters A and B on separate lines. Reading the file byte by byte, I see the following byte sequence (hex):

        FE FF 41 00 0D 00 0A 00 42 00 0D 00 0A 00

    So: 2-byte BOM, 2-byte 'A', 2-byte CR, 2-byte LF, 2-byte 'B', 2-byte CR, 2-byte LF. I tried reading the text file using the following code:

        std::wifstream file("test.txt");
        file.seekg(2);    // skip BOM
        std::wstring A_line;
        std::wstring B_line;
        getline(file, A_line);  // I get "A"
        getline(file, B_line);  // I get "\0B"

    I get the same results using operator>> instead of getline:

        file >> A_line;
        file >> B_line;

    It appears that the single-byte CR character is being consumed only as a single byte, or that CR NULL LF is being consumed but not the high-byte NULL. I would expect wifstream in text mode to read the 2-byte CR and 2-byte LF. What am I doing wrong? It does not seem right that one should have to read a text file byte by byte in binary mode just to parse the newlines.
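
    One commonly suggested approach, sketched below, is to open the stream in binary mode (so the byte-level CR/LF translation cannot mangle the 16-bit code units) and imbue a UTF-16 conversion facet. This uses C++11's codecvt_utf16, which was later deprecated in C++17, and is an illustration rather than a claim about what the poster's toolchain supports:

        #include <codecvt>
        #include <fstream>
        #include <iostream>
        #include <locale>
        #include <string>

        int main() {
            // Open as a byte stream so the CRT does no CR/LF translation of its own.
            std::wifstream file("test.txt", std::ios::binary);

            // Convert UTF-16 (little-endian, BOM consumed) to wchar_t.
            file.imbue(std::locale(file.getloc(),
                new std::codecvt_utf16<wchar_t, 0x10ffff,
                    std::codecvt_mode(std::consume_header | std::little_endian)>));

            std::wstring line;
            while (std::getline(file, line)) {
                if (!line.empty() && line.back() == L'\r')  // strip the CR left over from binary mode
                    line.pop_back();
                std::wcout << line << L'\n';
            }
        }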

    Read the article

  • How to produce 64 bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left-shift operator works only for 32 bits. Is that true?

        #include <iostream>
        #include <stdlib.h>

        using namespace std;

        int main(void)
        {
            long long currentTrafficTypeValueDec;
            int input;

            cout << "Enter input:" << endl;
            cin >> input;

            currentTrafficTypeValueDec = 1 << (input - 1);
            cout << currentTrafficTypeValueDec << endl;
            cout << (1 << (input - 1)) << endl;

            return 0;
        }

    The output of the program:

        Enter input:
        30
        536870912
        536870912

        Enter input:
        62
        536870912
        536870912

    How could I produce 64-bit masks?
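
    The shift is performed in the type of the promoted left operand, and a plain 1 is an int, so a 64-bit mask needs a 64-bit 1. A small sketch (my own example):

        #include <cstdint>
        #include <iostream>

        int main() {
            int input = 62;

            // 1 is an int, so 1 << 61 overflows; make the left operand 64 bits wide.
            std::uint64_t mask = 1ULL << (input - 1);

            std::cout << mask << '\n';   // 2305843009213693952
        }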

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code.

    As time passed, the compilers matured. They became very smart, in that their flow analysis allows them to make better decisions about what values to hold in registers than you could possibly make. The register keyword became unimportant.

    FORTRAN can be faster than C for some sorts of operations, due to alias issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code.

    What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question.

    [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested, and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions and allow the optimizer to generate even faster code?

    [Edit] Offset related link
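
    As one concrete illustration of the aliasing point above (my own example, not from the question): when the compiler cannot prove that two pointers refer to different objects, hoisting the accumulator into a local lets it live in a register for the whole loop.

        #include <cstddef>

        // The compiler must assume *total may alias data[i], so it reloads and
        // stores *total on every iteration.
        void sum_slow(const int* data, std::size_t n, int* total) {
            for (std::size_t i = 0; i < n; ++i)
                *total += data[i];
        }

        // Accumulating into a local tells the compiler nothing else can touch the
        // running sum, so it can stay in a register until the final store.
        void sum_fast(const int* data, std::size_t n, int* total) {
            int sum = 0;
            for (std::size_t i = 0; i < n; ++i)
                sum += data[i];
            *total += sum;
        }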

    Read the article
