Search Results

Search found 770 results on 31 pages for 'counting'.

Page 9 of 31

  • Why implement DB connection pointer object as a reference counting pointer? (C++)

    - by DVK
    At our company one of the core C++ classes (Database connection pointer) is implemented as a reference-counting pointer. To be clear, the objects are NOT DB connections themselves, but pointers to a DB connection object. The library is very old, and nobody who designed it is around anymore. So far, neither I nor any of the C++ experts in the company that I asked have come up with a good reason why this particular design was chosen. Any ideas? It is introducing some problems (partially due to the awful reference-pointer implementation used), and I'm trying to understand whether this design actually has some deep underlying reason. The usage pattern these days seems to be that the DB connection pointer object is returned by a DB connection manager class, and it's somewhat unclear whether DB connection pointers were designed to be usable independently of the DB connection manager.
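    For illustration only, here is a minimal sketch of the pattern being described, built on std::shared_ptr rather than the company's in-house reference-counting pointer; DbConnection and ConnectionManager are hypothetical names. The usual motivation for such a design is that every copy of the handle shares one underlying connection, which is closed only when the last handle goes away:

    #include <iostream>
    #include <memory>

    class DbConnection {
    public:
        DbConnection()  { std::cout << "connection opened\n"; }
        ~DbConnection() { std::cout << "connection closed\n"; }
        void query(const char* sql) { std::cout << "running: " << sql << "\n"; }
    };

    class ConnectionManager {
    public:
        // Every caller gets a handle to the same underlying connection;
        // it is closed only when the last outstanding handle is destroyed.
        std::shared_ptr<DbConnection> get() {
            if (auto existing = cached_.lock()) return existing;
            auto fresh = std::make_shared<DbConnection>();
            cached_ = fresh;
            return fresh;
        }
    private:
        std::weak_ptr<DbConnection> cached_;
    };

    int main() {
        ConnectionManager mgr;
        auto a = mgr.get();
        auto b = mgr.get();      // shares the connection opened for 'a'
        b->query("SELECT 1");
    }                            // connection closes here, after both handles are gone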

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing I've compared using a for loop and increment variables versus list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using each method. Am I doing something wrong or is there a better approach? Here is a greatly simplified and shortened data structure:

    list = [
        {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'},
        {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'},
        {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
        {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'},
        {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'}
    ]

    Here is the for loop version:

    #!/usr/bin/env python
    # Python 2.6
    # count the entries where key1 is True
    # keep a separate count for the subset that also have key2 True
    key1 = key2 = 0
    for dictionary in list:
        if dictionary["key1"]:
            key1 += 1
            if dictionary["key2"]:
                key2 += 1
    print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above: Counts: key1: 3, subset key2: 2

    Here is the other, perhaps more Pythonic, version:

    #!/usr/bin/env python
    # Python 2.6
    # count the entries where key1 is True
    # keep a separate count for the subset that also have key2 True
    from operator import itemgetter
    KEY1 = 0
    KEY2 = 1
    getentries = itemgetter("key1", "key2")
    entries = map(getentries, list)
    key1 = len([x for x in entries if x[KEY1]])
    key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
    print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above (same as before): Counts: key1: 3, subset key2: 2

    I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.

    Read the article

  • Server 2008 R2 boot is at 2 hours and counting. What now?

    - by Jesse
    This morning, we rebooted our Server 2008 R2 box. No problem, came right back up. Then we shut it down and let it install Windows updates. While it was off, we added some RAM. Then we turned it back on. The system came right back up to the "press ctrl-alt-delete" screen, so far, so good. I logged in. The system got as far as "Applying Group Policy" -- then spent almost an hour applying drive mappings. Finally finished that, and has now spent 30 minutes waiting for the Event Notification Service. I still haven't been able to log in. Remote desktop service doesn't appear to be running yet. I tried viewing the event log from another machine. I see that the box is writing to the Security log, but there are no events in System or Application in the last 45 minutes. Digging through the System log of events from 45 minutes ago, I see a bunch of timeouts:

    A timeout (30000 milliseconds) was reached while waiting for a transaction response from the ShellHWDetection service. [lots of these]
    A timeout (30000 milliseconds) was reached while waiting for a transaction response from the wuauserv service.
    A timeout (30000 milliseconds) was reached while waiting for a transaction response from the SessionEnv service.
    A timeout (30000 milliseconds) was reached while waiting for a transaction response from the Schedule service.
    A timeout (30000 milliseconds) was reached while waiting for a transaction response from the CertPropSvc service.

    What can I do? Should I try shutting it down remotely, or will that do more damage?

    Read the article

  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the method presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java (section 12, countTrees() Solution, Java):

    /** For the key values 1...numKeys, how many structurally unique
        binary search trees are possible that store those keys?
        Strategy: consider that each value could be the root.
        Recursively find the size of the left and right subtrees. */
    public static int countTrees(int numKeys) {
        if (numKeys <= 1) {
            return(1);
        }
        else {
            // there will be one value at the root, with whatever remains
            // on the left and right each forming their own subtrees.
            // Iterate through all the values that could be the root...
            int sum = 0;
            int left, right, root;
            for (root = 1; root <= numKeys; root++) {
                left = countTrees(root - 1);
                right = countTrees(numKeys - root);
                // number of possible trees with this root == left*right
                sum += left * right;
            }
            return(sum);
        }
    }

    I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
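    A clarifying note (not part of the original question): the value being computed is the n-th Catalan number, and the naive recursion above does work roughly proportional to the answer itself, so its running time grows exponentially (the Catalan numbers grow on the order of 4^n / n^1.5), not as n!. Memoizing the recurrence brings it down to O(n^2). A minimal sketch of that idea, translated to C++ for illustration:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Same recurrence as countTrees(), but each subproblem is computed once:
    // O(n^2) time, O(n) space. count[n] is the n-th Catalan number.
    std::uint64_t countTrees(int numKeys) {
        std::vector<std::uint64_t> count(numKeys + 1, 0);
        count[0] = 1;                                     // the empty tree
        for (int n = 1; n <= numKeys; ++n)
            for (int root = 1; root <= n; ++root)
                count[n] += count[root - 1] * count[n - root];
        return count[numKeys];
    }

    int main() {
        for (int n = 1; n <= 10; ++n)
            std::cout << n << " keys -> " << countTrees(n) << " trees\n";
        // 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796
    }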

    Read the article

  • Are memory barriers necessary for atomic reference counting shared immutable data?

    - by Dietrich Epp
    I have some immutable data structures that I would like to manage using reference counts, sharing them across threads on an SMP system. Here's what the release code looks like:

    void avocado_release(struct avocado *p)
    {
        if (atomic_dec(p->refcount) == 0) {
            free(p->pit);
            free(p->juicy_innards);
            free(p);
        }
    }

    Does atomic_dec need a memory barrier in it? If so, what kind of memory barrier? Additional notes: the application must run on PowerPC and x86, so any processor-specific information is welcome. I already know about the GCC atomic builtins. As for immutability, the refcount is the only field that changes over the lifetime of the object.
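    As a point of comparison only, here is a sketch of a commonly used recipe for this pattern expressed with C++11 <atomic> (the question itself is about the GCC builtins, so treat this as an illustration of the ordering, not the original code): decrement with release ordering, then take an acquire fence before tearing the object down.

    #include <atomic>
    #include <cstdlib>

    struct avocado {
        std::atomic<int> refcount;
        void *pit;
        void *juicy_innards;
    };

    void avocado_release(avocado *p) {
        // Release: writes made through this reference happen-before the
        // decrement, so no thread can see the object freed half-written.
        // fetch_sub returns the previous value, hence the comparison with 1.
        if (p->refcount.fetch_sub(1, std::memory_order_release) == 1) {
            // Acquire: synchronize with the release-decrements performed by
            // the other threads before we read and free the payload.
            std::atomic_thread_fence(std::memory_order_acquire);
            free(p->pit);
            free(p->juicy_innards);
            free(p);
        }
    }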

    Read the article

  • jQuery counting elements by class: what is the best way to implement this?

    - by 133794m3r
    OK, what I'm trying to do is count all of the elements in the current page with the same class, and then use that count as part of the name of a form input. Basically I'm allowing users to click on an element and, by doing so, add another input for more of the same type of item. But I can't think of a way to count all of these simply with jQuery/JavaScript. I was then going to name the item something like name="whatever(total+1)". If anyone has a simple way to do this I'd be extremely grateful, as JavaScript isn't exactly my native tongue; Stack Overflow usually has a ton of great answers and a great community, so I'm hoping my query will not go unheard.

    Read the article

  • I'm counting the number of characters in a file, but I want to count the number of words that are less than a certain length

    - by user320950
    I want to do this: read the words in the file one at a time (use a string to do this) and count three things: how many single-character words are in the file, how many short (2 to 5 characters) words are in the file, and how many long (6 or more characters) words are in the file. I'm not sure how to go about reading the file into a string. I know I have to do something like this, but I don't understand the rest:

    #include <iostream>
    #include <fstream>
    using namespace std;

    ifstream infile;
    //char mystring[6];
    //char mystring[20];

    int main()
    {
        infile.open("file.txt");
        if (infile.fail()) {
            cout << " Error " << endl;
        }
        int numb_char = 0;
        char letter;
        while (infile.get(letter)) {   // reads until end of file; the original !infile.eof() loop had a stray break and counted only one character
            cout << letter;
            numb_char++;
        }
        cout << " the number of characters is :" << numb_char << endl;
        infile.close();
        return 0;
    }
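    A minimal sketch of the word-by-word approach the assignment asks for (assuming whitespace-separated words, and that punctuation attached to a word counts toward its length): read each word into a std::string with >> and bucket it by length.

    #include <fstream>
    #include <iostream>
    #include <string>
    using namespace std;

    int main() {
        ifstream infile("file.txt");
        if (infile.fail()) {
            cout << " Error " << endl;
            return 1;
        }
        int singleChar = 0, shortWords = 0, longWords = 0;
        string word;
        while (infile >> word) {              // reads one word at a time, stops cleanly at end of file
            if (word.length() == 1)      singleChar++;
            else if (word.length() <= 5) shortWords++;
            else                         longWords++;
        }
        cout << "single-character words: " << singleChar << endl;
        cout << "short (2-5) words:      " << shortWords << endl;
        cout << "long (6+) words:        " << longWords  << endl;
        return 0;
    }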

    Read the article

  • MySQL - Counting rows in preparation for greatest-n-per-group not working?

    - by John M
    Referring to SO and other sites has given me examples of how to use MySQL to create a 'greatest-n-per-group' query. My variant on this would be a query that returns the first 3 rows of each category. As the basis for this I need to sort my data into a usable sequence, and that is where my problems start. Running just the sequence query with row numbering shows that changes in category are mostly ignored. I should have 35 categories returning rows, but only 5 do so. My query:

    set @c := 0;
    set @a := 0;
    SELECT IF(@c = tdg, @a := @a + 1, @a := 1) AS rownum,
           (@c := tdg),
           julian_day, sequence, complete, year, tdg
    FROM tsd
    WHERE complete = 0
    ORDER BY tdg, year, julian_day, sequence

    Do I have a syntax mistake in this query?

    Read the article

  • dotNet Templated, Repeating, Databound ServerControl: Counting the templates OnDataBind?

    - by Campbeln
    I have a server control that wraps an underlying class which manages a number of indexes to track where it is in a dataset (i.e. RenderedRecordCount, ErroredRecordCount, NewRecordCount, etc.). I've got the server control rendering great, but I'm having an issue with OnDataBinding, as it seems to happen after CreateChildControls and before Render (both of which properly manage the iteration of the underlying indexes). While I'm somewhat familiar with the ASP.NET page lifecycle, this one seems to be beyond me at the moment. So... how do I hook into the iterative process OnDataBinding uses so I can manage the underlying indexes? Will I have to iterate over the ITemplates myself, managing the indexes as I go, or is there an easier solution? Also... I originally implemented the iteration of the underlying indexes during CreateChildControls in the belief that that was the proper place to hook in for events like OnDataBinding (thinking it was done as the controls were being .Add'd). Now it seems that this may actually be unnecessary. So I guess the secondary question is: what happens during CreateChildControls? Are the unadulterated (read: with <%-tags in place) controls added to the .Controls collection without any other processing?

    Read the article

  • How do I get an array to just count how many numbers there are instead of counting the value of each number?

    - by Chad Loos
    // This is the desired output of the program:

    *** start of 276 2D Arrays_03.cpp program ***

    Number   Count   Total
      1        3       3
      2        6       9
      3       15      24
      4        6      30
      5        9      39

    *** end of 276 2D Arrays_03.cpp program ***

    #include <iostream>
    #include <string>
    #include <iomanip>
    using namespace std;

    const int COLUMN_SIZE = 13;

    int main(void)
    {
        const int ROW_SIZE = 3;
        const int COUNT_SIZE = 5;
        void countValues(const int[][COLUMN_SIZE], const int, int[]);
        void display(const int [], const int);
        int numbers[ROW_SIZE][COLUMN_SIZE] = {{1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3, 2},
                                              {2, 1, 3, 4, 5, 3, 2, 3, 5, 3, 4, 5, 3},
                                              {3, 4, 5, 3, 2, 1, 3, 4, 5, 3, 2, 3, 5}};
        int counts[COUNT_SIZE] = {0};
        string choice;
        cout << "*** start of 276 2D Arrays_03.cpp program ***" << endl;
        cout << endl;
        countValues(numbers, ROW_SIZE, counts);
        display(counts, COUNT_SIZE);
        cout << endl;
        cout << endl;
        cout << "*** end of 276 2D Arrays_03.cpp program ***" << endl << endl;
        cin.get();
        return 0;
    } // end main()

    This is the function where I need to count each of the values. I know how to sum rows and cols, but I'm not quite sure of the code to just tally the values themselves.

    void countValues(const int numbers[][COLUMN_SIZE], const int ROW_SIZE, int counts[])

    This is what I have so far:

    {
        for (int index = 0; index < ROW_SIZE; index++)
            counts[index];
    }
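    A sketch of the tally idea (an assumption about the intended approach, not the course's reference solution): since the stored values run from 1 to COUNT_SIZE, each value minus one can serve directly as an index into counts[].

    // Tally how many times each value 1..COUNT_SIZE appears in the 2D array.
    void countValues(const int numbers[][COLUMN_SIZE], const int ROW_SIZE, int counts[])
    {
        for (int row = 0; row < ROW_SIZE; row++)
            for (int col = 0; col < COLUMN_SIZE; col++)
                counts[numbers[row][col] - 1]++;   // value 1 lands in counts[0], and so on
    }

    The cumulative "Total" column in the desired output can then be produced in display() by keeping a running sum of the counts.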

    Read the article

  • Why is it useful to count the number of bits?

    - by Scorchin
    I've seen the numerous questions about counting the number of set bits in an [insert type of input], but why is it useful? For those looking for algorithms about bit counting, look here:

    http://stackoverflow.com/questions/1517848/counting-common-bits-in-a-sequence-of-unsigned-longs
    http://stackoverflow.com/questions/472325/fastest-way-to-count-number-of-bit-transitions-in-an-unsigned-int
    http://stackoverflow.com/questions/109023/best-algorithm-to-count-the-number-of-set-bits-in-a-32-bit-integer

    Read the article

  • Algorithm for count-down timer that can add on time

    - by Person
    I'm making a general timer that has functionality to count up from 0 or count down from a certain number. I also want it to allow the user to add and subtract time. Everything is simple to implement except for the case in which the timer is counting down from some number, and the user adds or subtracts time from it. For example (m_clock is an instance of SFML's Clock):

    float Timer::GetElapsedTime()
    {
        if ( m_forward )
        {
            m_elapsedTime += m_clock.GetElapsedTime() - m_elapsedTime;
        }
        else
        {
            m_elapsedTime -= m_elapsedTime - m_startingTime + m_clock.GetElapsedTime();
        }
        return m_elapsedTime;
    }

    To be a bit more clear, imagine that the timer starts at 100 counting down. After 10 seconds, the above function would look like 100 -= 100 - 100 + 10 which equals 90. If it was called after 20 more seconds it would look like 90 -= 90 - 100 + 30 which equals 70. This works for normal counting, but if the user calls AddTime() (just m_elapsedTime += arg) then the algorithm for backwards counting fails miserably. I know that I can do this using more members and keeping track of previous times, etc. but I'm wondering whether I'm missing some implementation that is extremely obvious. I'd prefer to keep it as simple as possible in that single operation.
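    One simple arrangement that keeps AddTime() a single operation (a sketch of an alternative, not the original class) is to store an offset instead of accumulating into m_elapsedTime, and derive the displayed value from the raw clock on every call; adding or subtracting time then only touches the offset. Here 'clockSeconds' stands in for m_clock.GetElapsedTime() so the example is self-contained:

    #include <iostream>

    struct Timer {
        bool  forward = false;    // false -> counting down
        float offset  = 100.0f;   // starting value for a count-down, 0 for a count-up

        float value(float clockSeconds) const {
            return forward ? offset + clockSeconds    // counting up
                           : offset - clockSeconds;   // counting down
        }
        void addTime(float s)      { offset += s; }   // same operation either way
        void subtractTime(float s) { offset -= s; }
    };

    int main() {
        Timer t;                              // counting down from 100
        std::cout << t.value(10) << "\n";     // 90 after 10 seconds
        t.addTime(25);                        // user adds 25 seconds
        std::cout << t.value(30) << "\n";     // 95 after 30 seconds of wall time
    }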

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don't get me wrong, there is absolutely nothing wrong with an honest day's labor and in fact I highly recommend it, however I'm obliged to refer to the designation cautiously; in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn't all I did. My duties extended to some other interesting annual occurrence – stock take. Though a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete. Then also I remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn't get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry over tens of thousands of different items… well, let's just say I believe that's when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort, then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only auditors followed shortly on the heels of the counters, recounting stock levels, making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn't have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now years later this rule or guideline still holds true as we develop software (as far removed from counting stock as software development may be). For some reason it is a fundamental guideline we're simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or team) making sure the logic is sound.
The practice, however, has its obvious drawbacks. Firstly and mainly, it is resource intensive, and from what I've seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you're working on and, best of all, it is cost-effective and fast. Check it out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we'll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.

    Read the article

  • Class design, One class in two sources

    - by Pavla
    Is it possible to define methods from the same class in different "CPP" files? I have a header file "myClass.h" with:

    class myClass {
    public:
        // methods for counting
        ...
        // methods for other
        ...
    };

    I would like to define the "methods for counting" in one CPP file and the "methods for other" in another CPP file, for clarity. Both groups of methods sometimes use the same attributes. Is it possible? Thanks :).
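    For illustration, a minimal sketch of the arrangement (the file names and members here are made up): each .cpp file includes the header and defines its own subset of the member functions, and the linker combines them.

    // myClass.h
    #ifndef MYCLASS_H
    #define MYCLASS_H
    class myClass {
    public:
        int  countItems() const;   // one of the "methods for counting"
        void addItem();            // one of the "methods for other"
    private:
        int items = 0;             // attribute shared by both groups of methods
    };
    #endif

    // myClass_counting.cpp
    #include "myClass.h"
    int myClass::countItems() const { return items; }

    // myClass_other.cpp
    #include "myClass.h"
    void myClass::addItem() { items += 1; }

    Both .cpp files are compiled separately and linked into the same program.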

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks. The actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, and need to clear up and see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However they treat the symlink as essentially empty, and only count the file that it is pointing to as using any space. Which means when I use git-annex, it shows no space used in the main directories and lots of space used in the .git/annex directory. This is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easier) that is capable (through options or not) of counting a symlink as using up the space that the original file uses up? Many have options for different behaviour for hard links, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)

    Read the article

  • Count words like Microsoft Word does

    - by Maarten
    I need to count words in a string using PHP or JavaScript (preferably PHP). The problem is that the counting needs to work the same way it does in Microsoft Word, because that is where people assemble their original texts, so that is their reference frame. PHP has a word-counting function (http://php.net/manual/en/function.str-word-count.php) but as far as I know that is not 100% the same. Any pointers?

    Read the article

  • More Fun with C# Iterators and Generators

    - by James Michael Hare
    In my last post, I talked quite a bit about iterators and how they can be really powerful tools for filtering a list of items down to a subset of items.  This had both pros and cons over returning a full collection, which, in summary, were:   Pros: If traversal is only partial, does not have to visit rest of collection. If evaluation method is costly, only incurs that cost on elements visited. Adds little to no garbage collection pressure.    Cons: Very slight performance impact if you know caller will always consume all items in collection. And as we saw in the last post, that con for the cost was very, very small and only really became evident on very tight loops consuming very large lists completely.    One of the key items to note, though, is the garbage!  In the traditional (return a new collection) method, if you have a 1,000,000 element collection, and wish to transform or filter it in some way, you have to allocate space for that copy of the collection.  That is, say you have a collection of 1,000,000 items and you want to double every item in the collection.  Well, that means you have to allocate a collection to hold those 1,000,000 items to return, which is a lot especially if you are just going to use it once and toss it.   Iterators, though, don't have this problem.  Each time you visit the node, it would return the doubled value of the node (in this example) and not allocate a second collection of 1,000,000 doubled items.  Do you see the distinction?  In both cases, we're consuming 1,000,000 items.  But in one case we pass back each doubled item which is just an int (for example's sake) on the stack and in the other case, we allocate a list containing 1,000,000 items which then must be garbage collected.   So iterators in C# are pretty cool, eh?  Well, here's one more thing a C# iterator can do that a traditional "return a new collection" transformation can't!   It can return **unbounded** collections!   I know, I know, that smells a lot like an infinite loop, eh?  Yes and no.  Basically, you're relying on the caller to put the bounds on the list, and as long as the caller doesn't you keep going.  Consider this example:   public static class Fibonacci {     // returns the infinite fibonacci sequence     public static IEnumerable<int> Sequence()     {         int iteration = 0;         int first = 1;         int second = 1;         int current = 0;         while (true)         {             if (iteration++ < 2)             {                 current = 1;             }             else             {                 current = first + second;                 second = first;                 first = current;             }             yield return current;         }     } }   Whoa, you say!  Yes, that's an infinite loop!  What the heck is going on there?  Yes, that was intentional.  Would it be better to have a fibonacci sequence that returns only a specific number of items?  Perhaps, but that wouldn't give you the power to defer the execution to the caller.   The beauty of this function is it is as infinite as the sequence itself!  The fibonacci sequence is unbounded, and so is this method.  It will continue to return fibonacci numbers for as long as you ask for them.  Now that's not something you can do with a traditional method that would return a collection of ints representing each number.  In that case you would eventually run out of memory as you got to higher and higher numbers.  This method, though, never runs out of memory.   
Now, that said, you do have to know when you use it that it is an infinite collection and bound it appropriately.  Fortunately, Linq provides a lot of these extension methods for you!   Let's say you only want the first 10 fibonacci numbers:       foreach(var fib in Fibonacci.Sequence().Take(10))     {         Console.WriteLine(fib);     }   Or let's say you only want the fibonacci numbers that are less than 100:       foreach(var fib in Fibonacci.Sequence().TakeWhile(f => f < 100))     {         Console.WriteLine(fib);     }   So, you see, one of the nice things about iterators is their power to work with virtually any size (even infinite) collections without adding the garbage collection overhead of making new collections.    You can also do fun things like this to make a more "fluent" interface for for loops:   // A set of integer generator extension methods public static class IntExtensions {     // Begins counting to inifity, use To() to range this.     public static IEnumerable<int> Every(this int start)     {         // deliberately avoiding condition because keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; ++i)         {             yield return i;         }     }     // Begins counting to infinity by the given step value, use To() to     public static IEnumerable<int> Every(this int start, int byEvery)     {         // deliberately avoiding condition because keeps going         // to infinity for as long as values are pulled.         for (var i = start; ; i += byEvery)         {             yield return i;         }     }     // Begins counting to inifity, use To() to range this.     public static IEnumerable<int> To(this int start, int end)     {         for (var i = start; i <= end; ++i)         {             yield return i;         }     }     // Ranges the count by specifying the upper range of the count.     public static IEnumerable<int> To(this IEnumerable<int> collection, int end)     {         return collection.TakeWhile(item => item <= end);     } }   Note that there are two versions of each method.  One that starts with an int and one that starts with an IEnumerable<int>.  This is to allow more power in chaining from either an existing collection or from an int.  This lets you do things like:   // count from 1 to 30 foreach(var i in 1.To(30)) {     Console.WriteLine(i); }     // count from 1 to 10 by 2s foreach(var i in 0.Every(2).To(10)) {     Console.WriteLine(i); }     // or, if you want an infinite sequence counting by 5s until something inside breaks you out... foreach(var i in 0.Every(5)) {     if (someCondition)     {         break;     }     ... }     Yes, those are kinda play functions and not particularly useful, but they show some of the power of generators and extension methods to form a fluid interface.   So what do you think?  What are some of your favorite generators and iterators?

    Read the article

  • (How) Can I approximate a "dynamic" index (key extractor) for Boost MultiIndex?

    - by Sarah
    I have a MultiIndex container of boost::shared_ptrs to members of class Host. These members contain private arrays bool infections[NUM_SEROTYPES] revealing Hosts' infection statuses with respect to each of 1,...,NUM_SEROTYPES serotypes. I want to be able to determine at any time in the simulation the number of people infected with a given serotype, but I'm not sure how: Ideally, Boost MultiIndex would allow me to sort, for example, by Host::isInfected( int s ), where s is the serotype of interest. From what I understand, MultiIndex key extractors aren't allowed to take arguments. An alternative would be to define an index for each serotype, but I don't see how to write the MultiIndex container typedef ... in such an extensible way. I will be changing the number of serotypes between simulations. (Do experienced programmers think this should be possible? I'll attempt it if so.) There are 2^(NUM_SEROTYPES) possible infection statuses. For small numbers of serotypes, I could use a single index based on this number (or a binary string) and come up with some mapping from this key to actual infection status. Counting is still darn slow. I could maintain a separate structure counting the total numbers of infecteds with each serotype. The synchrony is a bit of a pain, but the memory is fine. I would prefer a slicker option, since I would like to do further sorts on other host attributes (e.g., after counting the number infected with serotype s, count the number of those infected who are also in a particular household and have a particular age). Thanks in advance.
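    For what it's worth, a minimal sketch of the bookkeeping in option 3 (the separate per-serotype tally); InfectionTally and the update() hook are hypothetical names, and NUM_SEROTYPES stands in for the question's compile-time constant:

    #include <array>
    #include <cassert>

    const int NUM_SEROTYPES = 4;   // illustration only

    struct InfectionTally {
        std::array<long, NUM_SEROTYPES> infected{};   // zero-initialized counters

        // Call from the Host mutator whenever the flag for serotype s actually flips.
        void update(int s, bool nowInfected) {
            infected[s] += nowInfected ? 1 : -1;
            assert(infected[s] >= 0);
        }
        long count(int s) const { return infected[s]; }
    };

    The synchrony concern then reduces to calling update() from the one place a Host changes its infections[] array; further filters (household, age) can be applied by iterating only the hosts already known to be infected with serotype s.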

    Read the article
