Search Results

Search found 21392 results on 856 pages for 'order of operations'.


  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, which all have polymorphic methods, such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes -- when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too. So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner -- the user who drew it, and to only allow delete/move/rescale operations to be performed on shapes owned by one person. I'm just wondering the best way to develop this. One person in the "session" will have to act as the server, I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser...or something. How do I broadcast changes made to the application? Drawing in realtime and broadcasting a message on each mouse motion event would be costly in terms of performance and things get worse the more users there are at a given time. Any ideas are welcome, I'm not too sure where to begin with developing this (or even how to test it)

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g., MatrixBase) from which more specialized (or more constrained) matrix classes (e.g., SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derived classes be derived with public, protected, or private inheritance? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else? I was having a conversation about this with a software developer colleague (which I am not, per se) who mentioned that it is a common design mistake to derive a more restricted class from a more general one (he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, which is similar to the matrix design issue) even when it is true that a SparseMatrix "IS A" MatrixBase. The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix class object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage implementation differs, operator()(int,int) should work as expected for any type of matrix class. I have started looking at a few open-source matrix libraries and it appears this is something of a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning on helping out with a refactoring of a math library where this has been a point of contention, and I'd like to have opinions (unless there really is an objective right answer to this question) as to what design philosophy would be best and what the pros and cons of any reasonable approach are.
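    Below is a minimal C++ sketch of the trade-off under discussion (the names are illustrative, not taken from any particular library): the base class exposes only the operations every matrix can honour, such as element access and scaling, while a specialized operation like a Cholesky factorization lives only on the concrete type that can guarantee it. Whether the concrete types then derive publicly from the base or merely wrap a shared storage helper is exactly the open question of the post.

        #include <cstddef>
        #include <vector>

        // Narrow common interface: only what every matrix type can support.
        class MatrixBase {
        public:
            virtual ~MatrixBase() = default;
            virtual std::size_t rows() const = 0;
            virtual std::size_t cols() const = 0;
            virtual double operator()(std::size_t i, std::size_t j) const = 0;
            virtual void scale(double s) = 0;   // multiplication by a scalar
        };

        class DenseMatrix : public MatrixBase {
        public:
            DenseMatrix(std::size_t r, std::size_t c) : r_(r), c_(c), data_(r * c, 0.0) {}
            std::size_t rows() const override { return r_; }
            std::size_t cols() const override { return c_; }
            double operator()(std::size_t i, std::size_t j) const override { return data_[i * c_ + j]; }
            double& at(std::size_t i, std::size_t j) { return data_[i * c_ + j]; }
            void scale(double s) override { for (double& x : data_) x *= s; }
        private:
            std::size_t r_, c_;
            std::vector<double> data_;
        };

        // The specialized capability is declared only where it is meaningful; it is
        // not part of the MatrixBase contract.
        class PositiveDefiniteMatrix : public DenseMatrix {
        public:
            using DenseMatrix::DenseMatrix;
            DenseMatrix cholesky() const;   // declaration only in this sketch
        };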

  • Doctrine 2 entity cannot be saved with its many-to-many related entities

    - by user1333185
    I'm writing a project in ZF and have many-to-many related accounts and videos entities and I provide the account with the videos collection. When I try to save the account I receive the following error: A new entity was found through the relationship 'Entities\Account#playlistVideos' that was not configured to cascade persist operations for entity: Entities\Video@000000004d5f02ef00000000196757dc. Explicitly persist the new entity or configure cascading persist operations on the relationship. If you cannot find out which entity causes the problem implement 'Entities\Video#__toString()' to get a clue. I found a suggested solution with cascade={"persist"} which should allow videos to be saved together with the account by only specifying $this->em->persist($account), but then I get the error: Fatal error: method_exists(): The script tried to execute a method or access a property of an incomplete object. Please ensure that the class definition "Proxies\EntitiesAccountProxy" of the object you are trying to operate on was loaded before unserialize() gets called or provide a __autoload() function to load the class definition in... The same happens when I try to manually persist videos and account. Thank you for the help in advance.

  • how to minimize application downtime when updating database and application ORM

    - by yamspog
    We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down as we update the database schema and the data access code. We are using a custom built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity. For example, the data entity for an address would be...

        [tableName="address"]
        public class address : dataEntity
        {
            [column="address1"]
            public string address1;

            [column="city"]
            public string city;
        }

    So, if we add a new column to the database, we must update the schema of the database and also update the data entity. As you can expect, the business people are not too happy about this outage as it puts a crimp in their cash-flow. The operations people are not happy as they have to deal with a high-pressure time when database and applications are upgraded. The programmers are upset as they are constantly getting in trouble for the legacy system that they inherited. Do any of you smart people out there have some suggestions?

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where, when I retrieve lots of data in NHibernate (such as when producing a report), the page becomes exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx It explains how doing bulk data operations in NHibernate is slow since the first level cache grows too large, and how you should use the IStatelessSession instead. The trouble I have is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use Linq as my query mechanism, but IStatelessSession does not support Linq (it may do in NHibernate 3, but the Linq provider is not stable as it stands at the moment). I then read that you could do a clear after so many iterations to clear out the first level cache. The problem now is that you can't use lazy loading. The Linq provider doesn't allow you to override the mapping defined (or eagerly fetch the additional data), so whenever I grab data which is lazy loaded after I have cleared the session, an exception is thrown. I'm completely lost on what to do now. I like the ease of producing reports with Linq, but the limitations of the inbuilt Linq provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples:

        Object1  Object2  Object3  Property  Value
        O1       O2       O3       P1        "abc"
        O1       O2       O3       P2        "xyz"
        O1       O3       O4       P1        "123"
        O2       O4       O5       P1        "098"

    Second, an inheritance tree:

        O1
          O2
            O4
          O3
            O5

    Or viewed as a relation:

        Object  Parent
        O2      O1
        O4      O2
        O3      O1
        O5      O3
        O1      null

    The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence.

    NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now.

    NOTE 2: I have triggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less efficient outer joins.

    The objective is: given a pair of (Property, Value), return all object triples that have that property with that value either defined explicitly or inherited from a parent.

    NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C).

    NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent.

    NOTE 3: When (A,B,C) has two parents - (X1,Y1,Z1) and (X2,Y2,Z2) - then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1. In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambiguous partial order for triples in terms of ancestry.

    For example, given the pair of (P1, "abc"), the result set of triples will be:

        O1, O2, O3 -- Defined explicitly
        O1, O2, O5 -- Because O5 inherits from O3
        O1, O4, O3 -- Because O4 inherits from O2
        O1, O4, O5 -- Because O4 inherits from O2 and O5 inherits from O3
        O2, O2, O3 -- Because O2 inherits from O1
        O2, O2, O5 -- Because O2 inherits from O1 and O5 inherits from O3
        O2, O4, O3 -- Because O2 inherits from O1 and O4 inherits from O2
        O3, O2, O3 -- Because O3 inherits from O1
        O3, O2, O5 -- Because O3 inherits from O1 and O5 inherits from O3
        O3, O4, O3 -- Because O3 inherits from O1 and O4 inherits from O2
        O3, O4, O5 -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3
        O4, O2, O3 -- Because O4 inherits from O1
        O4, O2, O5 -- Because O4 inherits from O1 and O5 inherits from O3
        O4, O4, O3 -- Because O4 inherits from O1 and O4 inherits from O2
        O5, O2, O3 -- Because O5 inherits from O1
        O5, O2, O5 -- Because O5 inherits from O1 and O5 inherits from O3
        O5, O4, O3 -- Because O5 inherits from O1 and O4 inherits from O2
        O5, O4, O5 -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3

    Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3).

    The straightforward way to do it is the following. First, for every triple that a property is defined on, select all possible child triples:

        select Children1.Id as O1, Children2.Id as O2, Children3.Id as O3, tp.Property, tp.Value
        from TriplesAndProperties tp
        -- Select corresponding objects of the triple
        inner join Objects as Objects1 on Objects1.Id = tp.O1
        inner join Objects as Objects2 on Objects2.Id = tp.O2
        inner join Objects as Objects3 on Objects3.Id = tp.O3
        -- Then add all possible children of all those objects
        inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id
        inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id
        inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id

    But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, the second step is to select just one of those conflicting results:

        select * from
        (
            select
                Children1.Id as O1, Children2.Id as O2, Children3.Id as O3,
                tp.Property, tp.Value,
                row_number() over(
                    partition by Children1.Id, Children2.Id, Children3.Id, tp.Property
                    order by Objects1.[depthInTheTree] descending,
                             Objects2.[depthInTheTree] descending,
                             Objects3.[depthInTheTree] descending
                ) as InheritancePriority
            from
                ... (see above)
        )
        where InheritancePriority = 1

    The window function row_number() over( ... ) does the following: for every unique combination of objects triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of the contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value.

    This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfully slow. One might point out that the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables.

    EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Has the time come that I shall be reduced to that kind of hack? The despair sets in.

  • Glassfish complaining about JSF component IDs

    - by Brian
    Hello all, I am very new to JSF (v2.0) and I am attempting to learn it at places like netbeans.org and coreservlets.com. I am working on a very simple "add/subtract/multiply/divide" Java webapp and I have run into a problem. When I first started out, the application was: enter two numbers, hit a '+' key, and they would be automatically added together. Now that I have added more complexity I am having trouble getting the operation to the managed bean. This is what I had when it was just "add":

        <h:inputText styleClass="display" id="number01" size="4" maxlength="3" value="#{Calculator.number01}" />
        <h:inputText styleClass="display" id="number02" size="4" maxlength="3" value="#{Calculator.number02}" />
        <h:commandButton id="add" action="answer" value="+" />

    For the "answer" page, I display the answer like this:

        <h:outputText value="#{Calculator.answer}" />

    I had the proper getters and setters in the Calculator.java managed bean and the operation worked perfectly. Now I have added the other three operations and I am having trouble visualizing how to get the operation parameter to the bean so that I can switch around it. I tried this:

        <h:commandButton id="operation" action="answer" value="+" />
        <h:commandButton id="operation" action="answer" value="-" />
        <h:commandButton id="operation" action="answer" value="*" />
        <h:commandButton id="operation" action="answer" value="/" />

    However, Glassfish complained that I have already used "operation" once and I am trying to use it four times here. Any advice/tips on how to get multiple operations to the managed bean so that it can perform the desired operation? Thank you for taking the time to read.

  • perl - universal operator overload

    - by Todd Freed
    I have an idea for Perl, and I'm trying to figure out the best way to implement it. The idea is to have new versions of every operator which consider the undefined value as the identity of that operation. For example:

        $a = undef + 5;      # undef treated as 0, so $a = 5
        $a = undef . "foo";  # undef treated as '', so $a = foo
        $a = undef && 1;     # undef treated as false, $a = true

    and so forth. Ideally, this would be in the language as a pragma, or something:

        use operators::awesome;

    However, I would be satisfied if I could implement this special logic myself, and then invoke it where needed:

        use My::Operators;

    The problem is that if I say "use overload" inside My::Operators, it only affects objects blessed into My::Operators. So the question is: is there a way (with "use overload" or otherwise) to do a "universal operator overload" - which would be called for all operations, not just operations on blessed scalars? If not - who thinks this would be a great idea!? It would save me a TON of this kind of code:

        if($object && $object{value} && $object{value} == 15)

    replace with

        if($object{value} == 15)  ## the special "is-equal-to" operator

  • Configure nHibernate for multiple-project solution

    - by NoOne
    Hello, I'm doing a project with C# winforms. This project is composed of:

        - Client project: Windows Forms where the user will do CRUD operations on the models;
        - Server project;
        - Common project: this project will hold the models (in the image it only has the model Item);
        - ListSingleton: Remote Object that will do the operations on the models;

    I already have all the communication working, but now I need to work on the persistence of the data in a MySQL database. I was trying to use nHibernate but I'm having some trouble. My main problem is how to organize my hibernate configuration. In which project do I keep the mapping? The Common project? In which project do I keep the hibernate configuration file (App.config)? The ListSingleton project? In which project do I do this:

        Configuration cfg = new Configuration();
        cfg.AddXmlFile("Item.hbm.xml");
        ISessionFactory factory = cfg.BuildSessionFactory();
        ISession session = factory.OpenSession();
        ITransaction transaction = session.BeginTransaction();

        Item newItem = new Item("BLAA");

        // Tell NHibernate that this object should be saved
        session.Save(newItem);

        // commit all of the changes to the DB and close the ISession
        transaction.Commit();
        session.Close();

    In the ListSingleton project? Although I have a reference to the Common project in ListSingleton, I keep getting an error on the AddXmlFile line… My mapping is correct because I tried with a one-project solution and it worked :X

  • C Programming. How to deep copy a struct?

    - by user69514
    I have the following two structs, where the "child" struct has a "rusage" struct as an element. Then I create two structs of type "child"; let's call them childA and childB. How do I copy just the rusage struct from childA to childB?

        typedef struct{
            int numb;
            char *name;
            pid_t pid;
            long userT;
            long systemT;
            struct rusage usage;
        }child;

        typedef struct{
            struct timeval ru_utime; /* user time used */
            struct timeval ru_stime; /* system time used */
            long ru_maxrss;          /* maximum resident set size */
            long ru_ixrss;           /* integral shared memory size */
            long ru_idrss;           /* integral unshared data size */
            long ru_isrss;           /* integral unshared stack size */
            long ru_minflt;          /* page reclaims */
            long ru_majflt;          /* page faults */
            long ru_nswap;           /* swaps */
            long ru_inblock;         /* block input operations */
            long ru_oublock;         /* block output operations */
            long ru_msgsnd;          /* messages sent */
            long ru_msgrcv;          /* messages received */
            long ru_nsignals;        /* signals received */
            long ru_nvcsw;           /* voluntary context switches */
            long ru_nivcsw;          /* involuntary context switches */
        }rusage;

    I did the following, but I guess it copies the memory location, because if I change the value of usage in childA, it also changes in childB:

        memcpy(&childA,&childB, sizeof(rusage));

    I know that gives childB all the values from childA. I have already taken care of the other fields in childB, I just need to be able to copy the rusage struct called usage that resides in the "child" struct.
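    A note on the direct approach, as a minimal sketch (assuming the struct definitions above and the POSIX struct rusage): in C a plain assignment of a struct member copies the whole nested struct by value, and a memcpy restricted to the member's address does the same; the memcpy in the post instead copies sizeof(rusage) bytes from the start of one whole child into the other, which overwrites the leading fields rather than just the usage member.

        #include <string.h>
        #include <sys/resource.h>   /* struct rusage, struct timeval */
        #include <sys/types.h>      /* pid_t */

        typedef struct {
            int numb;
            char *name;
            pid_t pid;
            long userT;
            long systemT;
            struct rusage usage;
        } child;

        int main(void) {
            child childA = {0}, childB = {0};

            /* Struct assignment copies every field of the nested struct by value. */
            childB.usage = childA.usage;

            /* Equivalent with memcpy: copy only the nested member, not the whole child. */
            memcpy(&childB.usage, &childA.usage, sizeof(struct rusage));

            return 0;
        }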

  • When is it safe to use a broken hash function?

    - by The Rook
    It is trivial to use a secure hash function like SHA256, and continuing to use MD5 is reckless behavior. However, there are some complexities to hash function vulnerabilities that I would like to better understand. Collisions have been generated for MD4 and MD5. According to NIST, md5() is not a secure hash function. It only takes 2^39 operations to generate a collision and it should never be used for passwords. However SHA1 is vulnerable to a similar collision attack in which a collision can be found in 2^69 operations, whereas brute force is 2^80. No one has generated a SHA1 collision and NIST still lists SHA1 as a secure message digest function. So when is it safe to use a broken hash function? Even though a function is broken it can still be "big enough". According to Schneier a hash function vulnerable to a collision attack can still be used as an HMAC. I believe this is because the security of an HMAC is dependent on its secret key and a collision cannot be found until this key is obtained. Once you have the key used in an HMAC it's already broken, so it's a moot point. What hash function vulnerabilities would undermine the security of an HMAC? Let's take this property a bit further. Does it then become safe to use a very weak message digest like MD4 for passwords if a salt is prepended to the password? Keep in mind the MD4 and MD5 attacks are prefixing attacks, and if a salt is prepended then an attacker cannot control the prefix of the message. If the salt is truly a secret, and isn't known to the attacker, then does it matter if it's appended to the end of the password? Is it safe to assume that an attacker cannot generate a collision until the entire message has been obtained? Do you know of other cases where a broken hash function can be used in a security context without introducing a vulnerability? (Please post supporting evidence because it is awesome!)

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each):

        clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumReMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumRe,  0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumImMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumIm,  0, NULL, NULL);

    When I analyze these with the NVIDIA visual profiler, I see that the actual memcpy operation only takes 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is way faster. Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (exactly before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way I could combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?
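    One low-risk direction, as a sketch only (it assumes the kernel can be changed to write the Abs, Re and Im spectra into one combined device buffer at element offsets 0, N and 2N; the names below are made up for illustration, not taken from the original code): keep a single contiguous host block and a single device buffer, so the per-call launch gap is paid once instead of three times, while three host pointers remain available as views into the same allocation.

        #include <CL/cl.h>
        #include <vector>

        // Read Abs|Re|Im in one transfer; 'combined' is a cl_mem holding all three
        // spectra back to back, n is the element count of each spectrum (4313 here).
        void read_spectra(cl_command_queue queue, cl_mem combined, size_t n,
                          std::vector<cl_float>& host) {
            host.resize(3 * n);                        // one contiguous host block
            cl_float* spectrumAbs = host.data();       // section views, no extra copies
            cl_float* spectrumRe  = host.data() + n;
            cl_float* spectrumIm  = host.data() + 2 * n;
            (void)spectrumAbs; (void)spectrumRe; (void)spectrumIm;

            // Single non-blocking read instead of three separate ones.
            clEnqueueReadBuffer(queue, combined, CL_FALSE, 0,
                                3 * n * sizeof(cl_float), host.data(), 0, NULL, NULL);
        }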

  • C# threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database are sometimes taking a while to run. This lag is experienced through the browser. Possible solution: I want to use threading to eliminate the simulated hang to the browser. I have used the Thread class before and heard about ThreadPool. But, I just found BackgroundWorker in this post. MSDN states: The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution. Is BackgroundWorker the way to go when handling long running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously? Is it handled like a pool?

  • Using boost::random to select from an std::list where elements are being removed

    - by user144182
    See this related question on more generic use of the Boost Random library. My question involves selecting a random element from an std::list, doing some operation, which could potentially include removing the element from the list, and then choosing another random element, until some condition is satisfied. The boost code and for loop look roughly like this:

        // create and insert elements into list
        std::list<MyClass> myList;
        //[...]

        // select uniformly from list indices
        boost::uniform_int<> indices( 0, myList.size()-1 );
        boost::variate_generator< boost::mt19937, boost::uniform_int<> > selectIndex(boost::mt19937(), indices);

        for( int i = 0; i <= maxOperations; ++i )
        {
            int index = selectIndex();
            MyClass & mc = myList.begin() + index;
            // do operations with mc, potentially removing it from myList
            //[...]
        }

    My problem is that as soon as the operations that are performed on an element result in the removal of an element, the variate_generator has the potential to select an invalid index in the list. I don't think it makes sense to completely recreate the variate_generator each time, especially if I seed it with time(0).
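    For what it's worth, here is a sketch of one way around the invalid-index problem, written against the standard <random> header rather than Boost (the same shape works with boost::mt19937 by rebuilding the uniform distribution each pass; MyClass and the removal condition are placeholders): only the engine keeps state across draws, and the distribution bounds are recreated from the current list size, so a removal can never leave a stale upper bound behind.

        #include <iterator>
        #include <list>
        #include <random>

        struct MyClass { /* ... */ };

        void processRandomly(std::list<MyClass>& myList, int maxOperations) {
            std::mt19937 rng{std::random_device{}()};
            for (int i = 0; i < maxOperations && !myList.empty(); ++i) {
                // Rebuilding the distribution is cheap; only the engine carries state.
                std::uniform_int_distribution<std::size_t> pick(0, myList.size() - 1);
                auto it = std::next(myList.begin(), pick(rng));   // O(n) walk for std::list
                // ... do operations with *it ...
                bool removed = true;   // placeholder for "the operation removed the element"
                if (removed)
                    myList.erase(it);  // the next iteration draws against the new, smaller size
            }
        }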

  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend" which performs these operations is written in Perl while the "frontend" uses PHP to load HTML template files and determines which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle) and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit but I am unsure of how to apply it. Specifically, what are some good options to structure this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?

  • Job queueing and execution mechanism

    - by Calm Storm
    In my webservice, all method calls submit jobs to a queue. Basically these operations take a long time to execute, so all these operations submit a Job to a queue and return a status saying "Submitted". Then the client keeps polling using another service method to check for the status of the job. Presently, what I do is create my own Queue and Job classes that are Serializable and persist these jobs (i.e., their serialized byte stream format) into the database. So an UpdateLogistics operation just queues up an "UpdateLogisticsJob" to the queue and returns. I have written my own JobExecutor which wakes up every N seconds, scans the database table for any existing jobs, and executes them. Note the jobs have to be persisted because these jobs have to survive app-server crashes. This was done a long time ago, and I used bespoke classes for my Queues, Jobs, Executors etc. But now, I would like to know whether someone has done something similar before. In particular: Are there frameworks available for this? Something in Spring/Apache etc.? Any framework that is easy to adapt/debug and plays well along with libraries like Spring will be great.

  • fastest way to check to see if a certain index in a linq statement is null

    - by tehdoommarine
    Basic details: I have a linq statement that grabs some records from a database and puts them in a System.Linq.Enumerable:

        var someRecords = someRepoAttachedToDatabase.Where(p=>true);

    Suppose this grabs tons (25k+) of records, and I need to perform operations on all of them. To speed things up, I have decided to use paging and perform the operations needed in blocks of 100 instead of on all of the records at the same time.

    The question: the line in question is the line where I count the number of records in the subset to see if we are on the last page; if the number of records in the subset is less than the page size, then that means there are no more records left. What I would like to know is: what is the fastest way to do this?

    Code in question:

        int pageSize = 100;
        bool moreData = true;
        int currentPage = 1;

        while (moreData)
        {
            var subsetOfRecords = someRecords.Skip((currentPage - 1) * pageSize).Take(pageSize); //this is also a System.Linq.Enumerable

            if (subsetOfRecords.Count() < pageSize){ moreData = false;} //line in question

            //do stuff to records in subset
            currentPage++;
        }

    Things I have considered:

        - subsetOfRecords.Count() < pageSize
        - subsetOfRecords.ElementAt(pageSize - 1) == null (causes out of bounds exception - can catch exception and set moreData to false there)
        - Converting subsetOfRecords to an array (converting someRecords to an array will not work due to the way subsetOfRecords is declared - but I am open to changing it)

    I'm sure there are plenty of other ideas that I have missed.

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to Unit Testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project that I want to build tests for from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously due to the nature of the class. Say for a "user account" class with basic CRUD operations, being related to a database table, we will want to test - well, the CRUD:

        - creating an object and seeing whether it exists
        - query its properties
        - change some properties
        - change some properties to incorrect values
        - and delete it again.

    As for how to break things, there are "fail" tests common to most CRUD classes like:

        - Invalid input data types
        - A number as the ID key that exceeds the range of the chosen data type
        - Input in an incorrect character encoding
        - Input that is too long
        - And so on and so on.

    For a unit test concerned with file operations, the list of "breaking things" could be:

        - Invalid characters in file name
        - File name too long
        - File name uses incorrect protocol or path

    I'm pretty sure similar patterns - applicable beyond the unit test one is currently working on - can be found for most units that are being tested. Now my question is: Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about Unit testing, and if I did it right, this wouldn't be an issue at all? Is Unit Testing as a process of finding as many ways to break the unit as possible the right way to go? If I am correct: Are there existing definitions, lists, cheat sheets for such patterns? Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns? Is there any assistance - in the form of check lists, or software - to aid in writing complete tests?

  • How difficult is it to write our own Robots API, similar to G Wave Robots API? Please read the details

    - by user169650
    Consider the following entities:

        a) My own Wave-server
        b) My own Robots API
        c) Tomcat
        d) Google wave server/any other wave server

    Let us consider that a and d interact with one another via the Google wave federation protocol. Now, I want to write my own Robots API in Java (similar to that of the G Wave Robots API) using which I want to create Robots; which I want to host in entity c), which may in turn connect to a) for listening to events and responding with operations. Let us consider that a) is already in place, i.e. implemented. Let us also consider that the Robot running on Tomcat and entity a) are co-located, so that we do not need to use JSON-RPC for receiving events/sending operations; instead we can use Java interfaces. Now, my questions are:

        1. How much of an effort is it to write my own Robots API to run on a Tomcat container?
        2. What are the salient points to be taken care of? Am I missing some important point here?
        3. How can I reuse some of the classes/packages/interfaces (e.g. com.google.wave.api.AbstractRobot, com.google.wave.api.event) with little/no changes at all?

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class, and the right-hand side value is first converted to a log-based value by running log() and then added to the left-hand side value. Thus the operations actually performed are floating-point division, log() and floating-point summation. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler and architecture dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operators and/or an idea on how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
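    On measuring it yourself, a rough micro-benchmark sketch is below (plain C++; it uses std::chrono from C++11, so on the gcc 4.2 toolchain mentioned a gettimeofday-based timer would play the same role). The sums are printed so the optimizer cannot drop the loops; the absolute numbers only say something about the machine and flags they were produced with, and the constructor/conversion cost mentioned at the end of the post is deliberately not modeled here.

        #include <chrono>
        #include <cmath>
        #include <cstdio>
        #include <vector>

        int main() {
            const std::size_t N = 10000000;
            std::vector<double> xs(N);
            for (std::size_t i = 0; i < N; ++i) xs[i] = 1.0 + i * 1e-6;

            auto t0 = std::chrono::steady_clock::now();
            double divSum = 0.0;
            for (double x : xs) divSum += 1.0 / x;        // floating-point division
            auto t1 = std::chrono::steady_clock::now();
            double logSum = 0.0;
            for (double x : xs) logSum += std::log(x);    // library log()
            auto t2 = std::chrono::steady_clock::now();

            using us = std::chrono::microseconds;
            std::printf("division: %lld us (sum %f)\n",
                        (long long)std::chrono::duration_cast<us>(t1 - t0).count(), divSum);
            std::printf("log():    %lld us (sum %f)\n",
                        (long long)std::chrono::duration_cast<us>(t2 - t1).count(), logSum);
        }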

  • How to easily apply a function to a collection in C++

    - by Jesse Beder
    I'm storing images as arrays, templated based on the type of their elements, like Image<unsigned> or Image<float>, etc. Frequently, I need to perform operations on these images; for example, I might need to add two images, or square an image (elementwise), and so on. All of the operations are elementwise. I'd like to get as close as possible to writing things like:

        float Add(float a, float b) { return a+b; }
        Image<float> result = Add(img1, img2);

    and even better, things like:

        complex ComplexCombine(float a, float b) { return complex(a, b); }
        Image<complex> result = ComplexCombine(img1, img2);

    or

        struct FindMax {
            unsigned currentMax;
            FindMax(): currentMax(0) {}
            void operator()(unsigned a) { if(a > currentMax) currentMax = a; }
        };

        FindMax findMax;
        findMax(img);
        findMax.currentMax; // now contains the maximum value of 'img'

    Now, I obviously can't exactly do that; I've written something so that I can call:

        Image<float> result = Apply(img1, img2, Add);

    but I can't seem to figure out a generic way for it to detect the return type of the function/function object passed, so my ComplexCombine example above is out; also, I have to write a new one for each number of arguments I'd like to pass (which seems inevitable). Any thoughts on how to achieve this (with as little boilerplate code as possible)?
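    If a C++11 (or later) compiler is available, the return-type detection falls out of decltype; a sketch follows (the Image type here is a stand-in, not the poster's real class, and a variadic parameter pack built in the same spirit would remove the one-overload-per-arity problem):

        #include <cstddef>
        #include <vector>

        // Illustrative stand-in for the poster's Image class.
        template <typename T>
        struct Image {
            std::size_t w = 0, h = 0;
            std::vector<T> px;
            Image() = default;
            Image(std::size_t w_, std::size_t h_) : w(w_), h(h_), px(w_ * h_) {}
        };

        // Elementwise Apply: the element type of the result is deduced from the
        // callable, so an adder yields Image<float>, a complex-maker Image<complex>, etc.
        template <typename F, typename A, typename B>
        auto Apply(const Image<A>& a, const Image<B>& b, F f)
            -> Image<decltype(f(a.px[0], b.px[0]))> {
            Image<decltype(f(a.px[0], b.px[0]))> out(a.w, a.h);
            for (std::size_t i = 0; i < out.px.size(); ++i)
                out.px[i] = f(a.px[i], b.px[i]);
            return out;
        }

        // Usage sketch:
        //   Image<float> result = Apply(img1, img2, [](float x, float y) { return x + y; });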

  • How to properly cast a global memory array using the uint4 vector in CUDA to increase memory throughput?

    - by charis
    There are generally two techniques to increase the memory throughput of the global memory on a CUDA kernel: memory access coalescence and accessing words of at least 4 bytes. With the first technique, accesses to the same memory segment by threads of the same half-warp are coalesced into fewer transactions, while by accessing words of at least 4 bytes this memory segment is effectively increased from 32 bytes to 128. To access 16-byte instead of 1-byte words when there are unsigned chars stored in the global memory, the uint4 vector is commonly used by casting the memory array to uint4:

        uint4 *text4 = ( uint4 * ) d_text;
        var = text4[i];

    In order to extract the 16 chars from var, I am currently using bitwise operations. For example:

        s_array[j * 16 + 0] = var.x & 0x000000FF;
        s_array[j * 16 + 1] = (var.x >> 8) & 0x000000FF;
        s_array[j * 16 + 2] = (var.x >> 16) & 0x000000FF;
        s_array[j * 16 + 3] = (var.x >> 24) & 0x000000FF;

    My question is, is it possible to recast var (or for that matter *text4) to unsigned char in order to avoid the additional overhead of the bitwise operations?
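    A recast is possible in principle, since char pointers may alias any object; a device-side sketch is below (unpack16 and its arguments are illustrative, and whether it is actually cheaper is not a given - the compiler may keep var in registers for the shift version but spill it once its address is taken, and the shifts often compile to single byte-extract instructions anyway, so checking the generated PTX/SASS is worthwhile either way).

        // Sketch: reinterpret the 16 bytes of a uint4 that was loaded from global
        // memory (as in the question) instead of extracting them with shifts/masks.
        // On the little-endian GPU layout, bytes[0] corresponds to var.x & 0xFF above.
        __device__ void unpack16(uint4 var, unsigned char* s_array, int j) {
            const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&var);
            #pragma unroll
            for (int k = 0; k < 16; ++k)
                s_array[j * 16 + k] = bytes[k];
        }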

  • What is the n in O(n) when comparing sorting algorithms?

    - by Mumfi
    The question is rather simple, but I just can't find a good enough answer. I've taken a look at the most upvoted question regarding the Big-Oh notation, namely this: Plain English explanation of Big O. It says there that: For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering). Now let's consider the simple bubble sort algorithm:

        for (int i = arr.length - 1; i > 0 ; i--) {
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j+1]) {
                    switchPlaces(...)
                }
            }
        }

    I know that worst case is O(n^2) and best case is O(n), but what is n exactly? If we attempt to sort an already sorted array (best case), we would end up doing nothing, so why is it still O(n)? We are still looping through 2 for-loops, so if anything it should be O(n^2). n can't be the number of comparison operations, because we still compare all the elements, right? This confuses me, and I'd appreciate it if someone could help me.
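    As a concrete illustration of where the two bounds come from (the question's snippet is Java; this sketch is C++ but the counting is the same): n is the number of elements being sorted, and the plain nested loops above always perform n(n-1)/2 comparisons whether or not the input is already sorted. The O(n) best case only appears once the usual early-exit flag is added, so that a pass with no swaps ends the sort after n-1 comparisons.

        #include <cstdio>
        #include <utility>
        #include <vector>

        // Bubble sort with a comparison counter and an early-exit flag.
        long long bubbleSort(std::vector<int>& a) {
            long long comparisons = 0;
            for (std::size_t i = a.size(); i > 1; --i) {
                bool swapped = false;
                for (std::size_t j = 0; j + 1 < i; ++j) {
                    ++comparisons;
                    if (a[j] > a[j + 1]) { std::swap(a[j], a[j + 1]); swapped = true; }
                }
                if (!swapped) break;   // a clean pass means the array is already sorted
            }
            return comparisons;
        }

        int main() {
            std::vector<int> sorted{1, 2, 3, 4, 5}, reversed{5, 4, 3, 2, 1};
            std::printf("already sorted: %lld comparisons\n", bubbleSort(sorted));   // 4, i.e. n - 1
            std::printf("reverse sorted: %lld comparisons\n", bubbleSort(reversed)); // 10, i.e. n(n-1)/2
        }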

  • Comparing lists of field-hashes with equivalent AR-objects.

    - by Tim Snowhite
    I have a list of hashes, as such:

        incoming_links = [
          {:title => 'blah1', :url => "http://blah.com/post/1"},
          {:title => 'blah2', :url => "http://blah.com/post/2"},
          {:title => 'blah3', :url => "http://blah.com/post/3"}]

    And an ActiveRecord model which has fields in the database with some matching rows, say:

        Link.all
        => [<Link#2 @title='blah2' @url='...post/2'>,
            <Link#3 @title='blah3' @url='...post/3'>,
            <Link#4 @title='blah4' @url='...post/4'>]

    I'd like to do set operations on Link.all with incoming_links so that I can figure out that <Link#4 ...> is not in the set of incoming_links, and {:title => 'blah1', :url => 'http://blah.com/post/1'} is not in the Link.all set, like so:

        #pseudocode
        #incoming_links = as above
        links = Link.all
        expired_links = links - incoming_links
        missing_links = incoming_links - links
        expired_links.destroy
        missing_links.each{|link| Link.create(link)}

    One route I've tried: I'd rather not rewrite Array#- and such, and I'm okay with converting incoming_links to a set of unsaved Link objects; so I've tried overriding hash, eql? and so on in Link so that it ignores the id equality that AR::Base provides by default. But this is the only place this sort of equality should be considered in the application - in other places the Link#id default identity is required. Is there some way I could subclass Link and apply the hash, eql?, etc. overrides there? The other route I've tried is to pull out the attributes hash for each Link and do a .slice('id',...etc) to prune the hashes down. But this requires writing separate methods for keeping track of the Link objects while doing set operations on the hashes, or writing separate Collection classes to wrap the incoming_links hash-list and Link-list, which seems a bit overkill. What is the best way to design this interaction? Extra credit for cleanliness.

  • Very different I/O performance in C++ on Windows

    - by Mr.Gate
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1GB or more), especially (as it seems) when you try to grow them in size. Anyway... to verify our sensations we tried the following (on Win 7 64Bit, 4core, 8GB Ram, 32 bit code compiled with VC2008):

    a) Open a non-existing file. Write it from the beginning up to 1Gb in 1Mb slots. Now you have a 1Gb file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. Time to create the file is quite fast (about 0.3"), time to write 10000 times is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a non-existing file, seek to 1Gb-1byte and write just 1 byte. Now you have another 1Gb file. Follow the next steps in exactly the same way as case 'a', close the file and look at the results. Time to create the file is the fastest you can imagine (about 0.00009"), but write time is something you can't believe.... about 90"!!!!!

    b.1) Open a non-existing file, don't write any byte. Act as before, randomizing, seeking and writing, close the file and look at the result. Time to write is long all the same: about 90"!!!!!

    Ok... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again and again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce. I actually wonder why... Any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to have a clean compilation, I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse ordered mode, but write-only in random order is the clearest test. We tried using std::fstream but also using CreateFile(), WriteFile() and so on directly; the results are the same (even if std::fstream is actually a little slower).

    Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w
    Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w
    Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
    Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w
    Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

    // iotest.cpp : Defines the entry point for the console application.
// #include <windows.h> #include <iostream> #include <set> #include <vector> #include "stdafx.h" double RealTime_Microsecs() { LARGE_INTEGER fr = {0, 0}; LARGE_INTEGER ti = {0, 0}; double time = 0.0; QueryPerformanceCounter(&ti); QueryPerformanceFrequency(&fr); time = (double) ti.QuadPart / (double) fr.QuadPart; return time; } int main(int argc, char* argv[]) { std::string sFileName ; size_t stSize, stTimes, stBytes ; int retval = 0 ; char *p = NULL ; char *pPattern = NULL ; char *pReadBuf = NULL ; try { // Default stSize = 1<<30 ; // 1Gb stTimes = 1000 ; stBytes = 50 ; bool bTruncate = false ; bool bPre = false ; bool bPreFast = false ; bool bOrdered = false ; bool bReverse = false ; bool bWriteOnly = false ; // Comsumo i parametri for(int index=1; index < argc; ++index) { if ( '-' != argv[index][0] ) throw ; switch(argv[index][1]) { case 'f': sFileName = argv[index]+2 ; break ; case 's': stSize = xw::str::strtol(argv[index]+2) ; break ; case 'n': stTimes = xw::str::strtol(argv[index]+2) ; break ; case 'b':stBytes = xw::str::strtol(argv[index]+2) ; break ; case 't': bTruncate = true ; break ; case 'p' : bPre = true, bPreFast = false ; break ; case 'v' : bPreFast = true, bPre = false ; break ; case 'o' : bOrdered = true, bReverse = false ; break ; case 'r' : bReverse = true, bOrdered = false ; break ; case 'w' : bWriteOnly = true ; break ; default: throw ; break ; } } if ( sFileName.empty() ) { std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl ; std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl ; std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl ; throw ; } if ( !stSize || !stTimes || !stBytes ) { std::cout << "Invalid Parameters" << std::endl ; return -1 ; } size_t stBestSize = 0x00100000 ; std::fstream fFile ; fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate?std::ios_base::trunc:0)) ; p = new char[stBestSize] ; pPattern = new char[stBytes] ; pReadBuf = new char[stBytes] ; memset(p, 0, stBestSize) ; memset(pPattern, (int)(stBytes&0x000000ff), stBytes) ; double dTime = RealTime_Microsecs() ; size_t stCopySize, stSizeToCopy = stSize ; if ( bPre ) { do { stCopySize = std::min(stSizeToCopy, stBestSize) ; fFile.write(p, stCopySize) ; stSizeToCopy -= stCopySize ; } while (stSizeToCopy) ; std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } else if ( bPreFast ) { fFile.seekp(stSize-1) ; fFile.write(p, 1) ; std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; } size_t stPos ; ::srand((unsigned int)dTime) ; double dReadTime, dWriteTime ; stCopySize = stTimes ; std::vector<size_t> inVect ; std::vector<size_t> outVect ; std::set<size_t> outSet ; std::set<size_t> inSet ; // Prepare vector and set do { stPos = (size_t)(::rand()<<16) % stSize ; outVect.push_back(stPos) ; outSet.insert(stPos) ; stPos = (size_t)(::rand()<<16) % stSize ; inVect.push_back(stPos) ; inSet.insert(stPos) ; } while (--stCopySize) ; // Write & read using vectors if ( !bReverse && !bOrdered ) { std::vector<size_t>::iterator outI, inI ; outI = outVect.begin() ; inI = inVect.begin() ; stCopySize = stTimes ; dReadTime = 0.0 ; dWriteTime = 0.0 ; do { dTime = RealTime_Microsecs() ; fFile.seekp(*outI) ; fFile.write(pPattern, stBytes) ; dWriteTime += 
RealTime_Microsecs() - dTime ; ++outI ; if ( !bWriteOnly ) { dTime = RealTime_Microsecs() ; fFile.seekg(*inI) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++inI ; } } while (--stCopySize) ; std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl ; } } // End // Write in order if ( bOrdered ) { std::set<size_t>::iterator i = outSet.begin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.begin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.end(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End // Write in reverse order if ( bReverse ) { std::set<size_t>::reverse_iterator i = outSet.rbegin() ; dWriteTime = 0.0 ; stCopySize = 0 ; for(; i != outSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekp(stPos) ; fFile.write(pPattern, stBytes) ; dWriteTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl ; if ( !bWriteOnly ) { i = inSet.rbegin() ; dReadTime = 0.0 ; stCopySize = 0 ; for(; i != inSet.rend(); ++i) { stPos = *i ; dTime = RealTime_Microsecs() ; fFile.seekg(stPos) ; fFile.read(pReadBuf, stBytes) ; dReadTime += RealTime_Microsecs() - dTime ; ++stCopySize ; } std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl ; } }// End dTime = RealTime_Microsecs() ; fFile.close() ; std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl ; std::cout << "Program Terminated" << std::endl ; } catch(...) { std::cout << "Something wrong or wrong parameters" << std::endl ; retval = -1 ; } if ( p ) delete []p ; if ( pPattern ) delete []pPattern ; if ( pReadBuf ) delete []pReadBuf ; return retval ; }
