Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 38/151 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • To have an Integer pointing to 3 ordered lists in Java

    - by Masi
    Which data structure would you use in place of X to get efficient merges, sorts and additions as described below?

    Possible solution #1: one HashMap to an X data structure. Having a HashMap pointing from fileID to some data structure linking word, wordCount and wordID may be a good solution. However, I have not found a way to implement it. I am not allowed to use Postgres or any similar tool to keep my data normalized. I want efficient merges, sorts and additions according to fileID, wordID or wordCount for the type below. I have the type Words, which has the field fileID that points to a list of words and to related pieces of information:

        The Type
        class Words
        ===================================
        fileID: int
        [list of words]      : ArrayList
        [list of wordCounts] : ArrayList
        [list of wordIDs]    : ArrayList

    Example of the data:

                               fileID   word   wordCount   wordID
        instance1 of Words        1     He        123       1111
                                  1     llo       321          2
        instance2 of Words        2     Van       213        666
                                  2     cou       777        932

    Example of the needed merge:

        fileID wordID                fileID wordID
        1      2
        1      3      wordID=2       2      2
                     ========>       1      2
        2      3
        2      2

    I cannot see any use of set operations such as intersections here, because order is needed. Having about three HashMaps makes sorting difficult:

    - from word to wordID in a given fileID
    - from wordID to fileID
    - from wordID to wordCount in a given fileID
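
    One way to keep merges and sorts cheap is to hold one object per (word, wordCount, wordID) triple instead of three parallel ArrayLists, and map fileID to a list of those objects; each ordering is then one Comparator away. A minimal sketch of that idea (class and field names are illustrative, not taken from the question):

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class WordIndex {
            static final class WordEntry {
                final String word;
                final int wordCount;
                final int wordID;
                WordEntry(String word, int wordCount, int wordID) {
                    this.word = word;
                    this.wordCount = wordCount;
                    this.wordID = wordID;
                }
            }

            // fileID -> entries for that file
            final Map<Integer, List<WordEntry>> byFile = new HashMap<>();

            void add(int fileID, String word, int wordCount, int wordID) {
                byFile.computeIfAbsent(fileID, k -> new ArrayList<>())
                      .add(new WordEntry(word, wordCount, wordID));
            }

            // Sorting by wordID (or wordCount, or word) is a single comparator.
            void sortFileByWordID(int fileID) {
                byFile.get(fileID).sort(Comparator.comparingInt((WordEntry e) -> e.wordID));
            }
        }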

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference using calloc instead would make. My present (pseudo)code with malloc:

    Scenario 1

        int main() {
            allocate large arrays with malloc
            INITIALIZE ALL ARRAY ELEMENTS TO ZERO
            for loop  // say 1000 times
                do something and write results to arrays
            end for loop
            FREE ARRAYS with free command
        }  // end main

    If I use calloc instead of malloc, then I'll have:

    Scenario 2

        int main() {
            for loop  // say 1000 times
                ALLOCATION OF ARRAYS WITH CALLOC
                do something and write results to arrays
                FREE ARRAYS with free command
            end for loop
        }  // end main

    I have three questions:

    1. Which of the scenarios is more efficient if the arrays are very large?
    2. Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to arrays in the sense that, for any given iteration of the for loop, I write each array sequentially from the first element to the last.
    3. The important question: if I'm using malloc as in Scenario 1, is it necessary to initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration I write elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3], and so on. Do I then specifically need to initialize my arrays after malloc?

    Read the article

  • MySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision, and I'm not sure how to continue. Here's a very over-simplified version of what I'm building: people can rate photos 1-5, and I need to display the average votes on the picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc. The normalization freak in me initially designed the table structure like this:

        Table pictures:  id* | picture | userID
        Table ratings:   id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and am done with it. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics too. I can easily find who rated a certain picture a 3, and what not. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version would seem to be more efficient, e.g.:

        Table picture:  id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research shows that I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading information would be appreciated as well. Thanks.

    Read the article

  • [UNIX] Sort lines of massive file by number of words on line (ideally in parallel)

    - by conradlee
    I am working on a community detection algorithm for analyzing social network data from Facebook. The first task, detecting all cliques in the graph, can be done efficiently in parallel, and leaves me with an output like this:

        17118 17136 17392
        17064 17093 17376
        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17007 17059 17116

    Each of these lines represents a unique clique (a collection of node ids), and I want to sort these lines in descending order by the number of ids per line. In the case of the example above, here's what the output should look like:

        17118 17136 17356 17318 12345
        17118 17136 17356 17283
        17118 17136 17392
        17064 17093 17376
        17007 17059 17116

    (Ties---i.e., lines with the same number of ids---can be sorted arbitrarily.) What is the most efficient way of sorting these lines? Keep the following points in mind:

    - The file I want to sort could be larger than the physical memory of the machine.
    - Most of the machines that I'm running this on have several processors, so a parallel solution would be ideal.
    - An ideal solution would just be a shell script (probably using sort), but I'm open to simple solutions in python or perl (or any language, as long as it makes the task simple).
    - This task is in some sense very easy---I'm not just looking for any old solution, but rather for a simple and above all efficient solution.
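
    For input that fits in memory, the whole task reduces to a sort whose key is the number of tokens per line. A minimal Java sketch of that idea follows; it deliberately ignores the larger-than-RAM and parallelism constraints stated above, for which an external, sort-based approach would be the better fit:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class SortByWordCount {
            public static void main(String[] args) throws IOException {
                // Copy into an ArrayList so the list is guaranteed to be sortable in place.
                List<String> lines = new ArrayList<>(Files.readAllLines(Paths.get(args[0])));
                // Descending by number of whitespace-separated ids; ties stay in arbitrary order.
                lines.sort(Comparator.comparingInt((String s) -> s.trim().split("\\s+").length)
                                     .reversed());
                lines.forEach(System.out::println);
            }
        }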

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting for position specific scoring matrices (PSSM, aka Position Specific Weight Matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp), so there are M-L+1 valid positions. I need an efficient algorithm to apply a PSSM. Can anyone help improve performance?

    My best idea is to convert the DNA into some kind of a matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher order/dinucleotide terms in the PSSM - presumably this means representing the sequence-matrix for mono-nucleotides and separately for dinucleotides.

    My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
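
    One common way to speed up a PSSM scan without going to full matrix multiplication is to encode the sequence as small integers once, then score against flat arrays so the inner loop is nothing but additions and array lookups. A minimal sketch of that idea, in Java rather than the question's C++, and for mono-nucleotide terms only (names are illustrative):

        public class PssmScan {
            // pssm[i][b]: score of base b (A=0, C=1, G=2, T=3) at motif position i.
            // Returns the score of the motif placed at every valid offset of 'seq'.
            static double[] scoreAllPositions(double[][] pssm, int[] seq) {
                int L = pssm.length;
                int M = seq.length;
                double[] scores = new double[M - L + 1];
                for (int m = 0; m <= M - L; m++) {
                    double s = 0.0;
                    for (int i = 0; i < L; i++) {
                        s += pssm[i][seq[m + i]];   // plain array lookups in the hot loop
                    }
                    scores[m] = s;
                }
                return scores;
            }

            // Encode "ACGT..." once, up front, so the inner loop never touches characters.
            static int[] encode(String dna) {
                int[] out = new int[dna.length()];
                for (int i = 0; i < dna.length(); i++) {
                    switch (dna.charAt(i)) {
                        case 'A': out[i] = 0; break;
                        case 'C': out[i] = 1; break;
                        case 'G': out[i] = 2; break;
                        default:  out[i] = 3; break;
                    }
                }
                return out;
            }
        }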

    Read the article

  • PHP Serialize Function - Adding serialized data to mysql and then fetch and display

    - by Abhilash Shukla
    I want to know whether the PHP serialize function is 100% secure, and also whether storing serialized data in a database and working with it after fetching it is a good approach. For example: I have a website with different user privileges, and I want to store the permission settings for a particular privilege in my database (I want to store this data through PHP's serialize function). When a user logs in, I want to fetch this data and set the privileges for the customer. I am able to do this; what I want to know is whether it is the best way, or whether something more efficient can be done. Also, I was going through the PHP manual and found this code. Can anybody explain a bit what's happening in it (specifically, why base64_encode is used)?

        <?php
        function mySerialize( $obj ) {
            return base64_encode(gzcompress(serialize($obj)));
        }
        function myUnserialize( $txt ) {
            return unserialize(gzuncompress(base64_decode($txt)));
        }
        ?>

    Also, if somebody could provide their own code showing how to do this in the most efficient manner, that would help. Thanks.

    Read the article

  • What is a faster way of merging the values of this Python structure into a single dictionary?

    - by jcoon
    I've refactored how the merged dictionary (all_classes) below is created, but I'm wondering if it can be more efficient. I have a dictionary of dictionaries, like this:

        groups_and_classes = {'group_1': {'class_A': [1, 2, 3],
                                          'class_B': [1, 3, 5, 7],
                                          'class_C': [1, 2],
                                          # ...many more items like this
                                         },
                              'group_2': {'class_A': [11, 12, 13],
                                          'class_C': [5, 6, 7, 8, 9]
                                         },
                              # ...and many more items like this
                             }

    A function creates a new object from groups_and_classes like this (the function that creates this is called often):

        all_classes = {'class_A': [1, 2, 3, 11, 12, 13],
                       'class_B': [1, 3, 5, 7, 9],
                       'class_C': [1, 2, 5, 6, 7, 8, 9]
                      }

    Right now, there is a loop that does this:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                for v in vals:
                    if all_classes.has_key(c):
                        if v not in all_classes[c]:
                            all_classes[c].append(v)
                    else:
                        all_classes[c] = [v]

    So far, I changed the code to use a set instead of a list, since the order of the list doesn't matter and the values need to be unique:

        all_classes = {}
        for group in groups_and_classes.values():
            for c, vals in group.iteritems():
                try:
                    all_classes[c].update(set(vals))
                except KeyError:
                    all_classes[c] = set(vals)

    This is a little nicer, and I didn't have to convert the sets to lists because of how all_classes is used in the code.

    Question: Is there a more efficient way of creating all_classes (aside from building it at the same time groups_and_classes is built, and changing everywhere this function is called)?

    Read the article

  • Assistance with Lua functions

    - by Josh
    As noted before, I'm relatively new to Lua, but again, I learn quickly. The last time I got help here, it helped me immensely, and I was able to write a better script. Now I've come to another question that I think will make my life a bit easier. I have no clue what I'm doing with functions, but I'm hoping there is a way to do what I want to do here. Below, you'll see an example of the code I use to strip down some unneeded elements. Yeah, I realize it's not efficient in the least, so if anyone else has a better idea of how to make it much more efficient, I'm all ears. What I would like to do is create a function from it so that I can strip down whatever variable with a simple call (like stripdown(winds)). I appreciate any help that is offered, and any lessons given. Thanks!

        winds = string.gsub(winds,"%b<>","")
        winds = string.gsub(winds,"%c"," ")
        winds = string.gsub(winds," "," ")
        winds = string.gsub(winds," "," ")
        winds = string.gsub(winds,"^%s*(.-)%s*$", "%1)")
        winds = string.gsub(winds,"&nbsp;","")
        winds = string.gsub(winds,"/ ", "(")

    Josh

    Read the article

  • Reverse search in Hibernate Search

    - by Javi
    Hello, I'm using Hibernate Search (which uses Lucene) for searching some data I have indexed in a directory. It works fine, but I need to do a reverse search. By reverse search I mean that I have a list of queries stored in my database and I need to check which of these queries match a Data object each time a Data object is created. I need it to alert the user when a Data object matches a query he has created. So I need to index this single Data object which has just been created and see which queries in my list have this object as a result. I've seen Lucene's MemoryIndex class for creating an index in memory, so I could do something like this example for every query in the list (though iterating over a Java list of queries would not be very efficient):

        // Iterating over my List<Query>
        MemoryIndex index = new MemoryIndex();
        // Add all fields
        index.addField("myField", "myFieldData", analyzer);
        ...
        QueryParser parser = new QueryParser("myField", analyzer);
        float score = index.search(query);
        if (score > 0.0f) {
            System.out.println("it's a match");
        } else {
            System.out.println("no match found");
        }

    The problem here is that this Data class has several Hibernate Search annotations (@Field, @IndexedEmbedded, ...) which indicate how fields should be indexed, so when I invoke the index() method on the FullTextEntityManager instance it uses this information to index the object in the directory. Is there a similar way to index it in memory using this information? Is there a more efficient way of doing this reverse search? Thanks

    Read the article

  • Ideas Related to Subset Sum with 2,3 and more integers

    - by rolandbishop
    I've been struggling with this problem just like everyone else, and I'm quite sure there have been more than enough posts explaining it. However, in terms of understanding it fully, I wanted to share my thoughts and get more efficient solutions from all the great people in here related to the Subset Sum problem. I've searched over the Internet and there are actually a lot of sources, but I'm really willing to re-implement an algorithm or find my own in order to understand it fully. The key thing I'm struggling with is efficiency, considering that the set size will be large (I do not have a limit, just conceptually large). The two phases I'm trying to implement ideas on are finding two numbers that sum to a given integer T, then finding three numbers, and eventually K numbers. Some ideas I've thought about:

    For the two-integer part, I'm thinking of basically sorting the array, O(n log n), and for each element in the array searching for its negative value (i.e. if the array element is 3, searching for -3). Maybe including a hash table could be better, providing O(1) lookup of the element?

    For three or more integers I've found an amazing blog post: http://www.skorks.com/2011/02/algorithms-a-dropbox-challenge-and-dynamic-programming/. However, even the author states that it is not applicable for large numbers.

    So, for 2, 3 and more integers, what ideas could be applied to the subset sum problem? I'm struggling with setting up a dynamic programming method that will be efficient for large inputs as well.
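
    For the two-integer case there is a well-known linear-time approach along the lines sketched above: keep a hash set of the values seen so far and, for each element x, check whether T - x has already been seen. A minimal sketch, assuming the input fits in memory:

        import java.util.HashSet;
        import java.util.Set;

        public class TwoSum {
            // Returns true if any two elements at distinct positions of arr sum to target.
            static boolean hasPairWithSum(int[] arr, int target) {
                Set<Integer> seen = new HashSet<>();
                for (int x : arr) {
                    if (seen.contains(target - x)) {
                        return true;          // x + (target - x) == target
                    }
                    seen.add(x);
                }
                return false;
            }

            public static void main(String[] args) {
                System.out.println(hasPairWithSum(new int[] {3, -3, 7, 1}, 0));  // true (3 + -3)
            }
        }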

    Read the article

  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (Gas, in particular) and recently I learned about the align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and of when to use alignment. For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code:

        .section .rodata
                .align 4
                .align 4
        .L8:
                .long .L2
                .long .L3
                .long .L4
                .long .L5
        ...

    .align 4 aligns the following data on the next 4-byte boundary, which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align? Are there any rules of thumb for when to use .align, or should it simply be done whenever a new block of data is stored in memory and something prior to it could have caused misalignment? In the case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 bytes. Is it more efficient to do it this way, or is there another reason for the 32-byte boundary? I'd appreciate any explanation or hint on literature.

    Read the article

  • C++0x rvalue references and temporaries

    - by Doug
    (I asked a variation of this question on comp.std.c++ but didn't get an answer.) Why does the call to f(arg) in this code call the const ref overload of f?

        void f(const std::string &);  // less efficient
        void f(std::string &&);       // more efficient

        void g(const char * arg)
        {
            f(arg);
        }

    My intuition says that the f(string &&) overload should be chosen, because arg needs to be converted to a temporary no matter what, and the temporary matches the rvalue reference better than the lvalue reference. This is not what happens in GCC and MSVC. In at least G++ and MSVC, any lvalue does not bind to an rvalue reference argument, even if there is an intermediate temporary created. Indeed, if the const ref overload isn't present, the compilers diagnose an error. However, writing f(arg + 0) or f(std::string(arg)) does choose the rvalue reference overload as you would expect.

    From my reading of the C++0x standard, it seems like the implicit conversion of a const char * to a string should be considered when deciding whether f(string &&) is viable, just as when passing a const lvalue ref argument. Section 13.3 (overload resolution) doesn't differentiate between rvalue refs and const references in too many places. Also, it seems that the rule that prevents lvalues from binding to rvalue references (13.3.3.1.4/3) shouldn't apply if there's an intermediate temporary - after all, it's perfectly safe to move from the temporary. Is this:

    - Me misreading/misunderstanding the standard, where the implemented behavior is the intended behavior, and there's some good reason why my example should behave the way it does?
    - A mistake that the compiler vendors have somehow all made? Or a mistake based on common implementation strategies? Or a mistake in e.g. GCC (where this lvalue/rvalue reference binding rule was first implemented) that was copied by other vendors?
    - A defect in the standard, or an unintended consequence, or something that should be clarified?

    Read the article

  • Determining what action an NPC will take, when it is partially random but influenced by preferences?

    - by lala
    I want to make characters in a game perform actions that are partially random but also influenced by preferences. For instance, if a character feels angry they have a higher chance of yelling than telling a joke. So I'm thinking about how to determine which action the character will take. Here are the ideas that have come to me.

    Solution #1: Iterate over every possible action. For each action do a random roll, then add the preference value to that random number. The action with the highest value is the one the character takes.

    Solution #2: Assign a range of numbers to an action, with more likely actions having a wider range. So, if the random roll returns anywhere from 1-5, the character will tell a joke. If it returns 6-75, they will yell. And so on.

    Solution #3: Group all the actions and make a branching tree. Will they take a friendly action or a hostile action? The random roll (with preference values added) says hostile. Will they make a physical attack or verbal? The random roll says verbal. Keep going down the line until you reach the action.

    Solution #1 is the simplest, but hardly efficient. I think Solution #3 is a little more complicated, but isn't it more efficient? Does anyone have any more insight into this particular problem? Is #3 the best solution?
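
    Solution #2 is usually implemented with cumulative weights: draw one random number in [0, total) and walk the weight list until it runs out. A minimal sketch, with made-up actions and weights standing in for the preference values:

        import java.util.Random;

        public class ActionPicker {
            // Weights express preference; higher weight = more likely.
            static final String[] ACTIONS = {"tell joke", "yell", "sulk"};
            static final int[] WEIGHTS   = {5, 70, 25};
            static final Random RNG = new Random();

            // Picks an action with probability proportional to its weight.
            static String pickAction() {
                int total = 0;
                for (int w : WEIGHTS) total += w;
                int roll = RNG.nextInt(total);      // 0 .. total-1
                for (int i = 0; i < WEIGHTS.length; i++) {
                    roll -= WEIGHTS[i];
                    if (roll < 0) return ACTIONS[i];
                }
                return ACTIONS[ACTIONS.length - 1]; // not reached; keeps the compiler happy
            }

            public static void main(String[] args) {
                System.out.println(pickAction());
            }
        }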

    Read the article

  • Show users a list of unique items on Java Google App Engine

    - by James
    I've been going round in circles with what must be a very simple challenge, but I want to do it the most efficient way from the start. So, I've watched Brett Slatkin's Google IO videos (2008 & 2009) about building scalable apps, including http://www.youtube.com/watch?v=AgaL6NGpkB8, and read the docs, but as a n00b I'm still not sure. I'm trying to build an app on GAEJ similar to the original 'hotornot', where a user is presented with an item which they rate. Once they rate it, they are presented with another one which they haven't seen before. My question is this: is it more efficient to do a query up front to grab x items (say 100) and put them in a list (stored in memcache?), or is it better to simply make a query for a new item after each rating? To keep track of the items a user has seen, I'm planning to keep those items' keys in a list property of the user's entity. Does that sound sensible? I've really got myself confused about this, so any help would be much appreciated.
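
    Whichever way the items are fetched, the "seen keys in a list property" plan boils down to filtering a candidate batch against a set of seen keys. A minimal sketch of just that filtering step, deliberately independent of the datastore and memcache APIs (names are illustrative):

        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class NextItemPicker {
            // Picks the first candidate the user hasn't rated yet.
            // 'candidateKeys' would be the batch fetched up front (and possibly cached);
            // 'seenKeys' is the list property kept on the user entity, as described above.
            static String nextUnseen(List<String> candidateKeys, Set<String> seenKeys) {
                for (String key : candidateKeys) {
                    if (!seenKeys.contains(key)) {
                        return key;
                    }
                }
                return null; // batch exhausted: fetch another batch of candidates
            }

            public static void main(String[] args) {
                Set<String> seen = new HashSet<>();
                seen.add("item2");
                System.out.println(nextUnseen(Arrays.asList("item1", "item2", "item3"), seen)); // item1
            }
        }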

    Read the article

  • Help me to refactor this F# code to tail recursion

    - by Kev
    I'm writing some code to learn F#. Here is an example:

        let nextPrime list =
            let rec loop n =
                match n with
                | _ when (list |> List.filter (fun x -> x <= (n |> double |> sqrt |> int))
                               |> List.forall (fun x -> n % x <> 0)) -> n
                | _ -> loop (n+1)
            loop (List.max list + 1)

        let rec findPrimes num =
            match num with
            | 1 -> [2]
            | n ->
                let temp = findPrimes <| n-1
                (nextPrime temp) :: temp

        // find 10 primes
        findPrimes 10 |> printfn "%A"

    I'm very happy that it just works! I'm a total beginner at recursion, and recursion is a wonderful thing. I think findPrimes is not efficient, though. Could someone help me refactor findPrimes into tail recursion, if possible? BTW, is there a more efficient way to find the first n primes?

    Read the article

  • Byte-Pairing for data compression

    - by user1669533
    Question about byte-pairing for data compression. If byte pairing converts two byte values to a single byte value, splitting the file in half, then taking a gig file and recursing it 16 times shrinks it to 62,500,000. My question is, is byte-pairing really efficient? Is the creation of a 5,000,000-iteration loop, to be conservative, efficient? I would like some feedback and some incisive opinions, please.

    Dave, what I read was: "The US patent office no longer grants patents on perpetual motion machines, but has recently granted at least two patents on a mathematically impossible process: compression of truly random data." I was not inferring that the Patent Office was actually considering what I am inquiring about. I was merely commenting on the notion of a "mathematically impossible process." If someone has, in some way, created a method of having a "single" data byte act as a placeholder for 8 individual bytes of data, that would be a consideration for a patent. Now, about the mathematical impossibility of an 8-to-1 compression method: it is not so much a mathematical impossibility as a series of rules and conditions that can be created. As long as there is the rule of 8- or 16-bit representation for storing data on a medium, there are ways to manipulate data that mirror current methods, or creation by a new way of thinking.
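
    For reference, one pass of byte pairing replaces occurrences of a chosen two-byte pair with a single unused byte value, so it only shrinks the data to the extent that pairs actually repeat - which is why a fixed 8:1 ratio on truly random data is not achievable. A minimal sketch of a single replacement pass (the pair and the placeholder byte are chosen by the caller):

        import java.io.ByteArrayOutputStream;

        public class BytePairStep {
            // One pass of byte-pair encoding: replace every occurrence of the pair (a, b)
            // with the single placeholder byte 'sub'. A real implementation picks the most
            // frequent pair and a byte value that does not occur in the data.
            static byte[] replacePair(byte[] data, byte a, byte b, byte sub) {
                ByteArrayOutputStream out = new ByteArrayOutputStream(data.length);
                int i = 0;
                while (i < data.length) {
                    if (i + 1 < data.length && data[i] == a && data[i + 1] == b) {
                        out.write(sub);
                        i += 2;          // consumed the pair
                    } else {
                        out.write(data[i]);
                        i += 1;
                    }
                }
                return out.toByteArray();
            }
        }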

    Read the article

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files. I repeatedly remove the top line from each of these files. This pushes the limits of Java's heap. I would like a more scalable method of doing this, without losing speed because of a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it. But I am afraid that this will be significantly slower. So, using Java libraries, what is the most efficient method of doing this?

    --Edit-- For external sort, the usual method is to break a large file up into several chunk files. Sort each of the chunks. And then treat the sorted files like buffers: pop the top item from each file, and the smallest of all those is the global minimum. Then continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient. This is because using many BufferedReader objects takes up too much space.
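
    The usual way to keep only one line per chunk in memory is a k-way merge: wrap each BufferedReader together with its current top line and keep the wrappers in a PriorityQueue ordered by that line, so peeking and popping the global minimum is O(log k). A minimal sketch, assuming the chunk files are already sorted line by line (names are illustrative):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.List;
        import java.util.PriorityQueue;

        public class KWayMerge {
            // Wraps a reader and remembers the line currently at its "top".
            static final class Chunk {
                final BufferedReader reader;
                String top;
                Chunk(BufferedReader r) throws IOException { reader = r; top = r.readLine(); }
                boolean advance() throws IOException { top = reader.readLine(); return top != null; }
            }

            // Merges pre-sorted chunk files into 'out'; only one line per chunk is held in memory.
            static void merge(List<String> chunkFiles, PrintWriter out) throws IOException {
                PriorityQueue<Chunk> heap =
                        new PriorityQueue<Chunk>((x, y) -> x.top.compareTo(y.top));
                for (String f : chunkFiles) {
                    Chunk c = new Chunk(new BufferedReader(new FileReader(f)));
                    if (c.top != null) heap.add(c);
                }
                while (!heap.isEmpty()) {
                    Chunk c = heap.poll();          // chunk holding the global minimum
                    out.println(c.top);
                    if (c.advance()) heap.add(c);   // re-insert with its next line
                    else c.reader.close();
                }
            }
        }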

    Read the article

  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one), but I'm hoping there may be a better way than what I've tried so far in Terminal (on OS X):

        svn log --quiet | grep "^r" | awk '{print $3}'
        svn log --quiet --xml | grep author | sed -E "s:</?author>::g"

    Either of these will give me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors, there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It actually might be handy to infer the commit count for each author on occasion, but even in these cases it would be better if the aggregated data were sent across instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if necessary, I might be able to ask a special favor of the repository admin if strictly necessary or much more efficient. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.

    Read the article

  • C++: Efficiently adding integers to strings

    - by Shinka
    I know how to add integers to strings, but I'm not sure I'm doing it in an efficient manner. I have a class where I often have to return a string plus an integer (a different integer each time). In Java I would do something like:

        public class MyClass {
            final static String S = "MYSTRING";
            private int id = 0;

            public String getString() {
                return S + (id++);
            }
        }

    But in C++ I have to do:

        class MyClass {
        private:
            std::string S; // For some reason I can't do const std::string S = "MYSTRING";
            int id;
        public:
            MyClass() {
                S = "MYSTRING";
                id = 0;
            }
            std::string getString() {
                std::ostringstream oss;
                oss << S << id++;
                return oss.str();
            }
        };

    An additional constraint: I don't want (in fact, I can't) use Boost or any other libraries; I'll have to work with the standard library. So the thing is: the code works, but in C++ I have to create a bunch of ostringstream objects, so it seems inefficient. To be fair, perhaps Java does the same and I just don't notice it; I say it's inefficient mostly because I know very little about strings. Is there a more efficient way to do this?

    Read the article

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I was downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of are:

    - Have a List and a buffer; after the buffer is full, add it to the list, and at the end use list.ToArray() to get the byte[].
    - Write the buffer to a memory stream; after it's complete, construct a byte[] of the stream.Length and read it all into it in order to get the byte[] output.

    Is there a more efficient/better way of doing this?

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transaction_Entries and Transactions, defined below:

        Transactions
          - id
          - reference_number (varchar)

        Transaction_Entries
          - id
          - account_id
          - transaction_id (references Transactions table)

    Notes: There are multiple transaction entries per transaction. Some transactions are related, and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)   // in Java
        foreach R in UniqueReferenceNumbers                                     // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?

    Read the article

  • java: libraries for immutable functional-style data structures

    - by Jason S
    This is very similar to another question (Functional Data Structures in Java) but the answers there are not particularly useful. I need to use immutable versions of the standard Java collections (e.g. HashMap / TreeMap / ArrayList / LinkedList / HashSet / TreeSet). By "immutable" I mean immutable in the functional sense (e.g. purely functional data structures), where updating operations on the data structure do not change the original data, but instead return a new instance of the same kind of data structure. Also typically new and old instances of the data structure will share immutable data to be efficient in time and space. From what I can tell my options include:

    - Functional Java
    - Scala
    - Clojure

    but I'm not sure whether any of these are particularly appealing to me. I have a few requirements/desirements:

    - The collections in question should be usable directly in Java (with the appropriate libraries in the classpath). FJ would work for me; I'm not sure if I can use Scala's or Clojure's data structures in Java w/o having to use the compilers/interpreters from those languages and w/o having to write Scala or Clojure code.
    - Core operations on lists/maps/sets should be possible w/o having to create function objects with confusing syntaxes (FJ looks slightly iffy).
    - They should be efficient in time and space. I'm looking for a library which ideally has done some performance testing. FJ's TreeMap is based on a red-black tree; not sure how that rates.
    - Documentation / tutorials should be good enough so someone can get started quickly using the data structures. FJ fails on that front.

    Any suggestions?
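
    For what "persistent" means in practice: a prepend on a structure-sharing list returns a new head node whose tail is the existing list, so the old version stays valid and nothing is copied. A minimal sketch of such a list in plain Java, purely to illustrate the idea rather than to substitute for the libraries listed above:

        // A minimal persistent (structure-sharing) singly linked list.
        public final class ImmutableList<T> {
            private final T head;
            private final ImmutableList<T> tail;
            private final int size;

            private ImmutableList(T head, ImmutableList<T> tail, int size) {
                this.head = head;
                this.tail = tail;
                this.size = size;
            }

            public static <T> ImmutableList<T> empty() {
                return new ImmutableList<>(null, null, 0);
            }

            // O(1): the new list shares the entire existing list as its tail.
            public ImmutableList<T> prepend(T value) {
                return new ImmutableList<>(value, this, size + 1);
            }

            public T head() { return head; }
            public ImmutableList<T> tail() { return tail; }
            public int size() { return size; }
            public boolean isEmpty() { return size == 0; }
        }

    With this, b = a.prepend(2) leaves a completely untouched while sharing all of a's nodes.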

    Read the article

  • How to detect column conflicts with Hibernate?

    - by Slim
    So let's say I have an ArrayList full of Products that need to be committed to the database via Hibernate. There are already a large number of Products in the database. Each product has an ID. Note this is NOT the PK that is autogenerated by Hibernate. My question is: what is the best way to detect conflicts with this ID? I am looking for a relatively efficient method of obtaining, from the database, a list of Products that share an ID with any of the Products in my ArrayList. This is all in a single table called Products, and the ID attribute is in column ProductID. The way I've done it is grabbing a list of all Products in the database and comparing each one with each entry in my ArrayList - but that is seriously inefficient, and I don't think it would work well with a larger database. How should it be done? Thanks. I say "relatively" efficient because efficiency is not the primary concern, but it shouldn't take noticeably long to test against a table of ~1000-5000 rows. Help?

    EDIT: I'm very new to Hibernate and below is the best I've come up with. How does this look?

        for (long id : idList) {  // idList just holds the IDs of each Product in my ArrayList
            Query query = session.createQuery("select product from Product product where product.id = :id");
            query.setLong("id", id);
            for (int i = 0; i < query.list().size(); i++) {
                listOfConflictingProducts.add((Product) query.list().get(i));
            }
        }
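
    If the goal is one round trip instead of one query per ID, the loop above can collapse into a single HQL query with an IN clause. A sketch, assuming idList is a Collection<Long> and that product.id is the same mapped business-ID property used in the question's HQL (Product is the question's own entity):

        import java.util.Collection;
        import java.util.List;
        import org.hibernate.Query;
        import org.hibernate.Session;

        public class ProductConflicts {
            // Returns every Product whose business ID collides with one of ours, in a single query.
            @SuppressWarnings("unchecked")
            static List<Product> findConflicts(Session session, Collection<Long> idList) {
                Query query = session.createQuery(
                        "select product from Product product where product.id in (:ids)");
                query.setParameterList("ids", idList);
                return query.list();
            }
        }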

    Read the article

  • A more efficient millisecond conversion method?

    - by cube
    I am currently using this method to convert milliseconds to min:sec:1/10sec. However, it does not seem to be efficient at all. Would anyone know of a faster, more efficient and optimized way of accomplishing the same?

        mills.prototype.formatTime = function(time) {
            var elapsedTime = (time * 1000);

            // Minutes
            var elapsedM = (elapsedTime / 60000) | 0;
            var remaining = elapsedTime - (elapsedM * 60000);
            // add a leading zero if it's a single digit number
            if (elapsedM < 10) {
                elapsedM = "0" + elapsedM;
            }

            // Seconds
            var elapsedS = ((remaining / 1000) | 0);
            remaining -= (elapsedS * 1000);
            // add leading zero
            if (elapsedS < 10) {
                elapsedS = "0" + elapsedS;
            }

            // Hundredths
            var elapsedFractions = ((remaining / 10) | 0);
            if (elapsedFractions < 10) {
                elapsedFractions = "0" + elapsedFractions;
            }

            // display results nicely
            var time_data = elapsedM + ":" + elapsedS + ":" + elapsedFractions;
            //return time_data;
            return [time_data, elapsedM, elapsedS, elapsedFractions];
        };

    Read the article

  • Is there any class in the .NET Framework to represent a holding container for objects?

    - by Charles Prakash Dasari
    I am looking for a class that defines a holding structure for an object. The value of this object could be set at a later time than when the container is created. It is useful to pass such a structure into lambdas or callback functions, etc. Say:

        class HoldObject<T> {
            public T Value { get; set; }
            public bool IsValueSet();
            public void WaitUntilHasValue();
        }

        // and then we could use it like so ...
        HoldObject<byte[]> downloadedBytes = new HoldObject<byte[]>();
        DownloadBytes("http://www.stackoverflow.com", sender => downloadedBytes.Value = sender.GetBytes());

    It is rather easy to define this structure, but I am trying to see if one is available in the FCL. I also want it to be an efficient structure that has all the needed features like thread safety, efficient waiting, etc. Any help is greatly appreciated.

    Read the article
