Search Results

Search found 3659 results on 147 pages for 'sorted hash'.


  • How can I find the boundaries of a subset of a sorted list?

    - by Alex
    I have the following dilemma: I have a list of strings, and I want to find the set of strings that start with a certain prefix. The list is sorted, so the naive solution is this: perform binary search for the prefix, and when you find an element that starts with the prefix, traverse up linearly until you hit the top of the subset. This still runs in linear time in the worst case, however, and I was wondering if anyone can suggest a more efficient way to do it.
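
    Because the strings sharing a prefix occupy one contiguous slice of the sorted list, two binary searches can find its boundaries. A minimal sketch of that idea (my own illustration, not from the question), using Python's bisect: look up the lower bound of the prefix itself, then the lower bound of the smallest string that sorts after every prefixed string. Both lookups are O(log n).

        from bisect import bisect_left

        def prefix_range(sorted_strings, prefix):
            # first index whose entry could start with `prefix`
            lo = bisect_left(sorted_strings, prefix)
            # smallest string greater than every string starting with `prefix`:
            # bump the prefix's last character by one code point
            # (assumes that character is not already the maximum code point)
            upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
            hi = bisect_left(sorted_strings, upper)
            return lo, hi

        words = ["apple", "apply", "apt", "banana", "band", "bar"]
        lo, hi = prefix_range(words, "ap")
        print(words[lo:hi])  # ['apple', 'apply', 'apt']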

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails, and I'm trying to figure out how this is resolved in an actual implementation. Consider this example, where A is the receiver and B is the sender:

        A = abcde1234512345fghij
        B = abcde12345fghij

    As you can see, the only change is that 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

        abcde|12345|fghij

        abcde -> 495
        12345 -> 255
        fghij -> 520

        values = [495, 255, 520]

    Next we check to see if any hash values differ in A. If there's a matching block, we can skip to the end of that block for the next check. If there's a non-matching block, we've found a difference. I'll step through this process:

        1. Hash the first block.  Does 495 (abcde) exist in the values list? Yes, so skip.
        2. Hash the second block. Does 255 (12345) exist in the values list? Yes, so skip.
        3. Hash the third block.  Does 255 (12345) exist in the values list? Yes, so skip.
        4. Hash the fourth block. Does 520 (fghij) exist in the values list? Yes, so skip.

    No more data, so we're done. Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever two or more blocks share the same hash. What am I missing?
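
    What the walk-through above leaves out is that rsync's matching step does not produce a yes/no verdict: the sender emits a reconstruction recipe of block references and literal bytes, so two blocks sharing a hash are harmless, because either index reconstructs the same bytes. A toy Python sketch of that idea (my own illustration; real rsync also rolls the weak checksum byte-by-byte and confirms matches with a strong hash):

        def weak_hash(block):
            # the question's weak checksum is a byte sum (abcde -> 495, 12345 -> 255)
            return sum(block.encode())

        def make_delta(sender_data, block_size, receiver_hashes):
            delta, i = [], 0
            while i < len(sender_data):
                h = weak_hash(sender_data[i:i + block_size])
                if h in receiver_hashes:
                    delta.append(("COPY", receiver_hashes[h]))  # block index, not a verdict
                    i += block_size
                else:
                    delta.append(("LIT", sender_data[i]))       # real rsync rolls here
                    i += 1
            return delta

        A = "abcde1234512345fghij"                    # receiver's file, blocks 0..3
        blocks = [A[i:i + 5] for i in range(0, len(A), 5)]
        hashes = {weak_hash(b): idx for idx, b in enumerate(blocks)}
        delta = make_delta("abcde12345fghij", 5, hashes)
        print(delta)   # [('COPY', 0), ('COPY', 2), ('COPY', 3)]
        print("".join(blocks[i] for op, i in delta if op == "COPY"))  # abcde12345fghij

    The receiver rebuilds the sender's file from the recipe, so it ends up with 15 bytes, not 20: the second copy of 12345 in A is simply never referenced.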

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x:

        #include <algorithm>
        #include <ctime>
        #include <iostream>

        int main()
        {
            // generate data
            const unsigned arraySize = 32768;
            int data[arraySize];
            for (unsigned c = 0; c < arraySize; ++c)
                data[c] = std::rand() % 256;

            // !!! with this, the next loop runs faster
            std::sort(data, data + arraySize);

            // test
            clock_t start = clock();
            long long sum = 0;
            for (unsigned i = 0; i < 100000; ++i)
            {
                // primary loop
                for (unsigned c = 0; c < arraySize; ++c)
                {
                    if (data[c] >= 128)
                        sum += data[c];
                }
            }
            double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

            std::cout << elapsedTime << std::endl;
            std::cout << "sum = " << sum << std::endl;
        }

    Without std::sort(data, data + arraySize); the code runs in 11.54 seconds. With the sorted data, it runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly, so I tried it in Java:

        import java.util.Arrays;
        import java.util.Random;

        public class Main
        {
            public static void main(String[] args)
            {
                // generate data
                int arraySize = 32768;
                int data[] = new int[arraySize];
                Random rnd = new Random(0);
                for (int c = 0; c < arraySize; ++c)
                    data[c] = rnd.nextInt() % 256;

                // !!! with this, the next loop runs faster
                Arrays.sort(data);

                // test
                long start = System.nanoTime();
                long sum = 0;
                for (int i = 0; i < 100000; ++i)
                {
                    // primary loop
                    for (int c = 0; c < arraySize; ++c)
                    {
                        if (data[c] >= 128)
                            sum += data[c];
                    }
                }
                System.out.println((System.nanoTime() - start) / 1000000000.0);
                System.out.println("sum = " + sum);
            }
        }

    with a similar but less extreme result. My first thought was that sorting brings the data into the cache, but my next thought was how silly that is, because the array was just generated. What is going on? Why is a sorted array faster to process than an unsorted array? The code is summing up some independent terms, so the order should not matter.

    Read the article

  • Working with a large data object between Ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

        1. Create a Sinatra application to store this hash, with an API to modify/access it.
        2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.
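
    One direction the question itself hints at ("without needing to serialize") is to keep the grid out of the language's object graph entirely: since the values are just 0/1, a byte-per-cell file can be memory-mapped by every process, with no Marshal on either side. A hedged sketch of the idea in Python (the Ruby equivalent would be raw File I/O plus pack/unpack, or an mmap binding, which I have not verified here):

        import mmap

        ROWS, COLS = 501, 10_001      # grid shape from the question's example

        # one-time setup: a file with one byte per cell
        with open("grid.bin", "wb") as f:
            f.truncate(ROWS * COLS)

        # in any process: map the file and touch cells directly
        with open("grid.bin", "r+b") as f:
            grid = mmap.mmap(f.fileno(), 0)
            grid[3 * COLS + 42] = 1        # set cell (3, 42), no serialization
            print(grid[3 * COLS + 42])     # 1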

    Read the article

  • Issue with creating index organized table

    - by mtim
    I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard. I have a table src_table:

        SQL> desc src_table;
         Name            Null?    Type
         --------------- -------- ----------------------------
         ID              NOT NULL NUMBER(16)
         HASH            NOT NULL NUMBER(3)
         ........

        SQL> select count(*) from src_table;

          COUNT(*)
        ----------
          21108244

    Now let's create another table and copy 2 columns from src_table:

        SQL> set timing on
        SQL> create table dest_table(id number(16), hash number(20), type number(1));

        Table created.
        Elapsed: 00:00:00.01

        SQL> insert /*+ APPEND */ into dest_table (id,hash,type)
             select id, hash, 1 from src_table;

        21108244 rows created.
        Elapsed: 00:00:15.25

        SQL> ALTER TABLE dest_table ADD (CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));

        Table altered.
        Elapsed: 00:01:17.35

    It took Oracle less than 2 minutes. Now the same exercise, but with an IOT table:

        SQL> CREATE TABLE dest_table_iot (
               id   NUMBER(16) NOT NULL,
               hash NUMBER(20) NOT NULL,
               type NUMBER(1)  NOT NULL,
               CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
             ) ORGANIZATION INDEX;

        Table created.
        Elapsed: 00:00:00.03

        SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE)
             SELECT HASH, id, 1 FROM src_table;

    The insert into the IOT takes 18 hours! I have tried it on 2 different instances of Oracle, running on Windows and Linux, and got the same results. What is going on here? Why is it taking so long?

    Read the article

  • WPF data-bound ComboBox only shows first item of ItemsSource list

    - by Mark
    Hi all, I'm sure I'm doing something stupid, but for the life of me I can't think of what right now. I have a ComboBox that is data-bound to a list of Layout objects. The list is initially empty, but things are added over time. When the list is updated by the model the first time, this update reflects properly in the ComboBox. However, subsequent updates never show up in the ComboBox, even though I can see that the list itself contains these items. Since the first update works, I know the data-binding is OK, so what am I doing wrong here? Here's the XAML (abridged):

        <Grid HorizontalAlignment="Stretch">
            <ComboBox ItemsSource="{Binding Path=SavedLayouts, diagnostics:PresentationTraceSources.TraceLevel=High}"
                      DisplayMemberPath="Name"
                      SelectedValuePath="Name"
                      SelectedItem="{Binding LoadLayout}"
                      Height="25" Grid.Row="1" Grid.Column="0"></ComboBox>
        </Grid>

    And the related part of the model:

        public IList<Layout> SavedLayouts
        {
            get { return _layouts; }
        }

        public Layout SaveLayout( String data_ )
        {
            Layout theLayout = new Layout( SaveLayoutName );
            _layouts.Add( theLayout );
            try
            {
                return theLayout;
            }
            finally
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if( handler != null )
                {
                    handler( this, new PropertyChangedEventArgs( "SavedLayouts" ) );
                }
            }
        }

    And finally, the Layout class (abridged):

        public class Layout
        {
            public String Name { get; private set; }
        }

    In the output window, I can see the update occurring:

        System.Windows.Data Warning: 91 : BindingExpression (hash=64564967): Got PropertyChanged event from TickerzModel (hash=43624632)
        System.Windows.Data Warning: 97 : BindingExpression (hash=64564967): GetValue at level 0 from TickerzModel (hash=43624632) using RuntimePropertyInfo(SavedLayouts): List`1 (hash=16951421 Count=11)
        System.Windows.Data Warning: 76 : BindingExpression (hash=64564967): TransferValue - got raw value List`1 (hash=16951421 Count=11)
        System.Windows.Data Warning: 85 : BindingExpression (hash=64564967): TransferValue - using final value List`1 (hash=16951421 Count=11)

    But I do not get this 11th item in the ComboBox. Any ideas?

    Read the article

  • postfix with mailman

    - by Thufir
    What should happen is that [email protected] should be delivered to that user's inbox on localhost, user@localhost. Thunderbird works fine at reading user@localhost. I'm just using a small portion of postfix-dovecot with Ubuntu mailman. How can I get postfix to recognize the FQDN and deliver such mail to a localhost inbox?

        root@dur:~# tail /var/log/mail.err;tail /var/log/mailman/subscribe;postconf -n
        Aug 27 18:59:16 dur dovecot: lda(root): Error: chdir(/root) failed: Permission denied
        Aug 27 18:59:16 dur dovecot: lda(root): Error: user root: Initialization failed: Initializing mail storage from mail_location setting failed: stat(/root/Maildir) failed: Permission denied (euid=65534(nobody) egid=65534(nogroup) missing +x perm: /root, dir owned by 0:0 mode=0700)
        Aug 27 18:59:16 dur dovecot: lda(root): Fatal: Invalid user settings. Refer to server log for more information.
        Aug 27 20:09:16 dur postfix/trivial-rewrite[15896]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 21:19:17 dur postfix/trivial-rewrite[16569]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 22:27:00 dur postfix[17042]: fatal: usage: postfix [-c config_dir] [-Dv] command
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 22:59:07 dur postfix/postfix-script[17459]: error: unknown command: 'restart'
        Aug 27 22:59:07 dur postfix/postfix-script[17460]: fatal: usage: postfix start (or stop, reload, abort, flush, check, status, set-permissions, upgrade-configuration)
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 21:39:03 2012 (16734) cola: pending "[email protected]" <[email protected]> 127.0.0.1
        Aug 27 21:40:37 2012 (16749) cola: pending "[email protected]" <[email protected]> 127.0.0.1
        Aug 27 22:45:31 2012 (17288) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1
        Aug 27 22:45:46 2012 (17293) gmane.mail.mailman.user.1: pending [email protected] 127.0.0.1
        Aug 27 23:02:01 2012 (17588) test3: pending [email protected] 127.0.0.1
        Aug 27 23:05:41 2012 (17652) test4: pending [email protected] 127.0.0.1
        Aug 27 23:56:20 2012 (17985) test5: pending [email protected] 127.0.0.1
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport
        root@dur:~#

    There's definitely a transport problem:

        root@dur:~# grep transport /var/log/mail.log | tail
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: hash:/etc/postfix/transport lookup error for "[email protected]"
        Aug 27 22:29:19 dur postfix/trivial-rewrite[17062]: warning: transport_maps lookup failure
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: error: open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "*"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport is unavailable. open database /etc/postfix/transport.db: No such file or directory
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: hash:/etc/postfix/transport lookup error for "[email protected]"
        Aug 27 23:39:17 dur postfix/trivial-rewrite[17794]: warning: transport_maps lookup failure

    EDIT: trying to add the transport file:

        root@dur:~# touch /etc/postfix/transport
        root@dur:~# ll /etc/postfix/transport
        -rw-r--r-- 1 root root 0 Aug 28 00:16 /etc/postfix/transport
        root@dur:~# cd /etc/postfix/
        root@dur:/etc/postfix# postmap transport
        root@dur:/etc/postfix# cat transport

    Read the article

  • How to use YQL to merge 2 RSS feeds sorted by pubDate?

    - by jnman
    Seeing that YQL is being promoted as a good way to do things, I was curious as to how to use YQL to fetch and merge 2 different feeds into one (sorted by pubDate). It's pretty trivial to fetch 2 feeds, but it turns out that the feeds are just concatenated together and not merged. Here's the sample code:

        select channel.title, channel.link, channel.item.title, channel.item.link
        from xml where url in (
          'http://code.flickr.com/blog/feed/rss/',
          'http://feeds.delicious.com/v2/rss/codepo8?count=15',
          'http://www.stevesouders.com/blog/feed/rss',
          'http://www.yqlblog.net/blog/feed/',
          'http://www.quirksmode.org/blog/index.xml'
        )
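
    If YQL alone won't interleave the items, the classic fallback is a client-side merge: parse each feed's pubDate and merge the already-sorted item lists. A small sketch with Python's stdlib (the feed items here are hypothetical stand-ins for the fetched RSS):

        import heapq
        from email.utils import parsedate_to_datetime

        feed_a = [  # newest-first, as RSS feeds typically are
            {"title": "flickr code post", "pubDate": "Wed, 02 Jun 2010 12:00:00 +0000"},
            {"title": "older flickr post", "pubDate": "Mon, 31 May 2010 09:00:00 +0000"},
        ]
        feed_b = [
            {"title": "souders post", "pubDate": "Tue, 01 Jun 2010 18:30:00 +0000"},
        ]

        merged = heapq.merge(
            feed_a, feed_b,
            key=lambda item: parsedate_to_datetime(item["pubDate"]),
            reverse=True,   # inputs are newest-first, so the output is too
        )
        for item in merged:
            print(item["pubDate"], "-", item["title"])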

    Read the article

  • How do I get the sequence of numbers in a sorted-set that are between two integers in clojure?

    - by Greg Rogers
    Say I have a sorted-set of integers, xs, and I want to retrieve all the integers in xs that are in [x, y), i.e. between x (inclusive) and y (exclusive). I can do:

        (select #(and (>= % x) (< % y)) xs)

    But this is inefficient: O(n) when it could be O(k log n), where k is the number of integers returned. I am just learning Clojure, so here is how I would do it in C++:

        set<int>::iterator first = xs.lower_bound(x);
        set<int>::iterator last = xs.lower_bound(y);
        for (; first != last; ++first)
            // do something with *first

    Can I do this in Clojure?
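
    For what it's worth, the pattern the C++ code expresses (two lower-bound lookups bracketing the range) is the general answer in any language with sorted collections; in Clojure itself, subseq on a sorted-set, e.g. (subseq xs >= x < y), is, as far as I know, the idiomatic equivalent. Here is the same idea as a runnable Python sketch using the stdlib bisect module on a sorted list:

        from bisect import bisect_left

        def between(sorted_xs, x, y):
            # two binary searches bracket [x, y): O(log n) to locate, O(k) to copy
            lo = bisect_left(sorted_xs, x)   # like lower_bound(x)
            hi = bisect_left(sorted_xs, y)   # like lower_bound(y)
            return sorted_xs[lo:hi]

        print(between([1, 3, 4, 7, 9, 12], 3, 9))  # [3, 4, 7]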

    Read the article

  • Java List Sorting: Is there a way to keep a list permanently sorted automatically, like TreeMap?

    - by david
    In Java you can build up an ArrayList with items and then call:

        Collections.sort(list, comparator);

    Is there any way to pass in the Comparator at the time of List creation, like you can with TreeMap? The goal is to be able to add an element to the list and, instead of having it automatically appended to the end, have the list keep itself sorted based on the Comparator, inserting the new element at the index the Comparator determines. So basically the list might have to re-sort upon every new element added. Is there any way to achieve this with the Comparator or by some other similar means? Thanks.
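
    Not from the question, but for comparison: the usual trick when a sorted list (with duplicates allowed) really is required is to binary-search for the insertion point on every add; in Java that is Collections.binarySearch, whose negative return value encodes the insertion point as (-(insertion point) - 1). The same idea as a Python sketch:

        import bisect

        class SortedByKey:
            """Keeps its items sorted on every add (comparator fixed at creation)."""
            def __init__(self, key):
                self._key = key
                self._items = []

            def add(self, value):
                # binary search for the slot, then insert: O(log n) + O(n) shift
                bisect.insort(self._items, value, key=self._key)  # key= needs Python 3.10+

            def items(self):
                return list(self._items)

        s = SortedByKey(key=len)
        for word in ["ccc", "a", "bb"]:
            s.add(word)
        print(s.items())   # ['a', 'bb', 'ccc']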

    Read the article

  • How do I get the sequence of numbers in a sorted-set that are between two integers in clojure?

    - by Greg Rogers
    Say I have a sorted-set of integers, xs, and I want to retrieve all the integers in xs that are in [x, y), i.e. between x (inclusive) and y (exclusive). I can do:

        (select #(and (>= % x) (< % y)) xs)

    But this is inefficient: O(n) when it could be O(log n), as I expect the number of elements returned to be small. Using take-while and drop-while would let me exit once I've reached y, but I still can't jump to x efficiently. I am just learning Clojure, so here is how I would do it in C++:

        set<int>::iterator first = xs.lower_bound(x);
        set<int>::iterator last = xs.lower_bound(y);
        for (; first != last; ++first)
            // do something with *first

    Can I do this in Clojure?

    Read the article

  • Tracking unique versions of files with hashes

    - by rwmnau
    I'm going to be tracking different versions of potentially millions of different files, and my intent is to hash them to determine whether I've already seen that particular version of the file. Currently I'm only using MD5 (the product is still in development, so it has never dealt with millions of files yet), which is clearly not long enough to avoid collisions. However, here's my question: am I more likely to avoid collisions if I hash the file using two different methods and store both hashes (say, SHA1 and MD5), or if I pick a single, longer hash (like SHA256) and rely on that alone? I know option 1 has 288 hash bits and option 2 has only 256, but assume my two choices are the same total hash length. Since I'm dealing with potentially millions of files (and multiple versions of those files over time), I'd like to do what I can to avoid collisions. However, CPU time isn't (completely) free, so I'm interested in how the community feels about the tradeoff: is adding more bits to my hash proportionally more expensive to compute, and are there any advantages to multiple different hashes as opposed to a single, longer hash, given an equal number of bits in both solutions?
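
    A back-of-the-envelope check (my own, not from the question) that the bit count is what matters: by the birthday bound, the chance of any accidental collision among n items under a b-bit hash is roughly 1 - exp(-n(n-1)/2^(b+1)), and two independent hashes stored side by side behave, for accidental collisions, like a single hash of the combined width.

        import math

        def collision_probability(n_items, hash_bits):
            # birthday bound: p ~= 1 - exp(-n(n-1) / 2^(b+1))
            return -math.expm1(-n_items * (n_items - 1) / 2.0 ** (hash_bits + 1))

        for bits in (128, 160, 256, 288):   # MD5, SHA1, SHA256, MD5+SHA1
            print(bits, collision_probability(10_000_000, bits))

    Even at MD5's 128 bits, ten million files give a collision probability on the order of 10^-25, so for accidental collisions either choice is comfortable; the stronger case for SHA-256 is resistance to deliberately crafted collisions, which MD5 no longer offers.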

    Read the article

  • Is it an MD5 digest in this Python script?

    - by brilliant
    Hello, I am trying to understand this simple hashlib code in Python that was given to me the other day on Stack Overflow:

        import hashlib
        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition here")
        m.digest()
        '\xbbd\x9c\x83\xdd\x1e\xa5\xc9\xd9\xde\xc9\xa1\x8d\xf0\xff\xe9'
        m.digest_size
        16
        m.block_size
        64
        print m

    I thought that print m would show me the MD5 digest of the phrase "Nobody inspects the spammish repetition here", but as a result I got this line on my localhost:

        <md5 HASH object @ 01806220>

    Strangely, when I refreshed the page, I got another line:

        <md5 HASH object @ 018062E0>

    and every time I refresh it, I get another value:

        <md5 HASH object @ 017F8AE0>
        <md5 HASH object @ 01806220>
        <md5 HASH object @ 01806360>
        <md5 HASH object @ 01806400>
        <md5 HASH object @ 01806220>

    Why is it so? I guess what I have in each line following the "@" is not really a digest. Then what is it? And how can I display the MD5 digest here in this code? My Python version is 2.5, and the framework I am currently using is webapp (I downloaded it together with the SDK from Google App Engine).
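
    What prints after the "@" is just the hash object's memory address (its repr), not a digest; it changes because each request constructs a new object. To display the digest, ask the object for it, e.g. hexdigest() for a printable form. A minimal sketch matching the question's Python 2 environment:

        import hashlib

        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition here")
        print m.hexdigest()   # 32 hex chars, identical on every run
        # (On Python 3, update() needs bytes: m.update(b"Nobody inspects"))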

    Read the article

  • Help using Horner's rule and Hash Functions in JAVA?

    - by Matt
    I am trying to use Horner's rule to convert words to integers. I understand how it works and how, if the word is long, it may cause an overflow. My ultimate goal is to use the converted integer in a hash function, h(x) = x mod tableSize. My book suggests that, because of the overflow, you could "apply the mod operator after computing each parenthesized expression in Horner's rule." I don't exactly understand what they mean by this. Say the expression looks like this:

        ((14*32 + 15)*32 + 20)*32 + 5

    Do I take mod tableSize after each parenthesized expression and add them together? What would it look like with this hash function and this example of Horner's rule?
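
    The book's suggestion means: reduce mod tableSize after every multiply-and-add step, which is valid because (a*32 + b) mod m == ((a mod m)*32 + b) mod m. A small sketch (tableSize 101 is an arbitrary example; with letter values a=1..z=26, the quoted expression spells "note"):

        def horner_hash(word, table_size):
            h = 0
            for ch in word:
                v = ord(ch) - ord('a') + 1      # a=1, ..., z=26
                h = (h * 32 + v) % table_size   # mod at every step keeps h small
            return h

        # same residue as reducing the full expression once at the end:
        assert horner_hash("note", 101) == (((14*32 + 15)*32 + 20)*32 + 5) % 101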

    Read the article

  • How can I accept a hash mark in a URL via $_GET?

    - by bccarlso
    From what I have been able to understand, hash marks (#) aren't sent to the server, so it doesn't seem likely that I will be able to use raw PHP to parse data like in the URL below:

        index.php?name=Ben&address=101 S 10th St Suite #301

    I'm looking to pre-populate form fields with this $_GET data. How would I do this with JavaScript (or jQuery), and is there a fallback that wouldn't break my form for people not using JavaScript? Currently, if there is a hash (usually in the address field), everything after it is neither parsed into nor stored in $_GET.
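
    The underlying fix is to percent-encode the # (as %23) when the URL is built, so the browser sends it to the server at all; PHP's urlencode()/rawurlencode() and JavaScript's encodeURIComponent() both do this. The effect, illustrated with Python's stdlib:

        from urllib.parse import quote

        print(quote("101 S 10th St Suite #301"))
        # 101%20S%2010th%20St%20Suite%20%23301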

    Read the article

  • How can I generate a unique ID using a hash in Perl?

    - by sganesh
    I'm writing a message transfer program between multiple clients and a server. I want to generate a unique message ID for every message; it should be generated by the server and returned to the client. For message transfer I am using a hash data structure, e.g.:

        {
          api      => "POST",
          username => "sganesh",
          pass     => "pass",
          message  => "hai",
          time     => "current_time",
        }

    I want to generate a unique ID using this hash. I have tried some approaches, MD5 and freeze, but these give an unreadable ID. I want some meaningful or readable unique ID. I had thought we could use microseconds to differentiate the IDs, but the problem there is multiple clients. In any situation my ID should be unique. Can anyone help me out of this problem? Thanks in advance.
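
    One common recipe (an illustration, not from the question) is to have the server compose the ID from readable parts that are each cheap to obtain: a timestamp for ordering, plus the process ID and a per-process counter so that two messages arriving in the same second can never collide. Sketched in Python:

        import itertools
        import os
        import time

        _seq = itertools.count()

        def next_message_id(username):
            # e.g. "sganesh-20120828T101500-12345-7": readable, unique per host
            return "%s-%s-%d-%d" % (
                username,
                time.strftime("%Y%m%dT%H%M%S"),
                os.getpid(),
                next(_seq),
            )

        print(next_message_id("sganesh"))

    Since the server hands out every ID, the multiple-client problem disappears; if several server hosts are involved, add a hostname segment as well.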

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I noticed that it combines tmp_ids with an index on A I created for H_id using a nested loop. However, it hashes all of P before doing a hash join with the result of the first merge. P is quite large, and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.

    Read the article

  • Python: What's a correct and good way to implement __hash__()?

    - by random-name
    What's a correct and good way to implement __hash__()? I am talking about the function that returns a hash code that is then used to insert objects into hash tables, a.k.a. dictionaries. As __hash__() returns an integer and is used for "binning" objects into hash tables, I assume that the values of the returned integer should be uniformly distributed for common data (to minimize collisions). What's a good practice to get such values? Are collisions a problem? In my case I have a small class which acts as a container class holding some ints, some floats and a string.
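
    The usual recipe (standard Python practice, not specific to this question) is to hash exactly the fields that __eq__ compares, by delegating to the hash of a tuple; tuple hashing already mixes ints, floats and strings reasonably, and equal objects then hash equally, which is the property dictionaries require. Occasional collisions are fine: they only slow lookups, never break correctness.

        class Container(object):
            def __init__(self, a, b, name):
                self.a = a          # int
                self.b = b          # float
                self.name = name    # str

            def __eq__(self, other):
                if not isinstance(other, Container):
                    return NotImplemented
                return (self.a, self.b, self.name) == (other.a, other.b, other.name)

            def __hash__(self):
                # hash the same fields __eq__ uses
                return hash((self.a, self.b, self.name))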

    Read the article

  • Can I use a single pointer for my hash table in C?

    - by aks
    I want to implement a hash table in the following manner:

        struct list {
            char *string;
            struct list *next;
        };

        struct hash_table {
            int size;             /* the size of the table */
            struct list **table;  /* the table elements */
        };

    Instead of struct hash_table like above, can I use:

        struct hash_table {
            int size;            /* the size of the table */
            struct list *table;  /* the table elements */
        };

    That is, can I just use a single pointer instead of a double pointer for the hash table elements? If yes, please explain the difference in the way the elements will be stored in the table.

    Read the article

  • What Qt container class to use for a sorted list?

    - by Dave
    Part of my application involves rendering audio waveforms. The user will be able to zoom in/out of the waveform. Starting fully zoomed out, I only want to sample the audio at the intervals necessary to draw the waveform at the given resolution. Then, when they zoom in, asynchronously resample the "missing points" and provide a clearer waveform. (Think Google Maps.) I'm not sure of the best data structure to use in the Qt world. Ideally, I would like to store data samples sorted by time, but with the ability to fill in points as needed. So, for example, the data points might initially look like:

        data[0 ms]  = 10
        data[10 ms] = 32
        data[20 ms] = 21
        ...

    But when they zoom in, I would get more points as necessary, perhaps:

        data[0 ms]  = 10
        data[2 ms]  = 11
        data[4 ms]  = 18
        data[6 ms]  = 30
        data[10 ms] = 32
        data[20 ms] = 21
        ...

    Note that the values in brackets are lookup values (milliseconds), not array indices. In .NET I might have used a SortedList<int, int>. What would be the best class to use in Qt? Or should I use an STL container?

    Read the article

  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search: essentially, seek to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, then seek to half/double that byte offset and repeat until found. I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random-access reads in text files? Alternatively, I am not familiar with the new(ish) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
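
    On the Java side, FileChannel.map gives you a MappedByteBuffer, though one buffer covers at most 2 GB, so a 500 GB file has to be mapped in windows. The seek-and-backtrack binary search itself is the same in any language; here is a runnable Python sketch of the technique described above, over an mmap'd sorted file (my own illustration, not the asker's code):

        import mmap

        def first_line_with_prefix(path, prefix):
            """Binary-search a sorted, newline-delimited file for the first
            line >= prefix; return it if it actually starts with prefix."""
            with open(path, "rb") as f:
                mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                lo, hi = 0, len(mm)                  # lo is always a line start
                while lo < hi:
                    mid = (lo + hi) // 2
                    nl = mm.rfind(b"\n", lo, mid)    # backtrack to line start
                    start = lo if nl == -1 else nl + 1
                    end = mm.find(b"\n", start)
                    end = len(mm) if end == -1 else end
                    if mm[start:end] < prefix:
                        lo = end + 1                 # continue after this line
                    else:
                        hi = start
                end = mm.find(b"\n", lo)
                end = len(mm) if end == -1 else end
                line = mm[lo:end]
                return line if line.startswith(prefix) else None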

    Read the article

  • MySQL: fetch 10 posts, each w/ vote count, sorted by vote count, limited by WHERE clause on posts

    - by nibblebot
    I want to fetch a set of posts with vote count listed, sorted by vote count, e.g.:

        Post 1 - Post Body blah blah - Votes: 500
        Post 2 - Post Body blah blah - Votes: 400
        Post 3 - Post Body blah blah - Votes: 300
        Post 4 - Post Body blah blah - Votes: 200

    I have 2 tables:

        Posts - columns: id, body, is_hidden
        Votes - columns: id, post_id, vote_type_id

    Here is the query I've tried:

        SELECT p.*, v.yes_count
        FROM posts p
        LEFT JOIN (SELECT post_id, vote_type_id, COUNT(1) AS yes_count
                   FROM votes
                   WHERE (vote_type_id = 1)
                   GROUP BY post_id
                   ORDER BY yes_count DESC
                   LIMIT 0, 10) v ON v.post_id = p.id
        WHERE (p.is_hidden = 0)
        ORDER BY yes_count DESC
        LIMIT 0, 10

    Correctness: the above query almost works. The subselect includes votes for posts that have is_hidden = 1, so when I left join it to posts, if a hidden post is in the top 10 (ranked by votes), I can end up with records with NULL in the yes_count field.

    Performance: I have ~50k posts and ~500k votes. On my dev machine, the above query runs in 0.4 sec. I'd like to stay at or below this execution time.

    Indexes: I have an index on the Votes table that covers the fields vote_type_id and post_id.

    EXPLAIN:

        id  select_type  table       type  possible_keys  key         key_len  ref   rows    Extra
        1   PRIMARY      p           ALL   NULL           NULL        NULL     NULL  45985   Using where; Using temporary; Using filesort
        1   PRIMARY      <derived2>  ALL   NULL           NULL        NULL     NULL  10
        2   DERIVED      votes       ref   VotingPost     VotingPost  4              319881  Using where; Using index; Using temporary; Using filesort

    Read the article

  • Sorted array: how to get the positions before and after a record by name? (AS3)

    - by user1560239
    I have been working on a project and Stack Overflow has helped me with a few problems so far, so I am very thankful! My question is this: I have an array like this:

        var records:Object = {};
        var arr:Array = [
            records["nh"] = { medinc:66303, statename:"New Hampshire" },
            records["ct"] = { medinc:65958, statename:"Connecticut" },
            records["nj"] = { medinc:65173, statename:"New Jersey" },
            records["md"] = { medinc:64596, statename:"Maryland" },
            // etc... for all 50 states
        ];

    And then I have the array sorted reverse numerically (descending) like this:

        arr.sortOn("medinc", Array.NUMERIC);
        arr.reverse();

    Can I call the name of the record (i.e. "nj" for New Jersey) and then get the values at the positions above and below that record in the array? Basically, medinc is the median income of US states, and I am trying to show a ranking system: a user would click Texas, for example, and it would show the medinc value for Texas, along with the state that ranks one position below and the state that ranks one position above in the array. Thanks for your help!
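
    Since records and arr hold the same object references, one workable approach (sketched here in Python as an illustration of the technique, not AS3 itself) is to locate the record's index in the sorted array and then read its neighbors:

        records = {
            "nh": {"medinc": 66303, "statename": "New Hampshire"},
            "ct": {"medinc": 65958, "statename": "Connecticut"},
            "nj": {"medinc": 65173, "statename": "New Jersey"},
            "md": {"medinc": 64596, "statename": "Maryland"},
        }
        arr = sorted(records.values(), key=lambda r: r["medinc"], reverse=True)

        def ranking(code):
            rec = records[code]
            i = arr.index(rec)                          # position in the ranking
            above = arr[i - 1] if i > 0 else None       # one rank better
            below = arr[i + 1] if i + 1 < len(arr) else None
            return above, rec, below

        print(ranking("nj"))   # Connecticut above, Maryland below

    In AS3, the equivalent should be arr.indexOf(records["nj"]) followed by arr[i - 1] and arr[i + 1] with the same bounds checks, since indexOf compares by reference and the array stores the very same objects.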

    Read the article

  • Excel cell references not updating when referenced cells are sorted.

    - by Robert Kerr
    There are two tables, each with 75 entries. Each entry in the second table calls an entry in the first table its parent. One of my second-table columns contains the "Parent Price", referencing the Price column in the first table with a direct cell reference such as =B2:

        Table 1
        Id     Price
        1001   79.25
        1002   8.99
        1003   24.50

        Table 2
        Id     Price   Parent Price
        2001   50.00   =B2
        2002   2.81    =B3
        2003   12.00   =B4

    The problem is that when I sort the first table, none of the second table's "Parent Price" references are updated; each still points at the same fixed cell, which no longer holds the correct parent. I don't want to have to name the cells if possible. What style of formula do I enter in the Parent Price column so that the values properly track the rows of the referenced table?
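
    The usual spreadsheet answer (not from the question itself) is to reference the parent by Id rather than by position, so sorting cannot break the link. Assuming Table 2 gains a Parent Id column, here hypothetically in column D, the Parent Price cell can look the price up by key:

        =VLOOKUP(D2, Table1!$A:$B, 2, FALSE)

    where D2 holds the parent's Id and Table1!$A:$B covers the Id and Price columns; an INDEX/MATCH pair achieves the same thing.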

    Read the article
