Search Results

Search found 3659 results on 147 pages for 'sorted hash'.

Page 26/147

  • Convert array of hashes to array of structs?

    - by keruilin
    Let's say I have two objects: User and Race. User has two attributes: first_name and last_name. Race has three attributes: course, start_time and end_time. Now let's say I create an array of hashes like this: user_races = races.map { |race| { :user => race.user, :race => race } } How do I then convert user_races into an array of structs, keeping in mind that I want to be able to access the attributes of both user and race from the struct element?
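
    A minimal Ruby sketch of one way to do this, assuming the races collection and the User/Race attributes described above; the UserRace struct name is invented for illustration:

        # Hypothetical struct name; assumes each race responds to #user.
        UserRace = Struct.new(:user, :race)

        user_races = races.map { |race| UserRace.new(race.user, race) }

        # Attributes of both objects stay reachable through the struct members:
        user_races.first.user.first_name
        user_races.first.race.start_time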

    Read the article

  • Creating a short unique string for each unique long string

    - by king.net
    I'm trying to create a URL shortener in C# and ASP.NET MVC. I know about hash tables and I know how to build the redirect system; the problem is indexing long URLs in the database. Some URLs can be up to 4,000 characters long, and it seems like a bad idea to index strings of that size. The question is: how can I create a unique short string for each URL? For example, can MD5 help me? Is MD5 really unique for each string? NOTE: I see that Gravatar uses MD5 for emails, so if each email address is unique, then its MD5 hash is unique. Is that right? Can I use the same solution for URLs?
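
    Not an answer from the original thread, but a minimal C# sketch of hashing a URL with MD5. Note that MD5 digests are not mathematically guaranteed to be unique; collisions exist but are extremely unlikely for ordinary URL data. The class name is invented:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UrlHasher   // hypothetical helper class
        {
            public static string Md5Hex(string url)
            {
                using (var md5 = MD5.Create())
                {
                    // 128-bit digest rendered as 32 hex characters:
                    // fixed length, so it is cheap to index in the database.
                    byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(url));
                    return BitConverter.ToString(digest).Replace("-", string.Empty).ToLowerInvariant();
                }
            }
        }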

    Read the article

  • Using Multi-Probe LSH with LSHKIT

    - by Yijinsei
    Hi guys, I have read through the source code for mplsh, but I am still unsure how to use the indexes generated by lshkit to speed up comparing feature vectors by Euclidean distance. Do you have any experience with this?

    Read the article

  • MD5 file processing

    - by Ric Coles
    Good morning all, I'm working on an MD5 file integrity check tool in C#. How long should it take to compute the MD5 checksum of a file? For example, when I hash a 2 GB .mpg file it takes around 5 minutes or more each time, which seems overly long. Am I just being impatient? Below is the code I'm running: public string getHash(String @fileLocation) { FileStream fs = new FileStream(@fileLocation, FileMode.Open); HashAlgorithm alg = new HMACMD5(); byte[] hashValue = alg.ComputeHash(fs); string md5Result = ""; foreach (byte x in hashValue) { md5Result += x; } fs.Close(); return md5Result; } Any suggestions will be appreciated. Regards
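
    For comparison, a hedged sketch of how a plain MD5 checksum is usually computed in C# (not the poster's code): HMACMD5 is a keyed HMAC rather than a plain MD5 digest, and concatenating bytes without formatting drops leading zeros, so MD5.Create() with zero-padded hex output is closer to what a file-integrity tool normally wants. The class name is invented:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class ChecksumTool   // hypothetical wrapper
        {
            public static string GetMd5Hex(string fileLocation)
            {
                // Plain MD5 over a buffered stream, hex-formatted so leading
                // zeros and byte boundaries are preserved.
                using (var md5 = MD5.Create())
                using (var fs = new BufferedStream(File.OpenRead(fileLocation), 1 << 20))
                {
                    byte[] digest = md5.ComputeHash(fs);
                    return BitConverter.ToString(digest).Replace("-", string.Empty).ToLowerInvariant();
                }
            }
        }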

    Read the article

  • Java to JavaScript (Encryption related)

    - by balexandre
    Hi guys, I'm having difficulty getting the same string in JavaScript, and I think I'm doing something wrong... Java code: import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.Date; import java.util.GregorianCalendar; import sun.misc.BASE64Encoder; private static String getBase64Code(String input) throws UnsupportedEncodingException, NoSuchAlgorithmException { String base64 = ""; byte[] txt = input.getBytes("UTF8"); byte[] text = new byte[txt.length+3]; text[0] = (byte)239; text[1] = (byte)187; text[2] = (byte)191; for(int i=0; i<txt.length; i++) text[i+3] = txt[i]; MessageDigest md = MessageDigest.getInstance("MD5"); md.update(text); byte digest[] = md.digest(); BASE64Encoder encoder = new BASE64Encoder(); base64 = encoder.encode(digest); return base64; } I'm trying this using Paj's MD5 script as well as Farhadi's Base64 encode script, but my tests fail completely :( My code: function CalculateCredentialsSecret(type, user, pwd) { var days = days_between(new Date(), new Date(2000, 1, 1)); var str = type.toUpperCase() + user.toUpperCase() + pwd.toUpperCase() + days; var md5 = hex_md5(str); var b64 = base64Encode(md5); return encodeURIComponent(b64); } Does anyone know how I can convert this Java method into a JavaScript one? Thank you. Tests (for today, 3740 days after January 1st, 2000): var secret = CalculateCredentialsSecret('AAA', 'BBB', 'CCC'); // secret SHOULD be: S3GYAfGWlmrhuoNsIJF94w==
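
    A hedged sketch of the same digest in modern JavaScript (Node's built-in crypto module, not the Paj/Farhadi libraries from the question). The two details that usually break ports like this are the UTF-8 BOM bytes (0xEF 0xBB 0xBF) the Java code prepends, and base64-encoding the raw digest bytes rather than their hex string:

        const crypto = require('crypto');

        function getBase64Code(input) {
          // Prepend the UTF-8 BOM, as the Java version does, then MD5 and
          // base64-encode the raw digest bytes (not the hex string).
          const bom = Buffer.from([0xef, 0xbb, 0xbf]);
          const data = Buffer.concat([bom, Buffer.from(input, 'utf8')]);
          return crypto.createHash('md5').update(data).digest('base64');
        }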

    Read the article

  • CodeIgniter or PHP Amazon API help

    - by faya
    Hello, I have a problem searching the Amazon web service with PHP in my CodeIgniter application. I get an "InvalidParameter: timestamp is not in ISO-8601 format" response from the server, but I don't think the timestamp is the problem, because I have compared it with the date format produced by http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html and it seems fine. Could anyone help? Here is my code: $private_key = 'XXXXXXXXXXXXXXXX'; // Took out real secret key $method = "GET"; $host = "ecs.amazonaws.com"; $uri = "/onca/xml"; $timeStamp = gmdate("Y-m-d\TH:i:s.000\Z"); $timeStamp = str_replace(":", "%3A", $timeStamp); $params["AWSAccesskeyId"] = "XXXXXXXXXXXX"; // Took out real access key $params["ItemPage"] = $item_page; $params["Keywords"] = $keywords; $params["ResponseGroup"] = "Medium2%2525COffers"; $params["SearchIndex"] = "Books"; $params["Operation"] = "ItemSearch"; $params["Service"] = "AWSECommerceService"; $params["Timestamp"] = $timeStamp; $params["Version"] = "2009-03-31"; ksort($params); $canonicalized_query = array(); foreach ($params as $param=>$value) { $param = str_replace("%7E", "~", rawurlencode($param)); $value = str_replace("%7E", "~", rawurlencode($value)); $canonicalized_query[] = $param. "=". $value; } $canonicalized_query = implode("&", $canonicalized_query); $string_to_sign = $method."\n\r".$host."\n\r".$uri."\n\r".$canonicalized_query; $signature = base64_encode(hash_hmac("sha256",$string_to_sign, $private_key, True)); $signature = str_replace("%7E", "~", rawurlencode($signature)); $request = "http://".$host.$uri."?".$canonicalized_query."&Signature=".$signature; $response = @file_get_contents($request); if ($response === False) { return "response fail"; } else { $parsed_xml = simplexml_load_string($response); if ($parsed_xml === False) { return "parse fail"; } else { return $parsed_xml; } } P.S. - Personally I think something is wrong in the generation of the signature from the $string_to_sign when hashing it.
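
    For reference, a minimal PHP sketch of the Signature Version 2 string-to-sign that the Product Advertising API documents: the four parts are separated by plain "\n" (not "\n\r"), and the Timestamp is signed in its raw ISO-8601 form and only URL-encoded by the canonicalization step. The credentials and parameters below are placeholders, not the poster's values:

        <?php
        $secret = 'SECRET_KEY_PLACEHOLDER';
        $params = array(
            'AWSAccessKeyId' => 'ACCESS_KEY_PLACEHOLDER',
            'Operation'      => 'ItemSearch',
            'SearchIndex'    => 'Books',
            'Keywords'       => 'php',
            'Service'        => 'AWSECommerceService',
            'Timestamp'      => gmdate('Y-m-d\TH:i:s\Z'),
            'Version'        => '2009-03-31',
        );
        ksort($params);

        $pairs = array();
        foreach ($params as $k => $v) {
            $pairs[] = str_replace('%7E', '~', rawurlencode($k)) . '=' .
                       str_replace('%7E', '~', rawurlencode($v));
        }
        $query = implode('&', $pairs);

        // Exactly four newline-separated parts: method, host, path, query.
        $string_to_sign = "GET\necs.amazonaws.com\n/onca/xml\n" . $query;
        $signature = rawurlencode(base64_encode(
            hash_hmac('sha256', $string_to_sign, $secret, true)));

        $request = "http://ecs.amazonaws.com/onca/xml?{$query}&Signature={$signature}";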

    Read the article

  • How can I cleanly turn a nested Perl hash into a non-nested one?

    - by knorv
    Assume a nested hash structure %old_hash .. my %old_hash; $old_hash{"foo"}{"bar"}{"zonk"} = "hello"; .. which we want to "flatten" (sorry if that's the wrong terminology!) to a non-nested hash using the sub &flatten(...) so that .. my %h = &flatten(\%old_hash); die unless($h{"zonk"} eq "hello"); The following definition of &flatten(...) does the trick: sub flatten { my $hashref = shift; my %hash; my %i = %{$hashref}; foreach my $ii (keys(%i)) { my %j = %{$i{$ii}}; foreach my $jj (keys(%j)) { my %k = %{$j{$jj}}; foreach my $kk (keys(%k)) { my $value = $k{$kk}; $hash{$kk} = $value; } } } return %hash; } While the code given works it is not very readable or clean. My question is two-fold: In what ways does the given code not correspond to modern Perl best practices? Be harsh! :-) How would you clean it up?
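
    A hedged sketch of one recursive alternative, assuming (as the original code does) that leaf keys are unique across branches, so a depth-agnostic flatten is safe:

        use strict;
        use warnings;

        # Recursively copy leaf key/value pairs into one flat hash.
        sub flatten {
            my ($hashref) = @_;
            my %flat;
            for my $key (keys %$hashref) {
                my $value = $hashref->{$key};
                if (ref $value eq 'HASH') {
                    my %inner = flatten($value);
                    @flat{ keys %inner } = values %inner;
                }
                else {
                    $flat{$key} = $value;
                }
            }
            return %flat;
        }

        my %old_hash;
        $old_hash{"foo"}{"bar"}{"zonk"} = "hello";
        my %h = flatten(\%old_hash);
        die unless $h{"zonk"} eq "hello";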

    Read the article

  • How to build an index table (Python dict-like) in Python with sqlite3

    - by Registered User KC
    Suppose I have a string list that may contain duplicate items: A B C A A C D E F F. I want to build a list that assigns a unique index to each distinct item, like: 1 A 2 B 3 C 4 D 5 E 6 F. I created a sqlite3 database with the SQL statement below: CREATE TABLE aa ( myid INTEGER PRIMARY KEY AUTOINCREMENT, name STRING, UNIQUE (myid) ON CONFLICT FAIL, UNIQUE (name) ON CONFLICT FAIL); The plan is to insert each row into the database from Python. My question is: how do I handle the error when a conflict happens during an insert with Python's sqlite3 module? For example, how can the program print a warning saying which item conflicted and then continue with the next insert? Thanks
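
    A minimal Python sketch of one way to handle this, catching sqlite3.IntegrityError on the UNIQUE constraint and carrying on; the table and column names follow the question:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute(
            "CREATE TABLE aa ("
            " myid INTEGER PRIMARY KEY AUTOINCREMENT,"
            " name STRING UNIQUE ON CONFLICT FAIL)"
        )

        items = ["A", "B", "C", "A", "A", "C", "D", "E", "F", "F"]
        for name in items:
            try:
                conn.execute("INSERT INTO aa (name) VALUES (?)", (name,))
            except sqlite3.IntegrityError:
                # Duplicate name: warn and continue with the next insert.
                print("warning: %r already indexed, skipping" % name)
        conn.commit()

        for myid, name in conn.execute("SELECT myid, name FROM aa ORDER BY myid"):
            print(myid, name)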

    Read the article

  • Does it make sense to resize a Hash Table down? And when?

    - by Nazgulled
    Hi, my Hash Table implementation has a function that resizes the table when the load reaches about 70%. My Hash Table is implemented with separate chaining for collisions. Does it make sense to resize the hash table down at any point, or should I just leave it as it is? If I increase the size (to roughly double, following http://planetmath.org/encyclopedia/GoodHashTablePrimes.html) when the load hits 70%, should I resize it down when the load drops to 30% or below?
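
    The question does not name a language, so here is a hedged C sketch (types and thresholds invented for illustration) of the usual rule of thumb: grow above a high-water load factor, shrink below a much lower one, and keep the two thresholds far apart so a table hovering near one boundary does not resize back and forth:

        #include <stddef.h>

        #define LOAD_GROW   0.70   /* grow above this load factor   */
        #define LOAD_SHRINK 0.25   /* shrink below this load factor */
        #define MIN_BUCKETS 16     /* never shrink past this floor  */

        struct htable {
            size_t entries;   /* stored key/value pairs        */
            size_t buckets;   /* current number of chain heads */
        };

        /* Returns the bucket count the table should be rehashed to,
         * or 0 if no resize is needed. */
        static size_t target_buckets(const struct htable *t)
        {
            double load = (double)t->entries / (double)t->buckets;

            if (load > LOAD_GROW)
                return t->buckets * 2;
            if (load < LOAD_SHRINK && t->buckets > MIN_BUCKETS)
                return t->buckets / 2;
            return 0;
        }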

    Read the article

  • How to check for duplicate files?

    - by miorel
    I have an external hard drive on which I have backed up files several times. Some files were modified between backups, others were not. Some may have been renamed. Now I'm running out of space, and I'd like to clean up duplicate files. My idea was to md5sum every file on the drive, then look for duplicates, and diff the relevant files (just in case, haha). Is this the best way to do this? What are some other methods of checking for duplicate files?
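
    Not from the question, but a small Python sketch of the usual refinement: group files by size first (files with unique sizes cannot be duplicates), then hash only the remaining candidates:

        import hashlib
        import os
        import sys
        from collections import defaultdict

        def md5sum(path, chunk=1 << 20):
            h = hashlib.md5()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        # Pass the drive's mount point as the first argument.
        by_size = defaultdict(list)
        for root, _, names in os.walk(sys.argv[1]):
            for name in names:
                path = os.path.join(root, name)
                by_size[os.path.getsize(path)].append(path)

        for size, paths in by_size.items():
            if len(paths) < 2:
                continue                      # unique size, cannot be a duplicate
            by_hash = defaultdict(list)
            for path in paths:
                by_hash[md5sum(path)].append(path)
            for digest, dupes in by_hash.items():
                if len(dupes) > 1:
                    print(digest, dupes)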

    Read the article

  • Perl array of array of hashes sorting

    - by srk
    @aoaoh; $aoaoh[0][0]{21} = 31; $aoaoh[0][0]{22} = 31; $aoaoh[0][0]{23} = 17; for $k (0 .. $#aoaoh) { for $i (0 .. $#aoaoh) { for $val (keys %{$aoaoh[$i][$k]}) { print "$val=$aoaoh[$i][$k]{$val}"; print "\n"; } } } The output is 22=31 21=31 23=17, but I expect it to be 21=31 22=31 23=17. Please tell me where this is wrong. Also, how do I sort the values so that I get the output 23=17 22=31 21=31 (if two keys have the same value, the higher key comes first)?
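
    A hedged Perl sketch of the requested ordering: hash keys come back in arbitrary order, so they have to be sorted explicitly, here by value ascending with ties broken by the larger key first, as the expected output implies:

        use strict;
        use warnings;

        my @aoaoh;
        $aoaoh[0][0]{21} = 31;
        $aoaoh[0][0]{22} = 31;
        $aoaoh[0][0]{23} = 17;

        my $h = $aoaoh[0][0];

        # Sort by value ascending; break ties with the larger key first.
        for my $key (sort { $h->{$a} <=> $h->{$b} or $b <=> $a } keys %$h) {
            print "$key=$h->{$key}\n";
        }
        # Prints 23=17, then 22=31, then 21=31.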

    Read the article

  • Which SHA-256 is correct? The Java SHA-256 digest or the Linux command-line tool

    - by Peter Tillemans
    When I calculate the SHA-256 of a string in Java with the following method, I get: 5e884898da2847151d0e56f8dc6292773603dd6aabbdd62a11ef721d1542d8 On the command line I do: echo "password" | sha256sum and get 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8 If I compare these more closely I find two subtle differences: 5e884898da2847151d0e56f8dc6292773603dd6aabbdd62a11ef721d1542d8 versus 5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8, or: 5e884898da28 47151d0e56f8dc6292773603d d6aabbdd62a11ef721d1542d8 versus 5e884898da28 0 47151d0e56f8dc6292773603d 0 d6aabbdd62a11ef721d1542d8 Which of the two is correct here?
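
    The visible difference is two missing '0' characters, which is the classic leading-zero problem when digest bytes are formatted without zero-padding; also note that plain echo appends a trailing newline, so echo -n "password" | sha256sum is the command that hashes exactly the same bytes as the Java code. A hedged Java sketch of a padded conversion:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class Sha256Hex {
            static String sha256Hex(String input) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) {
                    // %02x keeps the leading zero that naive formatting drops.
                    sb.append(String.format("%02x", b));
                }
                return sb.toString();
            }

            public static void main(String[] args) throws NoSuchAlgorithmException {
                System.out.println(sha256Hex("password"));
            }
        }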

    Read the article

  • spl_object_hash for PHP < 5.2 (unique ID for object instances)

    - by Rowan
    I'm trying to get unique IDs for object instances in PHP 5+. The function, spl_object_hash() is available from PHP 5.2 but I'm wondering if there's a workaround for older versions. There are a couple of functions in the comments on php.net but they're not working for me. The first (simplified): function spl_object_hash($object){ if (is_object($object)){ return md5((string)$object); } return null; } does not work with native objects (such as DOMDocument), and the second: function spl_object_hash($object){ if (is_object($object)){ ob_start(); var_dump($object); $dump = ob_get_contents(); ob_end_clean(); if (preg_match('/^object\(([a-z0-9_]+)\)\#(\d)+/i', $dump, $match)) { return md5($match[1] . $match[2]); } } return null; } looks like it could be a major performance buster! Does anybody have anything up their sleeve?

    Read the article

  • Reducing Time Complexity in Java

    - by Koeneuze
    Right, this is from an older exam which I'm using to prepare for my own exam in January. We are given the following method: public static void Oorspronkelijk() { String bs = "Dit is een boodschap aan de wereld"; int max = -1; char let = '*'; for (int i=0;i<bs.length();i++) { int tel = 1; for (int j=i+1;j<bs.length();j++) { if (bs.charAt(j) == bs.charAt(i)) tel++; } if (tel > max) { max = tel; let = bs.charAt(i); } } System.out.println(max + " keer " + let); } The questions are: What is the output? Since the code is just an algorithm to determine the most frequently occurring character, the output is "6 keer " (6 times, for the space character). What is the time complexity of this code? Fairly sure it's O(n²), unless someone thinks otherwise? Can you reduce the time complexity, and if so, how? Well, you can. I've received some help already and managed to get the following code: public static void Nieuw() { String bs = "Dit is een boodschap aan de wereld"; HashMap<Character, Integer> letters = new HashMap<Character, Integer>(); char max = bs.charAt(0); for (int i=0;i<bs.length();i++) { char let = bs.charAt(i); if(!letters.containsKey(let)) { letters.put(let,0); } int tel = letters.get(let)+1; letters.put(let,tel); if(letters.get(max)<tel) { max = let; } } System.out.println(letters.get(max) + " keer " + max); } However, I'm uncertain about the time complexity of this new code: is it O(n) because there is only one for loop, or does using the HashMap's get method make it O(n log n)? And if someone knows an even better way of reducing the time complexity, please do tell! :)
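
    On the complexity question: HashMap.get and put run in expected constant time, so the single-pass version is expected O(n), not O(n log n). A hedged Java sketch that avoids hashing altogether by counting into a fixed-size array indexed by char code, which is also O(n):

        public class MostFrequentChar {
            public static void main(String[] args) {
                String bs = "Dit is een boodschap aan de wereld";
                int[] counts = new int[Character.MAX_VALUE + 1];
                char best = bs.charAt(0);
                for (int i = 0; i < bs.length(); i++) {
                    char c = bs.charAt(i);
                    counts[c]++;
                    if (counts[c] > counts[best]) {
                        best = c;
                    }
                }
                System.out.println(counts[best] + " keer " + best);
            }
        }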

    Read the article

  • How can I sort a Perl array of array of hashes?

    - by srk
    @aoaoh; $aoaoh[0][0]{21} = 31; $aoaoh[0][0]{22} = 31; $aoaoh[0][0]{23} = 17; for $k (0 .. $#aoaoh) { for $i (0 .. $#aoaoh) { for $val (keys %{$aoaoh[$i][$k]}) { print "$val=$aoaoh[$i][$k]{$val}\n"; } } } The output is: 22=31 21=31 23=17, but I expect it to be 21=31 22=31 23=17. Please tell me where this is wrong. Also, how do I sort the values so that I get the output 23=17 22=31 21=31 (if two keys have the same value, the higher key comes first)?

    Read the article

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination. DWORD hw_hash() { char drv[4]; char szNameBuffer[256]; DWORD dwHddUnique; DWORD dwProcessorUnique; DWORD dwUniqueKey; char *sysDrive = getenv ("SystemDrive"); strcpy(drv, sysDrive); drv[2] = '\\'; drv[3] = 0; GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique, NULL, NULL, NULL, NULL); SYSTEM_INFO si; GetSystemInfo(&si); dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture + si.wProcessorRevision; dwUniqueKey = dwProcessorUnique + dwHddUnique; return dwUniqueKey; } It returns different numbers if I format my hard disk and install a new Windows. Any ideas why? Thank you. Edit: OK, got it: GetVolumeInformation returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber. I should do more research before posting my problems online. Sorry to bother you; let's keep this here in case anybody else needs it.

    Read the article

  • Mapping words to numbers with respect to definition

    - by thornate
    As part of a larger project, I need to read in text and represent each word as a number. For example, if the program reads in "Every good boy deserves fruit", then I would get a table that converts 'every' to '1742', 'good' to '977513', etc. Now, obviously I can just use a hashing algorithm to get these numbers. However, it would be more useful if words with similar meanings had numerical values close to each other, so that 'good' becomes '6827' and 'great' becomes '6835', etc. As another option, instead of a simple integer representing each word, it would be even better to have a vector made up of multiple numbers, e.g. (lexical_category, tense, classification, specific_word), where lexical_category is noun/verb/adjective/etc., tense is future/past/present, classification defines a wide set of general topics, and specific_word is much the same as described in the previous paragraph. Does any such algorithm exist? If not, can you give me any tips on how to get started on developing one myself? I code in C++.

    Read the article

  • Java - Make an object collection friendly

    - by DutrowLLC
    If an object holds a unique primary key, what interfaces does it need to implement in order to be collection-friendly, especially in terms of being efficiently sortable, hashable, etc.? If the primary key is a string, how are these interfaces best implemented? Thanks!
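
    A hedged Java sketch of the usual trio, equals, hashCode and Comparable, all delegating to a String primary key; the class and field names are invented for illustration:

        public final class Customer implements Comparable<Customer> {
            private final String id;   // unique primary key

            public Customer(String id) {
                this.id = id;
            }

            @Override
            public boolean equals(Object o) {
                return o instanceof Customer && id.equals(((Customer) o).id);
            }

            @Override
            public int hashCode() {
                return id.hashCode();            // must stay consistent with equals
            }

            @Override
            public int compareTo(Customer other) {
                return id.compareTo(other.id);   // natural ordering for sorting and TreeMap/TreeSet
            }
        }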

    Read the article

  • C: getopt with a list of acceptable optarg values. What is the best practice?

    - by Xavier Maillard
    Hi, I am writing a C program which is a frontend to a myriad of tools. The frontend will be launched like this: my-frontend --action <AN ACTION>. All the tools share the same prefix; let's say for this example the prefix is "foo". I want to concatenate "AN ACTION" to this prefix and exec the result (if the tool exists). I have written something, but my implementation uses strcmp to test that "AN ACTION" is a valid action. Even though this works, I do not like it, so I am looking for a nicer solution that does the same thing. The list of possibilities is pretty small (fewer than 10) and static (the list is hardcoded), but I am sure there is a more C-ish way to do this (using a struct or something like that). As I am not a C expert, I am asking for your help. Regards
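
    A hedged C sketch of the table-driven pattern the question is reaching for (the action names are placeholders and the getopt handling is omitted); a loop over a static table still uses strcmp underneath, but the list of valid actions lives in one place:

        #include <stdio.h>
        #include <string.h>

        /* Hard-coded table of valid actions; names are placeholders. */
        static const char *const actions[] = {
            "build", "clean", "deploy", "status",
        };

        /* Returns the index of the action, or -1 if it is not in the table. */
        static int find_action(const char *name)
        {
            for (size_t i = 0; i < sizeof actions / sizeof actions[0]; i++) {
                if (strcmp(actions[i], name) == 0)
                    return (int)i;
            }
            return -1;
        }

        int main(int argc, char **argv)
        {
            const char *action = (argc > 1) ? argv[1] : "";
            int idx = find_action(action);
            if (idx < 0) {
                fprintf(stderr, "unknown action: %s\n", action);
                return 1;
            }
            printf("would exec: foo-%s\n", actions[idx]);
            return 0;
        }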

    Read the article

  • How to iterate in Ruby?

    - by Big Bang Theory
    Hi, I would like to iterate over @some_value, which contains the following: {"Meta"=>{"Query"=>"java", "ResultOffset"=>"1", "NumResults"=>"1", "TotalResults"=>"21931"}} I need to retrieve each individual value, for example: java 1 1 21931
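
    A minimal Ruby sketch, assuming @some_value really is the nested hash shown:

        @some_value = { "Meta" => { "Query" => "java", "ResultOffset" => "1",
                                    "NumResults" => "1", "TotalResults" => "21931" } }

        @some_value["Meta"].each do |key, value|
          puts value                                  # java, 1, 1, 21931
        end

        total = @some_value["Meta"]["TotalResults"]   # => "21931"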

    Read the article

  • Speed of QHash lookups using QStrings as keys.

    - by Ryan R.
    I need to draw a dynamic overlay on a QImage. The component parts of the overlay are defined in XML and parsed out to a QHash<QString, QPicture> where the QString is the name (such as "crosshairs") and the QPicture is the resolution independent drawing. I then draw components of the overlay as they are needed at a position determined during runtime. Example: I have 10 pictures in my QHash composing every possible element in a HUD. During a particular frame of video I need to draw 6 of them at different positions on the image. During the next frame something has changed and now I only need to draw 4 of them but 2 of those positions have changed. Now to my question: If I am trying to do this quickly, should I redefine my QHash as QHash<int, QPicture> and enumerate the keys to counteract the overhead caused by string comparisons; or are the comparisons not going to make a very big impact on performance? I can easily make the conversion to integer keys as the XML parser and overlay composer are completely separate classes; but I would like to use a consistent data structure across the application. Should I overcome my desire for consistency and re-usability in order to increase performance? Will it even matter very much if I do?
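
    A hedged C++/Qt sketch of the enum-keyed variant the question proposes; the element names are invented. With only about ten keys looked up per frame, the QString hashing cost is usually negligible next to the drawing itself, but integer keys remove it entirely:

        #include <QHash>
        #include <QImage>
        #include <QPainter>
        #include <QPicture>

        // Hypothetical overlay components, replacing QString keys.
        enum OverlayElement { Crosshairs = 0, Altitude, Heading, Battery };

        void drawFrame(QImage &frame, const QHash<int, QPicture> &overlays)
        {
            QPainter painter(&frame);
            // Integer keys hash and compare in constant time, with no
            // character-by-character work as QString keys require.
            painter.drawPicture(QPoint(10, 10), overlays.value(Crosshairs));
            painter.drawPicture(QPoint(200, 10), overlays.value(Altitude));
        }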

    Read the article
