Search Results

Search found 312 results on 13 pages for 'hashes'.

Page 3/13 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Comparing lists of field-hashes with equivalent AR-objects.

    - by Tim Snowhite
    I have a list of hashes, as such: incoming_links = [ {:title => 'blah1', :url => "http://blah.com/post/1"}, {:title => 'blah2', :url => "http://blah.com/post/2"}, {:title => 'blah3', :url => "http://blah.com/post/3"}] And an ActiveRecord model which has fields in the database with some matching rows, say: Link.all => [<Link#2 @title='blah2' @url='...post/2'>, <Link#3 @title='blah3' @url='...post/3'>, <Link#4 @title='blah4' @url='...post/4'>] I'd like to do set operations on Link.all with incoming_links so that I can figure out that <Link#4 ...> is not in the set of incoming_links, and {:title => 'blah1', :url =>'http://blah.com/post/1'} is not in the Link.all set, like so: #pseudocode #incoming_links = as above links = Link.all expired_links = links - incoming_links missing_links = incoming_links - links expired_links.destroy missing_links.each{|link| Link.create(link)} One route I've tried: I'd rather not rewrite Array#- and such, and I'm okay with converting incoming_links to a set of unsaved Link objects; so I've tried overriding hash, eql?, and so on in Link so that they ignore the id-based equality that AR::Base provides by default. But this is the only place this sort of equality should be considered in the application - in other places the Link#id default identity is required. Is there some way I could subclass Link and apply the hash, eql?, etc. overrides there? The other route I've tried is to pull out the attributes hash for each Link and do a .slice('id',...etc) to prune the hashes down. But this requires writing separate methods for keeping track of the Link objects while doing set operations on the hashes, or writing separate Collection classes to wrap the incoming_links hash-list and Link-list, which seems like overkill. What is the best way to design this interaction? Extra credit for cleanliness.
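
    For what it's worth, here is a rough sketch of the attribute-projection route in Ruby (not the asker's code; it assumes the Link model and incoming_links exactly as shown above): build a plain hash from each record's compared fields, so neither Link#hash nor #eql? has to change and the id stays out of the comparison.

        # A sketch of the comparison route: project each record onto the
        # fields being compared, then use ordinary Hash equality.
        links     = Link.all
        link_keys = links.map { |l| { :title => l.title, :url => l.url } }

        expired_links = links.zip(link_keys).
                          reject { |link, key| incoming_links.include?(key) }.
                          map { |link, _key| link }
        missing_links = incoming_links - link_keys

        expired_links.each(&:destroy)
        missing_links.each { |attrs| Link.create(attrs) }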

    Read the article

  • How does this Perl grep work to determine the union of several hashes?

    - by titaniumdecoy
    I don't understand the last line of this function from Programming Perl 3e. Here's how you might write a function that does a kind of set intersection by returning a list of keys occurring in all the hashes passed to it: @common = inter( \%foo, \%bar, \%joe ); sub inter { my %seen; for my $href (@_) { while (my $k = each %$href) { $seen{$k}++; } } return grep { $seen{$_} == @_ } keys %seen; } I understand that %seen is a hash which maps each key to the number of times it was encountered in any of the hashes provided to the function.
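
    The counting trick in that last line is language-independent: @_ in numeric context is the number of hashes passed in, so a key survives the grep exactly when it was seen once per hash. Here is the same idea sketched in Ruby (an illustration, not the book's code):

        # Count how many of the input hashes contain each key, then keep the
        # keys whose count equals the number of hashes passed in.
        def inter(*hashes)
          seen = Hash.new(0)                               # key => number of hashes it appears in
          hashes.each { |h| h.each_key { |k| seen[k] += 1 } }
          seen.keys.select { |k| seen[k] == hashes.size }  # keys present in every hash
        end

        foo = { :a => 1, :b => 2, :c => 3 }
        bar = { :b => 9, :c => 8 }
        joe = { :c => 7, :d => 6 }
        inter(foo, bar, joe)   # => [:c]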

    Read the article

  • What ASP.NET MVC Route controls the appearance of hashes in URIs?

    - by rasx
    I have integrated a Silverlight Navigation Application in an ASP.NET MVC web application. However, when Silverlight calls for its default page, say, IndexPage, ASP.NET MVC displays the route as: http://localhost/#/IndexPage I have tried to get ASP.NET MVC to respond to this route: http://localhost/#IndexPage but I am unable to find a configuration that works with this. Do ASP.NET MVC routes respond to hashes in general?

    Read the article

  • Why is the same input returning two different MD5 hashes?

    - by Rob
    Alright, I have two files. They are the EXACT SAME. The first file is: http://iadsonline.com/servconfig.php And the second file is: http://xzerox.info/servconfig.php However, when I use md5_file() to get their MD5, they return two different MD5s. The first returns cc7819055cde3194bb3b136bad5cf58d, which is incorrect, and the second returns 96a0cec80eb773687ca28840ecc67ca1, which is correct. The file is simply an &nbsp;. To verify, I've used this code: $contents = file_get_contents($URL); echo htmlentities($contents); They both return &nbsp;, so why is it hashing them differently?
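
    When two files that render identically hash differently, the usual culprit is an invisible byte-level difference (a BOM, a trailing newline, or CR/LF line endings) that htmlentities() will not reveal. A hedged Ruby sketch of a raw-byte comparison, reusing the two URLs from the question:

        require 'open-uri'

        # Fetch both files and compare their raw bytes; differences that are
        # invisible in a browser (BOM, trailing newline, CR/LF) show up here.
        a = URI.parse('http://iadsonline.com/servconfig.php').read
        b = URI.parse('http://xzerox.info/servconfig.php').read

        puts a.bytesize == b.bytesize ? 'same length' : "lengths differ: #{a.bytesize} vs #{b.bytesize}"
        puts a.unpack('H*').first    # hex dump of the first response body
        puts b.unpack('H*').first    # hex dump of the second response body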

    Read the article

  • How can I sort a Perl array of array of hashes?

    - by srk
    @aoaoh; $aoaoh[0][0]{21} = 31; $aoaoh[0][0]{22} = 31; $aoaoh[0][0]{23} = 17; for $k (0 .. $#aoaoh) { for $i(0.. $#aoaoh) { for $val (keys %{$aoaoh[$i][$k]}) { print "$val=$aoaoh[$i][$k]{$val}\n"; } } } The output is: 22=31 21=31 23=17 but I expect it to be 21=31 22=31 23=17 Please tell me where this is wrong. Also, how do I sort the values so that I get the output as 23=17 22=31 21=31 (if two keys have the same value, the larger key comes first)
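
    The first half of the output is expected behavior: hash keys come back in no guaranteed order, so the keys have to be sorted explicitly at print time. The ordering rule described above (ascending by value, larger key first on ties) is sketched below in Ruby purely as an illustration of the sort, not as Perl code:

        h = { 21 => 31, 22 => 31, 23 => 17 }

        # Ascending by key: 21=31, 22=31, 23=17
        h.keys.sort.each { |k| puts "#{k}=#{h[k]}" }

        # Ascending by value, larger key first when values tie: 23=17, 22=31, 21=31
        h.keys.sort_by { |k| [h[k], -k] }.each { |k| puts "#{k}=#{h[k]}" }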

    Read the article

  • Help me write my LISP :) LISP environments, Ruby Hashes...

    - by MikeC8
    I'm implementing a rudimentary version of LISP in Ruby just in order to familiarize myself with some concepts. I'm basing my implementation off of Peter Norvig's Lispy (http://norvig.com/lispy.html). There's something I'm missing here though, and I'd appreciate some help... He subclasses Python's dict as follows: class Env(dict): "An environment: a dict of {'var':val} pairs, with an outer Env." def __init__(self, parms=(), args=(), outer=None): self.update(zip(parms,args)) self.outer = outer def find(self, var): "Find the innermost Env where var appears." return self if var in self else self.outer.find(var) He then goes on to explain why he does this rather than just using a dict. However, for some reason, his explanation keeps passing in through my eyes and out through the back of my head. Why not use a dict, and then inside the eval function, when a new "sub-environment" needs to be created, just take the existing dict and update the key/value pairs that need to be updated, and pass that new dict into the next eval? Won't the Python interpreter keep track of the previous "outer" envs? And won't the nature of the recursion ensure that the values are pulled out from "inner" to "outer"? I'm using Ruby, and I tried to implement things this way. Something's not working though, and it might be because of this, or perhaps not. Here's my eval function, env being a regular Hash: def eval(x, env = $global_env) ........ elsif x[0] == "lambda" then ->(*args) { eval(x[2], env.merge(Hash[*x[1].zip(args).flatten(1)])) } ........ end The line that matters of course is the "lambda" one. If there is a difference, what's importantly different between what I'm doing here and what Norvig did with his Env class? If there's no difference, then perhaps someone can enlighten me as to why Norvig uses the Env class. Thanks :)
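
    For what it's worth, here is a hedged Ruby sketch of Norvig's Env idea (illustrative, not your implementation). The practical difference from env.merge(...): merge builds a fresh copy, so an assignment (a Lisp set!) made while evaluating a closure body lands in the copy and is invisible to the enclosing environment, whereas linked Env frames share structure and the outer frame sees the mutation. Lookup still falls back from inner to outer either way; it is mutation that behaves differently.

        # A rough Ruby translation of Norvig's Env (a sketch, not the asker's code).
        # Frames are linked rather than copied, so mutating a binding found via
        # find() is visible to every closure that shares that frame.
        class Env < Hash
          attr_reader :outer

          def initialize(params = [], args = [], outer = nil)
            update(Hash[params.zip(args)])
            @outer = outer
          end

          # Return the innermost frame in which var is bound.
          def find(var)
            key?(var) ? self : outer && outer.find(var)
          end
        end

        global = Env.new
        global[:x] = 1

        inner = Env.new([:y], [2], global)
        inner.find(:x)[:x] = 99   # what a Lisp set! would do
        global[:x]                # => 99 -- the outer frame saw the change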

    Read the article

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate "Scatter Plots". That said, the data is entered in the form of: g.data("Person1",[12,32,34,55,23],[323,43,23,43,22]) ...where the first item is the ENTITY, the second item is X-COORDs, and the third item is Y-COORDs. I currently have a recordset of items from a table with the columns: POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following: @h={} e = Events.find_by_sql(my_query) e.each do |event| @h["#{event.Point}"][x] = event.timestamp @h["#{event.Point}"][y] = event.value end Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep data for each pointname grouped (but remember the recordset has them all). Much appreciated. EDIT 1 g = Gruff::Scatter.new("600x350") g.title = self.name e = Event.find_by_sql(@sql) h ={} e.each do |event| h[event.Point.to_s] ||= {} h[event.Point.to_s].merge!({event.Timestamp.to_i,event.Value}) end h.each do |p| logger.info p[1].values.inspect g.data(p[0],p[1].keys,p[1].values) end g.write(@chart_file)
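
    A hedged sketch of the grouping step, reusing e and g from the snippet above (the capitalized accessors Point, Timestamp and Value follow the question; adjust them to the actual column names): give each point one bucket with parallel x and y arrays, then feed those straight to g.data.

        # Group the single recordset by point name; each point accumulates
        # parallel x (timestamp) and y (value) arrays for the scatter plot.
        series = {}

        e.each do |event|
          point = event.Point.to_s
          series[point] ||= { :x => [], :y => [] }
          series[point][:x] << event.Timestamp.to_i
          series[point][:y] << event.Value
        end

        series.each do |name, data|
          g.data(name, data[:x], data[:y])
        end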

    Read the article

  • Program to change/obfuscate all hashes (MD5/SHA1) in a directory tree?

    - by anon
    Hi fellas, A) Are there any FOSS programs out there that can change the hashes of all files in a directory tree? B) Failing that, what methods could be used to develop this capability in a (crappy) self-written program without requiring the program to be sophisticated and content-aware? Is there any (roughly) universally safe location within a file (for example, around EOF?) where one could simply append pseudorandom data so the resulting hash is different? Is there a better/more elegant solution? Muchos gracias
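
    A hedged Ruby sketch of option B (run it only on copies you can afford to alter): appending a few pseudorandom bytes at EOF changes the MD5/SHA1 of every regular file under a tree. Note that "safe" is format-dependent; many formats tolerate trailing junk, but strict parsers may not.

        require 'find'
        require 'securerandom'

        # Append a few pseudorandom bytes to every regular file under a tree,
        # which changes each file's MD5/SHA1 digest. Formats that do not
        # tolerate trailing junk may be corrupted by this.
        root = ARGV[0] || '.'

        Find.find(root) do |path|
          next unless File.file?(path)
          File.open(path, 'ab') { |f| f.write(SecureRandom.random_bytes(8)) }
        end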

    Read the article

  • How can I merge several hashes into one hash in Perl?

    - by Nick
    In Perl, how do I get this: $VAR1 = { '999' => { '998' => [ '908', '906', '0', '998', '907' ] } }; $VAR1 = { '999' => { '991' => [ '913', '920', '918', '998', '916', '919', '917', '915', '912', '914' ] } }; $VAR1 = { '999' => { '996' => [] } }; $VAR1 = { '999' => { '995' => [] } }; $VAR1 = { '999' => { '994' => [] } }; $VAR1 = { '999' => { '993' => [] } }; $VAR1 = { '999' => { '997' => [ '986', '987', '990', '984', '989', '988' ] } }; $VAR1 = { '995' => { '101' => [] } }; $VAR1 = { '995' => { '102' => [] } }; $VAR1 = { '995' => { '103' => [] } }; $VAR1 = { '995' => { '104' => [] } }; $VAR1 = { '995' => { '105' => [] } }; $VAR1 = { '995' => { '106' => [] } }; $VAR1 = { '995' => { '107' => [] } }; $VAR1 = { '994' => { '910' => [] } }; $VAR1 = { '993' => { '909' => [] } }; $VAR1 = { '993' => { '904' => [] } }; $VAR1 = { '994' => { '985' => [] } }; $VAR1 = { '994' => { '983' => [] } }; $VAR1 = { '993' => { '902' => [] } }; $VAR1 = { '999' => { '992' => [ '905' ] } }; to this: $VAR1 = { '999:' => [ { '992' => [ '905' ] }, { '993' => [ { '909' => [] }, { '904' => [] }, { '902' => [] } ] }, { '994' => [ { '910' => [] }, { '985' => [] }, { '983' => [] } ] }, { '995' => [ { '101' => [] }, { '102' => [] }, { '103' => [] }, { '104' => [] }, { '105' => [] }, { '106' => [] }, { '107' => [] } ] }, { '996' => [] }, { '997' => [ '986', '987', '990', '984', '989', '988' ] }, { '998' => [ '908', '906', '0', '998', '907' ] }, { '991' => [ '913', '920', '918', '998', '916', '919', '917', '915', '912', '914' ] } ]};

    Read the article

  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example: import csv hashes = [] for row in csv.reader(open('old.csv','rb')): hashes.append( hash(str(row)) ) for row in csv.reader(open('new.csv','rb')): if hash(str(row)) not in hashes: print 'Not found' But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, and thus I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
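
    For what it's worth, one weak point of the approach is that hash(str(row)) is a small, non-cryptographic value and str(row) is sensitive to incidental formatting, so it makes a fragile row fingerprint. The usual alternative is a real digest (e.g. SHA-1) over a canonical form of each row, kept in a set; that is still only about 20 bytes per row. The same idea sketched in Ruby (an illustration of the technique, not a fix for the Python above):

        require 'csv'
        require 'digest/sha1'
        require 'set'

        # Fingerprint each row with a cryptographic digest of a canonical form
        # (fields joined on an unlikely delimiter) and keep the digests in a Set.
        old_digests = Set.new
        CSV.foreach('old.csv') do |row|
          old_digests << Digest::SHA1.digest(row.join("\x1f"))
        end

        CSV.foreach('new.csv') do |row|
          puts "Not found: #{row.inspect}" unless old_digests.include?(Digest::SHA1.digest(row.join("\x1f")))
        end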

    Read the article

  • joining text files with 600M+ lines

    - by dnkb
    I have two files, huge.txt and small.txt. Huge has around 600M rows and is 14 GB; each line has four space-separated words (tokens) and finally another space-separated column with a number. Small has 150K rows with a size of ~3M, a space-separated word and a number. Both files are sorted using the sort command, with no extra options. The words in both files may include apostrophes (') and dashes (-). The desired output would contain all columns from huge.txt and the second column (the number) from small.txt where the first word of huge.txt and the first word of small.txt match. My attempts below failed miserably with the following error: cat huge.txt|join -o 1.1 1.2 1.3 1.4 2.2 - small.txt > output.txt join: memory exhausted What I suspect is that the sorting order isn't right somehow even though the files are pre-sorted using: sort -k1 huge.unsorted.txt > huge.txt sort -k1 small.unsorted.txt > small.txt Problems seem to appear around words that have apostrophes (') or dashes (-). I also tried dictionary sorting using the -d option, bumping into the same error at the end. I see two ways out of this but don't know how to implement either of them. 1) Any tips on how to sort the files in a way that the join command considers them to be sorted properly? 2) I was thinking of calculating MD5 or some other hashes of the strings to get rid of the apostrophes and dashes, doing the sorting and joining with the hashes instead of the strings themselves, and at the end "translating" the hashes back to strings, but leaving the numbers intact at the end of the lines. Since there would be only 150K hashes it's not that bad. What would be a good way to calculate individual hashes for each of the strings? Some AWK magic? See file samples at the end. Thank you! sample of huge.txt had stirred me to 46 had stirred my corruption 57 had stirred old emotions 55 had stirred something in 69 had stirred something within 40 sample of small.txt caley 114881 calf 2757974 calfed 137861 calfee 71143 calflora 154624 calfskin 148347 calgary 9416465 calgon's 94846
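
    If you do go the hash-key route (option 2), no AWK magic is strictly required. A hedged Ruby sketch (file names taken from the question) that prepends a fixed-width MD5 of the first word as a new first column, which sidesteps the apostrophe/dash collation issues when you later sort and join on column 1:

        require 'digest/md5'

        # Prepend an MD5 of each line's first word; the hex digest is
        # fixed-width ASCII, so sort/join see a clean key with no
        # apostrophes or dashes. The original line is kept intact after it.
        ['huge.txt', 'small.txt'].each do |name|
          File.open("keyed_#{name}", 'w') do |out|
            File.foreach(name) do |line|
              first_word = line.split(' ', 2).first
              out.puts "#{Digest::MD5.hexdigest(first_word)} #{line.chomp}"
            end
          end
        end

        # Then, outside Ruby, sort and join on the new first column, e.g.:
        #   LC_ALL=C sort -k1,1 keyed_huge.txt  > keyed_huge.sorted.txt
        #   LC_ALL=C sort -k1,1 keyed_small.txt > keyed_small.sorted.txt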

    Read the article

  • Scripting out Contained Database Users

    - by Argenis
      Today’s blog post comes from a Twitter thread on which @SQLSoldier, @sqlstudent144 and @SQLTaiob were discussing the internals of contained database users. Unless you have been living under a rock, you’ve heard about the concept of contained users within a SQL Server database (hit the link if you have not). In this article I’d like to show you that you can, indeed, script out contained database users and recreate them on another database, as either contained users or as good old-fashioned logins/server principals. Why would this be useful? Well, because you would not need to know the password for the user in order to recreate it on another instance. I know there is a limited number of scenarios where this would be necessary, but nonetheless I figured I’d throw this blog post together to show how it can be done. A more obscure use case: with the password hash (which I’m about to show you how to obtain) you could also crack the password using a utility like hashcat, as highlighted on this SQLServerCentral article. The Investigation: SQL Server uses System Base Tables to save the password hashes of logins and contained database users. For logins it uses sys.sysxlgns, whereas for contained database users it leverages sys.sysowners. I’ll show you what I do to figure this stuff out: I create a login/contained user, and then I immediately browse the transaction log with, for example, fn_dblog. It’s pretty obvious that the only two base tables touched by the operation are sys.sysxlgns and sys.sysprivs – the latter is used to track permissions. If I connect to the DAC on my instance, I can query for the password hash of this login I’ve just created. A few interesting things about this hash: this was taken on my laptop, and I happen to be running SQL Server 2014 RTM CU2, which is the latest public build of SQL Server 2014 as of the time of writing. In 2008 R2 and prior versions (back to 2000), the password hashes would start with 0x0100. This changed because, starting with SQL Server 2012, password hashes are kept using a SHA-512 algorithm, as opposed to SHA-1 (used since 2000) or Snefru (used in 6.5 and 7.0). SHA-1 is nowadays deemed unsafe and is very easy to crack. For regular SQL logins, this information is exposed through the sys.sql_logins catalog view, so there is really no need to connect to the DAC to grab an SID/password hash pair. For contained database users, there is (currently) no method of obtaining SID or password hashes without connecting to the DAC. If we create a contained database user, this is what we get from the transaction log: Note that the System Base Table used in this case is sys.sysowners. sys.sysprivs is used as well, and again this is to track permissions. To query sys.sysowners, you would have to connect to the DAC, as I mentioned previously. And this is what you would get: There are other ways to figure out what SQL Server uses under the hood to store contained database user password hashes, like looking at the execution plan for a query to sys.dm_db_uncontained_entities (Thanks, Robert Davis!) SIDs, Logins, Contained Users, and Why You Care…Or Not: One of the reasons behind the existence of Contained Users was the concept of portability of databases: it is really painful to maintain Server Principals (Logins) synced across most shared-nothing SQL Server HA/DR technologies (Mirroring, Availability Groups, and Log Shipping).
Oftentimes you would need the Security Identifier (SID) of these logins to match across instances, and that meant that you had to fetch whatever SID was assigned to the login on the principal instance so you could recreate it on a secondary. With contained users you normally wouldn’t care about SIDs, as the users are always available (and synced, as long as synchronization takes place) across instances. Now you might be presented with a requirement that SIDs be synced between logins on certain instances and contained database users on other databases. How would you go about creating a contained database user with a specific SID? The answer is that you can’t do it directly, but there’s a little trick that would allow you to do it. Create a login with a specified SID and password hash, create a user for that server principal on a partially contained database, then migrate that user to contained using the system stored procedure sp_migrate_user_to_contained, then drop the login. CREATE LOGIN <login_name> WITH PASSWORD = <password_hash> HASHED, SID = <sid> ; GO USE <partially_contained_db>; GO CREATE USER <user_name> FROM LOGIN <login_name>; GO EXEC sp_migrate_user_to_contained @username = <user_name>, @rename = N'keep_name', @disablelogin = N'disable_login'; GO DROP LOGIN <login_name>; GO Here’s how this skeleton looks in action: And now I have a contained user with a specified SID and password hash. In my example above, I renamed the user after migrating it to contained so that it is, hopefully, easier to understand. Enjoy!

    Read the article

  • How can I implement the Gale-Shapley stable marriage algorithm in Perl?

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score for each woman, and each woman for each man. Each of the men and women has certain interests, and based on those interests we calculate the preference scores. So initially we have an input file with x columns. The first column is the person (man/woman) id; the ids are simply the numbers 0..n (the first half are men and the second half women). The remaining x-1 columns hold the interests, which are also integers. Now, using this n by x-1 matrix, we have come up with an n by n/2 matrix. The new matrix has all men and women as its rows and scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know the id of the person each score belongs to after sorting. So here I wanted to use a hash table. Once we get the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second matrix of n by n/2, which needs to say how much preference each man/woman has for each woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on for each man/woman. I hope to get good suggestions on the data structures to use. I prefer PHP or Perl. Thank you in advance. This is not homework; it is a slightly modified version of the stable marriage algorithm. I have a working solution and am only working on optimizing my code. More info: It is very similar to the stable marriage problem, but here we need to calculate the scores based on the interests they share. So I have implemented it the way you see on the wiki page http://en.wikipedia.org/wiki/Stable_marriage_problem. My problem is not solving the problem; I solved it and can run it. I am just trying to find a better solution, so I am asking for suggestions on the type of data structure to use. Conceptually I tried using an array of hashes, where the array index gives the person id and the hash in it maps ids => scores in sorted order. I initially start with an array of hashes; I then sort the hashes on values, but I could not store the sorted hashes back in an array, so I just stored the keys after sorting and used those to get the values from my initial unsorted hashes. Can we store the hashes after sorting? Can you suggest a better structure?
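
    On the data-structure question specifically: hashes themselves have no order to keep, so rather than trying to "store sorted hashes", a common pattern is to keep the score hash for constant-time lookups and derive a separate ordered preference list per person. A hedged Ruby sketch with invented ids and scores (the same shape works in Perl or PHP with a hash plus an array):

        # scores[person_id] maps candidate_id => preference score.
        scores = {
          0 => { 2 => 31, 3 => 17 },   # person 0's scores for candidates 2 and 3
          1 => { 2 => 12, 3 => 45 }
        }

        # Keep the hash for lookups; derive an ordered preference list per
        # person instead of trying to sort the hash itself.
        pref_order = {}
        scores.each do |person, prefs|
          pref_order[person] = prefs.keys.sort_by { |cand| -prefs[cand] }
        end

        pref_order[0]                      # => [2, 3]  (candidate 2 preferred)
        scores[0][pref_order[0].first]     # => 31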

    Read the article

  • accepts_nested_attributes_for and nested_form plugin

    - by Denis
    Hi folks, I have the following code in a _form.html.haml partial; it's used for both the new and edit actions. (FYI, I use Ryan Bates' nested_form plugin.) .fields - f.fields_for :transportations do |builder| = builder.collection_select :person_id, @people, :id, :name, {:multiple => true} = builder.link_to_remove 'effacer' = f.link_to_add "ajouter", :transportations It works fine for the new action... For the edit action, as explained in the doc, I have to add the :id of already existing associations, so I have to add something like = builder.hidden_field :id, ?the value? if ?.new_record? How can I get the value? Here is the doc of accepts_nested_attributes_for for reference (source: http://github.com/rails/rails/blob/master/activerecord/lib/active_record/nested_attributes.rb#L332) # Assigns the given attributes to the collection association. # # Hashes with an <tt>:id</tt> value matching an existing associated record # will update that record. Hashes without an <tt>:id</tt> value will build # a new record for the association. Hashes with a matching <tt>:id</tt> # value and a <tt>:_destroy</tt> key set to a truthy value will mark the # matched record for destruction. # # For example: # # assign_nested_attributes_for_collection_association(:people, { # '1' => { :id => '1', :name => 'Peter' }, # '2' => { :name => 'John' }, # '3' => { :id => '2', :_destroy => true } # }) # # Will update the name of the Person with ID 1, build a new associated # person with the name `John', and mark the associated Person with ID 2 # for destruction. # # Also accepts an Array of attribute hashes: # # assign_nested_attributes_for_collection_association(:people, [ # { :id => '1', :name => 'Peter' }, # { :name => 'John' }, # { :id => '2', :_destroy => true } # ]) Thanks for your help.
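
    One hedged sketch of the edit-action case (assuming the nested_form builder exposes the underlying record through #object, as a standard Rails form builder does): emit the hidden :id only for records that are already persisted, and let the builder pull the value from the record itself.

        .fields
          - f.fields_for :transportations do |builder|
            = builder.hidden_field :id unless builder.object.new_record?
            = builder.collection_select :person_id, @people, :id, :name, {:multiple => true}
            = builder.link_to_remove 'effacer'
          = f.link_to_add "ajouter", :transportations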

    Read the article

  • Is it possible to convert a 40-character SHA1 hash to a 20-character SHA1 hash?

    - by ewitch
    My problem is a bit hairy, and I may be asking the wrong questions, so please bear with me... I have a legacy MySQL database which stores the user passwords & salts for a membership system. Both of these values have been hashed using the Ruby framework - roughly like this: hashedsalt = Digest::SHA1.hexdigest("--#{Time.now.to_s}--#{login}--") hashedpassword = Digest::SHA1.hexdigest("#{hashedsalt}:#{password}") So both values are stored as 40-character strings (varchar(40)) in MySQL. Now I need to import all of these users into the ASP.NET membership framework for a new web site, which uses a SQL Server database. It is my understanding that, the way I have ASP.NET membership configured, the user passwords and salts are also stored in the membership database (in table aspnet_Membership) as SHA1 hashes, which are then Base64 encoded (see here for details) and stored as nvarchar(128) data. But from the length of the Base64-encoded strings that are stored (28 characters) it seems that the SHA1 hashes that ASP.NET membership generates are only 20 characters long, rather than 40. From some other reading I have been doing I am thinking this has to do with the number of bits per character/character set/encoding or something related. So is there some way to convert the 40-character SHA1 hashes to 20-character hashes which I can then transfer to the new ASP.NET membership data table? I'm pretty familiar with ASP.NET membership by now but I feel like I'm just missing this one piece. However, it may also be that SHA1 in Ruby and SHA1 in .NET are incompatible, so I'm fighting a losing battle... Thanks in advance for any insight.
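
    One piece that may help untangle this: a 40-character hex string and a 28-character Base64 string can both describe the same 20-byte SHA-1 digest; hex costs two characters per byte, Base64 roughly 1.33. So if the two sides really do hash the same input the same way, the conversion needed is a re-encoding of the digest, not a second hash. A hedged Ruby sketch (whether the membership provider then accepts the value still depends on how it combines password and salt, which this does not address):

        require 'digest/sha1'
        require 'base64'

        hex = Digest::SHA1.hexdigest('example')   # 40 hex characters
        raw = [hex].pack('H*')                    # the same digest as 20 raw bytes
        b64 = Base64.encode64(raw).chomp          # 28 characters, the ASP.NET-style form

        # And back again:
        raw_again = Base64.decode64(b64)
        hex_again = raw_again.unpack('H*').first  # the original 40-character form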

    Read the article

  • OSX 10.6.6 SSH md5 break-in check

    - by Alex
    Information: Recently one of the Linux servers that I access was compromised to steal passwords and ssh keys using a modified ssh binary. This led me to question whether the attacker had compromised my OSX laptop, which had ssh access turned on. A Sophos virus scan turned up nothing, and I did not have rkhunter installed before the attack, so I could not compare hashes of the system binaries to be sure. However, because OSX is relatively standard for each of their major releases, I asked friends for md5 hashes (md5 /usr/bin/ssh and md5 /usr/sbin/sshd) as a basic first check to see if there was anything different about my machine. A few emails later I have found the following data: Version (Arch) [N] MD5 (/usr/bin/ssh) MD5 (/usr/sbin/sshd) OSX 10.5.8 (PPC) [3] 1e9fd483eef23464ec61c815f7984d61 9d32a36294565368728c18de466e69f1 OSX 10.5.8 (intel) [5] 1e9fd483eef23464ec61c815f7984d61 9d32a36294565368728c18de466e69f1 OSX 10.6.x (intel) [7] 591fbe723011c17b6ce41c537353b059 e781fad4fc86cf652f6df22106e0bf0e OSX 10.6.x (intel) [4] 58be068ad5e575c303ec348a1c71d48b 33dafd419194b04a558c8404b484f650 Mine 10.6.6 (intel) df344cc00a294c91230c65e8b7332a79 b5094ccf4cd074aaf573d4f5df75906a where N is the number of machines with that MD5, and the last row is my laptop. The sample is relatively heterogeneous, spanning a few years of different makes and models of Apples, and different versions of 10.6.x. The different hash for my system made me worry that these binaries might have been compromised. So I made sure that my backup for the week was good, and dived into formatting my system and reinstalling OSX. After reinstalling OSX from the manufacturer DVD, I found that the MD5 hash did not change for either ssh or sshd. Goal: Make sure that my system does not have any malicious software. Should I be worried that this base install of OSX (with no other software installed) has been compromised? I have also updated my system to 10.6.6 and found no change as well. Other Information: I am not sure if this is helpful information, but my laptop is an i7 15-inch MacBook Pro bought in Nov 2010, and here is some output from system_profiler: System Software Overview: System Version: Mac OS X 10.6.6 (10J567) Kernel Version: Darwin 10.6.0 64-bit Kernel and Extensions: No Time since boot: 1:37 Hardware: Hardware Overview: Model Name: MacBook Model Identifier: MacBook6,2 Processor Name: Intel Core i7 Processor Speed: 2.66 GHz Number Of Processors: 1 Total Number Of Cores: 2 L2 Cache (per core): 256 KB L3 Cache: 4 MB Memory: 4 GB Processor Interconnect Speed: 4.8 GT/s Boot ROM Version: MBP61.0057.B0C SMC Version (system): 1.58f16 Sudden Motion Sensor: State: Enabled On the laptop, I find: $ codesign -vvv /usr/bin/ssh /usr/bin/ssh: valid on disk /usr/bin/ssh: satisfies its Designated Requirement $ codesign -vvv /usr/sbin/sshd /usr/sbin/sshd: valid on disk /usr/sbin/sshd: satisfies its Designated Requirement $ ls -la /usr/bin/ssh -rwxr-xr-x 1 root wheel 1001520 Feb 11 2010 /usr/bin/ssh $ ls -la /usr/sbin/sshd -rwxr-xr-x 1 root wheel 1304800 Feb 11 2010 /usr/sbin/sshd $ ls -la /sbin/md5 -r-xr-xr-x 1 root wheel 65232 May 18 2009 /sbin/md5 Update: So far I have not gotten an answer to this question, but if you could help by increasing the number of hashes that I can compare against, that would be great. To get hashes and version numbers, run the following on OSX: md5 /usr/bin/ssh md5 /usr/sbin/sshd ssh -V sw_vers

    Read the article

  • Tool to compute SHA256 Tree Hash

    - by Benjamin
    I've started using AWS Glacier, and noticed that it hashes the files using an algorithm called SHA-256 Tree Hash. To my surprise, this algorithm is different from SHA-256, so I can't use the tools I'm used to, to compare hashes and verify file integrity. Do you know a Windows tool, if possible integrated in the context menu, to compute the SHA-256 Tree Hash of a file? I'd also accept a Linux command-line tool, as a second choice :-)
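
    For reference, the algorithm itself is small enough to script if no ready-made tool turns up: SHA-256 each 1 MiB chunk of the file, then repeatedly hash concatenated pairs of digests (promoting an odd leftover unchanged) until a single digest remains. A hedged Ruby sketch of that scheme; verify it against Amazon's documented test vectors before relying on it:

        require 'digest'

        # SHA-256 tree hash as described for Glacier: digest each 1 MiB chunk,
        # then combine digests pairwise, level by level, until one remains.
        def tree_hash(path)
          digests = []
          File.open(path, 'rb') do |f|
            while (chunk = f.read(1024 * 1024))
              digests << Digest::SHA256.digest(chunk)
            end
          end
          digests << Digest::SHA256.digest('') if digests.empty?   # empty-file case

          while digests.size > 1
            digests = digests.each_slice(2).map do |pair|
              pair.size == 2 ? Digest::SHA256.digest(pair.join) : pair.first
            end
          end
          digests.first.unpack('H*').first
        end

        puts tree_hash(ARGV[0]) if ARGV[0]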

    Read the article

  • Creating a Rails query from a hash of user input

    - by Jamie
    I'm attempting to create a fairly complex search engine for a project using a variable number of search criteria. The user input is sorted into an array of hashes. The hashes contain the following information: { :column => "", :value => "", :operator => "", # Such as: =, !=, <, >, etc. :and_or => "", # Two possible values: "and" and "or" } How can I loop through this array and use the information in these hashes to make an ActiveRecord WHERE query?
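
    One hedged sketch of the loop (the column list is hypothetical; whitelist everything that ends up in SQL): validate :column and :operator against known values, push each :value through a "?" placeholder, and join fragment by fragment with the requested AND/OR.

        # Build a WHERE fragment from the criteria hashes described above.
        # Column and operator names are whitelisted; values go through "?"
        # placeholders so user input never lands in the SQL string itself.
        ALLOWED_COLUMNS   = %w[title author published_on]   # hypothetical column names
        ALLOWED_OPERATORS = %w[= != < > <= >= LIKE]

        def build_conditions(criteria)
          sql    = ''
          values = []

          criteria.each_with_index do |c, i|
            raise ArgumentError, 'bad column'   unless ALLOWED_COLUMNS.include?(c[:column])
            raise ArgumentError, 'bad operator' unless ALLOWED_OPERATORS.include?(c[:operator].upcase)

            joiner = c[:and_or].to_s.downcase == 'or' ? ' OR ' : ' AND '
            sql << joiner unless i.zero?
            sql << "#{c[:column]} #{c[:operator].upcase} ?"
            values << c[:value]
          end

          [sql, *values]   # usable as Model.where(build_conditions(criteria))
        end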

    Read the article

  • Is there a function that can read a php function post-parsing?

    - by Rob
    I've got a php file echoing hashes from a MySQL database. This is necessary for a remote program I'm using, but at the same time I need my other php script opening and checking it for specified strings post-parsing. If it checks for the string pre-parsing, it'll just get the MySQL query rather than the strings to look for. I'm not sure if any functions do this. Does fopen() read the file prior to parsing? Or file_get_contents()? If so, is there a function that'll read the file after the php and mysql code runs? The file with the hashes query and echo is in the same directory as the php file reading it, if that makes a difference. Perhaps fopen reads it post-parse, and I've done something wrong, but at first I was storing the hashes directly in the file, and it was working fine. After I changed it to echo the contents of the MySQL table, it bugged out.
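
    The distinction that usually matters here: reading the file from disk (fopen, or file_get_contents with a path) returns the PHP source pre-parse, while requesting it through the web server returns whatever the script echoes post-parse, because the server executes the PHP before responding. The same contrast sketched in Ruby purely as an illustration (the path and URL are placeholders):

        require 'net/http'
        require 'uri'

        # Reading from disk gives the raw source, query string and all (pre-parse):
        source   = File.read('/var/www/site/hashes.php')                 # placeholder path

        # Fetching over HTTP gives only what the script echoes (post-parse):
        rendered = Net::HTTP.get(URI('http://example.com/hashes.php'))   # placeholder URL

        puts rendered.include?('expected-hash-string')   # check the executed output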

    Read the article

  • Simple Serialization Faster Than JSON? (in Ruby)

    - by Sinan Taifour
    I have an application written in Ruby (that runs in the JRuby VM). When profiling it, I realized that it spends a lot of (actually almost all of) its time converting some hashes into JSON. These hashes have keys of symbols, values of other similar hashes, arrays, strings, and numbers. Is there a serialization method that is suitable for such an input, and would typically run faster than JSON? It would be preferable if it has a Java or JRuby-compatible gem, too. I am currently using the jruby-json gem, which is the fastest JSON implementation in JRuby (as I am told), so the move will most likely be to a different serialization method rather than just a different library. Any help is appreciated! Thanks.
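
    One candidate worth measuring is Ruby's built-in Marshal, a binary format that preserves symbol keys and nested hashes without any extra gem; whether it actually beats your JSON library on JRuby depends on the data shape and the VM, so benchmark on a payload that looks like yours. A hedged sketch (the payload below is invented):

        require 'benchmark'
        require 'json'

        # Compare JSON generation against Marshal.dump on a nested, symbol-keyed
        # hash. Results vary by data shape and VM -- measure with your own data.
        payload = {
          :users => (1..1_000).map { |i| { :id => i, :name => "user#{i}", :scores => [1.5, 2.5, i] } }
        }

        Benchmark.bm(10) do |b|
          b.report('JSON')    { 100.times { JSON.generate(payload) } }
          b.report('Marshal') { 100.times { Marshal.dump(payload) } }
        end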

    Read the article
