Search Results

Search found 3339 results on 134 pages for 'hash collision'.

  • Should my game handle collisions in the Player object?

    - by user1264811
    I'm making a 2D platform game. Right now I'm just working on a very generic Player class. I'm wondering whether it would be more efficient/better practice to have an ActionListener within the Player class to detect collisions with Enemy objects (which also have an ActionListener), or to handle all the collisions in the main world. Furthermore, I'm thinking ahead about how I will handle collisions with the platforms themselves. I've looked into using a 2D boolean array to mark which tiles players can move to and which they can't, but I don't understand how to use that grid and the Player class at the same time.
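
    As a rough sketch of the 'handle it in the world' option — written in C# purely for illustration; the structure maps directly onto a Java class — the world owns the solid-tile grid and the enemy list, and each update it answers collision queries against a player's bounding box. Rect, World and the tile size are hypothetical stand-ins, not code from the question:

        using System.Collections.Generic;
        using System.Linq;

        // Axis-aligned bounding box used for both the player and the enemies.
        struct Rect
        {
            public float X, Y, W, H;
            public bool Overlaps(Rect o) =>
                X < o.X + o.W && X + W > o.X && Y < o.Y + o.H && Y + H > o.Y;
        }

        // The world owns the collision data and answers collision queries, so the
        // Player doesn't need its own listener.
        class World
        {
            const int TileSize = 32;
            private readonly bool[,] solid;              // true = tile blocks movement
            private readonly List<Rect> enemyBoxes = new();

            public World(bool[,] solidTiles) { solid = solidTiles; }
            public void AddEnemy(Rect box) => enemyBoxes.Add(box);

            // True if the box overlaps any solid tile (assumes the box stays on the map).
            public bool HitsTile(Rect box)
            {
                int left = (int)(box.X / TileSize), right = (int)((box.X + box.W) / TileSize);
                int top  = (int)(box.Y / TileSize), bottom = (int)((box.Y + box.H) / TileSize);
                for (int ty = top; ty <= bottom; ty++)
                    for (int tx = left; tx <= right; tx++)
                        if (solid[ty, tx]) return true;
                return false;
            }

            public bool HitsEnemy(Rect playerBox) => enemyBoxes.Any(e => e.Overlaps(playerBox));
        }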

    Read the article

  • Using a Javascript Variable & Sending to JSON

    - by D Franks
    Hello all! I'm trying to take a URL's hash value, send it through a function, turn that value into an object, but ultimately send the value to JSON. I have the following setup: function content(cur){ var mycur = $H(cur); var pars = "p="+mycur.toJSON(); new Ajax.Updater('my_box', 'test.php', { parameters: pars }); } function update(){ if(window.location.hash.length > 0){ content(window.location.hash.substr(1)); // Everything after the '#' } } var curHashVal = window.location.hash; window.onload = function(){ setInterval(function(){ if(curHashVal != window.location.hash){ update(); curHashVal = window.location.hash; } },1); } But for some reason, I can't seem to get the right JSON output. It will either return as a very large object (1:"{",2:"k") or not return at all. I doubt that it is impossible to accomplish, but I've exhausted most of the ways I can think of. Other ways I've tried were "{" + cur + "}" as well as cur.toObject(), however, none seemed to get the job done. Thanks for the help!

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check a collision between two sprites in HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Both of these sprites can have a rotation around the object's center but no other transformation, in case this makes this any easier. The obvious solution I came up with would be this: calculate the transformation matrix for both; figure out a rough estimate of the area where the code should test (like the offset of both + calculated extra space for the rotation); for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to nearest neighbor) for the alpha channel; then abort on the first hit. The problems I see with that are that a) there are no matrix classes in JavaScript, which means I have to do the matrix math in JavaScript, which could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore I have to replicate something I already have to do on drawing (or what canvas does for me, setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.

    Read the article

  • VPN - local and remote networks IP collision

    - by Guido García
    I have created a VPN connection in Windows using the New Network Connection wizard that comes with Windows. It works without problems in most places, but there is one specific place where, although the connection to the remote public IP works fine, it is not able to validate the login/password and establish the VPN connection. In this place the network is 10.0.0.x (the same as I use in other places where I am able to connect). The remote network is 192.168.x.x, so I suspect there is some kind of IP collision, because before connecting, a traceroute to e.g. 192.168.0.40 does not fail: 1 4 ms 1 ms 1 ms LINKSYS [10.0.0.1] 2 5 ms 1 ms 1 ms 172.26.27.1 3 4 ms 5 ms 3 ms 192.168.1.100 ... (more) I can't modify the local network beyond the first router (10.0.0.1). That is the only difference I've found so far. Any idea about how to solve it? Thank you.

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

    -------------------------------------------------------------------------------
    | Id  | Operation              | Name                    | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |                         |  108K |  939 |
    |   1 | SORT ORDER BY          |                         |  108K |  939 |
    |   2 | NESTED LOOPS OUTER     |                         |  108K |  938 |
    |*  3 | HASH JOIN RIGHT OUTER  |                         |  103K |  762 |
    |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS  |  2058 |    3 |
    |* 20 | HASH JOIN RIGHT OUTER  |                         | 73472 |  759 |
    |  21 | VIEW                   | ALL_EXTERNAL_TABLES     |  2097 |    3 |
    |* 34 | HASH JOIN RIGHT OUTER  |                         | 39920 |  755 |
    |  35 | VIEW                   | ALL_MVIEWS              |    51 |    7 |
    |  58 | NESTED LOOPS OUTER     |                         | 39104 |  748 |
    |  59 | VIEW                   | ALL_TABLES              |  6704 |  668 |
    |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS        |  2025 |    5 |
    | 106 | VIEW                   | ALL_PART_TABLES         |   277 |   11 |
    -------------------------------------------------------------------------------

    And the same query on 9i:

    -------------------------------------------------------------------------------
    | Id  | Operation              | Name                    | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |                         |   16P |  55G |
    |   1 | SORT ORDER BY          |                         |   16P |  55G |
    |   2 | NESTED LOOPS OUTER     |                         |   16P | 862M |
    |   3 | NESTED LOOPS OUTER     |                         | 5251G | 992K |
    |   4 | NESTED LOOPS OUTER     |                         | 4243M | 2578 |
    |   5 | NESTED LOOPS OUTER     |                         | 2669K | 1440 |
    |*  6 | HASH JOIN OUTER        |                         |  398K |  302 |
    |   7 | VIEW                   | ALL_TABLES              |  342K |  276 |
    |  29 | VIEW                   | ALL_MVIEWS              |    51 |   20 |
    |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS        |  2043 |      |
    |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES     | 1777K |      |
    |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS  | 1744K |      |
    |* 96 | VIEW                   | ALL_PART_TABLES         |  852K |      |
    -------------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

    -------------------------------------------------------------------------------
    | Id  | Operation              | Name                    | Bytes | Cost |
    -------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |                         | 7587K | 3704 |
    |   1 | SORT ORDER BY          |                         | 7587K | 3704 |
    |*  2 | HASH JOIN OUTER        |                         | 7587K |  822 |
    |*  3 | HASH JOIN OUTER        |                         | 5262K |  616 |
    |*  4 | HASH JOIN OUTER        |                         | 2980K |  465 |
    |*  5 | HASH JOIN OUTER        |                         |  710K |  432 |
    |*  6 | HASH JOIN OUTER        |                         |  398K |  302 |
    |   7 | VIEW                   | ALL_TABLES              |  342K |  276 |
    |  29 | VIEW                   | ALL_MVIEWS              |    51 |   20 |
    |  50 | VIEW                   | ALL_PART_TABLES         |  852K |  104 |
    |  78 | VIEW                   | ALL_TAB_COMMENTS        |  2043 |   14 |
    |  93 | VIEW                   | ALL_EXTERNAL_LOCATIONS  | 1744K |   31 |
    | 106 | VIEW                   | ALL_EXTERNAL_TABLES     | 1777K |   28 |
    -------------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as a lot of new infrastructure & a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner & name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, and a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
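
    To picture the client-side join described above, here is a minimal, hedged sketch in C# — not Red Gate's actual implementation, and TableRow, PartInfo and CommentInfo are hypothetical stand-ins for rows read from the dictionary views. Each subsidiary view is hashed once into a dictionary keyed on (owner, name), and the base rows are then streamed past those dictionaries, giving the left-outer-join shape the population queries need:

        using System.Collections.Generic;

        // Hypothetical row types standing in for rows read from the dictionary views.
        record TableRow(string Owner, string Name);
        record PartInfo(string Owner, string Name, string PartitionType);
        record CommentInfo(string Owner, string Name, string Comment);

        static class ClientSideJoin
        {
            // Left-join the ALL_TABLES-style base rows against two subsidiary views on
            // the client: each subsidiary view is hashed once, and the (owner, name)
            // key is built once per base row and used to probe every lookup table.
            public static IEnumerable<(TableRow, PartInfo?, CommentInfo?)> Join(
                IEnumerable<TableRow> tables,
                IEnumerable<PartInfo> parts,
                IEnumerable<CommentInfo> comments)
            {
                var partsByKey = new Dictionary<(string, string), PartInfo>();
                foreach (var p in parts) partsByKey[(p.Owner, p.Name)] = p;

                var commentsByKey = new Dictionary<(string, string), CommentInfo>();
                foreach (var c in comments) commentsByKey[(c.Owner, c.Name)] = c;

                foreach (var t in tables)
                {
                    var key = (t.Owner, t.Name);
                    partsByKey.TryGetValue(key, out var part);       // missing rows stay
                    commentsByKey.TryGetValue(key, out var comment); // null: a left join
                    yield return (t, part, comment);
                }
            }
        }

    The point of the shape is that each lookup table is built exactly once, instead of the base table being re-hashed for every hash join as in the forced 9i plan.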

    Read the article

  • How do I detect the collision of components?

    - by Coupon22
    How do I detect the collision of components, specifically JLabels (or ImageIcons)? I have tried this: add(test1); test1.setLocation(x, y); add(test2); test1.setLocation(x1, y1); validate(); if(intersects(test1, test2)) { ehealth-=50; } public boolean intersects(JLabel testa, JLabel testb) { boolean b3 = false; if(testa.contains(testb.getX(), testb.getY())) { b3 = true; } return b3; } When I run this, it does nothing! I used to use Rectangle, but it didn't go well for me. I was thinking about an image with a border (made using paint.net) and moving an ImageIcon, but I don't know how to get the x of an ImageIcon or detect its collision. I don't know how to detect the collision of a label or change its location either. I have searched for collision detection with components/ImageIcons, but nothing has come up. I have also searched for getting the x of ImageIcons.

    Read the article

  • GameQuery Collision Detection

    - by Pez Cuckow
    I am having a problem with GameQuery (jQuery) collision detection: the collisions just never seem to fire. I have checked that all the .arrow elements exist, and the same for the .bot elements, but it just never seems to call the function. I have the below code in my main callback: $(".bot").each(function(){ $(this).collision(".arrow").each(function(){ alert("Test"); }); }); Do you have any idea why this would simply be doing nothing? The bot walks (has its x value moved) right over the arrow. The current revision is online at http://labs.pezcuckow.com/solveit/game.html (nowhere near finished) if anyone doesn't mind having a look! Many thanks.

    Read the article

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was: class install( $common_instanceconfig = hiera_hash('common_instanceconfig'), $common_instances = hiera('common_instances') ) { define instances { common { $title: name => $title, path => $common_instanceconfig[$title]['path'], version => $common_instanceconfig[$title]['version'], files => $common_instanceconfig[$title]['files'], pre => $common_instanceconfig[$title]['pre'], after => $common_instanceconfig[$title]['after'], properties => $common_instanceconfig[$title]['properties'], require => $common_instanceconfig[$title]['require'] , } } instances {$common_instances:} } And the hieradata file was: classes: - install common_instances: - common_instance_1 - common_instance_2 common_instanceconfig: common_instance_1 path : '/opt/common_instance_1' version : 1.0 files : software-1.bin pre : pre_install.sh after : after_install.sh properties: "properties" common_instance_2: path : '/opt/common_instance_2' version : 2.0 files : software-2.bin pre : pre_install.sh after : after_install.sh properties: "properties" I always get an error message when the puppet agent runs: Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp It seems $common_instances is retrieved correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file and got a correct hash object. Can anybody help?

    Read the article

  • ASP.NET Membership C# - How to compare existing password/hash

    - by Steve
    I have been on this problem for a while. I need to compare a password that the user enters to a password that is in the membership DB. The password is hashed and has a salt. Because of the lack of documentation I do not know if the salt is appended to the password and then hashed, or how it is created. I am unable to get this to match. The hash returned from the function never matches the hash in the DB, and I know for a fact it is the same password. Microsoft seems to hash the password in a different way than I am. I hope someone has some insights, please. Here is my code: protected void Button1_Click(object sender, EventArgs e) { //HERE IS THE PASSWORD I USE, SAME ONE IS HASHED IN THE DB string pwd = "Letmein44"; //HERE IS THE SALT FROM THE DB string saltVar = "SuY4cf8wJXJAVEr3xjz4Dg=="; //HERE IS THE PASSWORD THE WAY IT STORED IN THE DB AS HASH string bdPwd = "mPrDArrWt1+tybrjA0OZuEG1P5w="; // FOR COMPARISON I DISPLAY IT TextBox1.Text = bdPwd; // HERE IS WHERE I DISPLAY THE return from THE FUNCTION, IT SHOULD MATCH THE PASSWORD FROM THE DB. TextBox2.Text = getHashedPassUsingUserIdAsSalt(pwd, saltVar); } private string getHashedPassUsingUserIdAsSalt(string vPass, string vSalt) { string vSourceText = vPass + vSalt; System.Text.UnicodeEncoding vUe = new System.Text.UnicodeEncoding(); byte[] vSourceBytes = vUe.GetBytes(vSourceText); System.Security.Cryptography.SHA1CryptoServiceProvider vSHA = new System.Security.Cryptography.SHA1CryptoServiceProvider(); byte[] vHashBytes = vSHA.ComputeHash(vSourceBytes); return Convert.ToBase64String(vHashBytes); }
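
    A note and a sketch that may help, hedged because it is reconstructed from how the provider is commonly described rather than from official documentation: the default SqlMembershipProvider is generally reported to hash the raw salt bytes (the Base64 salt decoded) followed by the UTF-16 password bytes, rather than concatenating the Base64 salt string onto the password as the code above does. A minimal C# sketch of that order of operations:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class MembershipHashSketch
        {
            // Sketch of the commonly described scheme:
            // Base64( SHA1( saltBytes + Encoding.Unicode(password) ) )
            public static string EncodePassword(string password, string base64Salt)
            {
                byte[] passwordBytes = Encoding.Unicode.GetBytes(password);
                byte[] saltBytes = Convert.FromBase64String(base64Salt);

                byte[] all = new byte[saltBytes.Length + passwordBytes.Length];
                Buffer.BlockCopy(saltBytes, 0, all, 0, saltBytes.Length);
                Buffer.BlockCopy(passwordBytes, 0, all, saltBytes.Length, passwordBytes.Length);

                using var sha1 = SHA1.Create();
                return Convert.ToBase64String(sha1.ComputeHash(all));
            }
        }

        // Usage against the values from the question (if the scheme above is right,
        // candidate should equal the stored hash):
        //   string candidate = MembershipHashSketch.EncodePassword("Letmein44",
        //       "SuY4cf8wJXJAVEr3xjz4Dg==");
        //   bool matches = candidate == "mPrDArrWt1+tybrjA0OZuEG1P5w=";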

    Read the article

  • Is this a safe/valid hash method implementation?

    - by Sean
    I have a set of classes to represent some objects loaded from a database. There are a couple variations of these objects, so I have a common base class and two subclasses to represent the differences. One of the key fields they have in common is an id field. Unfortunately, the id of an object is not unique across all variations, but within a single variation. What I mean is, a single object of type A could have an id between, say, 0 and 1,000,000. An object of type B could have an id between, 25,000 and 1,025,000. This means there's some overlap of id numbers. The objects are just variations of the same kind of thing, though, so I want to think of them as such in my code. (They were assigned ids from different sets for legacy reasons.) So I have classes like this: @class BaseClass @class TypeAClass : BaseClass @class TypeBClass : BaseClass BaseClass has a method (NSNumber *)objectId. However instances of TypeA and TypeB could have overlapping ids as discussed above, so when it comes to equality and putting these into sets, I cannot just use the id alone to check it. The unique key of these instances is, essentially, (class + objectId). So I figured that I could do this by making the following hash function on the BaseClass: -(NSUInteger)hash { return (NSUInteger)[self class] ^ [self.objectId hash]; } I also implemented isEqual like so: - (BOOL)isEqual:(id)object { return (self == object) || ([object class] == [self class] && [self.objectId isEqual:[object objectId]]); } This seems to be working, but I guess I'm just asking here to make sure I'm not overlooking something - especially with the generation of the hash by using the class pointer in that way. Is this safe or is there a better way to do this?
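
    The same identity pattern reads naturally in other languages as well; purely as an illustration (this is not the poster's Objective-C, and the C# names are hypothetical), here is the equivalent in C#, where the runtime type is mixed into the hash so instances of different subclasses with overlapping ids don't collide as set or dictionary members:

        using System;

        class BaseClass
        {
            public long ObjectId { get; init; }

            public override bool Equals(object? obj) =>
                obj is BaseClass other &&
                other.GetType() == GetType() &&   // TypeA(42) is never equal to TypeB(42)
                other.ObjectId == ObjectId;

            public override int GetHashCode() => HashCode.Combine(GetType(), ObjectId);
        }

        class TypeA : BaseClass { }
        class TypeB : BaseClass { }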

    Read the article

  • Jquery href with hash

    - by Ramesh
    Hi, I am using this code for tab navigation: function hashIt(toHash) { toHash == "" ? window.location.hash = window.location.hash.replace( /#.*/, "") : window.location.hash = toHash; return false; } I am also using a jQuery popup on page onload. A hyperlink in the popup is not working; if I remove the hashIt function it's fine, but I want both. Please help me out. Ramesh.

    Read the article

  • Is perl's each function worth using?

    - by eugene y
    From perldoc -f each we read: There is a single iterator for each hash, shared by all each, keys, and values function calls in the program; it can be reset by reading all the elements from the hash, or by evaluating keys HASH or values HASH. The iterator is not reset when you leave the scope containing the each(), and this can lead to bugs: my %h = map { $_, 1 } qw(1 2 3); while (my $k = each %h) { print "1: $k\n"; last } while (my $k = each %h) { print "2: $k\n" } Output: 1: 1 2: 3 2: 2 What are the common workarounds for this behavior? And is it worth using each in general?

    Read the article

  • HASH reference error with HTTP::Message::decodable

    - by scarba05
    Hi, I'm getting a "Can't use an undefined value as a HASH reference" error trying to call HTTP::Message::decodable() using Perl 5.10 / libwww installed on Debian Lenny using the aptitude package manager. I'm really stuck so would appreciate some help please. Here's the error: Can't use an undefined value as a HASH reference at (eval 2) line 1. at test.pl line 4 main::__ANON__('Can\'t use an undefined value as a HASH reference at (eval 2)...') called at (eval 2) line 1 HTTP::Message::__ANON__() called at test.pl line 6 Here's the code: use strict; use HTTP::Request::Common; use Carp; $SIG{ __DIE__ } = sub { Carp::confess( @_ ) }; print HTTP::Message::decodable();

    Read the article

  • Are hash values globally unique

    - by Wololo
    I want to generate a hash code for a file. Using C# I would do something like this and then store the value in a database: byte[] b = File.ReadAllBytes(@"C:\image.jpg"); string hash = ComputeHash(b); Now, if I use, say, a Java program that implements the same hashing algorithm (MD5), can I expect the hash values to be equal to the value generated in C#? What if I execute the Java program from different environments: Windows, Linux or Mac?
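
    A hedged illustration of the point being asked about: MD5 is defined purely over bytes, so as long as both programs hash exactly the same bytes (and no newline or encoding differences creep in), the digest is identical on Windows, Linux or Mac and in any language. A minimal C# sketch, reusing the file path from the question:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class FileHash
        {
            // MD5 is a pure function of the input bytes, so the same file contents
            // give the same digest regardless of OS or implementation language.
            public static string Md5Hex(string path)
            {
                using var md5 = MD5.Create();
                byte[] digest = md5.ComputeHash(File.ReadAllBytes(path));
                return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
            }
        }

        // e.g. FileHash.Md5Hex(@"C:\image.jpg") should match the lower-cased hex digest
        // produced by Java's MessageDigest.getInstance("MD5") over the same bytes.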

    Read the article

  • Why a different SHA-1 for the same file under windows or linux?

    - by Fabio Vitale
    Why does computing the SHA-1 hash of the same file on the same machine produce two completely different SHA-1 hashes in Windows and inside an msysgit Git bash? Isn't the SHA-1 algorithm intended to produce the same hash for the same file on all OSes? On Windows (with HashCheck): File hello.txt 22596363b3de40b06f981fb85d82312e8c0ed511 Inside an msysgit Git bash window (same machine, same file): $ git hash-object hello.txt 3b18e512dba79e4c8300dd08aeb37f8e728b8dad

    Read the article

  • For Loop help In a Hash Cracker Homework.

    - by aaron burns
    On the homework I am working on we are making a hash cracker. I am implementing it so as to have my Cracker.java call Worker.java. Worker.java implements Runnable. Worker is to take the start and end of a list of char, the hash it is to crack, and the max length of the password that made the hash. I know I want to do a loop in run(), BUT I cannot think of how I would do it so it would go up to the given max password length. I have posted the code I have so far. Any directions or areas I should look into? I thought there was a certain way to write the loop to do this, but I don't know or can't find the correct syntax. Oh, also: in main I divide up the work so that x number of threads can be chosen, and I know that as of right now it only works when the thread count evenly divides the 40 possible chars given. package HashCracker; import java.util.*; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; public class Cracker { // Array of chars used to produce strings public static final char[] CHARS = "abcdefghijklmnopqrstuvwxyz0123456789.,-!".toCharArray(); public static final int numOfChar=40; /* Given a byte[] array, produces a hex String, such as "234a6f". with 2 chars for each byte in the array. (provided code) */ public static String hexToString(byte[] bytes) { StringBuffer buff = new StringBuffer(); for (int i=0; i<bytes.length; i++) { int val = bytes[i]; val = val & 0xff; // remove higher bits, sign if (val<16) buff.append('0'); // leading 0 buff.append(Integer.toString(val, 16)); } return buff.toString(); } /* Given a string of hex byte values such as "24a26f", creates a byte[] array of those values, one byte value -128..127 for each 2 chars. (provided code) */ public static byte[] hexToArray(String hex) { byte[] result = new byte[hex.length()/2]; for (int i=0; i<hex.length(); i+=2) { result[i/2] = (byte) Integer.parseInt(hex.substring(i, i+2), 16); } return result; } public static void main(String args[]) throws NoSuchAlgorithmException { if(args.length==1)//Hash Maker { //create a byte array , meassage digestand put password into it //and get out a hash value printed to the screen using provided methods. byte[] myByteArray=args[0].getBytes(); MessageDigest hasher=MessageDigest.getInstance("SHA-1"); hasher.update(myByteArray); byte[] digestedByte=hasher.digest(); String hashValue=Cracker.hexToString(digestedByte); System.out.println(hashValue); } else//Hash Cracker { ArrayList<Thread> myRunnables=new ArrayList<Thread>(); int numOfThreads = Integer.parseInt(args[2]); int charPerThread=Cracker.numOfChar/numOfThreads; int start=0; int end=charPerThread-1; for(int i=0; i<numOfThreads; i++) { //creates, stores and starts threads. Runnable tempWorker=new Worker(start, end, args[1], Integer.parseInt(args[1])); Thread temp=new Thread(tempWorker); myRunnables.add(temp); temp.start(); start=end+1; end=end+charPerThread; } } } import java.util.*; public class Worker implements Runnable{ private int charStart; private int charEnd; private String Hash2Crack; private int maxLength; public Worker(int start, int end, String hashValue, int maxPWlength) { charStart=start; charEnd=end; Hash2Crack=hashValue; maxLength=maxPWlength; } public void run() { byte[] myHash2Crack_=Cracker.hexToArray(Hash2Crack); for(int i=charStart; i<charEnd+1; i++) { Cracker.numOfChar[i]////// this is where I am stuck. } } }

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most efficient way. By identical I mean having the same number of order lines containing the same SKUs and the same order quantities. To achieve this I was thinking about hashing each order. We can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand this task over to the DB. My first question is whether the hashing should be managed by the DB, possibly using triggers, or whether the hash should be created on the fly using a view or something. And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and stores the value in the orders table? TIA
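
    Independent of whether the hash ends up in a PostgreSQL trigger or in the .NET loading code, the order-level hash can be pictured as: canonicalise the order lines (sort by SKU), serialise them to a stable string, and hash that. A rough C# sketch with hypothetical types — OrderLine and the SKU:quantity encoding are illustrative, not a schema recommendation:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        // Hypothetical order line: the fields that define "identical" orders.
        record OrderLine(string Sku, int Quantity);

        static class OrderHasher
        {
            // Sort the lines so their order doesn't matter, serialise them to a
            // stable string, then hash; orders with the same key can be grouped.
            public static string GroupKey(IEnumerable<OrderLine> lines)
            {
                string canonical = string.Join("|",
                    lines.OrderBy(l => l.Sku, StringComparer.Ordinal)
                         .Select(l => $"{l.Sku}:{l.Quantity}"));

                using var sha = SHA256.Create();
                byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                return BitConverter.ToString(digest).Replace("-", "");
            }
        }

    Since hash collisions are theoretically possible, comparing the actual lines within each hash group before packing is a cheap safeguard.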

    Read the article

  • Distributions and hashes

    - by Don Mackenzie
    Has anyone ever had an instance of downloading software from a genuine site, where an MD5 or SHA-series hash for the download is also supplied, and then discovered that the hash calculated from the downloaded artifact doesn't match the published hash? I understand the theory but am curious how prevalent the problem is. Many software publishers seem to discount the threat.

    Read the article

  • Python unhash value

    - by blah01
    Hi all, I am a newbie to Python. Can I unhash, or rather how can I unhash, a value? I am using the standard hash() function. What I would like to do is to first hash a value, send it somewhere, and then unhash it, as such: #process X hashedVal = hash(someVal) #send n receive in process Y someVal = unhash(hashedVal) #for example print it print someVal Thx in advance

    Read the article

  • Why does git hash-object return a different hash than openssl sha1?

    - by user657606
    Context: I downloaded a file (Audirvana 0.7.1.zip) from code.google to my Macbook Pro (Mac OS X 10.6.6). (current url: http://code.google.com/p/audirvana/downloads/detail?name=Audirvana%200.7.1.zip&can=2&q= ) I wanted to verify the checksum, which for that particular file is posted as 862456662a11e2f386ff0b24fdabcb4f6c1c446a (SHA-1). git hash-object gave me a different hash, but openssl sha1 returned the expected 862456662a11e2f386ff0b24fdabcb4f6c1c446a. The following experiment seems to rule out any possible download corruption or newline differences and to indicate that there are actually two different algorithms at play: $ echo A > foo.txt $ cat foo.txt A $ git hash-object foo.txt f70f10e4db19068f79bc43844b49f3eece45c4e8 $ openssl sha1 foo.txt SHA1(foo.txt)= 7d157d7c000ae27db146575c08ce30df893d3a64 What's going on?
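
    The usual explanation, sketched here rather than asserted as this particular setup: git hash-object does not hash the raw file; it hashes a blob header ("blob <size>" plus a NUL byte) followed by the content, so its digest necessarily differs from openssl sha1 over the same bytes. A small C# sketch reproducing both digests for the two-byte file written by echo A > foo.txt:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class GitBlobHash
        {
            static string Sha1Hex(byte[] data)
            {
                using var sha1 = SHA1.Create();
                return BitConverter.ToString(sha1.ComputeHash(data))
                                   .Replace("-", "").ToLowerInvariant();
            }

            static void Main()
            {
                byte[] content = Encoding.ASCII.GetBytes("A\n"); // what `echo A > foo.txt` writes

                // openssl sha1 foo.txt: hash of the raw bytes -> 7d157d7c...
                Console.WriteLine(Sha1Hex(content));

                // git hash-object foo.txt: hash of "blob <size>\0" + raw bytes -> f70f10e4...
                byte[] header = Encoding.ASCII.GetBytes($"blob {content.Length}\0");
                byte[] blob = new byte[header.Length + content.Length];
                Buffer.BlockCopy(header, 0, blob, 0, header.Length);
                Buffer.BlockCopy(content, 0, blob, header.Length, content.Length);
                Console.WriteLine(Sha1Hex(blob));
            }
        }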

    Read the article

  • Do I have to hash twice in C#?

    - by Joe H
    I have code like the following: class MyClass { string Name; int NewInfo; } List<MyClass> newInfo = .... // initialize list with some values Dictionary<string, int> myDict = .... // initialize dictionary with some values foreach(var item in newInfo) { if(myDict.ContainsKey(item.Name)) // 'A' I hash the first time here myDict[item.Name] += item.NewInfo // 'B' I hash the second (and third?) time here else myDict.Add(item.Name, item.NewInfo); } Is there any way to avoid doing two lookups in the Dictionary -- the first time to see if contains an entry, and the second time to update the value? There may even be two hash lookups on line 'B' -- one to get the int value and another to update it.
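
    One common way to trim this down, offered as a sketch rather than a guaranteed micro-optimisation: Dictionary.TryGetValue performs the lookup once and reports whether the key existed, so the read side costs a single hashed lookup and only the write-back costs another; on .NET 6+ there is also a genuinely single-lookup option.

        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        class MyClass { public string Name = ""; public int NewInfo; }  // mirrors the question's class

        static class Accumulate
        {
            static void Sum(List<MyClass> newInfo, Dictionary<string, int> myDict)
            {
                foreach (var item in newInfo)
                {
                    // One lookup to fetch (and test for) the existing value;
                    // the indexer write-back is the only other lookup.
                    if (myDict.TryGetValue(item.Name, out int current))
                        myDict[item.Name] = current + item.NewInfo;
                    else
                        myDict.Add(item.Name, item.NewInfo);
                }
            }

            // On .NET 6 or later this becomes a true single lookup per item.
            static void SumSingleLookup(List<MyClass> newInfo, Dictionary<string, int> myDict)
            {
                foreach (var item in newInfo)
                {
                    ref int slot = ref CollectionsMarshal.GetValueRefOrAddDefault(
                        myDict, item.Name, out _);
                    slot += item.NewInfo;   // a freshly added slot starts at default(int) == 0
                }
            }
        }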

    Read the article
