Search Results

Search found 4650 results on 186 pages for 'perfect hash'.

Page 15/186 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Finding an odd perfect number

    - by Coin Bird
    I wrote these two methods to determine whether a number is perfect. My prof wants me to combine them to find out if there is an odd perfect number. I know there isn't one (that is known), but I need to actually write the code that searches for one. The issue is with my main method; I have already tested the two helper methods. I tried debugging and it gets stuck on the number 5, though I can't figure out why. Here is my code:

    public class Lab6 {
        public static void main(String[] args) {
            int testNum = 3;
            while (testNum != sum_of_divisors(testNum) && testNum % 2 != 0)
                testNum++;
        }

        public static int sum_of_divisors(int numDiv) {
            int count = 1;
            int totalDivisors = 0;
            while (count < numDiv)
                if (numDiv % count == 0) {
                    totalDivisors = totalDivisors + count;
                    count++;
                } else
                    count++;
            return totalDivisors;
        }

        public static boolean is_perfect(int numPerfect) {
            int count = 1;
            int totalPerfect = 0;
            while (totalPerfect < numPerfect) {
                totalPerfect = totalPerfect + count;
                count++;
            }
            if (numPerfect == totalPerfect)
                return true;
            else
                return false;
        }
    }
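
    One thing worth noting about the posted main loop: the && condition also ends the loop as soon as testNum becomes even, so the search stops almost immediately rather than skipping even numbers. Below is a minimal, hypothetical sketch of the search (not the assignment's required structure): it steps through odd candidates only and keeps going until an odd perfect number turns up, which is exactly why it never finishes if none exists in range.

    public class OddPerfectSearch {

        // Sum of the proper divisors of n (1 is counted, n itself is not).
        static int sumOfDivisors(int n) {
            int total = 0;
            for (int d = 1; d <= n / 2; d++) {
                if (n % d == 0) total += d;
            }
            return total;
        }

        public static void main(String[] args) {
            // Step by 2 so only odd candidates are ever tested; the loop ends only if
            // an odd perfect number is found or the int range is exhausted.
            for (int candidate = 3; candidate > 0; candidate += 2) {
                if (candidate == sumOfDivisors(candidate)) {
                    System.out.println("Odd perfect number found: " + candidate);
                    return;
                }
            }
            System.out.println("No odd perfect number found in int range");
        }
    }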

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was then to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.

    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

    -------------------------------------------------------------------------
    | Id  | Operation             | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                        |  108K |  939 |
    |   1 | SORT ORDER BY         |                        |  108K |  939 |
    |   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
    |*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
    |   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
    |* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
    |  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
    |* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
    |  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
    |  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
    |  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
    |  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
    | 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
    -------------------------------------------------------------------------

    And the same query on 9i:

    -------------------------------------------------------------------------
    | Id  | Operation             | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                        |   16P |  55G |
    |   1 | SORT ORDER BY         |                        |   16P |  55G |
    |   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
    |   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
    |   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
    |   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
    |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
    |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
    |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
    |* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
    |* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
    |* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
    |* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
    -------------------------------------------------------------------------

    Have a look at the cost column. 10g's overall query cost is 939, and 9i is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:

    -------------------------------------------------------------------------
    | Id  | Operation             | Name                   | Bytes | Cost |
    -------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
    |   1 | SORT ORDER BY         |                        | 7587K | 3704 |
    |*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
    |*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
    |*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
    |*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
    |*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
    |   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
    |  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
    |  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
    |  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
    |  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
    | 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
    -------------------------------------------------------------------------

    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

    To try and get some ideas, we asked some Oracle performance specialists to see if they had any tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have had to spend some time optimizing the queries to get the product working on their system before they could use it. Another approach was needed.

    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be done for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as storing row data we're not using in the hash key on disk, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables, that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
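
    By way of illustration only (this is not the product's actual code), the client-side join described above can be sketched in Java roughly as follows: each dictionary view's rows are indexed once by the owner + name join key, and the base rows are then left-joined against those prebuilt maps, so the join key is hashed once per row rather than once per join.

    import java.util.*;

    class ClientSideJoinSketch {

        // One row from a dictionary view, keyed by owner and object name.
        record Row(String owner, String name, Map<String, String> columns) {}

        // Build the lookup map for a subsidiary view once; owner + name is the join key.
        static Map<String, Row> indexByKey(List<Row> rows) {
            Map<String, Row> index = new HashMap<>();
            for (Row r : rows) {
                index.put(r.owner() + "." + r.name(), r);
            }
            return index;
        }

        // Left-join the base rows (e.g. ALL_TABLES) against any number of pre-indexed views.
        static List<Map<String, String>> leftJoin(List<Row> base, List<Map<String, Row>> lookups) {
            List<Map<String, String>> result = new ArrayList<>();
            for (Row b : base) {
                Map<String, String> merged = new HashMap<>(b.columns());
                String key = b.owner() + "." + b.name();
                for (Map<String, Row> lookup : lookups) {
                    Row extra = lookup.get(key);
                    if (extra != null) merged.putAll(extra.columns()); // missing rows simply leave gaps
                }
                result.add(merged);
            }
            return result;
        }

        public static void main(String[] args) {
            List<Row> base = List.of(new Row("HR", "EMPLOYEES", Map.of("temporary", "N")));
            Map<String, Row> partTables = indexByKey(List.of()); // e.g. no partitioned tables
            System.out.println(leftJoin(base, List.of(partTables)));
        }
    }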

    Read the article

  • Check if BigInteger is not a perfect square

    - by Ender
    I have a BigInteger value, let's say it is 282, and it is inside the variable x. I now want to write a while loop that states:

    while b2 isn't a perfect square:
        a ← a + 1
        b2 ← a*a - N
    endwhile

    How would I do such a thing using BigInteger? EDIT: The purpose for this is so I can write this method. As the article states, one must check whether b2 is a square.
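
    On Java 9 and later, BigInteger has a sqrt() method that returns the floor of the integer square root, which makes the perfect-square test direct. A small sketch assuming Java 9+ (the value of N here is purely illustrative, not taken from the question):

    import java.math.BigInteger;

    public class PerfectSquareLoop {

        // True exactly when n is a perfect square: floor(sqrt(n))^2 == n.
        static boolean isPerfectSquare(BigInteger n) {
            if (n.signum() < 0) return false;   // negative values are never squares
            BigInteger r = n.sqrt();            // requires Java 9+
            return r.multiply(r).equals(n);
        }

        public static void main(String[] args) {
            BigInteger N = new BigInteger("5959");        // illustrative composite
            BigInteger a = N.sqrt().add(BigInteger.ONE);  // start just above sqrt(N)
            BigInteger b2 = a.multiply(a).subtract(N);
            while (!isPerfectSquare(b2)) {                // the loop from the pseudocode
                a = a.add(BigInteger.ONE);
                b2 = a.multiply(a).subtract(N);
            }
            System.out.println("a = " + a + ", b2 = " + b2);
        }
    }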

    Read the article

  • How to get a design mockup as an overlay for quicker development without using Firefox's Pixel Perfect

    - by metal-gear-solid
    This is the Firefox plugin http://www.pixelperfectplugin.com/ - "Pixel Perfect is a Firefox/Firebug extension that allows web developers and designers to easily overlay a web composition over top of the developed HTML." Read more: http://pixelperfectplugin.com/how-to-use/walkthrough/#ixzz0eOfezx1N How can I get a mockup image behind all my divs, the way this plugin does? This tool only shows the design behind the layout in Firefox, and I want to see it in all browsers.

    Read the article

  • Is .NET Compact a perfect subset of .NET?

    - by Andrew J. Brehm
    Is .NET Compact a perfect subset of .NET? Can I write a Windows Forms application and run it on .NET Compact, assuming that I take into account screen size and other limitations and avoid classes and methods not supported by .NET Compact, or is .NET Compact a different and incompatible GUI framework?

    Read the article

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was:

    class install(
      $common_instanceconfig = hiera_hash('common_instanceconfig'),
      $common_instances = hiera('common_instances')
    ) {
      define instances {
        common { $title:
          name       => $title,
          path       => $common_instanceconfig[$title]['path'],
          version    => $common_instanceconfig[$title]['version'],
          files      => $common_instanceconfig[$title]['files'],
          pre        => $common_instanceconfig[$title]['pre'],
          after      => $common_instanceconfig[$title]['after'],
          properties => $common_instanceconfig[$title]['properties'],
          require    => $common_instanceconfig[$title]['require'],
        }
      }
      instances { $common_instances: }
    }

    And the hieradata file was:

    classes:
      - install
    common_instances:
      - common_instance_1
      - common_instance_2
    common_instanceconfig:
      common_instance_1
        path : '/opt/common_instance_1'
        version : 1.0
        files : software-1.bin
        pre : pre_install.sh
        after : after_install.sh
        properties: "properties"
      common_instance_2:
        path : '/opt/common_instance_2'
        version : 2.0
        files : software-2.bin
        pre : pre_install.sh
        after : after_install.sh
        properties: "properties"

    I always get an error message when the puppet agent runs:

    Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp

    It seems $common_instances is read correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file and got a correct hash object. Can anybody help?

    Read the article

  • ASP.NET Membership C# - How to compare existing password/hash

    - by Steve
    I have been on this problem for a while. I need to compare a password that the user enters to a password that is in the membership DB. The password is hashed and has a salt. Because of the lack of documentation, I do not know if the salt is appended to the password and then hashed, or how it is created. I am unable to get this to match: the hash returned from the function never matches the hash in the DB, and I know for a fact it is the same password. Microsoft seems to hash the password in a different way than I do. I hope someone has some insights. Here is my code:

    protected void Button1_Click(object sender, EventArgs e)
    {
        // HERE IS THE PASSWORD I USE, SAME ONE IS HASHED IN THE DB
        string pwd = "Letmein44";
        // HERE IS THE SALT FROM THE DB
        string saltVar = "SuY4cf8wJXJAVEr3xjz4Dg==";
        // HERE IS THE PASSWORD THE WAY IT IS STORED IN THE DB AS A HASH
        string bdPwd = "mPrDArrWt1+tybrjA0OZuEG1P5w=";
        // FOR COMPARISON I DISPLAY IT
        TextBox1.Text = bdPwd;
        // HERE IS WHERE I DISPLAY THE RETURN FROM THE FUNCTION, IT SHOULD MATCH THE PASSWORD FROM THE DB.
        TextBox2.Text = getHashedPassUsingUserIdAsSalt(pwd, saltVar);
    }

    private string getHashedPassUsingUserIdAsSalt(string vPass, string vSalt)
    {
        string vSourceText = vPass + vSalt;
        System.Text.UnicodeEncoding vUe = new System.Text.UnicodeEncoding();
        byte[] vSourceBytes = vUe.GetBytes(vSourceText);
        System.Security.Cryptography.SHA1CryptoServiceProvider vSHA =
            new System.Security.Cryptography.SHA1CryptoServiceProvider();
        byte[] vHashBytes = vSHA.ComputeHash(vSourceBytes);
        return Convert.ToBase64String(vHashBytes);
    }
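
    For reference, the default SqlMembershipProvider is generally described as Base64-decoding the salt into raw bytes, prepending those bytes to the UTF-16LE (Encoding.Unicode) bytes of the password, and taking SHA-1 over that concatenation - not hashing the text of password + salt. Below is a sketch of that byte layout, written in Java purely for illustration (the class and method names are made up); its output is what would be compared against the stored hash.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class MembershipHashSketch {

        static String encodePassword(String password, String saltBase64) throws Exception {
            byte[] salt = Base64.getDecoder().decode(saltBase64);        // raw salt bytes, not text
            byte[] pass = password.getBytes(StandardCharsets.UTF_16LE);  // Encoding.Unicode in .NET
            byte[] all = new byte[salt.length + pass.length];
            System.arraycopy(salt, 0, all, 0, salt.length);              // salt first...
            System.arraycopy(pass, 0, all, salt.length, pass.length);    // ...then the password
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(all);
            return Base64.getEncoder().encodeToString(digest);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(encodePassword("Letmein44", "SuY4cf8wJXJAVEr3xjz4Dg=="));
        }
    }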

    Read the article

  • Is this a safe/valid hash method implementation?

    - by Sean
    I have a set of classes to represent some objects loaded from a database. There are a couple of variations of these objects, so I have a common base class and two subclasses to represent the differences. One of the key fields they have in common is an id field. Unfortunately, the id of an object is not unique across all variations, but only within a single variation. What I mean is, a single object of type A could have an id between, say, 0 and 1,000,000. An object of type B could have an id between, say, 25,000 and 1,025,000. This means there's some overlap of id numbers. The objects are just variations of the same kind of thing, though, so I want to think of them as such in my code. (They were assigned ids from different sets for legacy reasons.) So I have classes like this:

    @class BaseClass
    @class TypeAClass : BaseClass
    @class TypeBClass : BaseClass

    BaseClass has a method (NSNumber *)objectId. However, instances of TypeA and TypeB could have overlapping ids as discussed above, so when it comes to equality and putting these into sets, I cannot just use the id alone to check it. The unique key of these instances is, essentially, (class + objectId). So I figured that I could do this by making the following hash function on the BaseClass:

    - (NSUInteger)hash {
        return (NSUInteger)[self class] ^ [self.objectId hash];
    }

    I also implemented isEqual like so:

    - (BOOL)isEqual:(id)object {
        return (self == object) ||
               ([object class] == [self class] && [self.objectId isEqual:[object objectId]]);
    }

    This seems to be working, but I guess I'm just asking here to make sure I'm not overlooking something - especially with the generation of the hash by using the class pointer in that way. Is this safe or is there a better way to do this?
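
    For comparison, the same (class + objectId) identity can be sketched in Java, where the equals()/hashCode() contract raises the identical concern: mixing the runtime class into both methods keeps a TypeA and a TypeB with the same id from being treated as equal. The names below are illustrative.

    import java.util.Objects;

    class BaseRecord {

        private final long objectId;

        BaseRecord(long objectId) {
            this.objectId = objectId;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false; // TypeA never equals TypeB
            return objectId == ((BaseRecord) o).objectId;
        }

        @Override
        public int hashCode() {
            return Objects.hash(getClass(), objectId); // mixes the concrete class into the hash
        }
    }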

    Read the article

  • jQuery href with hash

    - by Ramesh
    Hi, I am using this code for tab navigation:

    function hashIt(toHash) {
        toHash == "" ? window.location.hash = window.location.hash.replace(/#.*/, "")
                     : window.location.hash = toHash;
        return false;
    }

    I am also using a jQuery popup on page load. A hyperlink in the popup is not working; if I remove the hashIt function it's fine, but I want both. Please help me out. Ramesh.

    Read the article

  • Tracking finger move in order to rotate a triangle: tracking is not perfect

    - by Laurent BERNABE
    I've written a custom view, with the OpenGL_1 technology, in order to let the user rotate a red triangle just by dragging it along the x axis (which gives a rotation around the Y axis). It works, but there is a bit of latency when dragging from one direction to the other (without releasing the mouse/finger). So it seems that my code is not yet "goal perfect". (I am convinced that no code is perfect in itself.) I thought of using a quaternion, but maybe it won't be so useful: must I really use a quaternion (or a kind of matrix)? I've designed the application for Android 4.0.3, but it could fit into Android API 3 (Android 1.5) as well (at least, I think it could). So here is my main layout, activity_main.xml:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical" >

        <com.laurent_bernabe.android.triangletournant3d.MyOpenGLView
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

    </LinearLayout>

    Here is my main activity, MainActivity.java:

    package com.laurent_bernabe.android.triangletournant3d;

    import android.app.Activity;
    import android.os.Bundle;
    import android.support.v4.app.NavUtils;
    import android.view.Menu;
    import android.view.MenuItem;

    public class MainActivity extends Activity {

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.activity_main, menu);
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            switch (item.getItemId()) {
                case android.R.id.home:
                    NavUtils.navigateUpFromSameTask(this);
                    return true;
            }
            return super.onOptionsItemSelected(item);
        }
    }

    And finally, my OpenGL view, MyOpenGLView.java:

    package com.laurent_bernabe.android.triangletournant3d;

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    import android.content.Context;
    import android.graphics.Point;
    import android.opengl.GLSurfaceView;
    import android.opengl.GLSurfaceView.Renderer;
    import android.opengl.GLU;
    import android.util.AttributeSet;
    import android.view.MotionEvent;

    public class MyOpenGLView extends GLSurfaceView implements Renderer {

        public MyOpenGLView(Context context, AttributeSet attrs) {
            super(context, attrs);
            setRenderer(this);
        }

        public MyOpenGLView(Context context) {
            this(context, null);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            int actionMasked = event.getActionMasked();
            switch (actionMasked) {
                case MotionEvent.ACTION_DOWN:
                    savedClickLocation = new Point((int) event.getX(), (int) event.getY());
                    break;
                case MotionEvent.ACTION_UP:
                    savedClickLocation = null;
                    break;
                case MotionEvent.ACTION_MOVE:
                    Point newClickLocation = new Point((int) event.getX(), (int) event.getY());
                    int dx = newClickLocation.x - savedClickLocation.x;
                    angle += Math.toRadians(dx);
                    break;
            }
            return true;
        }

        @Override
        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glLoadIdentity();
            GLU.gluLookAt(gl, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
            gl.glRotatef(angle, 0f, 1f, 0f);
            gl.glColor4f(1f, 0f, 0f, 0f);
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, triangleCoordsBuff);
            gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3);
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 10f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);
        }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glEnable(GL10.GL_DEPTH_TEST);
            gl.glClearDepthf(1.0f);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            buildTriangleCoordsBuffer();
        }

        private void buildTriangleCoordsBuffer() {
            ByteBuffer buffer = ByteBuffer.allocateDirect(4 * triangleCoords.length);
            buffer.order(ByteOrder.nativeOrder());
            triangleCoordsBuff = buffer.asFloatBuffer();
            triangleCoordsBuff.put(triangleCoords);
            triangleCoordsBuff.rewind();
        }

        private float[] triangleCoords = {-1f, -1f, +1f, -1f, +1f, +1f};
        private FloatBuffer triangleCoordsBuff;
        private float angle = 0f;
        private Point savedClickLocation;
    }

    I don't think I really have to give you my manifest file, but I can if you think it is necessary. I've only tested on the emulator, not on a real device. So, how can I improve the reactivity? Thanks in advance.

    Read the article

  • Is perl's each function worth using?

    - by eugene y
    From perldoc -f each we read:

    There is a single iterator for each hash, shared by all each, keys, and values function calls in the program; it can be reset by reading all the elements from the hash, or by evaluating keys HASH or values HASH.

    The iterator is not reset when you leave the scope containing the each(), and this can lead to bugs:

    my %h = map { $_, 1 } qw(1 2 3);
    while (my $k = each %h) { print "1: $k\n"; last }
    while (my $k = each %h) { print "2: $k\n" }

    Output:

    1: 1
    2: 3
    2: 2

    What are the common workarounds for this behavior? And is it worth using each in general?

    Read the article

  • HASH reference error with HTTP::Message::decodable

    - by scarba05
    Hi, I'm getting a "Can't use an undefined value as a HASH reference" error trying to call HTTP::Message::decodable() using Perl 5.10 / libwww installed on Debian Lenny via the aptitude package manager. I'm really stuck, so I would appreciate some help please. Here's the error:

    Can't use an undefined value as a HASH reference at (eval 2) line 1.
     at test.pl line 4
        main::__ANON__('Can\'t use an undefined value as a HASH reference at (eval 2)...') called at (eval 2) line 1
        HTTP::Message::__ANON__() called at test.pl line 6

    Here's the code:

    use strict;
    use HTTP::Request::Common;
    use Carp;

    $SIG{ __DIE__ } = sub { Carp::confess( @_ ) };
    print HTTP::Message::decodable();

    Read the article

  • Are hash values globally unique

    - by Wololo
    I want to generate a hash code for a file. Using C# I would do something like this and then store the value in a database:

    byte[] b = File.ReadAllBytes(@"C:\image.jpg");
    string hash = ComputeHash(b);

    Now, if I use, say, a Java program that implements the same hashing algorithm (MD5), can I expect the hash values to be equal to the value generated in C#? What if I execute the Java program in different environments - Windows, Linux or Mac?
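
    For what it's worth, MD5 (like any cryptographic hash) is defined purely over the input bytes, so identical bytes give an identical digest regardless of operating system or language; differences only come from the bytes themselves (text re-encoding, CRLF/LF conversion) or from how the digest is formatted (hex versus Base64, upper versus lower case). A small Java sketch of the equivalent computation (the file path is simply the one from the question):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public class FileMd5 {

        static String md5Hex(Path file) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Same bytes in, same 32-character hex digest out, on Windows, Linux or Mac.
            System.out.println(md5Hex(Path.of("C:\\image.jpg")));
        }
    }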

    Read the article

  • Why a different SHA-1 for the same file under Windows or Linux?

    - by Fabio Vitale
    Why does computing the SHA-1 hash of the same file, on the same machine, produce two completely different SHA-1 hashes in Windows and inside an msysgit Git bash? Wasn't the SHA-1 algorithm intended to produce the same hash for the same file on all OSes? On Windows (with HashCheck):

    File hello.txt
    22596363b3de40b06f981fb85d82312e8c0ed511

    Inside msysgit's Git bash window (same machine, same file):

    $ git hash-object hello.txt
    3b18e512dba79e4c8300dd08aeb37f8e728b8dad

    Read the article

  • For Loop help In a Hash Cracker Homework.

    - by aaron burns
    On the homework I am working on we are making a hash cracker. I am implementing it so as to have my Cracker.java call Worker.java. Worker.java implements Runnable. Worker is to take the start and end of a list of chars, the hash it is to crack, and the max length of the password that made the hash. I know I want to do a loop in run(), BUT I cannot think of how to write it so that it goes up to the given max password length. I have posted the code I have so far. Any directions or areas I should look into? I thought there was a certain way to write the loop that does this, but I don't know or can't find the correct syntax. Oh, also: in main I divide the work up so that x number of threads can be chosen, and I know that as of right now it only works when the thread count evenly divides the 40 possible chars given.

    package HashCracker;

    import java.util.*;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class Cracker {

        // Array of chars used to produce strings
        public static final char[] CHARS = "abcdefghijklmnopqrstuvwxyz0123456789.,-!".toCharArray();
        public static final int numOfChar = 40;

        /* Given a byte[] array, produces a hex String, such as "234a6f", with 2 chars
           for each byte in the array. (provided code) */
        public static String hexToString(byte[] bytes) {
            StringBuffer buff = new StringBuffer();
            for (int i = 0; i < bytes.length; i++) {
                int val = bytes[i];
                val = val & 0xff;          // remove higher bits, sign
                if (val < 16) buff.append('0'); // leading 0
                buff.append(Integer.toString(val, 16));
            }
            return buff.toString();
        }

        /* Given a string of hex byte values such as "24a26f", creates a byte[] array
           of those values, one byte value -128..127 for each 2 chars. (provided code) */
        public static byte[] hexToArray(String hex) {
            byte[] result = new byte[hex.length() / 2];
            for (int i = 0; i < hex.length(); i += 2) {
                result[i / 2] = (byte) Integer.parseInt(hex.substring(i, i + 2), 16);
            }
            return result;
        }

        public static void main(String args[]) throws NoSuchAlgorithmException {
            if (args.length == 1) // Hash Maker
            {
                // create a byte array and a message digest, put the password into it,
                // and get out a hash value printed to the screen using the provided methods.
                byte[] myByteArray = args[0].getBytes();
                MessageDigest hasher = MessageDigest.getInstance("SHA-1");
                hasher.update(myByteArray);
                byte[] digestedByte = hasher.digest();
                String hashValue = Cracker.hexToString(digestedByte);
                System.out.println(hashValue);
            }
            else // Hash Cracker
            {
                ArrayList<Thread> myRunnables = new ArrayList<Thread>();
                int numOfThreads = Integer.parseInt(args[2]);
                int charPerThread = Cracker.numOfChar / numOfThreads;
                int start = 0;
                int end = charPerThread - 1;
                for (int i = 0; i < numOfThreads; i++) {
                    // creates, stores and starts threads.
                    Runnable tempWorker = new Worker(start, end, args[1], Integer.parseInt(args[1]));
                    Thread temp = new Thread(tempWorker);
                    myRunnables.add(temp);
                    temp.start();
                    start = end + 1;
                    end = end + charPerThread;
                }
            }
        }
    }

    import java.util.*;

    public class Worker implements Runnable {

        private int charStart;
        private int charEnd;
        private String Hash2Crack;
        private int maxLength;

        public Worker(int start, int end, String hashValue, int maxPWlength) {
            charStart = start;
            charEnd = end;
            Hash2Crack = hashValue;
            maxLength = maxPWlength;
        }

        public void run() {
            byte[] myHash2Crack_ = Cracker.hexToArray(Hash2Crack);
            for (int i = charStart; i < charEnd + 1; i++) {
                Cracker.numOfChar[i] ////// this is where I am stuck.
            }
        }
    }
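
    One common way to write the loop being asked about is a depth-first enumeration: test the current prefix, and while any length budget remains, append each possible character and recurse. Below is a hedged, standalone sketch of that idea (the class, method, and variable names are illustrative, not the assignment's required API); in the real Worker, only the first character would come from the charStart..charEnd slice, exactly as the main method here does.

    import java.security.MessageDigest;

    public class EnumerateCandidates {

        static final char[] CHARS = "abcdefghijklmnopqrstuvwxyz0123456789.,-!".toCharArray();

        static String sha1Hex(String s) throws Exception {
            byte[] d = MessageDigest.getInstance("SHA-1").digest(s.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        // Test the current prefix, then append every character and recurse while the
        // remaining length budget allows; recursion depth bounds candidates at maxLength.
        static void extend(StringBuilder prefix, int remaining, String targetHash) throws Exception {
            if (sha1Hex(prefix.toString()).equals(targetHash)) {
                System.out.println("Match: " + prefix);
            }
            if (remaining == 0) return;
            for (char c : CHARS) {
                prefix.append(c);
                extend(prefix, remaining - 1, targetHash);
                prefix.deleteCharAt(prefix.length() - 1);
            }
        }

        public static void main(String[] args) throws Exception {
            int charStart = 0, charEnd = 19, maxLength = 4;               // this worker's slice of CHARS
            String targetHash = args.length > 0 ? args[0] : sha1Hex("abc");
            for (int i = charStart; i <= charEnd; i++) {                  // first character from the slice
                extend(new StringBuilder().append(CHARS[i]), maxLength - 1, targetHash);
            }
        }
    }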

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most efficient way. By identical I mean having the same number of order lines containing the same SKUs and the same order quantities. To achieve this I was thinking about hashing each order: we can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database, and we have .NET-based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand this task over to the DB. My first question is: should the hashing be managed by the DB, possibly using triggers, or should the hash be created on the fly using a view or something? And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and stores the value in the orders table? TIA
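
    As an illustration of the "hash per line, then combine" idea, one sketch (in Java; the type and column names are made up, not the poster's schema) is to canonicalise the order lines by sorting on SKU, concatenate sku:qty pairs, and hash the result once per order. Storing that value on the order row then makes grouping a plain GROUP BY on the hash column, whether it is computed by the loader or by a trigger.

    import java.security.MessageDigest;
    import java.util.*;

    public class OrderHasher {

        record OrderLine(String sku, int quantity) {}

        static String orderHash(List<OrderLine> lines) throws Exception {
            List<OrderLine> sorted = new ArrayList<>(lines);
            sorted.sort(Comparator.comparing(OrderLine::sku));   // line order must not affect the hash
            StringBuilder canonical = new StringBuilder();
            for (OrderLine l : sorted) {
                canonical.append(l.sku()).append(':').append(l.quantity()).append(';');
            }
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(canonical.toString().getBytes("UTF-8"));
            return Base64.getEncoder().encodeToString(digest);   // store per order, then GROUP BY it
        }

        public static void main(String[] args) throws Exception {
            System.out.println(orderHash(List.of(new OrderLine("SKU-7", 1), new OrderLine("SKU-1", 2))));
        }
    }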

    Read the article

  • Distributions and hashes

    - by Don Mackenzie
    Has anyone ever had an instance of downloading software from a genuine site, where an MD5 or SHA-series hash for the download is also supplied, and then discovered that the hash calculated from the downloaded artifact doesn't match the published hash? I understand the theory but am curious how prevalent the problem is. Many software publishers seem to discount the threat.

    Read the article

  • Python unhash value

    - by blah01
    Hi all, I am a newbie to Python. Can I unhash, or rather, how can I unhash a value? I am using the standard hash() function. What I would like to do is to first hash a value, send it somewhere, and then unhash it, as such:

    # process X
    hashedVal = hash(someVal)

    # send and receive in process Y
    someVal = unhash(hashedVal)
    # for example, print it
    print someVal

    Thx in advance

    Read the article

  • Why does git hash-object return a different hash than openssl sha1?

    - by user657606
    Context: I downloaded a file (Audirvana 0.7.1.zip) from code.google to my Macbook Pro (Mac OS X 10.6.6). (current url: http://code.google.com/p/audirvana/downloads/detail?name=Audirvana%200.7.1.zip&can=2&q= ) I wanted to verify the checksum, which for that particular file is posted as 862456662a11e2f386ff0b24fdabcb4f6c1c446a (SHA-1). git hash-object gave me a different hash, but openssl sha1 returned the expected 862456662a11e2f386ff0b24fdabcb4f6c1c446a. The following experiment seems to rule out any possible download corruption or newline differences and to indicate that there are actually two different algorithms at play:

    $ echo A > foo.txt
    $ cat foo.txt
    A
    $ git hash-object foo.txt
    f70f10e4db19068f79bc43844b49f3eece45c4e8
    $ openssl sha1 foo.txt
    SHA1(foo.txt)= 7d157d7c000ae27db146575c08ce30df893d3a64

    What's going on?
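
    The difference comes from what git hashes: git hash-object computes SHA-1 over a blob header of the form "blob <size>\0" followed by the file contents, not over the raw file bytes alone. A small Java sketch of both calculations, using the foo.txt from the experiment (whose contents are "A" plus a trailing newline):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public class GitBlobSha1 {

        static String sha1Hex(byte[] data) throws Exception {
            byte[] d = MessageDigest.getInstance("SHA-1").digest(data);
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            byte[] contents = Files.readAllBytes(Path.of("foo.txt"));
            byte[] header = ("blob " + contents.length + "\0").getBytes(StandardCharsets.UTF_8);
            byte[] blob = new byte[header.length + contents.length];
            System.arraycopy(header, 0, blob, 0, header.length);
            System.arraycopy(contents, 0, blob, header.length, contents.length);
            System.out.println(sha1Hex(contents)); // plain digest, as openssl sha1 reports
            System.out.println(sha1Hex(blob));     // header-prefixed digest, as git hash-object reports
        }
    }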

    Read the article

  • Hash passwords before transmitting? (web)

    - by wag2639
    I was reading this Ars article on password security and it mentioned there are sites that "hash the password before transmitting". Now, assuming this isn't using an SSL connection (HTTPS):

    a. Is this actually secure?
    b. If it is, how would you do this in a secure manner?

    Edit 1: (some thoughts based on the first few answers)

    c. If you do hash the password before transmission, how do you use that if you only store a salted hash version of the password in your user credentials database?
    d. Just to check, if you are using an HTTPS-secured connection, is any of this necessary?
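
    One scheme that often comes up for (a) and (b) is a nonce-based challenge-response: the server issues a one-time nonce, and the client replies with an HMAC keyed on the stored credential, so the plain password never crosses the wire and the reply cannot simply be replayed. The sketch below (Java, with made-up names) is illustrative only: it does not replace HTTPS, and whatever value the client uses as the HMAC key becomes password-equivalent on the server, which is exactly the tension raised in (c).

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class ChallengeResponseSketch {

        static String hmac(String key, String nonce) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getEncoder().encodeToString(mac.doFinal(nonce.getBytes(StandardCharsets.UTF_8)));
        }

        public static void main(String[] args) throws Exception {
            String storedCredential = "salted-hash-from-the-user-table"; // what both sides share
            String nonce = "fresh-random-nonce-per-login";               // issued by the server
            String clientReply = hmac(storedCredential, nonce);          // computed client-side
            boolean ok = clientReply.equals(hmac(storedCredential, nonce)); // server recomputes and compares
            System.out.println(ok);
        }
    }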

    Read the article

  • Do I have to hash twice in C#?

    - by Joe H
    I have code like the following:

    class MyClass
    {
        string Name;
        int NewInfo;
    }

    List<MyClass> newInfo = ....          // initialize list with some values
    Dictionary<string, int> myDict = .... // initialize dictionary with some values

    foreach (var item in newInfo)
    {
        if (myDict.ContainsKey(item.Name))     // 'A' I hash the first time here
            myDict[item.Name] += item.NewInfo; // 'B' I hash the second (and third?) time here
        else
            myDict.Add(item.Name, item.NewInfo);
    }

    Is there any way to avoid doing two lookups in the Dictionary -- the first to see if it contains an entry, and the second to update the value? There may even be two hash lookups on line 'B' -- one to get the int value and another to update it.
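
    In C#, Dictionary.TryGetValue (or CollectionsMarshal.GetValueRefOrAddDefault on .NET 6+) is the usual way to collapse the check and the update into fewer lookups. The same idea expressed in Java, purely for comparison, is HashMap.merge(), which inserts or combines in a single operation:

    import java.util.*;

    public class SingleLookupSum {

        record Item(String name, int newInfo) {}

        public static void main(String[] args) {
            List<Item> items = List.of(new Item("a", 1), new Item("b", 2), new Item("a", 3));
            Map<String, Integer> totals = new HashMap<>();
            for (Item item : items) {
                totals.merge(item.name(), item.newInfo(), Integer::sum); // add or update with one lookup
            }
            System.out.println(totals); // {a=4, b=2} (map order may vary)
        }
    }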

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >