Search Results

Search found 31528 results on 1262 pages for 'spl object hash'.


  • In which object should I implement wait()/notify()?

    - by Christopher Francisco
    I'm working on an Android project with multithreading. Basically I have to wait for the server to respond before sending more data. The data-sending task is gated by the boolean flag hasServerResponded, so the thread loops indefinitely, doing nothing, until the flag becomes true. Since this boolean isn't declared as volatile (yet), and looping without doing anything wastes resources, I thought maybe I should use an AtomicBoolean and also implement a wait()/notify() mechanism. Should I call wait() and notify() on the AtomicBoolean object itself, or should I create a separate lock Object?
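    A minimal sketch of the dedicated-lock approach in Java follows; the class and method names are illustrative, not taken from the question. The sending thread blocks in wait() instead of spinning, and the thread that receives the server reply calls notifyAll():

    public class ServerGate {
        private final Object lock = new Object();
        private boolean hasServerResponded = false;

        // Called by whichever thread receives the server's reply.
        public void markResponded() {
            synchronized (lock) {
                hasServerResponded = true;
                lock.notifyAll();
            }
        }

        // Called by the sending thread; blocks instead of busy-looping.
        public void awaitResponse() throws InterruptedException {
            synchronized (lock) {
                while (!hasServerResponded) {
                    lock.wait(); // releases the lock while waiting
                }
                hasServerResponded = false; // reset for the next round trip
            }
        }
    }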

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for Outer's constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this? I could keep the current approach where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. The Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check if the user requested an override to any of the constructor arguments. Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user. Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
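    A minimal Java sketch of the second approach described above, i.e. constructor injection with the wiring pulled out into a controller layer (the question is in Python, and the class names and the threshold parameter here are purely illustrative):

    class Inner {
        private final int threshold;
        Inner(int threshold) { this.threshold = threshold; }
    }

    class Outer {
        private final Inner inner;
        Outer(Inner inner) { this.inner = inner; } // receives its dependency fully built
    }

    class Controller {
        // Called by the UI layer; a null override means "use the usual default".
        static Outer build(Integer thresholdOverride) {
            int threshold = (thresholdOverride != null) ? thresholdOverride : defaultThreshold();
            Inner inner = new Inner(threshold); // built bottom-up, outside the class hierarchy
            return new Outer(inner);
        }

        // Stand-in for the value the higher-level object would otherwise have supplied.
        static int defaultThreshold() { return 42; }
    }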

    Read the article

  • Passing functions into other functions as parameters, bad practice?

    - by BlueHat
    We've been changing how our AS3 application talks to our back end, and we're implementing a REST system to replace our old one. Sadly, the developer who started the work is now on long-term sick leave and it's been handed over to me. I've been working with it for the past week or so now and I understand the system, but there's one thing that's been worrying me: there seems to be a lot of passing of functions into functions. For example, our class that makes the call to our servers takes in a function that it will then call, passing an object to it, when the process is complete and errors have been handled, etc. It's giving me that "bad feeling" where I feel like it's horrible practice; I can think of some reasons why, but I want some confirmation before I propose a rework of the system. Has anyone had any experience with this possible problem?

    Read the article

  • What kind of programs/solutions can only be written with OOP or are too hard to achieve without it?

    - by user1598390
    Paraphrasing a recent question: What is Object Oriented Programming ill-suited for? I would like to ask the opposite question: What kind of programs cannot be written unless you use OOP? What kind of programs are not recommended to be written using non-OOP techniques? What kind of programs need OOP in order to even be written? What kind of programs would be too hard to write without OOP ? The answer to this question can help sell the idea of OOP to project leaders that have no special interest in code quality. At least they could buy the idea if one shows them the kind of things that are not even possible unless you use OOP.

    Read the article

  • Sharing object between 2 classes

    - by Justin
    I am struggling to wrap my head around sharing an object between two classes. I want to create only one instance of the object, commonlib, in my main class and then have the classes foo1 and foo2 share the properties of that one commonlib instance. commonlib is a 3rd-party class which has a property Queries that is added to in each child class of bar. This is why it is vital that only one instance is created. I create two separate queries in foo1 and foo2. This is my setup: abstract class bar{ //common methods } class foo1 extends bar{ //add query to commonlib } class foo2 extends bar{ //add query to commonlib } class main { public $commonlib = new commonlib(); public function start(){ //goal is to share one instance of $this->commonlib between foo1 and foo2 //so that they can both add to the properties of $this->commonlib (global //between the two) //now execute all of the queries after foo1 and foo2 add their query $this->commonlib->RunQueries(); } }

    Read the article

  • Any tips/tricks/resources on actually TEACHING a class on OOP? [closed]

    - by Sempus
    I may slowly be getting into teaching an Object-Oriented Programming class at my school in a year or two. I just graduated and work at my school as an Application Programmer. I'd first start off as a TA/grader and then slowly move into the Professor role. The class would be in Java. I always see resources on this fine site about HOW to program, but does anyone know any tips/tricks/resources on how to TEACH a programming class? It would be full of all different skill levels (but still semi-technical) so it would have to be a little more understandable than if it was just CS kids.

    Read the article

  • Killing Stuck Child JVMs

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL. In some situations the Child JVMs may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive Child JVMs to be initiated and shunned. The symptom is the following message: "Unable to establish connection on port …. after waiting .. seconds." The issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all Child JVMs. If this issue occurs at your site, there are a number of options to address it: configure an Operating System level kill command to force the Child JVM to be shunned when it becomes stuck; configure a Process.destroy command to be used if the kill command is not configured or desired; and specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands. Note: This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVMs are left behind. The following additional settings must be added to the spl.properties file for the Business Application Server to use this facility:
    spl.runtime.cobol.remote.kill.command – Specifies the command used to kill the Child JVM process. This can be a command or a script to execute to provide additional information. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here. Note: The PID will be appended to the kill command string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used it must be in the path and be executable by the OS user running the system.
    spl.runtime.cobol.remote.destroy.enabled – Specifies whether to use the Process.destroy command instead of the kill command. Valid values are true and false; the default is false. Note: It is recommended to use the kill command option if shunning JVMs is an issue, so this value can usually remain at its default of false.
    spl.runtime.cobol.remote.kill.delaysecs – Specifies the number of seconds to wait for the Child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.
    For example:
    spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
    spl.runtime.cobol.remote.destroy.enabled=false
    spl.runtime.cobol.remote.kill.delaysecs=10
    When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command is executed if provided. This is done after waiting spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. For the original Process.destroy command to be used on the process, spl.runtime.cobol.remote.destroy.enabled must be set to true AND spl.runtime.cobol.remote.kill.command must be omitted. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and is therefore disabled. If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, Child JVMs will not be forcibly killed.
    They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled is defaulted to false. It is recommended to invoke a script that issues the kill command rather than using kill -9 directly. For example, the following sample script ensures that the process id is an active cobjrun process before issuing the kill command:
    forcequit.sh
    #!/bin/sh
    THETIME=`date +"%Y-%m-%d %H:%M:%S"`
    if [ "$1" = "" ]
    then
      echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
      exit 1
    fi
    javaexec=cobjrun
    ps e $1 | grep -c $javaexec
    if [ $? = 0 ]
    then
      echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
      kill -9 $1
      exit 0
    else
      echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
      exit 1
    fi
    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:
    spl.runtime.cobol.remote.kill.command=forcequit.sh
    The forcequit script does not take any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call the script forcequit.sh and pass it the pid and the child JVM number, specify it as follows:
    spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}
    The script can then use the JVM number for logging purposes or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string. To use this facility the following patches must be installed: Patch 13719584 for Oracle Utilities Application Framework V2.1, Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2, and Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1.

    Read the article

  • Storing a SHA512 Password Hash in Database

    - by Chris
    In my ASP.NET web app I'm hashing my user passwords with SHA512. Despite much SO'ing and Googling I'm unclear how I should be storing them in the database (SQL2005). The code below shows the basics of how I'm creating the hash as a string; I'm currently inserting it into the database into a Char(88) column, as that seems to be the length created consistently. Is holding it as a String the best way to do it, and if so, will it always be 88 chars for SHA512 (as I have seen some bizarre stuff on Google)? Dim byteInput As Byte() = Encoding.UTF8.GetBytes(sSalt & sInput) Dim hash As HashAlgorithm = New SHA512Managed() Dim sInsertToDatabase As String = Convert.ToBase64String(hash.ComputeHash(byteInput))
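    For reference, a SHA-512 digest is always 64 bytes, and 64 bytes always Base64-encode to 88 characters (including two trailing '=' padding characters), so the encoded length is fixed. The question is VB.NET, but here is a minimal Java sketch of that check (the salt and password literals are placeholders):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class Sha512Length {
        public static void main(String[] args) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-512")
                                         .digest("salt+password".getBytes(StandardCharsets.UTF_8));
            String encoded = Base64.getEncoder().encodeToString(digest);
            System.out.println(digest.length);    // always 64 bytes
            System.out.println(encoded.length()); // always 88 characters, ending in "=="
        }
    }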

    Read the article

  • Add properties to stdClass object from another object

    - by Florin
    I would like to be able to do the following: $obj = new stdClass; $obj->status = "success"; $obj2 = new stdClass; $obj2->message = "OK"; How can I extend $obj so that it contains the properties of $obj2, eg: $obj->status //"success" $obj->message // "OK" I know I could use an array, add all properties to the array and then cast that back to object, but is there a more elegant way, something like this: extend($obj, $obj2); //adds all properties from $obj2 to $obj Thanks!

    Read the article

  • Is the salt contained in a phpass hash or do you need to salt its input?

    - by Exception e
    phpass is a widely used hashing 'framework'. Is it good practice to salt the plain password before giving it to PasswordHash (v0.2), like so?: $dynamicSalt = $record['salt']; $staticSalt = 'i5ininsfj5lt4hbfduk54fjbhoxc80sdf'; $plainPassword = $_POST['password']; $password = $plainPassword . $dynamicSalt . $staticSalt; $passwordHash = new PasswordHash(8, false); $storedPassword = $passwordHash->HashPassword($password); For reference the phpsalt class: # Portable PHP password hashing framework. # # Version 0.2 / genuine. # # Written by Solar Designer <solar at openwall.com> in 2004-2006 and placed in # the public domain. # # # class PasswordHash { var $itoa64; var $iteration_count_log2; var $portable_hashes; var $random_state; function PasswordHash($iteration_count_log2, $portable_hashes) { $this->itoa64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'; if ($iteration_count_log2 < 4 || $iteration_count_log2 > 31) $iteration_count_log2 = 8; $this->iteration_count_log2 = $iteration_count_log2; $this->portable_hashes = $portable_hashes; $this->random_state = microtime() . getmypid(); } function get_random_bytes($count) { $output = ''; if (is_readable('/dev/urandom') && ($fh = @fopen('/dev/urandom', 'rb'))) { $output = fread($fh, $count); fclose($fh); } if (strlen($output) < $count) { $output = ''; for ($i = 0; $i < $count; $i += 16) { $this->random_state = md5(microtime() . $this->random_state); $output .= pack('H*', md5($this->random_state)); } $output = substr($output, 0, $count); } return $output; } function encode64($input, $count) { $output = ''; $i = 0; do { $value = ord($input[$i++]); $output .= $this->itoa64[$value & 0x3f]; if ($i < $count) $value |= ord($input[$i]) << 8; $output .= $this->itoa64[($value >> 6) & 0x3f]; if ($i++ >= $count) break; if ($i < $count) $value |= ord($input[$i]) << 16; $output .= $this->itoa64[($value >> 12) & 0x3f]; if ($i++ >= $count) break; $output .= $this->itoa64[($value >> 18) & 0x3f]; } while ($i < $count); return $output; } function gensalt_private($input) { $output = '$P$'; $output .= $this->itoa64[min($this->iteration_count_log2 + ((PHP_VERSION >= '5') ? 5 : 3), 30)]; $output .= $this->encode64($input, 6); return $output; } function crypt_private($password, $setting) { $output = '*0'; if (substr($setting, 0, 2) == $output) $output = '*1'; if (substr($setting, 0, 3) != '$P$') return $output; $count_log2 = strpos($this->itoa64, $setting[3]); if ($count_log2 < 7 || $count_log2 > 30) return $output; $count = 1 << $count_log2; $salt = substr($setting, 4, 8); if (strlen($salt) != 8) return $output; # We're kind of forced to use MD5 here since it's the only # cryptographic primitive available in all versions of PHP # currently in use. To implement our own low-level crypto # in PHP would result in much worse performance and # consequently in lower iteration counts and hashes that are # quicker to crack (by non-PHP code). if (PHP_VERSION >= '5') { $hash = md5($salt . $password, TRUE); do { $hash = md5($hash . $password, TRUE); } while (--$count); } else { $hash = pack('H*', md5($salt . $password)); do { $hash = pack('H*', md5($hash . $password)); } while (--$count); } $output = substr($setting, 0, 12); $output .= $this->encode64($hash, 16); return $output; } function gensalt_extended($input) { $count_log2 = min($this->iteration_count_log2 + 8, 24); # This should be odd to not reveal weak DES keys, and the # maximum valid value is (2**24 - 1) which is odd anyway. 
$count = (1 << $count_log2) - 1; $output = '_'; $output .= $this->itoa64[$count & 0x3f]; $output .= $this->itoa64[($count >> 6) & 0x3f]; $output .= $this->itoa64[($count >> 12) & 0x3f]; $output .= $this->itoa64[($count >> 18) & 0x3f]; $output .= $this->encode64($input, 3); return $output; } function gensalt_blowfish($input) { # This one needs to use a different order of characters and a # different encoding scheme from the one in encode64() above. # We care because the last character in our encoded string will # only represent 2 bits. While two known implementations of # bcrypt will happily accept and correct a salt string which # has the 4 unused bits set to non-zero, we do not want to take # chances and we also do not want to waste an additional byte # of entropy. $itoa64 = './ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'; $output = '$2a$'; $output .= chr(ord('0') + $this->iteration_count_log2 / 10); $output .= chr(ord('0') + $this->iteration_count_log2 % 10); $output .= '$'; $i = 0; do { $c1 = ord($input[$i++]); $output .= $itoa64[$c1 >> 2]; $c1 = ($c1 & 0x03) << 4; if ($i >= 16) { $output .= $itoa64[$c1]; break; } $c2 = ord($input[$i++]); $c1 |= $c2 >> 4; $output .= $itoa64[$c1]; $c1 = ($c2 & 0x0f) << 2; $c2 = ord($input[$i++]); $c1 |= $c2 >> 6; $output .= $itoa64[$c1]; $output .= $itoa64[$c2 & 0x3f]; } while (1); return $output; } function HashPassword($password) { $random = ''; if (CRYPT_BLOWFISH == 1 && !$this->portable_hashes) { $random = $this->get_random_bytes(16); $hash = crypt($password, $this->gensalt_blowfish($random)); if (strlen($hash) == 60) return $hash; } if (CRYPT_EXT_DES == 1 && !$this->portable_hashes) { if (strlen($random) < 3) $random = $this->get_random_bytes(3); $hash = crypt($password, $this->gensalt_extended($random)); if (strlen($hash) == 20) return $hash; } if (strlen($random) < 6) $random = $this->get_random_bytes(6); $hash = $this->crypt_private($password, $this->gensalt_private($random)); if (strlen($hash) == 34) return $hash; # Returning '*' on error is safe here, but would _not_ be safe # in a crypt(3)-like function used _both_ for generating new # hashes and for validating passwords against existing hashes. return '*'; } function CheckPassword($password, $stored_hash) { $hash = $this->crypt_private($password, $stored_hash); if ($hash[0] == '*') $hash = crypt($password, $stored_hash); return $hash == $stored_hash; } }

    Read the article

  • md5 hash for file without File Attributes

    - by Glennular
    Using the following code to compute MD5 hashes of files: Private _MD5Hash As String Dim md5 As New System.Security.Cryptography.MD5CryptoServiceProvider Dim md5hash() As Byte md5hash = md5.ComputeHash(Me._BinaryData) Me._MD5Hash = ByteArrayToString(md5hash) Private Function ByteArrayToString(ByVal arrInput() As Byte) As String Dim sb As New System.Text.StringBuilder(arrInput.Length * 2) For i As Integer = 0 To arrInput.Length - 1 sb.Append(arrInput(i).ToString("X2")) Next Return sb.ToString().ToLower End Function We are getting different hashes depending on the create-date and modify-date of the file. We are storing the hash and the binary file in a SQL DB. This works fine when we upload the same instance of a file. But when we save a new instance of the file from the DB (with today's date as the create/modify dates) on the file-system and then check the new hash versus the MD5 stored in the DB, they do not match, and therefore fail a duplicate check. How can we compute a file hash that excludes the file attributes? Or is there a different issue here?

    Read the article

  • Using hash functions with Bloom filters

    - by dangerstat
    Hi, A Bloom filter uses a hash function (or several) to generate a value between 0 and m given an input string X. My question is how you use a hash function to generate a value in this way. For example, an MD5 hash is typically represented as a 32-character hex string; how would I use an MD5 hashing algorithm to generate a value between 0 and m, where I can specify m? I'm using Java at the moment, so an example of how to do this with the MessageDigest functionality it offers would be great, though just a generic description of how to go about it would be fine too. Thanks
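    One common way to do this is to interpret the digest bytes as a non-negative integer and reduce it modulo m. A minimal Java sketch using MessageDigest (the method names and the choice of MD5 are illustrative):

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class BloomHash {
        // Maps an input string to a bucket in [0, m) by treating the MD5 digest
        // as an unsigned integer and reducing it modulo m.
        static int bucket(String input, int m) throws Exception {
            byte[] digest = MessageDigest.getInstance("MD5")
                                         .digest(input.getBytes(StandardCharsets.UTF_8));
            // The (1, bytes) constructor forces a non-negative BigInteger.
            return new BigInteger(1, digest).mod(BigInteger.valueOf(m)).intValue();
        }

        public static void main(String[] args) throws Exception {
            int m = 1000;
            System.out.println(bucket("hello", m)); // some value in 0..999
            // For the k hash functions of a Bloom filter, one common trick is to
            // prepend a counter to the input: bucket(i + ":" + input, m) for i = 0..k-1.
        }
    }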

    Read the article

  • Rails flash hash violation of MVC?

    - by user94154
    I know Rails' flash hash is nothing new, but I keep running into the same problem with it. Controllers should be for business logic and db queries, not formatting strings for display to the user. But the flash hash is always set in the controller. This means that I need to hack and work around Rails to use Helpers that I made to format strings for the flash hash. Is this just a pragmatic compromise to MVC or am I missing something here? How do you deal with this problem? Or do you not even see it as one?

    Read the article

  • Prevent query string manipulation by adding a hash?

    - by saille
    To protect a web application from query string manipulation, I was considering adding a query string parameter to every URL which stores a SHA1 hash of all the other query string parameters & values, then validating against the hash on every request. Does this method provide strong protection against user manipulation of query string values? Are there any other downsides/side-effects to doing this? I am not particularly concerned about the 'ugly' URLs for this private web application. URLs will still be 'bookmarkable' as the hash will always be the same for the same query string arguments. This is an ASP.NET application.
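    One caveat worth noting: an unkeyed SHA1 over the parameters can be recomputed by anyone who works out the scheme, so the hash only resists tampering if it incorporates a secret known only to the server; an HMAC is the usual construction for that. The application in the question is ASP.NET, but here is a minimal Java sketch of the idea (the key handling and parameter names are illustrative only):

    import java.nio.charset.StandardCharsets;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class QuerySigner {
        // Server-side secret; in practice this would come from configuration, not source code.
        private static final byte[] KEY = "change-me-server-secret".getBytes(StandardCharsets.UTF_8);

        // Signs a canonical query string (already sorted and URL-encoded by the caller).
        static String sign(String canonicalQuery) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
            byte[] tag = mac.doFinal(canonicalQuery.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : tag) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            String query = "id=42&page=6";
            System.out.println("/report?" + query + "&sig=" + sign(query));
            // On each request, recompute sign(query) server-side and compare it to the sig parameter.
        }
    }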

    Read the article

  • MD5 hash differences between Python and other file hashers

    - by Sam
    I have been doing a bit of programming in Python (still a n00b at it) and came across something odd. I made a small program to find the MD5 hash of a filename passed to it on the command line. I used a function I found here on SO. When I ran it against a file, I got a hash "58a...113". But when I ran Microsoft's FCIV or the md5sum.py in \Python26\Tools\Scripts\, I get a different hash, "591...ae6". The actual hashing part of the md5sum.py in Scripts is m = md5.new() while 1: data = fp.read(bufsize) if not data: break m.update(data) out.write('%s %s\n' % (m.hexdigest(), filename)) This looks functionally identical to the code in the function given in the other answer... What am I missing? (This is my first time posting to stackoverflow, please let me know if I am doing it wrong.)

    Read the article

  • How to sort Ruby Hash based on date?

    - by Eki Eqbal
    I have a hash object with the following structure: {"action1"=> {"2014-08-20"=>0, "2014-07-26"=>1, "2014-07-31"=>1 }, "action2"=> {"2014-08-01"=>2, "2014-08-20"=>2, "2014-07-25"=>2, "2014-08-06"=>1, "2014-08-21"=>1 }, "action3"=> {"2014-07-30"=>2, "2014-07-31"=>1, "2014-07-22"=>1, } } I want to sort the hash based on the date and return a Hash (not an Array). The final result should be: {"action1"=> {"2014-07-26"=>1, "2014-07-31"=>1, "2014-08-20"=>0 }, "action2"=> {"2014-07-25"=>2, "2014-08-01"=>2, "2014-08-06"=>2, "2014-08-20"=>1, "2014-08-21"=>1 }, "action3"=> {"2014-07-22"=>1, "2014-07-30"=>2, "2014-07-31"=>1 } }

    Read the article

  • javascript location.hash refreshing in IE

    - by aepheus
    I need to modify the hash, remove it after certain processing takes place so that if the user refreshes they do not cause the process to run again. This works fine in FF, but it seems that IE is reloading every time I try to change the hash. I think it is related to other things that are loading on the page, though I am not certain. I have an iframe that loads (related to the process) as well as some scripts that are still being fetched in the parent window. I can't seem to figure out a good way to change the hash after all the loading completes. And, at the same time am not even positive that it is related to the loading. Any ideas on how to solve this?

    Read the article

  • Cleaner way to replace a scalar hash value with an array ref?

    - by user275455
    I am building a hash where the keys, associated with scalars, are not necessarily unique. I want the desired behavior to be that if the key is unique, the value is the scalar. If the key is not unique, I want the value to be an array reference of the scalars associated with the key. Since the hash is built up iteratively, I don't know if the key is unique ahead of time. Right now, I am doing something like this: if(!defined($hash{$key})){ $hash{$key} = $val; } elsif(ref($hash{$key}) ne 'ARRAY'){ my @a; push(@a, $hash{$key}); push(@a, $val); $hash{$key} = \@a; } else{ push(@{$hash{$key}}, $val); } Is there a simpler way to do this?

    Read the article

  • Perl array and hash manipulation using map

    - by somebody
    I have the following test code use Data::Dumper; my $hash = { foo => 'bar', os => 'linux' }; my @keys = qw (foo os); my $extra = 'test'; my @final_array = (map {$hash->{$_}} @keys,$extra); print Dumper \@final_array; The output is $VAR1 = [ 'bar', 'linux', undef ]; Shouldn't the elements be "bar, linux, test"? Why is the last element undefined and how do I insert an element into @final_array? I know I can use the push function but is there a way to insert it on the same line as using the map command? Basically the manipulated array is meant to be used in an SQL command in the actual script and I want to avoid using extra variables before that and instead do something like: $sql->execute(map {$hash->{$_}} @keys,$extra);

    Read the article

  • Split Entire Hash Range Into n Equal Ranges

    - by noxtion
    Hello. I am looking to take a hash range (MD5 or SHA1) and split it into n equal ranges. For example, if n=5, the entire hash range would be split by 5 so that there would be a uniform distribution of key ranges. I would like range 1 to run from the beginning of the hash range to 1/5 of it, range 2 from 1/5 to 2/5, and so on all the way to the end. I am new to hashing and a little bit unsure of where I could start on solving this for a project. Any help you could give would be great.
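    A minimal Java sketch of one way to do this, assuming a 128-bit (MD5-sized) key space and using BigInteger so the arithmetic is exact (the class and method names are illustrative):

    import java.math.BigInteger;

    public class HashRanges {
        // Returns the inclusive start and exclusive end of range i (0-based) out of n,
        // for a key space of `bits` bits (128 for MD5, 160 for SHA1).
        static BigInteger[] range(int i, int n, int bits) {
            BigInteger total = BigInteger.ONE.shiftLeft(bits); // 2^bits
            BigInteger start = total.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(n));
            BigInteger end   = total.multiply(BigInteger.valueOf(i + 1)).divide(BigInteger.valueOf(n));
            return new BigInteger[] { start, end };
        }

        public static void main(String[] args) {
            for (int i = 0; i < 5; i++) {
                BigInteger[] r = range(i, 5, 128);
                System.out.printf("range %d: [%032x, %032x)%n", i + 1, r[0], r[1]);
            }
        }
    }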

    Read the article

  • Simple Hash that is always equal between C# and Java

    - by GaiusSensei
    I have a C# WebService and a (Java) Android application. Is there a SIMPLE hash function that produces the same result in these two languages? The simplest C# hash is String.GetHashCode(), but I can't replicate it in Java. The simplest Java hash is not simple at all, and I don't know if I can replicate it exactly in C#. In case it's relevant, I'm hashing passwords before sending them across the internet. I'm currently using Encode64, but that's obviously not secure since it can be reversed.
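    One approach that produces identical output in both languages is to hash the UTF-8 bytes of the string with a standard algorithm such as SHA-256 and compare the hex strings. A minimal sketch of the Java side follows (the C# side would hash the same UTF-8 bytes with System.Security.Cryptography's SHA256 and hex-encode the result); note that for protecting passwords in transit, hashing alone is generally not considered sufficient:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class PortableHash {
        // SHA-256 of the UTF-8 bytes, as a lowercase hex string; the same bytes
        // hashed with SHA-256 in C# produce the same digest.
        static String sha256Hex(String input) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                                         .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(sha256Hex("hello"));
            // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
        }
    }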

    Read the article

  • Comparing large strings in JavaScript with a hash

    - by user4815162342
    I have a form with a textarea that can contain large amounts of content (say, articles for a blog) edited using one of a number of third party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it's changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag, or an "onchange" event which I can use to see if the content has changed since the last save. So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent), as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents. Would it be more efficient to store some sort of hash in the lastSaveContent variable, instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?

    Read the article

  • Reversing a hash function

    - by martani_net
    Hi, I have the following hash function, and I'm trying to find a way to reverse it so that I can recover the key from a hashed value. uint Hash(string s) { uint result = 0; for (int i = 0; i < s.Length; i++) { result = ((result << 5) + result) + s[i]; } return result; } The code is in C# but I assume it is clear. I am aware that for one hashed value there can be more than one key, but my intent is not to find them all; just one that satisfies the hash function suffices. Any ideas? Thank you.

    Read the article

  • Use decorator and factory together to extend objects?

    - by TheClue
    I'm new to OOP and design patterns. I have a simple app that handles the generation of Tables, Columns (that belong to Table), Rows (that belong to Column) and Values (that belong to Rows). Each of these objects can have a collection of Property, which is in turn defined as an enum. They are all interfaces: I used factories to get concrete instances of these products, depending on circumstances. Now I'm facing the problem of extending these classes. Let's say I need another product called "SpecialTable" which in turn has some special properties or new methods like 'getSomethingSpecial' or an extended set of Property. Is the only way to extend/specialize all my elements (i.e. build a SpecialTableFactory, a SpecialTable interface and a SpecialTableImpl concrete class)? What should I do if, let's say, I plan to use standard methods like addRow(Column column, String name) that don't need to be specialized? I don't like the idea of inheriting factories and interfaces, but since SpecialTable has more methods than Table I guess it cannot share the same factory. Am I wrong? Another question: if I need to define product properties at run time (a Table that is upgraded to SpecialTable at runtime), I guess I should use a decorator. Is it possible (and how) to combine both the factory and decorator designs? Is it better to use a State or Strategy pattern instead?
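    A minimal Java sketch of the decorator idea raised in the last part of the question: the factory keeps returning the base Table interface, and the "special" behaviour is added at runtime by wrapping an existing instance. The Table vocabulary follows the question, but the simplified addRow signature and everything else here is illustrative:

    interface Table {
        void addRow(String columnName, String value);
    }

    class BasicTable implements Table {
        @Override
        public void addRow(String columnName, String value) {
            // store the row...
        }
    }

    // Decorator: wraps any existing Table and adds behaviour without a parallel hierarchy.
    class SpecialTable implements Table {
        private final Table inner;

        SpecialTable(Table inner) { this.inner = inner; }

        @Override
        public void addRow(String columnName, String value) {
            inner.addRow(columnName, value); // unchanged behaviour is delegated
        }

        String getSomethingSpecial() {
            return "something special";
        }
    }

    class TableFactory {
        // The factory decides at creation (or upgrade) time whether to decorate.
        static Table create(boolean special) {
            Table table = new BasicTable();
            return special ? new SpecialTable(table) : table;
        }
    }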

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article
