Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

  • Iterating Over Params Hash

    - by Joe Clark
    I'm having an extremely frustrating time getting some images to upload. They are obviously being uploaded as rack/multipart, but the way I'm iterating over my params hash must be causing the problem. I could REALLY use some help, so I can stop pulling out my hair. So I've got a params hash that looks like this:

        Parameters: {"commit"=>"Submit", "sighting_report"=>[{"number_seen"=>"1", "picture"=>#<File:/var/folders/IX/IXXrbzpCHkq68OuyY-yoI++++TI/-Tmp-/RackMultipart.85991.5>, "species_id"=>"2"}], "authenticity_token"=>"u0eN5MAfvGWtfEzrqBt4qfrL54VJ9SGX0jFLZCJ8iRM=", "sighting"=>{"sighting_date(2i)"=>"6", "name"=>"", "sighting_date(3i)"=>"5", "county"=>"0", "notes"=>"", "location"=>"", "sighting_date(1i)"=>"2010", "email"=>""}}

    My form can have multiple sighting reports with multiple pictures in each sighting report. Here's my controller code:

        def create_multiple
          @report = Report.new
          @report.name = params[:sighting]["name"]
          @report.sighting_date = Date.civil(params[:sighting][:"sighting_date(1i)"].to_i,
                                             params[:sighting][:"sighting_date(2i)"].to_i,
                                             params[:sighting][:"sighting_date(3i)"].to_i)
          @report.county_id = params[:sighting][:county]
          @report.location = params[:sighting][:location]
          @report.notes = params[:sighting][:notes]
          @report.email = params[:sighting][:email]
          @report.save!
          @report.reload
          for sr in params[:sighting_report] do
            sighting = SightingReport.new
            sighting.report_id = @report.id
            sighting.species_id = sr[:species_id]
            sighting.number_seen = sr[:number_seen]
            sighting.save
            if sr[:picture]
              sighting.reload
              for pic in sr[:picture] do
                p = SpeciesPic.new
                p.uploaded_picture = pic
                p.species_id = sighting.species_id
                p.report_id = @report.id
                p.save!
              end
            end
          end
          redirect_to :action => 'new_multiple'
        end
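
    A likely culprit here is that "picture" arrives as a single uploaded file rather than an array, so the inner for loop does not iterate over uploads the way the code expects. As a minimal, hypothetical sketch (field names taken from the question, not verified), the picture value can be normalised to an array before looping:

        # Sketch: treat sr[:picture] as either one upload or a list of uploads.
        (params[:sighting_report] || []).each do |sr|
          pics = sr[:picture].is_a?(Array) ? sr[:picture] : [sr[:picture]].compact
          pics.each do |pic|
            # pic is now always a single uploaded file object
          end
        end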

    Read the article

  • Problem storing a hash in DB using Storable::nfreeze in Perl

    - by Sam
    I want to insert a hash in the DB using Storable::nfreeze, but the data is not inserted properly. My code is as follows:

        %rec = ();
        $rec{'name'} = 'my name';
        $rec{'address'} = 'my address';
        my $order1 = new Order();
        $order1->set_session(\%rec);
        $self->createOrder($order1);

        sub createOrder {
            my $self  = $_[0];
            my $order = $_[1];
            # Retrieve the fields to insert into the database.
            my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
            my $session = %{$order->get_session()};
            $st->execute(&Storable::nfreeze(\%session), .....);
            $st->finish();
        }

        sub getOrder {
            ...
            my $session = &Storable::thaw( $ref->{'session'} );
            .....
        }

    The thaw is working fine, because I tested it with some rows that had been inserted correctly, but when I try to get a row that was inserted using the createOrder subroutine, I get an error saying:

        Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line that calls thaw; the nfreeze did not store the hash properly. Can someone point me to what I'm doing wrong in the createOrder subroutine? I know the module version has nothing to do with the problem.

    Read the article

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be, too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people have obviously encountered these problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone; I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.

    Read the article

  • SQL Cluster install on Hyper V options

    - by Chris W
    I've been reading up on running a SQL cluster in a Hyper-V environment, and there seem to be a couple of options:

    1. Install a guest cluster on 2 VMs that are themselves part of a failover cluster.
    2. Install a SQL cluster on 2 VMs where the VMs themselves are not part of an underlying cluster.

    With option 1 it's a little more complex, as there are effectively two clusters in play, but this adds some flexibility in the sense that I'm free to migrate the VMs between the physical blades in their cluster for physical maintenance without affecting the status of the SQL guest cluster that's running within them. With option 2 the set-up is a bit simpler, as there's only 1 cluster in the mix, but my VMs are anchored to the physical blades that they're set up on (I'll ignore the fact I could manually move the VHDs for the purposes of this question). Are there any other factors that I should consider here when deciding which option to go for? I'm free to test out both options and probably will do, but if anyone has working experience of these set-ups and can offer some input, that would be great.

    Read the article

  • How do I debug a cluster running Microsoft server 2003?

    - by alcor
    I'm the sole developer of a complex, critical software system, written in Visual C++ 2005. It's deployed in a classic Microsoft cluster scenario (active/passive) on Windows Server 2003 R2: if server A goes down, the other one (B) starts up and takes over its duties. You have to know that:

    - Both servers have the same Microsoft patches/fixes, the same hardware, the same everything.
    - Both servers use the same shared storage (a RAID-6 array accessed over Fibre Channel).
    - The software has a main module that launches the peripheral modules; if a peripheral module crashes, the main module restarts it.

    When I switch the application to one of the two servers (let's say server B), two of the peripheral modules start to crash, apparently without reason, about 2 seconds after they are started. What could I do to analyze/inspect/resolve this weird situation?

    Read the article

  • Hash#key for 1.8.6

    - by Tobias
    Greetings, I am trying to make my 1.9.1 source 1.8.6-compatible. I noticed that there's no Hash#key method in 1.8.6. Any idea how to solve that? Thanks! Tobias
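
    For reference, Ruby 1.8's Hash#index already returns the key for a given value; 1.9 simply renamed it to Hash#key. A minimal backport sketch along those lines (untested against every 1.8 patch level) would be:

        # Sketch: define Hash#key on 1.8 in terms of Hash#index, which does
        # the same lookup (value -> first matching key, or nil).
        unless Hash.method_defined?(:key)
          class Hash
            def key(value)
              index(value)
            end
          end
        end

        { :a => 1, :b => 2 }.key(2)   # => :b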

    Read the article

  • SQL Server 2000, yes 2000 password hash

    - by Justin808
    I need to store a password hash in a SQL Server 2000 database. The information isn't critical, but I really don't want to store the password in clear text. How can I compute a hash (SHA, SHA1, MD5, etc.) in SQL Server 2000, where HashBytes isn't available? I'm not looking for a compiled DLL or the like; I don't have access to the server, so it needs to be pure MS SQL.

    Read the article

  • setting ruby hash .default to a list

    - by matpalm
    I thought I understood what the default method does to a hash: give a default value for a key if it doesn't exist.

        irb(main):001:0> a = {}
        => {}
        irb(main):002:0> a.default = 4
        => 4
        irb(main):003:0> a[8]
        => 4
        irb(main):004:0> a[9] += 1
        => 5
        irb(main):005:0> a
        => {9=>5}

    All good. But if I set the default to be an empty list, or an empty hash, I don't understand its behaviour at all:

        irb(main):001:0> a = {}
        => {}
        irb(main):002:0> a.default = []
        => []
        irb(main):003:0> a[8] << 9
        => [9]   # great!
        irb(main):004:0> a
        => {}    # ?! would have expected {8=>[9]}
        irb(main):005:0> a[8]
        => [8]   # awesome!
        irb(main):006:0> a[9]
        => [9]   # unawesome! shouldn't this be [] ??

    I was hoping/expecting the same behaviour as if I had used the ||= operator:

        irb(main):001:0> a = {}
        => {}
        irb(main):002:0> a[8] ||= []
        => []
        irb(main):003:0> a[8] << 9
        => [9]
        irb(main):004:0> a
        => {8=>[9]}
        irb(main):005:0> a[9]
        => nil

    Can anyone explain what is going on?
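
    What usually explains the confusion: Hash#default hands back one shared object on every miss and never assigns it into the hash, so a[8] << 9 mutates that shared default array while the hash itself stays empty. The block form of Hash.new assigns on first access, which gives the ||= -like behaviour. A small sketch:

        # Sketch: the block runs on each missing key and stores a fresh array,
        # so << lands in the hash instead of in one shared default object.
        a = Hash.new { |hash, key| hash[key] = [] }
        a[8] << 9
        a      # => {8=>[9]}
        a[9]   # => [] (note: merely reading a[9] also assigns it)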

    Read the article

  • computing hash values, integral types versus struct/class

    - by aaa
    Hello, I would like to know if there is a difference in speed between computing the hash value (for example, for an std::map key) of a primitive integral type, such as int64_t, and of a POD type, for example struct { int16_t v[4]; };. I know this is going to be implementation specific, so my question ultimately pertains to the GNU standard library. Thanks

    Read the article

  • Good Hash Function for Strings

    - by Leif Andersen
    I'm trying to think up a good hash function for strings, and I was thinking it might be a good idea to sum up the Unicode values of the first five characters in the string (assuming it has five; otherwise stop where it ends). Would that be a good idea, or is it a bad one? I am doing this in Java, but I wouldn't imagine that would make much of a difference.
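
    The weakness of summing a prefix is easy to demonstrate: any strings whose first five characters are permutations of each other, or that share a five-character prefix, collide. A quick language-agnostic sketch of the problem, and of the usual fix (a polynomial hash over every character), written here in Ruby for brevity even though the question is about Java:

        def sum_first_five(s)
          s[0, 5].unpack("U*").inject(0) { |sum, c| sum + c }
        end

        sum_first_five("stop")   == sum_first_five("pots")    # => true (collision)
        sum_first_five("abcdef") == sum_first_five("abcdez")  # => true (prefix only)

        # djb2-style polynomial hash: order and every character matter.
        def poly_hash(s)
          s.unpack("U*").inject(5381) { |h, c| (h * 33 + c) & 0xFFFFFFFF }
        end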

    Read the article

  • Use hash or case-statement [Ruby]

    - by user94154
    Generally, which is better to use?

        case n
        when 'foo' then result = 'bar'
        when 'peanut butter' then result = 'jelly'
        when 'stack' then result = 'overflow'
        end
        return result

    or

        map = {'foo' => 'bar', 'peanut butter' => 'jelly', 'stack' => 'overflow'}
        return map[n]

    More specifically, when should I use case statements and when should I simply use a hash?
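
    One way to frame it: a hash is a plain value-to-value lookup, while case compares with ===, so it can branch on ranges, regexps, and classes. A short sketch of each style (the fetch default is an assumption, not from the question):

        # Pure lookup: a hash reads well and is trivially data-driven.
        MAP = {'foo' => 'bar', 'peanut butter' => 'jelly', 'stack' => 'overflow'}
        MAP.fetch(n, 'unknown')

        # case earns its keep when the branches are not simple equality:
        case n
        when /butter\z/ then 'jelly'
        when Integer    then 'a number'
        else                 'unknown'
        end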

    Read the article

  • hash password in mssql (asp.net)

    - by ile
    Is this how a hashed password stored in MSSQL should look? This is the function I use to hash the password (I found it in some tutorial):

        public string EncryptPassword(string password)
        {
            //we use codepage 1252 because that is what sql server uses
            byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password);
            byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes);
            return Encoding.GetEncoding(1252).GetString(hashBytes);
        }

    Thanks, Ile

    Read the article

  • Perl, convert hash to array

    - by Mike
    If I have a hash in Perl that contains complete and sequential integer mappings (i.e., all keys from 0 to n are mapped to something), is there a means of converting this to an array? I know I could iterate over the key/value pairs and place them into a new array, but something tells me there should be a built-in means of doing this.

    Read the article

  • Searching for cluster computation framework

    - by petkov_d
    I have a library, written in C#, containing one method: Response CalculateSomething(Request). The execution time of this method is relatively long, and there are a lot of requests to process. I want to use a "cluster": spread this DLL to different machines (nodes) in the cluster and write some controller that will distribute requests to the nodes. There should be a mechanism that prevents losing tasks when a node crashes, plus load balancing. Can someone suggest a framework that addresses this? P.S. There is a framework, Qizmt, written in C#, but I think MapReduce is not a good fit for the above scenario.

    Read the article

  • Using the C Cluster library in Visual C++.

    - by Stefan K.
    Right, so I'm trying to use a C library in C++. I've never actually done this before; I thought it would be a case of declaring the header includes inside an extern "C" block and setting the "compile as" flag to "Default", but I'm still getting linker errors, and I think the header file might have to be compiled as a DLL. I have no idea, really. Is it the library that's the problem, or is it me? There are some makefiles in cluster-1.47\src, but I don't know how or if they relate to "cluster.h". I've uploaded a Visual Studio 2008 project for anyone to take a gander at; any help would be appreciated, as I've been hitting my head against the wall for some time now. Thanks, Stefan. Link to Visual Studio 2008 Project

    Read the article

  • djb2 Hash Function

    - by Jainish
    I am using the djb2 algorithm to generate the hash key for a string, which is as follows:

        unsigned long hash(unsigned char *str)
        {
            unsigned long hash = 5381;
            int c;

            while (c = *str++)
                hash = ((hash << 5) + hash) + c; /* hash * 33 + c */

            return hash;
        }

    Now, every loop iteration multiplies two big numbers, and after the 4th or 5th character of the string the hash value becomes huge and overflows. What is the correct way to refactor this so that the hash value does not overflow and the hashing still happens correctly?

    Read the article

  • Solution 6 : Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a Windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. Service A is part of the failover cluster service, which means that when a failover occurs at node 1, the cluster service fails service A over from node 1 to node 2. One of the tasks implemented by service A is to start, monitor, or kill a process B. Process B is installed on both nodes but is not part of the failover cluster service, so when a failover occurs at node 1, the cluster service does not fail process B over from node 1 to node 2, and process B keeps running at node 1. The requirement is: when a failover occurs at node 1, we want the process B running at node 1 to be killed, but we do not want process B to be part of the failover cluster service.

    The first idea that pops up immediately is to put some code in an event handler of service A that is triggered by the failover. To service A, a failover looks much like having its process killed from Task Manager, but there is no Windows service event that is triggered by killing the service's process. The events related to terminating a Windows service are OnStop and OnShutdown, and killing the process of service A triggers neither of them; the OnStop event can only be triggered by stopping the service through the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible.

    The second idea is to put code into the OnStart event handler of service A. When a failover occurs at node 1, service A is killed at node 1 and started at node 2, and during startup the service A at node 2 kills the process B that is still running at node 1. It is a workaround, and it works very well. The C# implementation within the OnStart event handler is as follows:

    1. Capture the server names of the two nodes from App.config.
    2. Determine the server name of the remote node.
    3. Kill the process B running on the remote node.

    Check here for sample code.

    Read the article

  • How to use the md5 hash?

    - by Ken
    Okay, so I'm learning PHP, HTML, and MySQL to learn website development (for fun). One thing I still don't get is how to use MD5 or SHA1 hashes. I know how to hash the plain text, but say I want to make a login page. Since the password is hashed and can't be reversed, how would MySQL know that the user-entered password matches the hashed password in the database? Here is what I mean:

        $password = md5($_POST['password']);
        $query = ("INSERT INTO `users`.`data` (`password`) VALUES ('$password')");

    I know that this snippet of script hashes the password, but how would I use this piece of code to make a login page? Any working examples would be great.

    Read the article

  • Salt and hash a password in .NET

    - by Jon Canning
    I endeavoured to follow the CrackStation rules: Salted Password Hashing - Doing it Right

        public class SaltedHash
        {
            public string Hash { get; private set; }
            public string Salt { get; private set; }

            public SaltedHash(string password)
            {
                var saltBytes = new byte[32];
                new RNGCryptoServiceProvider().GetNonZeroBytes(saltBytes);
                Salt = ConvertToBase64String(saltBytes);
                var passwordAndSaltBytes = Concat(password, saltBytes);
                Hash = ComputeHash(passwordAndSaltBytes);
            }

            static string ConvertToBase64String(byte[] bytes)
            {
                return Convert.ToBase64String(bytes);
            }

            static string ComputeHash(byte[] bytes)
            {
                return ConvertToBase64String(SHA256.Create().ComputeHash(bytes));
            }

            static byte[] Concat(string password, byte[] saltBytes)
            {
                var passwordBytes = Encoding.UTF8.GetBytes(password);
                return passwordBytes.Concat(saltBytes).ToArray();
            }

            public static bool Verify(string salt, string hash, string password)
            {
                var saltBytes = Convert.FromBase64String(salt);
                var passwordAndSaltBytes = Concat(password, saltBytes);
                var hashAttempt = ComputeHash(passwordAndSaltBytes);
                return hash == hashAttempt;
            }
        }

    Read the article
