Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

Page 8/165

  • Rails flash hash violation of MVC?

    - by user94154
    I know Rails' flash hash is nothing new, but I keep running into the same problem with it. Controllers should be for business logic and db queries, not formatting strings for display to the user. But the flash hash is always set in the controller. This means that I need to hack and work around Rails to use Helpers that I made to format strings for the flash hash. Is this just a pragmatic compromise to MVC or am I missing something here? How do you deal with this problem? Or do you not even see it as one?

    Read the article

  • Prevent query string manipulation by adding a hash?

    - by saille
    To protect a web application from query string manipulation, I was considering adding a query string parameter to every URL which stores a SHA1 hash of all the other query string parameters and values, then validating against the hash on every request. Does this method provide strong protection against user manipulation of query string values? Are there any other downsides/side-effects to doing this? I am not particularly concerned about the 'ugly' URLs for this private web application. URLs will still be 'bookmarkable' as the hash will always be the same for the same query string arguments. This is an ASP.NET application.
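
    A minimal sketch of the idea in Python (the question is about ASP.NET; the parameter names and the secret are made up for illustration). Note that a hash of only the visible parameters can be recomputed by the client, so it is a server-side secret (an HMAC key) that actually prevents tampering:

    ```python
    import hashlib
    import hmac
    from urllib.parse import urlencode, parse_qsl

    SECRET_KEY = b"server-side secret, never sent to the client"   # hypothetical value

    def sign_query(params: dict) -> str:
        """Return a query string with an appended 'sig' parameter."""
        canonical = urlencode(sorted(params.items()))               # fixed parameter order
        sig = hmac.new(SECRET_KEY, canonical.encode(), hashlib.sha1).hexdigest()
        return canonical + "&sig=" + sig

    def verify_query(query_string: str) -> bool:
        """Re-compute the signature and compare in constant time."""
        params = dict(parse_qsl(query_string))
        sig = params.pop("sig", "")
        canonical = urlencode(sorted(params.items()))
        expected = hmac.new(SECRET_KEY, canonical.encode(), hashlib.sha1).hexdigest()
        return hmac.compare_digest(sig, expected)

    qs = sign_query({"user": "42", "page": "8"})
    assert verify_query(qs)                                         # untouched query passes
    assert not verify_query(qs.replace("user=42", "user=1"))        # tampering is detected
    ```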

    Read the article

  • low performance on HPC cluster (sge) when running multiple jobs

    - by Yotam
    I know this is a long shot, but I'm clueless here. I'm running several computer simulations on a High Performance Computing (HPC) cluster managed by Oracle Grid Engine (SGE). A single job runs at a certain speed (roughly 80 steps per second); when I add jobs to the machine, at a certain threshold, the speed is reduced by a factor of two. On one machine (I don't know the CPU kind) the threshold is 11 jobs for 16 CPUs. On another one with the same number and kind of CPUs, the threshold is 8. I thought at first that this was a memory issue, but each job takes about 60MB - 100MB and I have 16GB of RAM on each of those machines. Have any of you encountered such a problem? Is there any way to analyze this? Thanks.

    Read the article

  • Shared block device file system (cluster file system without networking)

    - by fungs
    Is there any file system for Linux that can be mounted multiple times and supports concurrent file access? Basically I want something like a cluster file system, but without the need to have a running network for a distributed lock manager. That can be very handy in connection with virtual machines that can share data with the host or another VM without the need to create a network link. I want to avoid a network link to keep the network architecture secure (the virtual machine is in a DMZ) while still sharing large files. No need to scale it up, just two machines that mount the same block device. Shouldn't it be possible to have the file locking information right on the disk?

    Read the article

  • How to present shared storage for MS Cluster Services running on vSphere 5

    - by MDMarra
    I've seen two approaches to handling the presentation of shared storage to Windows Server 2008 R2 cluster VMs on VMware vSphere. One is the traditional method of carving out a LUN on your SAN and presenting it to both hosts through the Microsoft iSCSI software initiator. The other method is to make a vmdk on an existing LUN, attach it to both hosts, and mark it as an independent disk so that it isn't affected by snapshots. Is one way the "correct" way, or are both viable? Is there any advantage or disadvantage to doing either?

    Read the article

  • Windows 2012 Cluster on P6300 SCSI-3 Persistent Reservation issues

    - by Bruno J. Melo
    Scenario: 1 HP P6300 with the latest XCS version; 1 Command View 10.1+ with hosts defined as Windows 2008; 2 BL460c Gen8 servers with SPP 2012.10 and Windows Server 2012 Datacenter Edition with all the updates + the MPIO feature enabled; DSM v4.03.00. The Cluster Analyser Tool triggers this error: "Test Disk 0 does not support SCSI-3 Persistent Reservations commands needed to support clustered Storage Pools. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters." Any ideas? Thanks for your help!

    Read the article

  • MD5 hash differences between Python and other file hashers

    - by Sam
    I have been doing a bit of programming in Python (still a n00b at it) and came across something odd. I made a small program to find the MD5 hash of a file whose name is passed to it on the command line. I used a function I found here on SO. When I ran it against a file, I got a hash "58a...113". But when I ran Microsoft's FCIV or the md5sum.py in \Python26\Tools\Scripts\, I got a different hash, "591...ae6". The actual hashing part of the md5sum.py in Scripts is: m = md5.new() while 1: data = fp.read(bufsize) if not data: break m.update(data) out.write('%s %s\n' % (m.hexdigest(), filename)) This looks functionally identical to the code in the function given in the other answer... What am I missing? (This is my first time posting to Stack Overflow, please let me know if I am doing it wrong.)
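
    One common cause of exactly this symptom (an assumption about the asker's program, not something the question confirms) is opening the file in text mode on Windows, where '\r\n' to '\n' translation changes the bytes being hashed. A minimal sketch that reads in binary mode and should agree with md5sum and FCIV:

    ```python
    import hashlib
    import sys

    def md5_of_file(path, bufsize=8192):
        m = hashlib.md5()
        with open(path, "rb") as fp:          # 'rb', not 'r': hash the raw bytes
            while True:
                data = fp.read(bufsize)
                if not data:
                    break
                m.update(data)
        return m.hexdigest()

    if __name__ == "__main__":
        print(md5_of_file(sys.argv[1]))
    ```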

    Read the article

  • MySQL cluster: 20Tb x 3K tables

    - by ethrbunny
    Over the next 2-3 years we will be scaling up data collection for a project. As a result, the amount of data will grow 10-fold. Our current MySQL installation can keep up with the 2TB of data, but for larger queries there is a fair amount of I/O wait. I'm investigating a migration to a clustered solution to spread out the I/O, but am wondering about NDB and what happens to data that doesn't get accessed very often. The impression I get from reading about MySQL Cluster is that it relies on in-memory tables for most of the data. What happens with tables that don't get accessed very often (or at all)? And how does backup work? Can I use mysqldump, or is there a better solution?

    Read the article

  • Creating New WebSphere cluster

    - by user154561
    I need to improve WAS performance and want to set up a cluster. I have two separate machines with WebSphere 7 on them. As I understand it, to do this I need to add a node from my second (remote) WAS installation to the first one. I tried to use "Add node" from the console, but without success: it can't find the host when I try to execute it. The WAS help says this about host: Specifies the host name or IP address of the node to add to the cell. A WebSphere Application Server instance must be running on this machine. Does that mean that I cannot add a node from a remote machine with "Add node"?

    Read the article

  • How to sort Ruby Hash based on date?

    - by Eki Eqbal
    I have a hash object with the following structure: {"action1"=> {"2014-08-20"=>0, "2014-07-26"=>1, "2014-07-31"=>1 }, "action2"=> {"2014-08-01"=>2, "2014-08-20"=>2, "2014-07-25"=>2, "2014-08-06"=>1, "2014-08-21"=>1 }, "action3"=> {"2014-07-30"=>2, "2014-07-31"=>1, "2014-07-22"=>1 } } I want to sort the hash based on the date and return a Hash (not an Array). The final result should be: {"action1"=> {"2014-07-26"=>1, "2014-07-31"=>1, "2014-08-20"=>0 }, "action2"=> {"2014-07-25"=>2, "2014-08-01"=>2, "2014-08-06"=>1, "2014-08-20"=>2, "2014-08-21"=>1 }, "action3"=> {"2014-07-22"=>1, "2014-07-30"=>2, "2014-07-31"=>1 } }
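
    A language-neutral sketch of the underlying idea (the question itself is about Ruby; Python is used here only for illustration): the ISO "YYYY-MM-DD" dates sort correctly as plain strings, so sorting each inner mapping's items by key and rebuilding the mapping in that order is enough. Both Python dicts and Ruby 1.9+ hashes preserve insertion order:

    ```python
    data = {
        "action1": {"2014-08-20": 0, "2014-07-26": 1, "2014-07-31": 1},
        "action2": {"2014-08-01": 2, "2014-08-20": 2, "2014-07-25": 2,
                    "2014-08-06": 1, "2014-08-21": 1},
        "action3": {"2014-07-30": 2, "2014-07-31": 1, "2014-07-22": 1},
    }

    sorted_data = {
        action: dict(sorted(counts.items()))    # sort by date key, keep it a mapping
        for action, counts in data.items()
    }

    print(sorted_data["action1"])
    # {'2014-07-26': 1, '2014-07-31': 1, '2014-08-20': 0}
    ```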

    Read the article

  • Receiving a notification on a Cluster Group Failover

    - by Diego
    Hi all, I have several Windows Clusters set up and I need to keep track of failovers. I'd need to receive a notification of some sort whenever a group fails over. I've seen some examples around, but I can't rely on the approach of just sending a message whenever the resources are stopped/restarted, as it would generate too many false alarms. In a few words, I need to be notified if and only if the group really fails over. I was thinking that probably the best way is monitoring the System Event Log, but, if possible, I'd prefer not to have to write a script/program from scratch for this issue. Is there any script/product that already does it? Thanks.

    Read the article

  • javascript location.hash refreshing in IE

    - by aepheus
    I need to modify the hash, remove it after certain processing takes place so that if the user refreshes they do not cause the process to run again. This works fine in FF, but it seems that IE is reloading every time I try to change the hash. I think it is related to other things that are loading on the page, though I am not certain. I have an iframe that loads (related to the process) as well as some scripts that are still being fetched in the parent window. I can't seem to figure out a good way to change the hash after all the loading completes. And, at the same time am not even positive that it is related to the loading. Any ideas on how to solve this?

    Read the article

  • Cleaner way to replace a scalar hash value with an array ref?

    - by user275455
    I am building a hash where the keys, associated with scalars, are not necessarily unique. I want the desired behavior to be that if the key is unique, the value is the scalar. If the key is not unique, I want the value to be an array reference of the scalars associated with the key. Since the hash is built up iteratively, I don't know if the key is unique ahead of time. Right now, I am doing something like this: if(!defined($hash{$key})){ $hash{$key} = $val; } elsif(ref($hash{$key}) ne 'ARRAY'){ my @a; push(@a, $hash{$key}); push(@a, $val); $hash{$key} = \@a; } else{ push(@{$hash{$key}}, $val); } Is there a simpler way to do this?

    Read the article

  • Perl array and hash manipulation using map

    - by somebody
    I have the following test code use Data::Dumper; my $hash = { foo => 'bar', os => 'linux' }; my @keys = qw (foo os); my $extra = 'test'; my @final_array = (map {$hash->{$_}} @keys,$extra); print Dumper \@final_array; The output is $VAR1 = [ 'bar', 'linux', undef ]; Shouldn't the elements be "bar, linux, test"? Why is the last element undefined and how do I insert an element into @final_array? I know I can use the push function but is there a way to insert it on the same line as using the map command? Basically the manipulated array is meant to be used in an SQL command in the actual script and I want to avoid using extra variables before that and instead do something like: $sql->execute(map {$hash->{$_}} @keys,$extra);

    Read the article

  • Split Entire Hash Range Into n Equal Ranges

    - by noxtion
    Hello. I am looking to take a hash range (MD5 or SHA1) and split it into n equal ranges. For example, if n=5, the entire hash range would be split into 5 so that there would be a uniform distribution of key ranges. I would like range 1 to be from the beginning of the hash range to 1/5, range 2 from 1/5 to 2/5, and so on, all the way to the end. I am new to hashing and a little bit unsure of where I could start on solving this for a project. Any help you could give would be great.
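
    A minimal sketch, assuming the MD5 case: treat the digest as a 128-bit integer and cut [0, 2**128) into n contiguous, nearly equal ranges whose boundaries are i * 2**128 // n (use 160 bits for SHA-1):

    ```python
    import bisect
    import hashlib

    HASH_BITS = 128               # MD5; use 160 for SHA-1
    SPACE = 2 ** HASH_BITS

    def boundaries(n):
        """n+1 boundary values; range i is [bounds[i], bounds[i+1]) and the ranges tile the space."""
        return [i * SPACE // n for i in range(n + 1)]

    def range_index(hex_digest, bounds):
        """Index of the range that a hex digest falls into."""
        return bisect.bisect_right(bounds, int(hex_digest, 16)) - 1

    n = 5
    bounds = boundaries(n)
    for i in range(n):
        print(f"range {i}: {bounds[i]:032x} .. {bounds[i + 1] - 1:032x}")

    print(range_index(hashlib.md5(b"example").hexdigest(), bounds))
    ```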

    Read the article

  • Simple Hash that is always equal between C# and Java

    - by GaiusSensei
    I have a C# WebService and a (Java) Android Application. Is there a SIMPLE hash function that produces the same result between these two languages? The simplest C# hash is a String.GetHashCode(), but I can't replicate it in Java. The simplest Java hash is not simple at all. And I don't know if I can replicate it exactly in C#. In case it's relevant, I'm hashing passwords before sending it across the internet. I'm currently using Encode64, but that's obviously not secure since we can reverse it.
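
    .NET's String.GetHashCode() is not guaranteed to be stable across runtime versions, and Java's String.hashCode(), while specified, uses a different formula, so matching them is the wrong fight. The usual route is a standard algorithm over an explicitly chosen encoding: both platforms ship SHA-256 (System.Security.Cryptography.SHA256 in .NET, java.security.MessageDigest in Java), and the hex digests agree as long as both sides hash the same UTF-8 bytes. The idea, sketched in Python only for brevity:

    ```python
    import hashlib

    def portable_hash(text: str) -> str:
        # Same bytes in (UTF-8), same digest out, regardless of language.
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    print(portable_hash("correct horse battery staple"))
    ```

    Note that an unsalted client-side hash mainly hides the plaintext; on its own it does not stop someone who intercepts the value from replaying it, so transport security still matters.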

    Read the article

  • Comparing large strings in JavaScript with a hash

    - by user4815162342
    I have a form with a textarea that can contain large amounts of content (say, articles for a blog) edited using one of a number of third party rich text editors. I'm trying to implement something like an autosave feature, which should submit the content through ajax if it's changed. However, I have to work around the fact that some of the editors I have as options don't support an "isdirty" flag, or an "onchange" event which I can use to see if the content has changed since the last save. So, as a workaround, what I'd like to do is keep a copy of the content in a variable (let's call it lastSaveContent), as of the last save, and compare it with the current text when the "autosave" function fires (on a timer) to see if it's different. However, I'm worried about how much memory that could take up with very large documents. Would it be more efficient to store some sort of hash in the lastSaveContent variable, instead of the entire string, and then compare the hash values? If so, can you recommend a good javascript library/jquery plugin that implements an appropriate hash for this requirement?

    Read the article

  • Reversing a hash function

    - by martani_net
    Hi, I have the following hash function, and I'm trying to find a way to reverse it, so that I can recover a key from a hashed value. uint Hash(string s) { uint result = 0; for (int i = 0; i < s.Length; i++) { result = ((result << 5) + result) + s[i]; } return result; } The code is in C# but I assume it is clear. I am aware that for one hashed value there can be more than one key, but my intent is not to find them all; finding just one that satisfies the hash function suffices. Any ideas? Thank you.
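
    Not a true inversion, just a sketch of the brute-force route (in Python rather than the question's C#): the function is result = result * 33 + c modulo 2**32, a djb2-style hash with seed 0, so for short keys over a small alphabet an exhaustive search finds some preimage:

    ```python
    import itertools
    import string

    MASK = 0xFFFFFFFF                                 # emulate C# uint overflow

    def hash_uint(s: str) -> int:
        result = 0
        for ch in s:
            result = (result * 33 + ord(ch)) & MASK   # same as ((result << 5) + result) + c
        return result

    def find_preimage(target: int, alphabet=string.ascii_lowercase, max_len=4):
        """Return some string hashing to target, or None if none exists within these bounds."""
        for length in range(1, max_len + 1):
            for combo in itertools.product(alphabet, repeat=length):
                candidate = "".join(combo)
                if hash_uint(candidate) == target:
                    return candidate
        return None

    target = hash_uint("key")
    print(find_preimage(target))    # prints a string with that hash (not necessarily "key")
    ```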

    Read the article

  • Intermittent unavailability of an instance in a failover cluster while a standby node is offline in

    - by Emil Fridriksson
    Hi everyone. I've got a small failover cluster that I run for the websites my company has. During a RAM upgrade of the standby server, our websites started to show errors about not being able to access the database server. I verified that the instance was indeed up and the server accessible via remote desktop. I also tried a SQL connection to it and it worked, but that might have been after it became available again. This happened on and off until we were able to roll back the hardware changes that were in progress on the standby server and bring it back up. There was nothing of interest in the SQL Server log, but there is a continuous log for the whole duration of the problem, so there was no restart of the SQL Server service. The event viewer is of more interest, since it shows events relating to the heartbeat network card, but I don't know how that would affect the availability of the server, since the standby node is offline. I'd appreciate any help you can provide; it's not very redundant if the setup depends on the standby server being up. :) Here are the event logs from the time of the problem; I include all of them since I can't seem to see what could possibly be the cause of the problem. Event log: http://hlekkir.com:800/htmltable.htm

    Read the article

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello, At the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email, especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers, all running the same services, doing the same thing with some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements. We have one storage array and 2 servers at present. We get data files from remote servers onto our servers, and on both servers we run our application to access that data and produce a final result as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle more data load from the remote senders. So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Is there any bottleneck/limitation with RHCS, GFS2, or anything else? We are new to RHCS + GFS and all of this. Is there any other, better approach, or some lighter-weight way to deal with our requirement? What is the best OS version for this? How about RHEL 6.4 64-bit? Please share any case studies or guide references from past experience with such environments. Regards, Ben

    Read the article

  • 3 Servers, is this a cluster scenario?

    - by HornedBeast
    Hello, At the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email, especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers, all running the same services, doing the same thing with some sort of cluster with load balancing? In short, how can I get the best out of my 3 hardware servers? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Short Python alphanumeric hash with minimal collisions

    - by ensnare
    I'd like to set non-integer primary keys for a table using some kind of hash function. md5() seems kind of long (32 characters). What are some alternative hash functions that use every letter of the alphabet as well as digits, so the strings are shorter, while still having low collision rates? Thanks!
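
    One common approach, sketched below under the assumption that the keys are derived from some input string: take a standard digest, truncate it, and re-encode the bytes in a base-62 alphabet so the key uses letters and digits. Shorter keys mean more collisions (roughly the birthday bound, about n**2 / 2**(8*k+1) for n keys of k random bytes), so the truncation length is the knob to turn:

    ```python
    import hashlib
    import string

    ALPHABET = string.digits + string.ascii_letters        # 62 characters

    def base62(n: int) -> str:
        if n == 0:
            return ALPHABET[0]
        out = []
        while n:
            n, rem = divmod(n, 62)
            out.append(ALPHABET[rem])
        return "".join(reversed(out))

    def short_hash(text: str, nbytes: int = 8) -> str:
        """Alphanumeric key (11 characters or fewer) from an 8-byte BLAKE2 digest."""
        digest = hashlib.blake2b(text.encode("utf-8"), digest_size=nbytes).digest()
        return base62(int.from_bytes(digest, "big"))

    print(short_hash("hello world"))
    ```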

    Read the article
