Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

Page 26 of 165

  • Perl - getting a value from a hash where the key has a dot

    - by imerez
    I have a hash in Perl which has been populated from some legacy code. The name of the key has now changed from simply reqHdrs to reqHdrs.bla, so the lookup is $rec->{reqHdrs.bla}. My problem is that I can't seem to access this field from the hash any more. Any ideas? The error is: Bareword "reqHdrs" not allowed while "strict subs" in use.
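
    A minimal sketch of the usual fix: under "use strict", a hash key that contains a dot has to be quoted, otherwise Perl parses it as two barewords (the sample value below is made up).

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $rec = { 'reqHdrs.bla' => 'some header value' };   # hypothetical value

        print $rec->{'reqHdrs.bla'}, "\n";   # quoting the key works
        # print $rec->{reqHdrs.bla};         # this form triggers the bareword error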

  • Remove hash (#) from url in Ajax navigation without refresh

    - by Email
    I have an Ajax navigation similar to the one here. Now when a menu item is clicked, window.location.hash is set, e.g. #about. I want to remove the hash (#) so that people can easily copy and share the link naturally. How can this be done in April 2012 without a page refresh, cross-browser (IE7+, FF, Opera, Safari)? For inspiration: here is someone already doing this; click on "portfolio" or "features" and watch the URL in your browser. Thanks for any tips.
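
    A sketch of the approach the HTML5 History API allows; it assumes a plain hash fallback is acceptable for IE7/8, which have no such API.

        // call this after handling the Ajax navigation for a clicked menu item
        function clearHash() {
          if (window.history && history.replaceState) {
            // keep the current path and query string, drop only the #fragment
            history.replaceState(null, document.title,
                                 window.location.pathname + window.location.search);
          }
        }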

  • How to hash a password and store for later verification with another digest

    - by oxygen8
    I am using gsoap's wsseapi plugin and would like to store hashed SHA-1 passwords rather than plain text. I have spent a ridiculous amount of time experimenting with various methods of hashing the plain-text password for storage. Can anyone suggest a way to hash a password so it can later be verified against a username token digest sent by the client? I can't seem to get the client password to authenticate against my stored hash.
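
    For context, a sketch of how the WS-Security UsernameToken PasswordDigest is formed, shown with plain OpenSSL rather than the gsoap wsseapi calls; the helper name and buffer sizes are illustrative only. Because the raw password is one of the inputs, the server has to be able to reproduce that exact byte string, which is why a one-way SHA-1 of the password stored on its own cannot be checked against the digest.

        #include <string.h>
        #include <openssl/sha.h>
        #include <openssl/evp.h>

        /* PasswordDigest = Base64( SHA-1( nonce + created + password ) )
           No bounds checking here - illustrative only. */
        static void wsse_password_digest(const unsigned char *nonce, size_t nonce_len,
                                         const char *created, const char *password,
                                         char *out_b64 /* at least 29 bytes */)
        {
            unsigned char buf[512];
            unsigned char sha[SHA_DIGEST_LENGTH];
            size_t n = 0;

            memcpy(buf + n, nonce, nonce_len);            n += nonce_len;
            memcpy(buf + n, created, strlen(created));    n += strlen(created);
            memcpy(buf + n, password, strlen(password));  n += strlen(password);

            SHA1(buf, n, sha);
            EVP_EncodeBlock((unsigned char *)out_b64, sha, SHA_DIGEST_LENGTH);
        }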

  • Hash Table question [closed]

    - by Fatimah
    I need your help to solve this problem: implement a separate-chaining hash table that stores strings. You'll need a hash function that converts a string into an index number. Assume the strings will be lowercase words, so 26 characters will suffice.
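
    A minimal sketch in C++ of what such a table could look like; the base-26 rolling hash and the table size are my own choices, not part of the assignment.

        #include <cstddef>
        #include <iostream>
        #include <list>
        #include <string>
        #include <vector>

        // Separate-chaining hash table for lowercase words ('a'..'z' assumed).
        class HashTable {
            std::vector<std::list<std::string>> buckets;

            // treat the word as a base-26 number, reduced modulo the table size
            std::size_t hash(const std::string& s) const {
                std::size_t h = 0;
                for (char c : s)
                    h = (h * 26 + static_cast<std::size_t>(c - 'a')) % buckets.size();
                return h;
            }

        public:
            explicit HashTable(std::size_t size) : buckets(size) {}

            void insert(const std::string& s) { buckets[hash(s)].push_back(s); }

            bool contains(const std::string& s) const {
                for (const auto& w : buckets[hash(s)])
                    if (w == s) return true;
                return false;
            }
        };

        int main() {
            HashTable table(101);            // a prime table size keeps chains short
            table.insert("cluster");
            table.insert("hash");
            std::cout << table.contains("hash") << "\n";   // prints 1
        }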

  • Problem with initializing a hash in ruby

    - by Cyborgo
    Hi, I have a text file from which I want to create a hash for faster access. My text file is of the format (space-delimited): author title date popularity. I want to create a hash in which the author is the key and the remaining fields are the value, as an array: created_hash["briggs"] = ["Manup", "Jun,2007", 10]. Thanks in advance.
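
    A minimal sketch, assuming the data lives in a file called "authors.txt" (the name is made up) and every line holds exactly the four space-delimited fields:

        created_hash = {}
        File.foreach("authors.txt") do |line|
          author, title, date, popularity = line.split
          created_hash[author] = [title, date, popularity.to_i]
        end

        created_hash["briggs"]   # => ["Manup", "Jun,2007", 10]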

  • Perl, "closure" using Hash

    - by Mike
    I would like to have a subroutine as a member of a hash which has access to the other hash members. For example:

        sub setup {
            %a = (
                txt         => "hello world",
                print_hello => sub { print ${txt}; },
            );
            return %a;
        }
        my %obj = setup();
        $obj{print_hello};

    Ideally this would output "hello world".
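
    A sketch of one way to make this work: let the anonymous sub close over a lexical hash, and invoke the stored code ref explicitly with ->().

        #!/usr/bin/perl
        use strict;
        use warnings;

        sub setup {
            my %a;
            %a = (
                txt         => "hello world",
                # the sub closes over the lexical %a, so it can reach
                # the other members of the same hash
                print_hello => sub { print $a{txt}, "\n" },
            );
            return %a;
        }

        my %obj = setup();
        $obj{print_hello}->();   # prints "hello world"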

  • Can't turn off Redirected Access on Cluster Shared Volumes (2008 R2 Failover Clustering)

    - by 562networks
    I read up on LH Mode and am still boggled by what it is and what it does. I pass all validation in the Failover Cluster wizard, but in the Event Viewer I get errors for Event ID 5121 and 1034 related to one of the disks in the CSV for my Hyper-V machines. We have two disks in the CSV for our Hyper-V farm. Everything seems to work just fine, but I'm worried about the Event Viewer errors. I have also read that other people are having problems, like me, turning off Redirected Access.

  • Windows Network Load Balancing on ESX Cluster with Dell PowerConnect stacks

    - by dunxd
    We recently switched out our Cisco 6500 core switch for a pair of Dell PowerConnect 6248 stacks. Since then, our network-load-balanced SharePoint, which runs on two virtual machines on an ESX cluster, has been behaving very poorly. The symptoms are that opening and saving documents stored in SharePoint takes a very long time. There are no errors showing up on the SharePoint servers or the SQL server, just a lot of annoyed users. Initially I thought there was no way NLB could cause this, but as soon as we repointed the DNS records for our intranet to the IP address of one of the web front ends, the problems disappeared. We suspect there is an issue related to multicast in the Dell configs - NLB is configured for multicast, but not IGMP. Has anyone got a similar setup and fixed this sort of issue? SharePoint on VMware ESX, with Dell PowerConnect switches.

  • How can I get access to password hashing in PostgreSQL? Tried installing postgresql-contrib in Ubuntu

    - by Tchalvak
    So I'm trying to hash some passwords in PostgreSQL, and the only hashing solution that I've found for PostgreSQL is part of the pgcrypto package ( http://www.postgresql.org/docs/8.3/static/pgcrypto.html ), which is supposed to be in postgresql-contrib ( http://www.postgresql.org/docs/8.3/static/contrib.html ). So I installed postgresql-contrib (sudo apt-get install postgresql-contrib) and restarted my server (as a simple way to restart PostgreSQL). However, I still don't have access to any of the hashing functions that are supposed to be in postgresql-contrib, e.g.:

        ninjawars=# select crypt('global salt' || 'new password' || 'user created date', gen_salt('sha256'));
        ERROR: function gen_salt(unknown) does not exist
        ninjawars=# select digest('test', 'sha256') from players limit 1;
        ERROR: function digest(unknown, unknown) does not exist
        ninjawars=# select hmac('test', 'sha256') from players limit 1;
        ERROR: function hmac(unknown, unknown) does not exist

    So how can I hash passwords in PostgreSQL, on Ubuntu?
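
    A sketch of the step that is usually missing on 8.3-era Ubuntu: installing postgresql-contrib only drops the pgcrypto SQL script on disk, and the functions still have to be loaded into each database. The script path below is a guess for 8.3 and may differ on your system.

        sudo apt-get install postgresql-contrib
        # load the pgcrypto functions into the target database
        psql -d ninjawars -f /usr/share/postgresql/8.3/contrib/pgcrypto.sql

    Note that even with pgcrypto loaded, gen_salt() only accepts 'des', 'xdes', 'md5' and 'bf', so gen_salt('sha256') will still fail; digest('test', 'sha256') and crypt() with a bf salt should work. (On later releases, 9.1 and up, the same thing is done with CREATE EXTENSION pgcrypto.)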

  • lsyncd + csync2 : cluster of 3 or more nodes

    - by sbrattla
    I've got 3 (and potentially more) web servers hosting the same content (fronted by a load balancer). Thus, I need to make sure that files on these web servers are the same. It appears that csync2 in combination with lsyncd is able to synchronize a cluster of nodes, but according to this article there's a problem with cyclic events in such a setup. In other words, the author writes that a file change on one machine would trigger a replication event to the other machines, which again would trigger a replication event back to the original machine. It appears that this is a consequence of the setup, which uses lsyncd (and inotify) to catch file modification events and from there trigger csync2 to replicate the file tree. Does anyone have experience with lsyncd in combination with csync2? Have you had trouble with cyclic events?

  • Ubuntu web server cluster checks Ubuntu repository for script updates with cron

    - by StuartTheY
    I have a cluster of Ubuntu 12.04 web servers running a LAMP stack. All of these servers are behind a load balancer on Amazon Web Services. What I want to be able to do is have a dedicated Ubuntu server on which I can update the PHP files, and have the other web servers check with cron to pull the updated files from that repository. They don't have to use cron, but that was the only thing I could think of, unless there is a way to have the updated repository tell them that it has updated files, and then how to transfer those files. Also, is there a way for a server to check for updated files when it boots? I am going to be using auto scaling on AWS, so when there is an increase in load and another server gets created, I need it to download the updated files from the repository when launched. I'm not sure how to transfer files from server to server.
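
    A minimal sketch of one way to do the pull with cron and rsync; the host deploy@master and the path /var/www/app are placeholders, not from the question, and SSH keys for the syncing user are assumed to be in place.

        # /etc/cron.d/pull-code on each web server: sync every 5 minutes
        */5 * * * * www-data rsync -az --delete deploy@master:/var/www/app/ /var/www/app/

        # the same command run once from /etc/rc.local (or user data / cloud-init)
        # lets a freshly auto-scaled instance pick up the current code at boot
        rsync -az --delete deploy@master:/var/www/app/ /var/www/app/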

  • perl sorting an array of hashes

    - by srk
    use strict;
        my @arr;
        $arr[0][0]{5}  = 16;
        $arr[0][1]{6}  = 11;
        $arr[0][2]{7}  = 25;
        $arr[0][3]{8}  = 31;
        $arr[0][4]{9}  = 16;
        $arr[0][5]{10} = 17;

    I want to sort the array based on the hash values, so this should change to:

        $arr[0][0]{6}  = 11;
        $arr[0][1]{9}  = 16;
        $arr[0][2]{5}  = 16;
        $arr[0][3]{10} = 17;
        $arr[0][4]{7}  = 25;
        $arr[0][5]{8}  = 31;

    First sort on the values in the hash; when the values are the same, reverse-sort based on the keys. Please tell me how to do this. Thank you.
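
    A sketch of one way to do the sort, assuming each element of @{ $arr[0] } is a hashref holding exactly one key/value pair:

        @{ $arr[0] } = sort {
            (values %$a)[0] <=> (values %$b)[0]   # ascending by value
                or
            (keys %$b)[0]   <=> (keys %$a)[0]     # on ties, descending by key
        } @{ $arr[0] };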

  • Performance issue when configuring non HA VM in cluster

    - by laiys
    Hi, I saw this article: http://technet.microsoft.com/en-us/library/cc764243.aspx. Quote taken from the link: "Important: It is recommended that you not deploy virtual machines that are not highly available on your host clusters. Although you can do this by using Hyper-V (VMM does not allow it), the non-highly available virtual machines will consume resources that otherwise would be available to the HAVMs." What kind of resources (CPU, memory, NIC, etc.) will a non-HA VM consume? Just curious, as not every VM (in production) needs to be in a failover cluster with Live Migration. If I put the VM on a CSV but do not make it HA, what impact does that have, given that I allocate the same vCPU, vNIC and memory to the VM (not to mention that I lose the failover feature)? Curious to understand more about this. Please advise. Thanks.

  • Rebooting Guest OS on a Hyper-V 2008 R2 Cluster results in a Shutdown

    - by S_Kuwahara
    Hi folks, I have an interesting issue here. Sometimes when I manually reboot some of my guest OSes (W2K3 / W2K8) on my Hyper-V 2008 R2 cluster, the guest does not reboot, it just shuts down. When I'm talking about a manual reboot, I mean connecting with RDP to the virtual server and using the shutdown function in the OS itself. I then have to start the virtual machine again via SCVMM / Hyper-V Manager, and it works just fine. There is nothing special in the event log of the host or guest OS. There is also nothing special logged in SCVMM. The guest OSes all have the integration tools installed. Any hints? Thanks in advance.

  • Better performance with memcached cluster or local memcaches?

    - by Nicholas Tolley Cottrell
    I have a small cluster of servers balancing a Java web app. Currently I have 3 memcached servers caching data, and all web apps share all 3 memcached instances. I often get strange slowdowns and timeouts to some of the memcacheds, and I am wondering if there is a good way of analyzing the performance. I am wondering whether my iptables rules (or some other system limitation) are blocking/slowing connections. I am considering reconfiguring the web apps so that they only query the memcached process on their own localhost.

  • DFS-R (2008 and R2) 2 node server cluster, all file writes end in conflictAndDeleted

    - by Andrew Gauger
    Both servers in a 2-server cluster are reporting event 4412 20,000 times per day. If I sit in the ConflictAndDeleted folder I can observe files appearing and disappearing. Users report that their files are being overwritten by files saved by peers at the same location. The configuration began with a single server; then DFS-R was set up using the 2008 R2 wizard, which created the share on the second server. DFS-N was set up independently. Windows users have drives mapped using a domain-based namespace (\\domain.com\share). Mac users are pointed directly at the new server share created by DFS-R. It is PC users reporting most of the lost files, but there have also been 2 reports from Mac users about files reverting.

  • LUNS access issue in ESX4 Cluster server

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS 6000 XV (2 members in 1 pool) and checked that those volumes can easily be detected by the iSCSI software in Windows. The problem is with ESX: I am not able to see the assigned disk on the ESX server. Here is what I've done: 1. Created a cluster with HA & DRS enabled. 2. Added 3 ESX4 hosts. 3. Added a VMkernel and configured it on all 3 ESX4 hosts, with vMotion & FT enabled on the same adapter. 4. Went to the iSCSI storage adapter properties and enabled iSCSI. 5. Tried to discover the available storage with the controller IP under dynamic discovery, but was not able to see the assigned storage. Note: the same volume is accessible from Windows, which means there is no issue on the storage side, am I right? Note: I want to mount the same volume on all 3 ESX hosts. Please suggest. Thanks & Regards, Rashid Mustafa

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. A lot of the time I hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would include provisioning hardware, installing an OS, configuring the network on the machine and, once the network is available, using a tool of your choice such as CFEngine, Puppet or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that can get a server of a particular class up and running in Amazon EC2. If I have a particular stack running on my server, such as Erlang, Tomcat 6, etc., what's the fastest way to get it up and running and hooked into Amazon's load balancer? From network, to software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?

  • Multiple VMs for Tomcat cluster vs multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will be deployed to production using a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solution; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but for a hosting environment I'm curious whether having multiple VMs is actually worse than just configuring one server to host multiple instances of Tomcat. These are my guesses. Pros for VMware: easier maintenance/backup for individual VMs (VMware makes this easy); can log in remotely to individual VMs without having to give host access (security?); easier to re-purpose the machine for OS/hardware changes. Pros for running on one physical machine: overhead of only one OS (and no VMware footprint); OS/security updates applied once; one less administrative layer (no VM expertise required). I'm curious whether anyone has other ideas about the benefits of either option.

  • In Windows XP, is it possible to disable user credential caching for particular users

    - by kdt
    I understand that when Windows caches user credentials, these can sometimes be used by malicious parties to access other machines once a machine containing cached credentials is compromised, a method known as "pass the hash" [1]. For this reason I would like to get control over what's cached, to reduce the risk of cached credentials being used maliciously. It is possible to prevent all caching by zeroing HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\CachedLogonsCount, but this is too indiscriminate: laptop users need to be able to log in when away from the network. What I would like to do is prevent the caching of credentials of certain users, such as administrators -- is there any way to do that in Windows XP? [1] http://www.lbl.gov/cyber/systems/pass-the-hash.html

  • linux: accessing thousands of files in hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file to index. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place 1 file at the end of each directory, like ./00/00/00/file000000.ext to ./00/00/00/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
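
    A minimal sketch of one common alternative: derive the directory levels from a hash of the file name rather than from a running counter, so files spread evenly without bookkeeping (the file name below is just an example).

        #!/bin/bash
        # place a file under a 3-level directory taken from the md5 of its name
        f="file000000.ext"
        h=$(printf '%s' "$f" | md5sum | cut -c1-6)
        dir="${h:0:2}/${h:2:2}/${h:4:2}"
        mkdir -p "$dir" && mv "$f" "$dir/"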
