Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

Page 27/165 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • postgresql 9.1 Multiple Cluster on same host

    - by user1272305
    I have two database clusters running on the same Ubuntu host. My first cluster's port is set to the default, but my second cluster's port is set to 5433 in its postgresql.conf file. While everything is OK with local connections, I cannot connect to the second cluster on port 5433 using any of my tools, including pgAdmin. Please help; is there any other parameter I need to modify for the new cluster on port 5433?

    netstat -an | grep 5433 shows:

        tcp        0      0 0.0.0.0:5433      0.0.0.0:*    LISTEN
        tcp6       0      0 :::5433           :::*         LISTEN
        unix  2    [ ACC ]  STREAM  LISTENING  72842  /var/run/postgresql/.s.PGSQL.5433

    iptables -L shows:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
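
    Since netstat already shows the second cluster listening on all interfaces, the usual remaining suspect is per-cluster client authentication: each cluster has its own pg_hba.conf, so rules added for the first cluster do not apply to the second. A sketch of what the relevant entries might look like (the Debian-style paths and the client subnet are assumptions, not taken from the question):

        # /etc/postgresql/9.1/<second-cluster>/postgresql.conf  (path assumed)
        listen_addresses = '*'     # netstat suggests this is already in effect
        port = 5433

        # /etc/postgresql/9.1/<second-cluster>/pg_hba.conf  (path assumed)
        # allow remote tools such as pgAdmin from the local subnet; adjust the CIDR
        host    all    all    192.168.1.0/24    md5

    After editing, reload that cluster (e.g. pg_ctlcluster 9.1 <second-cluster> reload) and retest.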

    Read the article

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 machine which will be off-site, connected over a leased line with VPN. At the main site are two Hyper-V hosts in a failover cluster with a PowerVault MD3000i iSCSI SAN. We are using BackupAssist for local backups: each host backs up itself and its guests nightly, creating a 500GB backup which is copied to a 2TB rotated NAS drive. Files and SQL DBs are also backed up / log-shipped, etc. I'm looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OS images are at most a month old and the data is at most a day old. The main backups are too large to transfer off-site between backup runs, so the options discussed so far are:

        - taking rotating individual backups of the VMs each day and copying them over (day 1 the SQL VM, day 2 the Exchange VM, etc.), which would require more storage;
        - looking into Hyper-V snapshots, though I don't believe these are supported in clustering;
        - third-party replication tools.

    Read the article

  • PHP Zend Hash Vulnerability Exploitation Vector [closed]

    - by Resurrected Laplacian
    Possible Duplicate: CVE-2007-5416 PHP Zend Hash Vulnerability Exploitation Vector (Drupal)

    According to exploit-db (http://www.exploit-db.com/exploits/4510/), the example is:

        http://www.example.com/drupal/?_menu[callbacks][1][callback]=drupal_eval&_menu[items][][type]=-1&-312030023=1&q=1/

    What are [callbacks], [1], and the other bracketed parts, and what values are supposed to go in them? Can anyone present a realistic possible example? I wasn't asking for a real website; I was asking for a possible example. What would such an address look like, and what should go in those parts?

    Read the article

  • IPMI queries to multiple nodes at once

    - by lorin
    I'm using IPMI to manage some nodes in a cluster. I'd like to be able to submit IPMI queries to all of the nodes in parallel. Something similar to Cluster SSH or tentakel, except that it wraps ipmitool instead of ssh. Are there any tools like that?
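
    I'm not aware of a single standard wrapper, but fanning ipmitool out with a thread pool is straightforward to script. A minimal sketch in Python, assuming ipmitool is on the PATH and using placeholder BMC hostnames and credentials:

        #!/usr/bin/env python3
        # Run the same ipmitool query against many BMCs in parallel.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        HOSTS = ["node%02d-ipmi" % i for i in range(1, 17)]  # assumed naming scheme
        USER, PASSWORD = "admin", "secret"                   # placeholders

        def query(host):
            cmd = ["ipmitool", "-I", "lanplus", "-H", host,
                   "-U", USER, "-P", PASSWORD, "chassis", "status"]
            try:
                out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
                return host, out.decode(errors="replace")
            except subprocess.CalledProcessError as e:
                return host, "ERROR:\n" + e.output.decode(errors="replace")

        with ThreadPoolExecutor(max_workers=16) as pool:
            for host, output in pool.map(query, HOSTS):
                print("=== %s ===\n%s" % (host, output))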

    Read the article

  • Run a script prior to start of SQL instance via Windows clusters

    - by Shahryar G. Hashemi
    Hi, we have a Windows 2008 cluster with several SQL 2008 instances. We would like to run a script that modifies four registry keys prior to the startup of SQL Server. I do not know whether Windows 2008 clustering offers a way to run such a script. I have a VBScript that does this, and I tried adding a Generic Script resource to an existing cluster group, but it failed, saying the script could not be registered. Any ideas?

    Read the article

  • How can I share Perl data structures through a socket?

    - by pavun_cool
    I have written a client/server program using sockets. Sending a normal string between them works fine. But when I send hash and array values from client to server and from server to client and print them using Dumper, I get only the reference value. What should I do to get the actual values on the other side?

    Server program:

        use strict;
        use warnings;
        use IO::Socket;
        use Data::Dumper;   # needed for the Dumper calls below (missing in the original)

        my %hash = ( "name" => "pavunkumar", "age" => 20 );
        my $new  = \%hash;

        # Turn on autoflush so output is not buffered
        $| = 1;

        # Create a new listening socket
        my $socket = IO::Socket::INET->new(
            LocalPort => 5000,
            LocalHost => 'localhost',
            Proto     => 'tcp',
            Listen    => 5,
            Reuse     => 1,
        );
        die "could not create $!\n" unless ($socket);
        print "\nTCP server waiting on port 5000\n";

        my $new_sock = $socket->accept();
        my $host     = $new_sock->peerhost();
        while (<$new_sock>) {
            # my $line = <$new_sock>;
            print Dumper "$host $_";
            print $new_sock $new . "\n";
        }
        print "$host is closed\n";

    Client program:

        use strict;
        use warnings;
        use IO::Socket;
        use Data::Dumper;

        my %hash = ( "file" => "log.txt", size => "1000kb" );
        my $ref  = \%hash;

        # This client connects to the address and port given below;
        # IO::Socket::INET creates the socket and establishes the connection.
        my $port = shift || 5000;
        my $host = shift || 'localhost';
        my $recv_data;
        my $send_data;

        my $socket = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => $port,
            Proto    => 'tcp',
        ) or die "Couldn't connect to server\n";

        while (1) {
            my $line = <STDIN>;
            print $socket $ref . "\n";
            if ( $line = <$socket> ) {
                print Dumper $line;
            }
            else {
                print "Server is closed\n";
                last;
            }
        }

    I have given my sample program above. Can anyone tell me what I am doing wrong in this code, and what I need to do to access the hash values?

    Read the article

  • subgraph cluster ranking in dot

    - by Chris Becke
    I'm trying to use graphviz on MediaWiki as a documentation tool for software. First I documented some class relationships, which worked well: everything was ranked vertically as expected. But some of our modules are DLLs, which I wanted to separate into a box. When I added the nodes to a cluster they were boxed as expected, but clusters seem to follow a left-to-right ranking rule; or rather, being added to a cluster broke the top-to-bottom ranking of the nodes, as the clusters now appear at the side of the graph. This graph represents what I am trying to do: at the moment, cluster1 and cluster2 appear to the right of cluster0. I want/need them to appear below.

        <graphviz>
        digraph d {
            subgraph cluster0 {
                A -> {B1 B2}
                B2 -> {C1 C2 C3}
                C1 -> D;
            }
            subgraph cluster1 {
                C2 -> dll1_A;
                dll1_A -> B1;
            }
            subgraph cluster2 {
                C3 -> dll2_A;
            }
            dll1_A -> dll2_A;
        }
        </graphviz>
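
    One hedged guess, not a confirmed fix: the edge dll1_A -> B1 points back up into cluster0 and creates a cycle, which can pull cluster1 up beside cluster0 instead of below it. Telling dot to draw that edge without using it for ranking sometimes restores the top-to-bottom flow:

        dll1_A -> B1 [constraint=false];  // drawn, but ignored when assigning ranks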

    Read the article

  • Ruby 1.9: turn these 4 arrays into hash of key/value pairs

    - by randombits
    I have four arrays that are coming in from the client: an array of names, birth dates, favorite colors, and locations. The idea is that I want a hash where each name maps to a hash of its respective attributes. Example data coming from the client:

        [name0, name1, name2, name3]
        [loc0, loc1]
        [favcolor0, favcolor1]
        [bd0, bd1, bd2, bd3, bd4, bd5]

    Output I'd like to achieve:

        name0 => { location => loc0, favcolor => favcolor0, bd => bd0 }
        name1 => { location => loc1, favcolor => favcolor1, bd => bd1 }
        name2 => { location => nil,  favcolor => nil,       bd => bd2 }
        name3 => { location => nil,  favcolor => nil,       bd => bd3 }

    I want to end up with a structure I can iterate over, working on each person's hash. The arrays need not contain equal numbers of values: names are required, but I might receive 5 names and only 3 birth dates, 2 favorite colors, and 1 location. Every missing value should become nil. How does one build that kind of data structure with Ruby 1.9?
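
    For what it's worth, the core of this is just zipping the arrays and padding the short ones with nil; Ruby 1.9's Array#zip already pads this way when the receiver (names) is the longest array. A sketch of the pairing logic, shown here in Python for illustration:

        # Pair each name with its positional attributes; missing values become None.
        from itertools import zip_longest

        names  = ["name0", "name1", "name2", "name3"]
        locs   = ["loc0", "loc1"]
        colors = ["favcolor0", "favcolor1"]
        bds    = ["bd0", "bd1", "bd2", "bd3", "bd4", "bd5"]

        people = {
            name: {"location": loc, "favcolor": color, "bd": bd}
            for name, loc, color, bd in zip_longest(names, locs, colors, bds)
            if name is not None  # names drive the result; surplus bds are dropped
        }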

    Read the article

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and two ESXi hosts that I have connected as shown in the "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf

    Even though I created the VLANs (508 and 608) on the SG200 and set both VLANs as allowed on the untagged ports where my ESXi NICs are connected, I cannot ping from host 1 to host 2 when the NICs are configured to use VLAN 608. Am I missing something? My IPs are all in the 192.168.x.x range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will use the VLANs. So I think I do not have to create virtual interfaces on my router; is my understanding correct? I'm also attaching my switch config screenshot below. All three switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition): http://img31.imageshack.us/img31/2503/switch.gif

    Any ideas what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range. VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition, the only requirements from VSA are that my two hosts:

        - are in the same subnet,
        - have static IP addresses set, and
        - have the same default gateway configured.

    If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP to each VLAN, and then set routes between them? Thank you for your time!

    Read the article

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay using bundled Tomcat, with cluster link and shared documents. Let's say the name of the public community is fubar, the friendly URL used is fubar.lipsum.com, and each server listens on port 8080. If I go to either server1:8080 or server2:8080, I get the default page for Liferay. How can I test fubar.lipsum.com on each node by using the backend server directly, so I can verify each server? If I just test the name, it goes to the load balancer; I wish there were a way to point the request at a specific backend node to bring the site up. I can add the friendly URL to my local machine's hosts file, and this seems to kinda work, but then once something is called in the application, it goes out again from the backend server over SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
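
    One common way to test a specific node behind a load balancer is to send the request straight to that node but supply the virtual host name in the Host header, so Liferay still resolves the friendly URL. A sketch of the idea in Python (the /web/fubar path is an assumption based on Liferay's usual friendly-URL layout):

        # Hit each backend node directly while presenting the public virtual host.
        import requests

        NODES = ["server1:8080", "server2:8080"]
        VHOST = "fubar.lipsum.com"

        for node in NODES:
            r = requests.get("http://%s/web/fubar" % node,
                             headers={"Host": VHOST},
                             allow_redirects=False)  # don't follow redirects back to the LB
            print(node, r.status_code)

    The same trick works from a shell with curl's -H "Host: ..." option, which avoids the hosts-file editing described above.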

    Read the article

  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run, or whether I could run it locally?

        [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
        Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
        Tue Oct 2 21:33:48 [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
        Tue Oct 2 21:33:48 [initandlisten] **              numactl --interleave=all mongod [other options]
        Tue Oct 2 21:33:48 [initandlisten]
        Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
        Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
        Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
        Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
        Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal
        Tue Oct 2 21:33:48 [initandlisten] recover begin
        Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
        Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
        Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
        Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
        Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
        Tue Oct 2 21:33:48 [initandlisten] recover done
        Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
        Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017

    It basically waits forever and cannot start mongodb. These servers are not webservers, but they do have network access; it's a cloud computing LSF environment. Any advice would be welcome, thanks in advance.
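
    For what it's worth, "waiting for connections on port 27017" is what a healthy mongod prints when running in the foreground, so the process may simply be holding the terminal rather than failing. If the goal is to get the shell prompt back, a daemonized invocation is the usual approach; a sketch using standard mongod flags (the log path is an assumption):

        mongod --dbpath /users/dnaiel/ma/mongodb/ --fork --logpath /users/dnaiel/ma/mongodb/mongod.log

    On a batch cluster, the same command could also be wrapped in an LSF job so the daemon runs on a compute node rather than a login node.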

    Read the article

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. One Linux system has one of these NFS filesystems mounted, and I/O to it from that system is moderately heavy. Every 3-4 weeks (it's not on any kind of discernible schedule, and is sometimes more or less frequent than this), we notice that all activity ceases on this NFS mount: processes hang in uninterruptible sleep, as if the network had stopped working. Thirty minutes later the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:

        Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK

    The relevant /etc/fstab line:

        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0

    I've checked whether any scheduled processes (e.g. cron jobs) or Isilon-related functions (e.g. snapshots) might be causing these hangups, but I can't find anything. I'm also not aware of any network issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid processes hanging on the filesystem; however, I am wary of the corruption that could result, and it would not really solve the underlying issue anyway.)

    Read the article

  • XML::Parser output contains unwanted hash references

    - by seaworthy
    So I wrote a parser routine to take one XML file and reparse it into another one. I later modified this code to split a large XML file into many small XML files. I am having a problem with the output: parsing works fine, except that the output also includes unwanted strings like HASH(0x19f9b58). I am not sure why, and need a set of friendly eyes.

        use Encode;
        use XML::Parser;

        my $parser = XML::Parser->new(
            Handlers => {
                Start => \&handle_elem_start,
                End   => \&handle_elem_end,
                Char  => \&handle_char_data,
            }
        );

        my $record;
        my $file = shift @ARGV;
        if ($file) { $parser->parsefile($file); }
        exit;

        sub handle_elem_start {
            my ($expat, $name, %atts) = @_;
            if ($name eq 'articles') { $file = "_data.xml"; unlink($file); }
            $record .= "<";
            $record .= "$name";
            foreach my $key (keys %atts) { $record .= " $key=\"$atts{$key}\""; }
            $record .= ">";
        }

        sub handle_char_data {
            my ($expat, $text) = @_;
            $text = decode_utf8($text);
            $record .= "$text";
        }

        sub handle_elem_end {
            my ($expat, $name) = @_;
            $record .= "</$name>";
            if ($name eq 'article') {
                open(MYFILE, '>>' . $file);
                print MYFILE $record;
                close(MYFILE);
                print $record;
                $record = {};
            }
            return unless ($name eq 'article');
        }

    Sample output:

        ...
        </article>HASH(0x19f9b40)
        <article doi="10.1103/PhysRevSeriesI.9.304">
        <journal short="Phys. Rev. (Series I)" jcode="PRI">Physical Review (Series I)</journal>
        <volume>9</volume>
        <issue printdate="1899-11-00">5</issue>
        <fpage>304</fpage>
        <lpage>309</lpage>
        <seqno>1</seqno>
        <price></price><tocsec>Articles</tocsec>
        <arttype type="article"></arttype><doi>10.1103/PhysRevSeriesI.9.304</doi>
        <title>An Investigation of the Magnetic Qualities of Building Brick</title>
        <authgrp>
        <author><givenname>O.</givenname><middlename>A.</middlename><surname>Gage</surname></author>
        <author><givenname>H.</givenname><middlename>E.</middlename><surname>Lawrence</surname></author>
        </authgrp>
        <cpyrt>
        <cpyrtdate date="1899"></cpyrtdate><cpyrtholder>The American Physical Society</cpyrtholder>
        </cpyrt>
        </article>HASH(0x19f9b58)
        ...

    The HASH strings are not wanted; please advise.

    Read the article

  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure which is the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

        <rm>
            <failoverdomains>
                <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n1"/>
                </failoverdomain>
                <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                    <failoverdomainnode name="rhc-n2"/>
                </failoverdomain>
            </failoverdomains>
            <resources>
                <script file="/etc/init.d/clvm" name="clvmd"/>
                <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
            </resources>
            <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
            <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
                <script ref="clvmd">
                    <clusterfs ref="gfs"/>
                </script>
            </service>
        </rm>

    This is what I mean: when using the clusterfs resource agent to handle the GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

        clusvcadm -s shared-storage-inst1

    clvm is stopped, but GFS is not unmounted, so the node can no longer alter the LVM structure on shared storage, but can still access data. And even though a node can do that quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop.

    I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue as to how the decision was made.

    Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

        chkconfig gfs2 on

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but it has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but also when any of the resources misbehave. So, which is the right way for me to go:

        - leave the GFS partition mounted (any reasons to do so?);
        - set force_unmount="1" (won't this break anything? why is it not the default?);
        - use a script resource, <script file="/etc/init.d/gfs2" name="gfs"/>, to manage the GFS partition;
        - start it at boot and don't include it in cluster.conf (any reasons to do so?).

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does /etc/cluster/cluster.conf look, for example, when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!
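
    For reference, the force_unmount variant discussed above is a one-attribute change to the resource from the question (a sketch only; whether it is safe in this setup is exactly what is being asked):

        <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
                   device="/dev/vg-cs/lv-gfs" force_unmount="1"/>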

    Read the article

  • passing hashes to a subroutine

    - by Vishalrix
    In one of my main (or primary) routines I have two or more hashes. I want the subroutine foo() to receive these possibly-multiple hashes as distinct hashes. Right now I have no preference whether they go by value or as references. I have been struggling with this for the last many hours and would appreciate help, so that I don't have to leave Perl for PHP! (I am using mod_perl, or will be.) Right now I have something close to what I need, from http://forums.gentoo.org/viewtopic-t-803720-start-0.html:

        # sub: dump the hash values with the keys '1' and '3'
        sub dumpvals {
            foreach $h (@_) {
                print "1: $h->{1} 3: $h->{3}\n";
            }
        }

        # initialize an array of anonymous hash references
        @arr = ({1,2,3,4}, {1,7,3,8});

        # create a new hash and add the reference to the array
        $t{1} = 5;
        $t{3} = 6;
        push @arr, \%t;

        # call the sub
        dumpvals(@arr);

    I only want to extend it so that in dumpvals I could do something like this:

        foreach my %k ( keys @_[0] ) {
            # use $k and @_[0], and others
        }

    The syntax is wrong, but I suppose you can tell that I am trying to get the keys of the first hash (hash1 or h1) and iterate over them. How do I do that in the latter code snippet?

    Read the article

  • comparing salt and hashed passwords during login doesn't seem work right....

    - by Pandiya Chendur
    I stored the salt and hash values of the password during user registration. But during login I again salt and hash the password given by the user, and what happens is that a new salt and a new hash are generated:

        string password = collection["Password"];
        reg.PasswordSalt = CreateSalt(6);
        reg.PasswordHash = CreatePasswordHash(password, reg.PasswordSalt);

    These statements are in both registration and login. The salt and hash during registration were eVSJE84W and 18DE22FED8C378DB7716B0E4B6C0BA54167315A2; during login they were 4YDIeARH and 12E3C1F4F4CFE04EA973D7C65A09A78E2D80AAC7. Any suggestions?

        public static string CreateSalt(int size)
        {
            // Generate a cryptographic random number.
            RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
            byte[] buff = new byte[size];
            rng.GetBytes(buff);
            // Return a Base64 string representation of the random number.
            return Convert.ToBase64String(buff);
        }

        public static string CreatePasswordHash(string pwd, string salt)
        {
            string saltAndPwd = String.Concat(pwd, salt);
            string hashedPwd = FormsAuthentication.HashPasswordForStoringInConfigFile(
                saltAndPwd, "sha1");
            return hashedPwd;
        }
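
    The behavior described is expected: a freshly generated salt always yields a different hash. The usual pattern is to generate a salt only at registration, then at login fetch the stored salt, recompute the hash with it, and compare. A sketch of that flow, shown in Python for illustration (the function names and in-memory store are illustrative, not from the original C#):

        # Register/verify flow: the salt is random once, then reused at every login.
        import hashlib, hmac, os

        def create_hash(password, salt):
            return hashlib.sha1((password + salt).encode()).hexdigest()

        def register(store, username, password):
            salt = os.urandom(6).hex()           # new salt only at registration
            store[username] = (salt, create_hash(password, salt))

        def login(store, username, password):
            salt, stored_hash = store[username]  # reuse the STORED salt, don't make a new one
            return hmac.compare_digest(create_hash(password, salt), stored_hash)

        users = {}
        register(users, "alice", "s3cret")
        print(login(users, "alice", "s3cret"))   # True
        print(login(users, "alice", "wrong"))    # False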

    Read the article

  • Comparing lists of field-hashes with equivalent AR-objects.

    - by Tim Snowhite
    I have a list of hashes, as such:

        incoming_links = [
          {:title => 'blah1', :url => "http://blah.com/post/1"},
          {:title => 'blah2', :url => "http://blah.com/post/2"},
          {:title => 'blah3', :url => "http://blah.com/post/3"}]

    and an ActiveRecord model whose database fields match, with some overlapping rows, say:

        Link.all
        => [<Link#2 @title='blah2' @url='...post/2'>,
            <Link#3 @title='blah3' @url='...post/3'>,
            <Link#4 @title='blah4' @url='...post/4'>]

    I'd like to do set operations on Link.all with incoming_links so that I can figure out that <Link#4 ...> is not in the incoming_links set, and that {:title => 'blah1', :url => 'http://blah.com/post/1'} is not in the Link.all set, like so:

        # pseudocode
        # incoming_links = as above
        links = Link.all
        expired_links = links - incoming_links
        missing_links = incoming_links - links
        expired_links.destroy
        missing_links.each { |link| Link.create(link) }

    One route I've tried: I'd rather not rewrite Array#- and such, and I'm okay with converting incoming_links to a set of unsaved Link objects, so I've tried overriding hash, eql? and so on in Link so that they ignore the id-based equality that AR::Base provides by default. But this is the only place this sort of equality should be considered in the application; in other places the default Link#id identity is required. Is there some way I could subclass Link and apply the hash and eql? overrides there?

    The other route I've tried is to pull out the attributes hash for each Link and do a .slice('id',...etc) to prune the hashes down. But this requires writing separate methods to keep track of the Link objects while doing set operations on the hashes, or writing separate collection classes to wrap the incoming_links hash-list and Link-list, which seems a bit overkill.

    What is the best way to design this interaction? Extra credit for cleanliness.

    Read the article

  • HYPER-V R2 Can not mount ISO from network location (UNC Path)

    - by Entity_Razer
    So, as the name suggests, I'm trying to mount an ISO from a network share using the UNC path on a Hyper-V R2 cluster. This is a pure demo/test setup with 2x Hyper-V R2 hosts and 1x NAS/iSCSI CSV. Cluster management is done through the MMC with the RSAT tools. What I've done so far: set up the cluster, configured the quorum, added CSV shares and disks, and set up one virtual machine on the Hyper-V 1 node. What I'm trying to do: go to Settings --- DVD Drive --- Use network location --- pick the ISO file and press Apply. The error I'm getting is "User account does not have rights to mount ISO". That message went away when I went to the Hyper-V node settings and enabled "Use default credentials automatically", but now I get the following instead:

        Error applying DVD Drive changes.
        Failed to remove device Microsoft Synthetic DVD Drive:
        "the specified network resource or device is no longer available"

    I've googled the problem but am unable to find a solution. Anyone here able to help me out? Much obliged!

    Read the article

  • I need advice about iscsi + zfs(or ntfs) + windows 2008 clustering

    - by Fatih
    I want to set up a storage farm with iSCSI. I have two cluster-node machines and one iSCSI target machine with 8TB installed as RAID 10. The capacity is 8TB now, but I'll expand it in the future. Let's say I install the clusters as a file server, connect these servers to the iSCSI target, and then share the 8TB capacity as a single folder to the Windows users. Users then see only one folder whose capacity is 8TB. But if I add another 8TB to expand the main capacity, the users must not see a second folder for the new 8TB; they must still see only one folder as before, but this time with its capacity expanded to 16TB. And so on: if I add another 8TB, the users must still deal with only one folder. For this purpose, I've learnt that ZFS can expand its size without a problem. But if I use ZFS as the file system on the iSCSI LUNs, how can the cluster machines see the ZFS, given that the cluster machines run Windows 2008? Is there another way to expand the size of the shared folder without a problem? Does NTFS support it?

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start the resource, it does not pull through my settings to enable network access. The log shows this:

        MSDTC started with the following settings:
        Security Configuration (OFF = 0 and ON = 1):
        Allow Remote Administrator = 0,
        Network Clients = 0,
        Trasaction Manager Communication:
        Allow Inbound Transactions = 0,
        Allow Outbound Transactions = 0,
        Transaction Internet Protocol (TIP) = 0,
        Enable XA Transactions = 0,
        Enable SNA LU 6.2 Transactions = 1,
        MSDTC Communications Security = Mutual Authentication Required,
        Account = NT AUTHORITY\NetworkService,
        Firewall Exclusion Detected = 0
        Transaction Bridge Installed = 0
        Filtering Duplicate Events = 1

    whereas when I restart the local DTC service, it says this:

        Security Configuration (OFF = 0 and ON = 1):
        Allow Remote Administrator = 0,
        Network Clients = 1,
        Trasaction Manager Communication:
        Allow Inbound Transactions = 1,
        Allow Outbound Transactions = 1,
        Transaction Internet Protocol (TIP) = 0,
        Enable XA Transactions = 1,
        Enable SNA LU 6.2 Transactions = 1,
        MSDTC Communications Security = No Authentication Required,
        Account = NT AUTHORITY\NetworkService,
        Firewall Exclusion Detected = 0
        Transaction Bridge Installed = 0
        Filtering Duplicate Events = 1

    The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • Use of double pointer in linux kernel Hash list implementation

    - by bala1486
    Hi, I am trying to understand the Linux kernel implementation of linked lists and hash tables. A link to the implementation is here. I understood the linked list implementation, but I am a little confused about why a double pointer (**pprev) is used in hlist. The link for hlist is here. I understand that hlist is used in the implementation of hash tables, since the head of such a list requires only one pointer, which saves space. But why can't it be done with a single pointer (just *prev, like the linked list)? Please help me.

    Read the article

  • location.hash in an iframe scrolls the parent window

    - by Ben Clayton
    Hi all. I have a page with an iframe. Inside the iframe is code (that I can't change) that sets location.hash to the id of an element in the iframe window. This has the unwanted effect of scrolling my outermost browser window so that the top of the window touches the top of the iframe. This is quite annoying, as I have a toolbar above the iframe that is vital to my app. Is there any way of preventing the setting of location.hash from affecting the scroll position of the main window? Will preventDefault help me out here? Thanks!

    Read the article

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes:

        time_point - the time of the sample
        cluster    - the name of the cluster of nodes from which the sample was taken
        code       - the name of the node from which the sample was taken
        qty1       - the value of the sample for the first quantity
        qty2       - the value of the sample for the second quantity

    I need to derive some values from the data set, grouped in three ways: once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time-sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution:

        dataset.sort(key=operator.attrgetter('time_point'))

        # For the whole set
        sys_qty1 = 0
        sys_qty2 = 0
        sys_combo = 0
        sys_max = 0

        # For the cluster grouping
        cluster_qty1 = defaultdict(int)
        cluster_qty2 = defaultdict(int)
        cluster_combo = defaultdict(int)
        cluster_max = defaultdict(int)
        cluster_peak = defaultdict(int)

        # For the node grouping
        node_qty1 = defaultdict(int)
        node_qty2 = defaultdict(int)
        node_combo = defaultdict(int)
        node_max = defaultdict(int)
        node_peak = defaultdict(int)

        for t in dataset:
            # For the whole system ######################################################
            sys_qty1 += t.qty1
            sys_qty2 += t.qty2
            sys_combo = sys_qty1 + sys_qty2
            if sys_combo > sys_max:
                sys_max = sys_combo
                # The Peak class is to record the time point and the cumulative quantities
                system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2)

            # For the cluster grouping ##################################################
            cluster_qty1[t.cluster] += t.qty1
            cluster_qty2[t.cluster] += t.qty2
            cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster]
            if cluster_combo[t.cluster] > cluster_max[t.cluster]:
                cluster_max[t.cluster] = cluster_combo[t.cluster]
                cluster_peak[t.cluster] = Peak(time_point=t.time_point,
                                               qty1=cluster_qty1[t.cluster],
                                               qty2=cluster_qty2[t.cluster])

            # For the node grouping #####################################################
            node_qty1[t.node] += t.qty1
            node_qty2[t.node] += t.qty2
            node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node]
            if node_combo[t.node] > node_max[t.node]:
                node_max[t.node] = node_combo[t.node]
                node_peak[t.node] = Peak(time_point=t.time_point,
                                         qty1=node_qty1[t.node],
                                         qty2=node_qty2[t.node])

    This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm.
    To avoid the copy/paste issues of the above, I tried this also:

        def find_peaks(level, dataset):

            def grouping(object, attr_name):
                if attr_name == 'system':
                    return attr_name
                else:
                    return object.__dict__[attrname]

            cuml_qty1 = defaultdict(int)
            cuml_qty2 = defaultdict(int)
            cuml_combo = defaultdict(int)
            level_max = defaultdict(int)
            level_peak = defaultdict(int)

            for t in dataset:
                cuml_qty1[grouping(t, level)] += t.qty1
                cuml_qty2[grouping(t, level)] += t.qty2
                cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] +
                                                  cuml_qty2[grouping(t, level)])
                if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]:
                    level_max[grouping(t, level)] = cuml_combo[grouping(t, level)]
                    level_peak[grouping(t, level)] = Peak(time_point=t.time_point,
                                                          qty1=node_qty1[grouping(t, level)],
                                                          qty2=node_qty2[grouping(t, level)])

            return level_peak

        system_peak = find_peaks('system', dataset)
        cluster_peak = find_peaks('cluster', dataset)
        node_peak = find_peaks('node', dataset)

    For the (non-grouped) system-level calculations, I also came up with this, which is pretty:

        dataset.sort(key=operator.attrgetter('time_point'))

        def cuml_sum(seq):
            rseq = []
            t = 0
            for i in seq:
                t += i
                rseq.append(t)
            return rseq

        time_get = operator.attrgetter('time_point')
        q1_get = operator.attrgetter('qty1')
        q2_get = operator.attrgetter('qty2')

        timeline = [time_get(t) for t in dataset]
        cuml_qty1 = cuml_sum([q1_get(t) for t in dataset])
        cuml_qty2 = cuml_sum([q2_get(t) for t in dataset])
        cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)]

        combo_max = max(cuml_combo)
        time_max = timeline.index(combo_max)
        q1_at_max = cuml_qty1.index(time_max)
        q2_at_max = cuml_qty2.index(time_max)

    However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calculations without doing something slow like:

        timeline = defaultdict(int)
        cuml_qty1 = defaultdict(int)
        # ...etc.

        for c in cluster_list:
            timeline[c] = [time_get(t) for t in dataset if t.cluster == c]
            cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c]
            # ...etc.

    Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python code were essentially very inefficient straight translations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
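
    One way to keep the single pass while avoiding the three near-identical copies: fold all three groupings into one accumulator keyed by (level, group). A sketch that reuses the Peak class and attribute names from the question:

        # One pass over the time-sorted data; 'system', clusters, and nodes share code.
        from collections import defaultdict

        def find_all_peaks(dataset):
            qty1 = defaultdict(int)
            qty2 = defaultdict(int)
            best = {}  # (level, group) -> (max combo, Peak at that point)

            for t in sorted(dataset, key=lambda s: s.time_point):
                for key in (('system', None), ('cluster', t.cluster), ('node', t.node)):
                    qty1[key] += t.qty1
                    qty2[key] += t.qty2
                    combo = qty1[key] + qty2[key]
                    if key not in best or combo > best[key][0]:
                        best[key] = (combo, Peak(time_point=t.time_point,
                                                 qty1=qty1[key], qty2=qty2[key]))
            return best

        peaks = find_all_peaks(dataset)
        system_peak = peaks[('system', None)][1]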

    Read the article

  • php selecting hash using wildcards

    - by tipu
    Say I have a hashmap:

        $hash = array('fox' => 'some value', 'fort' => 'some value 2', 'fork' => 'some value again');

    I am trying to implement an autocomplete feature. When the user types 'fo', I would like to retrieve, via AJAX, all 3 keys from $hash. When the user types 'for', I would like to retrieve only the keys fort and fork. Is this possible? What I was thinking was using binary search to isolate the keys starting with 'f' instead of brute-force searching, then continuing to eliminate indexes as the user types out the query. Is there a more efficient solution to this?
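
    The binary-search idea is sound: if the keys are kept sorted, every key sharing a prefix occupies a contiguous range, which two bisections locate in O(log n). A sketch of the technique, shown in Python (the data is from the question; PHP would need a sorted key array plus a binary-search helper):

        # Prefix lookup on a sorted key list: two binary searches bound the match range.
        from bisect import bisect_left, bisect_right

        data = {'fox': 'some value', 'fort': 'some value 2', 'fork': 'some value again'}
        keys = sorted(data)  # ['fork', 'fort', 'fox']

        def keys_with_prefix(prefix):
            lo = bisect_left(keys, prefix)
            hi = bisect_right(keys, prefix + '\uffff')  # just past the last possible match
            return keys[lo:hi]

        print(keys_with_prefix('fo'))   # ['fork', 'fort', 'fox']
        print(keys_with_prefix('for'))  # ['fork', 'fort']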

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >