Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

Page 33/165 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • memcached cluster maintenance

    - by Yang
    Scaling up memcached to a cluster of shards/partitions requires either distributed routing/partition table maintenance or centralized proxying (and other stuff like detecting failures). What are the popular/typical approaches/systems here? There's software like libketama, which provides consistent hashing, but this is just a client-side library that reacts to messages about node arrivals/departures---do most users just run something like this, plus separate monitoring nodes that, on detecting failures, notify all the libketamas of the departure? I imagine something like this might be sufficient since typical use of memcached as a soft-state cache doesn't require careful attention to consistency, but I'm curious what people do.
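
    For illustration, here is a minimal consistent-hash ring sketched in Python; the class and names are invented for the example (they are not part of libketama or any particular client), and a monitoring process that detects a dead node would simply call remove() on every client's ring:

        import bisect
        import hashlib

        class HashRing:
            def __init__(self, nodes, replicas=100):
                self.replicas = replicas      # virtual points per physical node
                self.ring = {}                # ring position -> node
                self.sorted_keys = []
                for node in nodes:
                    self.add(node)

            def _hash(self, key):
                return int(hashlib.md5(key.encode()).hexdigest(), 16)

            def add(self, node):
                for i in range(self.replicas):
                    point = self._hash(f"{node}:{i}")
                    self.ring[point] = node
                    bisect.insort(self.sorted_keys, point)

            def remove(self, node):
                # e.g. called when a monitor reports the node as dead
                for i in range(self.replicas):
                    point = self._hash(f"{node}:{i}")
                    del self.ring[point]
                    self.sorted_keys.remove(point)

            def get_node(self, key):
                # first ring position clockwise from the key's hash
                idx = bisect.bisect(self.sorted_keys, self._hash(key)) % len(self.sorted_keys)
                return self.ring[self.sorted_keys[idx]]

    Because only the keys that mapped to the removed node's points move elsewhere, the rest of the cache stays warm, which is why the loose, notification-based membership described above is usually considered good enough for a soft-state cache.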

    Read the article

  • Hashing a python function to regenerate output when the function is modified

    - by Seth Johnson
    I have a python function that has a deterministic result. It takes a long time to run and generates a large output: def time_consuming_function(): # lots_of_computing_time to come up with the_result return the_result I modify time_consuming_function from time to time, but I would like to avoid having it run again while it's unchanged. [time_consuming_function only depends on functions that are immutable for the purposes considered here; i.e. it might have functions from Python libraries but not from other pieces of my code that I'd change.] The solution that suggests itself to me is to cache the output and also cache some "hash" of the function. If the hash changes, the function will have been modified, and we have to re-generate the output. Is this possible or ridiculous? Updated: based on the answers, it looks like what I want to do is to "memoize" time_consuming_function, except instead of (or in addition to) arguments passed into an invariant function, I want to account for a function that itself will change.
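
    A minimal sketch of that idea in Python: memoize on a hash of the function's own source text, so the cached result is thrown away whenever the body changes. It assumes the function takes no arguments and, as the asker stipulates, that the functions it calls never change; the file naming and pickle storage are illustrative choices.

        import hashlib
        import inspect
        import os
        import pickle

        def cache_by_source(func):
            # hash the source of the decorated function itself
            src_hash = hashlib.sha256(inspect.getsource(func).encode()).hexdigest()
            cache_file = f"{func.__name__}.{src_hash[:16]}.cache"

            def wrapper():
                if os.path.exists(cache_file):           # source unchanged since last run
                    with open(cache_file, "rb") as f:
                        return pickle.load(f)
                result = func()                          # changed (or first run): recompute
                with open(cache_file, "wb") as f:
                    pickle.dump(result, f)
                return result
            return wrapper

        @cache_by_source
        def time_consuming_function():
            # lots_of_computing_time to come up with the_result
            return 42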

    Read the article

  • Parse file to hash in Ruby

    - by Taschetto
    I'm a ruby newcomer who's trying to read a text file (a Valgrind simulation output) like this: -------------------------------------------------------------------------------- Profile data file 'temp/gt_1024_2_16.out' -------------------------------------------------------------------------------- I1 cache: 1024 B, 16 B, 2-way associative D1 cache: 32768 B, 64 B, 8-way associative LL cache: 3145728 B, 64 B, 12-way associative Profiled target: bash run.sh Events recorded: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw Events shown: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw Event sort order: Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw Thresholds: 99 0 0 0 0 0 0 0 0 Include dirs: User annotated: Auto-annotation: off -------------------------------------------------------------------------------- Ir I1mr ILmr Dr D1mr DLmr Dw D1mw DLmw -------------------------------------------------------------------------------- 1,894,017 246,981 2,448 519,124 4,691 2,792 337,817 1,846 1,672 PROGRAM TOTALS // other data I want to extract the PROGRAM TOTALS table and put it into a hash. Something like... myHash = { :Ir => 1894017, :I1mr => 246981, ILmr => 2448, ..., DLmw => 1672 } What are the best options for doing this? Could the CSV classes help me out? Thanks a bunch.
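
    The asker is working in Ruby, but the extraction itself is language-neutral; a rough sketch of the approach in Python (plain pattern matching rather than a CSV parser, since the columns are space-separated):

        import re

        def parse_totals(path):
            with open(path) as f:
                text = f.read()
            # column names come from the "Events shown:" header line
            events = re.search(r"Events shown:\s*(.+)", text).group(1).split()
            # values come from the line that ends in "PROGRAM TOTALS"
            totals = next(l for l in text.splitlines() if "PROGRAM TOTALS" in l)
            numbers = [int(n.replace(",", "")) for n in re.findall(r"\d[\d,]*", totals)]
            return dict(zip(events, numbers))

        # parse_totals("temp/gt_1024_2_16.out")
        # -> {'Ir': 1894017, 'I1mr': 246981, ..., 'DLmw': 1672}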

    Read the article

  • How to do a JNDI lookup from cluster 1 for a queue that exists in cluster 2 in WebSphere 6?

    - by user347394
    I have a WebSphere topology wherein, in Cluster1, I have an MDB that's trying to publish to another MDB that resides in Cluster2. Since they're both in the same container, I simply tried: Context ctx = new InitialContext(); ctx.lookup("jms/MyQueue"); The "jms/MyQueue" is configured in Cluster2. As you can see, this doesn't work!! 1) Do I have to provide an environment entry while creating the InitialContext, even though both clusters are part of the same container? 2) If not, how can I look up the queue in Cluster 2?

    Read the article

  • Use a function in a conditions hash

    - by Pierre
    Hi, I'm building a conditions hash to run a query but I'm having a problem with one specific case: conditions2 = ['extract(year from signature_date) = ?', params[:year].to_i] unless params[:year].blank? conditions[:country_id] = COUNTRIES.select{|c| c.geography_id == params[:geographies]} unless params[:geographies].blank? conditions[:category_id] = CATEGORY_CHILDREN[params[:categories].to_i] unless params[:categories].blank? conditions[:country_id] = params[:countries] unless params[:countries].blank? conditions['extract(year from signature_date)'] = params[:year].to_i unless params[:year].blank? But the last line breaks everything, as it gets interpreted as follows: AND ("negotiations"."extract(year from signature_date)" = 2010 Is there a way to avoid that "negotiations"." is prepended to my condition? thank you, P.

    Read the article

  • Password Hash Implementation

    - by oyerli
    I am developing a new application using Symfony. I want to store passwords hashed, so I overrode the save method in my User model: public function save(Doctrine_Connection $conn = null) { $this->setUserPassword( md5($this->getUserPassword()) ); return parent::save($conn); } This works fine when a new user is created. However, it causes problems when we edit a user without changing his password: Doctrine then hashes the already-hashed password. So I need to check whether UserPassword has been modified in this DoctrineRecord instance. How can I manage to do that?

    Read the article

  • How to deploy updates to .NET website in cluster

    - by royappa
    We are operating a corporate web application on a load-balanced cluster that consists of two identical IIS servers talking to a single MSSQL database. To deploy updates I am using this primitive process: 1) Make a copy of the entire site folder (wwwroot\inetpub\whatever) on each IIS box 2) Download the updated, compiled files onto each IIS box from our development area 3) Shut down IIS on both web servers 4) Copy the new and updated files into the wwwroot folder (overwriting any existing files) 5) Then restart IIS on both machines When there are database changes involved there are a few other steps. The whole process is fairly quick, but it is ugly and fraught with danger, so it has to be done with full concentration. I would like to just push one button to make it all happen. And I want a one-click rollback in case there is a problem (that's the reason I make the copy in step #1). I am looking for tools to manage and improve this process. If it also helped us maintain a changelog, that would be nice. Thanks.
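
    Purpose-built tools exist for exactly this (Web Deploy/msdeploy, or a deployment step on a build server), but even the manual steps above can be scripted into a one-button deploy with a rollback copy. A rough single-server sketch in Python; the paths, and the choice of iisreset rather than stopping an individual site, are assumptions to adapt:

        import datetime
        import shutil
        import subprocess

        SITE = r"C:\inetpub\wwwroot\corporate-app"   # hypothetical site folder
        STAGING = r"D:\deploy\latest"                # hypothetical drop of new files (step 2)

        def deploy():
            stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
            backup = f"{SITE}-backup-{stamp}"
            shutil.copytree(SITE, backup)                       # step 1: rollback copy
            subprocess.run(["iisreset", "/stop"], check=True)   # step 3: stop IIS
            shutil.copytree(STAGING, SITE, dirs_exist_ok=True)  # step 4: overlay new files
            subprocess.run(["iisreset", "/start"], check=True)  # step 5: restart IIS
            print(f"Deployed; rollback copy at {backup}")

        if __name__ == "__main__":
            deploy()

    Run once per web server (or wrapped with remote execution), draining one node at a time on the load balancer if downtime matters.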

    Read the article

  • Redirect to hash URL

    - by r1987
    I'm building a site with a hashchange on WordPress, and it's all working well. It just loads a single.php template file into a div. The problem is that I can still access my single URL (http://www.mydomain.com/my-single-post). Since it doesn't come with any head or style tags, I don't want people going there directly. Also, Google has picked up the direct links, because I use the href attribute to load content into the div. So my question is: if someone clicks a link, let's say in a forum, http://www.mydomain.com/my-single-post , is it possible to redirect them instantly to http://www.mydomain.com/#my-single-post ? From my research it has something to do with .htaccess, but I also have Pages, where I don't want the hash in front of the page name.

    Read the article

  • Multi-Boot through IPMI possible?

    - by yoursort
    Hi, I want to set up a cluster. On each machine Windows XP and Linux should be installed, and the OS should be selectable by a boot loader (e.g. GRUB). All machines have IPMI cards. Is it possible, when starting the machines over IPMI, to also select the OS to boot? And how? Thanks!

    Read the article

  • What are the differences between a Remote File Share Witness Quorum and a Disk Witness Quorum?

    - by arex1337
    Say you set up a Windows Server Failover Cluster consisting of two nodes, on Windows Server 2008 R2 Enterprise. Now when you want to configure quorum you can use either a remote shared folder, or a shared disk (unless your nodes are separated geographically). I'd love to know more about when to use a disk witness and when to use a file share witness. What are the differences? Does it matter at all?

    Read the article

  • protecting an application against hardware failure [on hold]

    - by alex
    I have an application for which I am looking for a way to protect against hardware and software (operating system) failure. A cluster seems OK, but the storage becomes the single point of failure, and I do not have a SAN. Can you please tell me if there are other ways to protect the application? Periodically this application is updated, and changes should be replicated automatically to the second server.

    Read the article

  • Windows DFS File System Clustering

    - by tearman
    We're attempting to set up a high-availability network for our file servers, and we want to build a DFS file system cluster using the same back-end storage (our back-end storage has its own clustering mechanisms that it manages itself). The questions are: A. how would one go about setting up DFS clustering, and B. how can we get Windows to cooperate with multiple servers accessing the same SAN volumes?

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Facebook, Yimg and google-analytics CDNs are freaking me out

    - by Millisami
    Hi, Its been a couple of weeks that some sites just keeps on hanging. e.g. Facebooks = static.ak.fbcdn.net FLicker = l.yimg.com GoogleAnalytics I've googled and found many problems like this and some answers which are outdated or just doesn't solve the problem. I did: Cookies clearing, ran cc cleaner and several other nifty methods. None solved my problem?? Only with facebook, if I enter https:// manually instead of http:// on every url on facebook, it works and when it does the redirection to http://, everytime I have to type 's' on the address bar to make it https:// It is driving me nuts coz I'm developing Facebook App and this problem in being pain in my ass. What might be the reason for these CDNs hanging behaviour?? Update: Mon Feb 8, 2010 Well when I viewed the source with firefox, this is the header part: <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zDO0B/hash/8jpbog60.css" /> <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/zA96O/hash/8jqnsh63.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/z9X8U/hash/5zy5e7ns.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/z7XWB/hash/b881ctjq.css" /> <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/zEMLE/hash/6n3druoq.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zEEQQ/hash/3et16vbl.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zF0BN/hash/4ey03a8b.css" /> #<link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zD46U/hash/4ctxkmr7.css" /> <script type="text/javascript" src="http://b.static.ak.fbcdn.net/rsrc.php/z5KPU/hash/f92tjc5l.js"></script> When I clicked the each link, all links open with its contents except the last link with -# sign prefixed. So the url -#http://b.static.ak.fbcdn.net/rsrc.php/zD46U/hash/4ctxkmr7.css is not opening and this css file is not downloaded and the facebook page looks horrible and all left aligned?? Update: Tue Feb 9, 2010 Today the link with the -# sign is just keeps hanging and looping: <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/zEMLE/hash/6n3druoq.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/z9X8U/hash/5zy5e7ns.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zF0BN/hash/4ey03a8b.css" /> <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/z1580/hash/4l5utauj.css" /> #<link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/z4851/hash/532htj7z.css" /> <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/z1GEW/hash/dh01t0zv.css" /> <link type="text/css" rel="stylesheet" href="http://static.ak.fbcdn.net/rsrc.php/z80UK/hash/3a6o59ih.css" /> <link type="text/css" rel="stylesheet" href="http://b.static.ak.fbcdn.net/rsrc.php/zD46U/hash/4ctxkmr7.css" /> <script type="text/javascript" src="http://b.static.ak.fbcdn.net/rsrc.php/z5KPU/hash/f92tjc5l.js"></script> Why that url http://b.static.ak.fbcdn.net acting weird? Has something Akamai got to do with this?

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
As you can see, the end result is only 4 tuples, so it seems like the join should be fast if the planner used the available index and limited the rows with the name clause first, complex as the query is.

    Read the article

  • Remove/squash entries in a vertical hash

    - by Forkrul Assail
    I have a grid that represents an X, Y matrix, stored as a hash here. Some points on the X Y matrix may have values (as type string), and some may not. A typical grid could look like this: {[9, 5]=>"Alaina", [10, 3]=>"Courtney", [11, 1]=>"Gladys", [8, 7]=>"Alford", [14, 11]=>"Lesley", [17, 2]=>"Lawson", [0, 5]=>"Katrine", [2, 1]=>"Tyra", [3, 3]=>"Fredy", [1, 7]=>"Magnus", [6, 9]=>"Nels", [7, 11]=>"Kylie", [11, 0]=>"Kellen", [10, 2]=>"Johan", [14, 10]=>"Justice", [0, 4]=>"Barton", [2, 0]=>"Charley", [3, 2]=>"Magnolia", [1, 6]=>"Maximo", [7, 10]=>"Olga", [19, 5]=>"Isadore", [16, 3]=>"Delfina", [17, 1]=>"Noe", [20, 11]=>"Francis", [10, 5]=>"Creola", [9, 3]=>"Bulah", [8, 1]=>"Lempi", [11, 7]=>"Raquel", [13, 11]=>"Jace", [1, 5]=>"Garth", [3, 1]=>"Ernest", [2, 3]=>"Malcolm", [0, 7]=>"Alejandrin", [7, 9]=>"Marina", [6, 11]=>"Otilia", [16, 2]=>"Hailey", [20, 10]=>"Brandt", [8, 0]=>"Madeline", [9, 2]=>"Leanne", [13, 10]=>"Jenifer", [1, 4]=>"Humberto", [3, 0]=>"Nicholaus", [2, 2]=>"Nadia", [0, 6]=>"Abigail", [6, 10]=>"Zola", [20, 5]=>"Clementina", [23, 3]=>"Alvah", [19, 11]=>"Wallace", [11, 5]=>"Tracey", [8, 3]=>"Hulda", [9, 1]=>"Jedidiah", [10, 7]=>"Annetta", [12, 11]=>"Nicole", [2, 5]=>"Alison", [0, 1]=>"Wilma", [1, 3]=>"Shana", [3, 7]=>"Judd", [4, 9]=>"Lucio", [5, 11]=>"Hardy", [19, 10]=>"Immanuel", [9, 0]=>"Uriel", [8, 2]=>"Milton", [12, 10]=>"Elody", [5, 10]=>"Alexanne", [1, 2]=>"Lauretta", [0, 0]=>"Louvenia", [2, 4]=>"Adelia", [21, 5]=>"Erling", [18, 11]=>"Corene", [22, 3]=>"Haskell", [11, 11]=>"Leta", [10, 9]=>"Terrence", [14, 1]=>"Giuseppe", [15, 3]=>"Silas", [12, 5]=>"Johnnie", [4, 11]=>"Aurelie", [5, 9]=>"Meggie", [2, 7]=>"Phoebe", [0, 3]=>"Sister", [1, 1]=>"Violet", [3, 5]=>"Lilian", [18, 10]=>"Eusebio", [11, 10]=>"Emma", [15, 2]=>"Theodore", [14, 0]=>"Cassidy", [4, 10]=>"Edmund", [2, 6]=>"Claire", [0, 2]=>"Madisen", [1, 0]=>"Kasey", [3, 4]=>"Elijah", [17, 11]=>"Susana", [20, 1]=>"Nicklaus", [21, 3]=>"Kelsie", [10, 11]=>"Garnett", [11, 9]=>"Emanuel", [15, 1]=>"Louvenia", [14, 3]=>"Otho", [13, 5]=>"Vincenza", [3, 11]=>"Tate", [2, 9]=>"Beau", [5, 7]=>"Jason", [6, 1]=>"Jayde", [7, 3]=>"Lamont", [4, 5]=>"Curt", [17, 10]=>"Mack", [21, 2]=>"Lilyan", [10, 10]=>"Ruthe", [14, 2]=>"Georgianna", [4, 4]=>"Nyasia", [6, 0]=>"Sadie", [16, 11]=>"Emil", [21, 1]=>"Melba", [20, 3]=>"Delia", [3, 10]=>"Rosalee", [2, 8]=>"Myrtle", [7, 2]=>"Rigoberto", [14, 5]=>"Jedidiah", [13, 3]=>"Flavie", [12, 1]=>"Evie", [8, 9]=>"Olaf", [9, 11]=>"Stan", [20, 2]=>"Judge", [5, 5]=>"Cassie", [7, 1]=>"Gracie", [6, 3]=>"Armando", [4, 7]=>"Delia", [3, 9]=>"Marley", [16, 10]=>"Robyn", [2, 11]=>"Richie", [12, 0]=>"Gilberto", [13, 2]=>"Dedrick", [9, 10]=>"Liam", [5, 4]=>"Jabari", [7, 0]=>"Enola", [6, 2]=>"Lela", [3, 8]=>"Jade", [2, 10]=>"Johnson", [15, 5]=>"Willow", [12, 3]=>"Fredrick", [13, 1]=>"Beau", [9, 9]=>"Carlie", [8, 11]=>"Daisha", [6, 5]=>"Declan", [4, 1]=>"Carolina", [5, 3]=>"Cruz", [7, 7]=>"Jaime", [0, 9]=>"Anthony", [1, 11]=>"Esta", [13, 0]=>"Shaina", [12, 2]=>"Alec", [8, 10]=>"Lora", [6, 4]=>"Emely", [4, 0]=>"Rodger", [5, 2]=>"Cedrick", [0, 8]=>"Collin", [1, 10]=>"Armani", [16, 5]=>"Brooks", [19, 3]=>"Eleanora", [18, 1]=>"Alva", [7, 5]=>"Melissa", [5, 1]=>"Tabitha", [4, 3]=>"Aniya", [6, 7]=>"Marc", [1, 9]=>"Marjorie", [0, 11]=>"Arvilla", [19, 2]=>"Adela", [7, 4]=>"Zakary", [5, 0]=>"Emely", [4, 2]=>"Alison", [1, 8]=>"Lorenz", [0, 10]=>"Lisandro", [17, 5]=>"Aylin", [18, 3]=>"Giles", [19, 1]=>"Kyleigh", [8, 5]=>"Mary", [11, 3]=>"Claire", [10, 1]=>"Avis", [9, 7]=>"Manuela", [15, 11]=>"Chesley", [18, 2]=>"Kristopher", 
[24, 3]=>"Zola", [8, 4]=>"Pietro", [10, 0]=>"Delores", [11, 2]=>"Timmy", [15, 10]=>"Khalil", [18, 5]=>"Trudie", [17, 3]=>"Rafael", [16, 1]=>"Anthony"} What I need to do though, is basically remove all the empty entries. Let's say [17,3] = Raphael does not have an element in front of if (let's say - no [16,3] exists) then [17,3] should become [16,3] etc. So basically all empty items will be popped off the vertical (row) structure of the hash. Are there functions I should have a look at or is there an easy squash-like method that would just remove blanks and adjust and move other items? Thanks in advance for your help.

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection. Is “double hashing” a password less secure than just hashing it once? Suggests that hashing multiple times may be a good idea. How to implement password protection for individual files? Suggests using salt. I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How to achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters. The hashing mechanism must be available in PHP It must be safe It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?) Also, should I store two fields in the database(one using MD5 and another one using SHA, for example)? Would it make it safer or unsafer? In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism. EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure. EDIT2: Thank you all for your replies, I'm using hash("sha256",$salt.":".$password.":".$id) Questions that didn't help: What's the difference between SHA and MD5 in PHP Simple Password Encryption Secure methods of storing keys, passwords for asp.net How would you implement salted passwords in Tomcat 5.5
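
    The shape of the usual answer is language-independent: a random per-user salt plus a deliberately slow, iterated hash (a key-derivation function), compared in constant time. A sketch of that shape in Python using PBKDF2 from the standard library; the iteration count is an illustrative knob for the speed-versus-safety balance the question asks about:

        import hashlib
        import hmac
        import os

        def hash_password(password, salt=None, iterations=100_000):
            if salt is None:
                salt = os.urandom(16)                # random per-user salt
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
            return salt, digest                      # store both, plus the iteration count

        def verify_password(password, salt, stored_digest, iterations=100_000):
            _, digest = hash_password(password, salt, iterations)
            return hmac.compare_digest(digest, stored_digest)   # constant-time compare

        salt, digest = hash_password("correct horse battery staple")
        assert verify_password("correct horse battery staple", salt, digest)

    The output length is constant (here 32 bytes of digest plus 16 of salt), which also covers the fixed-width storage requirement.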

    Read the article

  • Insert MANY key value pairs fast into berkeley db with hash access

    - by Kungi
    Hi, i'm trying to build a hash with berkeley db, which shall contain many tuples (approx 18GB of key value pairs), but in all my tests the performance of the insert operations degrades drastically over time. I've written this script to test the performance: #include<iostream> #include<db_cxx.h> #include<ctime> #define MILLION 1000000 int main () { long long a = 0; long long b = 0; int passes = 0; int i = 0; u_int32_t flags = DB_CREATE; Db* dbp = new Db(NULL,0); dbp->set_cachesize( 0, 1024 * 1024 * 1024, 1 ); int ret = dbp->open( NULL, "test.db", NULL, DB_HASH, flags, 0); time_t time1 = time(NULL); while ( passes < 100 ) { while( i < MILLION ) { Dbt key( &a, sizeof(long long) ); Dbt data( &b, sizeof(long long) ); dbp->put( NULL, &key, &data, 0); a++; b++; i++; } DbEnv* dbep = dbp->get_env(); int tmp; dbep->memp_trickle( 50, &tmp ); i=0; passes++; std::cout << "Inserted one million --> pass: " << passes << " took: " << time(NULL) - time1 << "sec" << std::endl; time1 = time(NULL); } } Perhaps you can tell me why after some time the "put" operation takes increasingly longer and maybe how to fix this. Thanks for your help, Andreas

    Read the article

  • Parsing Chunk of Data into Hash of Array With Perl

    - by neversaint
    I have data that looks like this: #info #info2 1:SRX004541 Submitter: UT-MGS, UT-MGS Study: Glossina morsitans transcript sequencing project(SRP000741) Sample: Glossina morsitans(SRS002835) Instrument: Illumina Genome Analyzer Total: 1 run, 8.3M spots, 299.9M bases Run #1: SRR016086, 8330172 spots, 299886192 bases 2:SRX004540 Submitter: UT-MGS Study: Anopheles stephensi transcript sequencing project(SRP000747) Sample: Anopheles stephensi(SRS002864) Instrument: Solexa 1G Genome Analyzer Total: 1 run, 8.4M spots, 401M bases Run #1: SRR017875, 8354743 spots, 401027664 bases 3:SRX002521 Submitter: UT-MGS Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403) Sample: Human DLD-1 tissue culture cell line(SRS001843) Instrument: Solexa 1G Genome Analyzer Total: 6 runs, 27.1M spots, 977M bases Run #1: SRR013356, 4801519 spots, 172854684 bases Run #2: SRR013357, 3603355 spots, 129720780 bases Run #3: SRR013358, 3459692 spots, 124548912 bases Run #4: SRR013360, 5219342 spots, 187896312 bases Run #5: SRR013361, 5140152 spots, 185045472 bases Run #6: SRR013370, 4916054 spots, 176977944 bases What I want to do is to create a hash of array with first line of each chunk as keys and SR## part of lines with "^Run" as its array member: $VAR = { 'SRX004541' => ['SRR016086'], # etc } But why my construct doesn't work. And it must be a better way to do it. use Data::Dumper; my %bighash; my $head = ""; my @temp = (); while ( <> ) { chomp; next if (/^\#/); if ( /^\d{1,2}:(\w+)/ ) { print "$1\n"; $head = $1; } elsif (/^Run \#\d+: (\w+),.*/){ print "\t$1\n"; push @temp, $1; } elsif (/^$/) { push @{$bighash{$head}}, [@temp]; @temp =(); } } print Dumper \%bighash ;
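
    The same accumulate-and-flush idea, sketched in Python for brevity: start a fresh list whenever an SRX header appears, append each Run line's SRR id, and make sure the final chunk is flushed even when the input does not end with a blank line (a missing final flush, or pushing an extra array reference per chunk, is what typically produces missing or oddly nested entries in a loop like the one above):

        import re

        def parse(lines):
            big = {}
            head, runs = None, []
            for line in lines:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                m = re.match(r"\d{1,2}:(\w+)", line)
                if m:
                    if head is not None:
                        big[head] = runs          # flush the previous chunk
                    head, runs = m.group(1), []
                    continue
                run = re.match(r"Run #\d+:\s*(\w+),", line)
                if run:
                    runs.append(run.group(1))
            if head is not None:
                big[head] = runs                  # flush the final chunk
            return big

        # parse(open("sra.txt"))
        # -> {'SRX004541': ['SRR016086'], 'SRX004540': ['SRR017875'], 'SRX002521': [...]}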

    Read the article

  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically as described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (Replicated caches with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (non-original) cluster host. I had hoped that a node might become persistent at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event. I found that although I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivate" to true, but I found that in my case (Hibernate 2nd level cache) I don't have a way to access a loader. I wonder if replicated data persistence is possible at all through configuration tuning? If not, will it work for me to add some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with cacheMode changed to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project generated with Seam. I guess that's not important though.

    Read the article

  • XML serialization of a hash table (C# 3.0)

    - by Newbie
    Hi, I am trying to serialize a hash table but it's not happening: private void Form1_Load(object sender, EventArgs e) { Hashtable ht = new Hashtable(); DateTime dt = DateTime.Now; for (int i = 0; i < 10; i++) ht.Add(dt.AddDays(i), i); SerializeToXmlAsFile(typeof(Hashtable), ht); } private void SerializeToXmlAsFile(Type targetType, Object targetObject) { try { string fileName = @"C:\testtttttt.xml"; //Serialize to XML XmlSerializer s = new XmlSerializer(targetType); TextWriter w = new StreamWriter(fileName); s.Serialize(w, targetObject); w.Flush(); w.Close(); } catch (Exception ex) { throw ex; } } After a Google search, I found that objects that implement IDictionary cannot be serialized by XmlSerializer. However, I had success with binary serialization, but I want the XML version. Is there any way of doing so? I am using C# 3.0. Thanks

    Read the article

  • iptables CLUSTERIP won't work

    - by Rad Akefirad
    We have some requirements which explained here. We tried to satisfy them without any success as described. Here is the brief information: Here are requirements: 1. High Availability 2. Load Balancing Current Configuration: Server #1: one static (real) IP for each 10.17.243.11 Server #2: one static (real) IP for each 10.17.243.12 Cluster (virtual and shared among all servers) IP: 10.17.243.15 I tried to use CLUSTERIP to have the cluster IP by the following: on the server #1 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1 on the server #2 iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 2 When we try to ping 10.17.243.15 there is no reply. And the web service (tomcat on port 8080) is not accessible either. However we managed to get the packets on both servers by using TCPDUMP. Some useful information: iptable roules (iptables -L -n -v): Chain INPUT (policy ACCEPT 21775 packets, 1470K bytes) pkts bytes target prot opt in out source destination 0 0 CLUSTERIP all -- eth0 * 0.0.0.0/0 10.17.243.15 CLUSTERIP hashmode=sourceip clustermac=01:00:5E:00:00:20 total_nodes=2 local_node=1 hash_init=0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 14078 packets, 44M bytes) pkts bytes target prot opt in out source destination Log messages: ... kernel: [ 7.329017] e1000e: eth3 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None ... kernel: [ 7.329133] e1000e 0000:05:00.0: eth3: 10/100 speed: disabling TSO ... kernel: [ 7.329567] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready ... kernel: [ 71.333285] ip_tables: (C) 2000-2006 Netfilter Core Team ... kernel: [ 71.341804] nf_conntrack version 0.5.0 (16384 buckets, 65536 max) ... kernel: [ 71.343168] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully ... kernel: [ 108.456043] device eth0 entered promiscuous mode ... kernel: [ 112.678859] device eth0 left promiscuous mode ... kernel: [ 117.916050] device eth0 entered promiscuous mode ... kernel: [ 140.168848] device eth0 left promiscuous mode TCPDUMP while pinging: tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 12:11:55.335528 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2390, length 64 12:11:56.335778 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2391, length 64 12:11:57.336010 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2392, length 64 12:11:58.336287 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) 10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2393, length 64 And there is no ping reply as I said. Does anyone know which part I missed? Thanks in advance.

    Read the article

  • C#: how to calculate a hash code from an object reference

    - by Wayne
    Folks, here's a thorny problem for you! A part of the TickZoom system must collect instances of every type of object into a Dictionary< type. It is imperative that their equality and hash code be based on the instance of the object which means reference equality instead of value equality. The challenge is that some of the objects in the system have overridden Equals() and GetHashCode() for use as value equality and their internal values will change over time. That means that their Equals and GetHashCode are useless. How to solve this generically rather than intrusively? So far, We created a struct to wrap each object called ObjectHandle for hashing into the Dictionary. As you see below we implemented Equals() but the problem of how to calculate a hash code remains. public struct ObjectHandle : IEquatable<ObjectHandle>{ public object Object; public bool Equals(ObjectHandle other) { return object.ReferenceEquals(this.Object,other.Object); } } See? There is the method object.ReferenceEquals() which will compare reference equality without regard for any overridden Equals() implementation in the object. Now, how to calculate a matching GetHashCode() by only considering the reference without concern for any overridden GetHashCode() method? Ahh, I hope this give you an interesting puzzle. We're stuck over here. Sincerely, Wayne
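
    In C#, System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(object) returns the identity-based hash code regardless of any GetHashCode override, so ObjectHandle can pair it with object.ReferenceEquals. For comparison, here is the same identity-keyed idea sketched in Python (IdentityHandle and Point are invented names for the example):

        class IdentityHandle:
            """Dictionary key that ignores the wrapped object's __eq__/__hash__."""
            __slots__ = ("obj",)

            def __init__(self, obj):
                self.obj = obj

            def __eq__(self, other):
                return isinstance(other, IdentityHandle) and self.obj is other.obj

            def __hash__(self):
                return id(self.obj)           # identity-based, stable for the object's lifetime

        class Point:                          # value equality that changes over time
            def __init__(self, x):
                self.x = x
            def __eq__(self, other):
                return isinstance(other, Point) and self.x == other.x
            def __hash__(self):
                return hash(self.x)

        registry = {}
        p = Point(1)
        registry[IdentityHandle(p)] = "first instance"
        p.x = 99                              # mutating the value does not break the lookup
        print(registry[IdentityHandle(p)])    # -> first instance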

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >