Search Results

Search found 1795 results on 72 pages for 'veritas cluster'.


  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team

    In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controller highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands.

    What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and start back up on a standby node.

    What are the prerequisites? To install EC in an HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example.

    Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.

    Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public/private addresses and Virtual IPs (VIPs).

    Storage: Shared storage will be required for the cluster voting disks, Oracle Cluster Registry (OCR) and the EC's libraries.

    Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes. The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser: let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/; this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node.

    Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • SQL Performance Problem IA64

    - by Vendoran
    We’ve got a performance problem in production. QA and DEV environments are 2 instances on the same physical server:

    QA/DEV: Windows 2003 Enterprise SP2, 32 GB RAM, 1 Quad 3.5 GHz Intel Xeon X5270 (4 cores x64), SQL 2005 SP3 (9.0.4262), SAN drives
    Prod: Windows 2003 Datacenter SP2, 64 GB RAM, 4 Dual Core 1.6 GHz Intel Family 80000002, Model 6 Itanium (8 cores IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster

    I am seeing excessive Signal Wait Percentages (250%), and Page Reads/s (50) and Page Writes/s (25) are both occasionally high. I did test this query on both QA and PROD and it has the same execution plan and even the same stats:

    SELECT top 40000000 * INTO dbo.tmp_tbl FROM dbo.tbl
    GO

    Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    As you can see it’s just logical reads; however: QA: 0:48, Prod: 2:18. So it seems like a processor-related issue, but I’m not sure where to go next. Any ideas? Thanks, Aaron

    Read the article

  • Oracle Linux / Symantec Partnership

    - by Ted Davis
    Fred Astaire and Ginger Rogers sang the now famous lyrics: “You like to-may-toes and I like to-mah-toes”. In the tech world, is it Semantic or is it Symantec? Ah, well, we know it’s the latter. Actually, who doesn’t know or hasn’t heard of Symantec in the tech world? Symantec is thoroughly ingrained in Enterprise customer infrastructure, from their Storage Foundation Suite to their Anti-Virus products. It would be hard to find anyone who doesn’t use their software. Likewise, Oracle Linux is thoroughly ingrained in Enterprise infrastructure – so our paths cross quite a bit. This is why the Oracle Linux engineering team works with Symantec to make sure their applications and agents are supported on Oracle Linux. We also want to make sure the Oracle Linux / Symantec customer experience is trouble free, so customer work continues at the same blistering pace.

    Here are a few Symantec applications that are supported on Oracle Linux:
    Storage Foundation
    NetBackup Enterprise Server
    Symantec Antivirus for Linux
    Veritas Cluster Server
    Backup Exec Agent for Linux

    So, while Fred and Ginger may disagree on how to spell tomato, for our software customers the Oracle / Symantec partnership works together so our joint customers can experience and hear the sweet song of success.

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may see some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information being read and written on the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system just as they have access to the same database.

    WCC uses its own file-based locking mechanism, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to only disable caching and locking on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories. These will be used for handling temporary files, cached files, and files used to deliver the UI.

    For these directories, the only folder which needs to have file attribute caching and locking disabled is the ‘Content Server Instance Folder’. So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the ‘noac’ and ‘nolock’ mount options. For the other directories, caching and locking should be enabled to provide the best performance to those locations.
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

    #Server System Properties
    IDC_Id=UCM_server1

    #Server Directory Variables
    IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
    FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
    AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
    AppServerJavaUse64Bit=true
    IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
    VaultDir=/mnt/share_with_cache/ucm/cs/vault/
    WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

    #Server Classpath variables

    #Additional Variables
    #NOTE: UserProfilesDir is only available in PS5 – 11.1.1.6.0
    UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it’s also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

    VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
    TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
    EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don’t forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

    ArchiverDoLocks=true
    DisableSharedCacheChecking=true
    ServiceAllowRetry=true    (use only with Oracle RAC Database)
    PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2 node, fully redundant Oracle RAC cluster? Not very. Unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it, just check out these blog posts from partners and end users.

    The Oracle Database Appliance Experience - Zip Zoom Zoom http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html
    Off-the-shelf Oracle database servers http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/
    Oracle Database Appliance – Deployment Steps http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps

    See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier. The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration, based on best practices. But in order for it to do that, it needs some basic information first, including system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to specifying the information needed.

    Using the Appliance Manager GUI is pretty straightforward, stepping through several screens of information to enter data in typical wizard style. Like most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults for the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI.

    Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type.

    One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but also as the base for the default hostnames on following screens. In this screen shot, the System Name is "oda".

    When you get to the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, seen below, where you will see the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for default host names for the extra ethernet ports available for configuration as part of a Custom configuration, as seen in the 2nd screen shot below (Other Network). There is no requirement to use these host names, as you can easily edit any of the host names. This does make filling in the configuration details easier and less prone to "fat fingers" if you are OK with these host names. Here is the full list of the automatically filled in host names: oda1, oda2, oda1-vip, oda2-vip, oda-scan, oda1-ilom, oda2-ilom, oda1-net1, oda2-net1, oda1-net2, oda2-net2, oda1-net3, oda2-net3.

    Another auto-fill feature of the Appliance Manager GUI follows a common practice of deploying IP addresses for a RAC cluster in sequential order. In the screen shot below, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, sequentially from the first one I entered. As with the host names, these are not required, and can be changed to whatever your IP address values are. One note of caution though: if the first IP address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be set to the sequential default. If you don't use the sequential IP addresses, pay attention to where you click that mouse. :-)

    In the screen shot below, by entering the netmask value in the Netmask field, in this case 255.255.255.0, the gateway value was auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value.

    My last 2 screen shots illustrate that the same sequential IP address autofill and netmask-to-gateway autofill work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) for both nodes. The time these auto-fill capabilities save in entering data is nice, but from my perspective not as important as the opportunity to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.

    Read the article

  • Windows Azure Evolution &ndash; Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high-performance and highly scalable system, not only on top of the cloud platform but in the on-premise environment as well. In March 2011 Windows Azure AppFabric Caching was launched into production. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announced a brand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). The original Windows Azure AppFabric Caching was renamed Windows Azure Shared Caching.

    What's Caching (Preview)? If you have been using the Shared Caching you should know that it is constructed from a bunch of cache servers. When you want to use it, you first create a cache account from the developer portal and specify the size you want to use, which means how much memory you can use to store the data you want cached. Then you can add, get and remove items through your code via the cache URL. The Shared Caching is a multi-tenancy system which hosts all cached items across all users, so you don't know which server your data is located on. This caching mode works well and can handle most cases. But it has some problems.

    The first one is performance. Since the Shared Caching is a multi-tenancy system, all cache operations have to go through the Shared Caching gateway and then be routed to the server that has the data you are looking for. Even though there are some caches in the Shared Caching system, it still takes time to get from your cloud services to the cache service. Second, the Shared Caching service works as a black box to the developer. The only thing we know is the cache endpoint, and that's all. Some may be satisfied since they don't want to care about anything underlying, but if you need to know more and want more control, that's impossible in the Shared Caching. The last problem is price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching, we have to pay for the cache service while wasting some of our memory and CPU locally.

    Because of the issues above, Microsoft offered a new caching mode to us, which is the Caching (Preview). Instead of having a separate cache service, the Caching (Preview) leverages the memory and CPU in our cloud services (web role and worker role) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which host, or are near, our cloud applications. Without any gateway and routing, and since it is located in the same data center and same racks, it provides much higher performance than the Shared Caching. The Caching (Preview) works side-by-side with our application, initialized and run as a Windows Service in the virtual machines, invoked by the startup tasks from our roles, so we get more information about it and more control over it. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, it's free. What we need to pay is the original computing price, and the resources on each machine can be used more efficiently.

    Enable Caching (Preview). It's very simple to enable the Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project in Visual Studio and add an ASP.NET Web Role. Then open the role settings and select the Caching page.
    This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview), just check the "Enable Caching (Preview Release)" check box. Then we need to specify which mode of caching cluster we want to use. There are two kinds of caching mode, co-located and dedicated. The co-located mode means we use the memory in the instances that run our cloud services (web role or worker role). When using this mode we must specify what percentage of the memory will be used as the cache. The default value is 30%, so make sure it will not affect the role's business execution. The dedicated mode will use all memory in the virtual machine as the cache. In fact it will reserve some for the operating system, Azure hosting, etc., but it will try to use as much of the available memory as possible for the cache.

    As you can see, the Caching (Preview) is defined per role, which means all instances of the role will apply the same setting and act as a single cache pool, and you can consume it by specifying the name of the role, which I will demonstrate later. In a Windows Azure project we can have more than one role with the Caching (Preview) enabled; then we will have more caches. For example, let's say I have a web role and a worker role. For the web role I specified 30% co-located caching and for the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As in the figure above, cache 1 is contributed by the three web role instances while cache 2 is contributed by the 2 worker role instances. Then we can add items into cache 1 and retrieve them from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2, since they are isolated.

    Back in Visual Studio we specify 30% co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches; for now we just use the default one. Now we have enabled the Caching (Preview) in our web role settings. Next, let's have a look at how to consume our cache.

    Consume Caching (Preview). The Caching (Preview) can only be consumed by the roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected to from a worker role if they are in the same cloud service. But you cannot consume a Caching (Preview) from other cloud services. This is different from the Shared Caching: the Shared Caching is open to all services that have the connection URL and authentication token.

    To consume the Caching (Preview) we need to add some references into our project as well as some configuration in the Web.config. NuGet makes our life easy. Right click on our web role project and select "Manage NuGet packages", then search for the package named "WindowsAzure.Caching". In the package list install the "Windows Azure Caching Preview" package. It will download all necessary references from the NuGet repository and update our Web.config as well.

    Open the Web.config of our web role and find the "dataCacheClients" node. Under this node we can specify the cache clients we are going to use. Each cache client uses the role name to identify and find the cache. Since we only have this one web role with the Caching (Preview) enabled, I pasted the current role name into the configuration. Then, in the default page I will add some code to show how to use the cache.
    I will have a textbox on the page where the user can input his or her name, then press a button to generate the email address for him/her. In the backend code I will check if this name has been added to the cache. If yes, I will return the email immediately. Otherwise, I will sleep the thread for 2 seconds to simulate the latency, then add it into the cache and return it back to the page.

    protected void btnGenerate_Click(object sender, EventArgs e)
    {
        // check if name is specified
        var name = txtName.Text;
        if (string.IsNullOrWhiteSpace(name))
        {
            lblResult.Text = "Error. Please specify name.";
            return;
        }

        bool cached;
        var sw = new Stopwatch();
        sw.Start();

        // create the cache factory and cache
        var factory = new DataCacheFactory();
        var cache = factory.GetDefaultCache();

        // check if the name specified is in cache
        var email = cache.Get(name) as string;
        if (email != null)
        {
            cached = true;
            sw.Stop();
        }
        else
        {
            cached = false;
            // simulate the latency
            Thread.Sleep(2000);
            email = string.Format("{0}@igt.com", name);
            // add to cache
            cache.Add(name, email);
        }

        sw.Stop();
        lblResult.Text = string.Format(
            "Cached = {0}. Duration: {1}s. {2} => {3}",
            cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
    }

    The Caching (Preview) can be used on the local emulator, so we can just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it was not in the cache. But if I re-enter my name it comes back at once, from the cache. Since the Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances, tweak some code to show the current instance ID on the page, and have another try. Then we can see the cache can be retrieved even though it was added by another instance.

    Consume Caching (Preview) Across Roles. As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to our cloud solution and add the same code to its default page. In the Web.config we add a cache client pointing to the one enabled in the first role, by specifying its role name here. Then we start the solution locally and go to web role 1, specify the name and let it generate the email for us. Since there's no cache entry for this name it will take about 2 seconds, but it will save the email into the cache. Then we go to web role 2 and specify the same name. You can see it retrieves the email saved by web role 1 and returns it very quickly. Finally, we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to the real Azure account.

    More Awesome Features. As an in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is high availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache cluster fails, the data it stored is lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache will be backed up automatically. If so, the data belonging to this named cache will be replicated on another instance of the role.
    Then, if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup, just open the Caching page in Visual Studio and, in the named cache you want to enable backup for, change the Backup Copies value from 0 to 1. The only valid values of Backup Copies are 0 and 1: "0" means no backup and no high availability, while "1" means high availability is enabled, with the data backed up into another instance.

    When using the high availability feature, there are some things we need to keep in mind. First, high availability does NOT mean the data in the cache will never be lost under any kind of failure. For example, if we have a role with cache enabled that has 10 instances, and 9 of them fail, then most of the cached data will be lost since the primary and backup instances may fail together. But normally this will not happen, since MS guarantees that it will use an instance in a different fault domain for the backup cache. Another point is that enabling the backup means you store two copies of your data. For example, if you think 100MB of memory is OK for the cache, you need at least 200MB if you enable backup.

    Besides the high availability, the Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching does. It supports local cache with notification. It also supports both absolute and sliding window expiration types. And the Caching (Preview) supports the Memcached protocol as well. This means if you have an application based on Memcached, you can use Caching (Preview) without any code changes; what you need to do is change the configuration of how you connect to the cache. Similar to the Windows Azure Shared Caching, MS also offers out-of-the-box ASP.NET session state and output cache providers on top of the Caching (Preview).

    Summary. Caching is a very important component when we build a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effectiveness. So now we have two caching solutions in Windows Azure, the Shared Caching and the Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near, distributed cache, then you'd better use Caching (Preview).

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Comparison of the multiprocessing module and pyro?

    - by fivebells
    I use pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built-in from 2.6. Apart from that, it's hard for me to tell which is better.
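
    For the remote-communication side specifically, the standard library's multiprocessing.managers module is the piece that overlaps with pyro. Below is a minimal sketch (not from the question) of sharing a job queue over TCP with BaseManager; the host name, port and authkey are made-up placeholders.

    # Hedged sketch: remote-process communication with the stdlib
    # multiprocessing.managers module; host/port/authkey are placeholders.
    from multiprocessing.managers import BaseManager
    from queue import Queue

    job_queue = Queue()

    class JobManager(BaseManager):
        pass

    # Head-node side: expose the queue so remote workers can pull jobs from it.
    JobManager.register("get_jobs", callable=lambda: job_queue)

    def serve():
        manager = JobManager(address=("", 50000), authkey=b"cluster-secret")
        server = manager.get_server()
        server.serve_forever()  # blocks; workers connect over TCP

    # Worker side (would run on another node):
    def worker(host="head-node"):
        class RemoteJobManager(BaseManager):
            pass
        RemoteJobManager.register("get_jobs")
        manager = RemoteJobManager(address=(host, 50000), authkey=b"cluster-secret")
        manager.connect()
        jobs = manager.get_jobs()
        while True:
            job = jobs.get()           # blocks until the head node queues work
            print("processing", job)   # placeholder for the real computation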

    Read the article

  • RequestDispatcher forward between Tomcat instances

    - by MontyBongo
    I have a scenario where I have a single entry-point Servlet and further Servlets that requests are forwarded to, which undertake heavy processing. I am looking at options to distribute this load, and I would like to know whether it is possible, using Tomcat or another platform, to forward requests between Servlets sitting on different servers using a cluster-type configuration or similar. I have found some documentation on clustering Servlets and Tomcat, but from what I can see none of it indicates whether Servlet request forwarding is possible. http://java.sun.com/blueprints/guidelines/designing_enterprise_applications_2e/web-tier/web-tier5.html http://tomcat.apache.org/tomcat-5.5-doc/cluster-howto.html

    Read the article

  • OpenCV's cvKMeans2 - chosing clusters

    - by Goffrey
    Hi, I'm using cvKMeans2 for clustering, but I'm not sure how it works in general - specifically the part about choosing clusters. I thought that it sets the initial positions of the clusters from the given samples, which would mean that at the end of the clustering process every cluster would have at least one sample - i.e. the output array of cluster labels would contain the full range of numbers (for 100 clusters - numbers 0 to 99). But when I checked the output labels, I realised that some labels weren't used at all and only some were used. So, does anyone know how it works? Or how should I use the parameters of cvKMeans2 to do what I want (because I'm not sure I'm using them correctly)? I'm calling cvKMeans2 with the optional parameters as follows:

    cvKMeans2(descriptorMat, N_CLUSTERS, clusterLabels, cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 1, 0, 0, clusterCenters, 0)

    Thanks for any advice!
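
    For reference, here is a hypothetical sketch of the same kind of call through OpenCV's newer Python binding, cv2.kmeans, using random stand-in data; counting the distinct values in the returned label array is an easy way to observe the empty-cluster behaviour described above.

    # Hedged sketch using OpenCV's Python binding (cv2.kmeans), the modern
    # counterpart of cvKMeans2; the descriptor data here is random and
    # purely illustrative.
    import numpy as np
    import cv2

    N_CLUSTERS = 100
    descriptors = np.random.rand(500, 64).astype(np.float32)  # stand-in for descriptorMat

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    compactness, labels, centers = cv2.kmeans(
        descriptors, N_CLUSTERS, None, criteria, 1, cv2.KMEANS_RANDOM_CENTERS)

    # k-means does not guarantee that every centre keeps samples; empty
    # clusters simply never appear in the label array, matching the
    # behaviour described in the question.
    used = np.unique(labels)
    print("clusters actually used:", len(used), "of", N_CLUSTERS)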

    Read the article

  • ASP.NET MVC Validation of ViewState MAC failed

    - by Kevin Pang
    After publishing a new build of my ASP.NET MVC web application, I often see this exception thrown when browsing to the site:

    System.Web.Mvc.HttpAntiForgeryException: A required anti-forgery token was not supplied or was invalid. ---> System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster. ---> System.Web.UI.ViewStateException: Invalid viewstate.

    This exception will continue to occur on each page I visit in my web application until I close out of Firefox. After reopening Firefox, the site works perfectly. Any idea what's going on? Additional notes: I am not using any ASP.NET web controls (there are no instances of runat="server" in my application). If I take out the <%= Html.AntiForgeryToken() %> from my pages, this problem seems to go away.

    Read the article

  • geographical deployment Vs geo load balancing SharePoint 2010

    - by vrajaraman
    We have company-wide SharePoint portals planned for a few thousand users. Since the users and their applications (hosted in SharePoint) are distributed among different countries, we would like to compare geo deployment vs. geo load balancing. Please share your inputs. We are aware of the following options. Geo SharePoint cluster: farms at the central and regional sites, with the databases split regionally; 2 DB clusters kept in sync using log shipping, SAN sync, or SQL 2008 features like database mirroring. Vs. load balancing using URLs and some 3rd-party solution, with all farms, sites and databases centralised. Benefits we are expecting: 1. High availability. 2. Disaster recovery management. 3. Easier maintenance. I may have missed some of the points that need to be covered.

    Read the article

  • Clustering [assessment] algorithm with distance matrix as an input

    - by Max
    Can anyone suggest some clustering algorithm which can work with distance matrix as an input? Or the algorithm which can assess the "goodness" of the clustering also based on the distance matrix? At this moment I'm using a modification of Kruskal's algorithm (http://en.wikipedia.org/wiki/Kruskal%27s_algorithm) to split data into two clusters. It has a problem though. When the data has no distinct clusters the algorithm will still create two clusters with one cluster containing one element and the other containing all the rest. In this case I would rather have one cluster containing all the elements and another one which is empty. Are there any algorithms which are capable of doing this type of clustering? Are there any algorithms which can estimate how well the clustering was done or even better how many clusters are there in the data? The algorithms should work only with distance(similarity) matrices as an input.
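
    Hierarchical (agglomerative) clustering is one family of methods that works directly from a distance matrix. The sketch below is a hedged illustration using SciPy with a random placeholder matrix and an arbitrary cut threshold; the cophenetic correlation it prints is one rough way to judge how well the clustering reflects the supplied distances.

    # Hedged sketch (placeholder data) of agglomerative clustering driven
    # purely by a precomputed distance matrix, using SciPy.
    import numpy as np
    from scipy.spatial.distance import squareform
    from scipy.cluster.hierarchy import linkage, fcluster, cophenet

    # Symmetric distance matrix with a zero diagonal; here just random.
    n = 10
    a = np.random.rand(n, n)
    D = (a + a.T) / 2.0
    np.fill_diagonal(D, 0.0)

    condensed = squareform(D)               # condensed form expected by linkage
    Z = linkage(condensed, method="average")

    # Cut the dendrogram at a distance threshold; with no clear structure
    # this can legitimately yield a single cluster instead of a forced split.
    labels = fcluster(Z, t=0.5, criterion="distance")
    print("cluster labels:", labels)

    # Cophenetic correlation: one crude measure of how faithfully the
    # hierarchy preserves the original pairwise distances.
    c, _ = cophenet(Z, condensed)
    print("cophenetic correlation:", c)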

    Read the article

  • C# - Data Clustering approach

    - by Brett
    Hi all, I am writing a program in C# in which I have a set of 200 points displayed on an image. However, the points tend to cluster in various regions, and I am looking to find a way to "cluster." In other words, maybe draw a circle/ellipse around the clustered points. Has anyone seen any way to do this? I have heard about K-means clustering, but I am not sure how to implement it in C#. Any favorite implementations out there? Cheers, Brett
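
    As a hedged, language-agnostic illustration (written in Python here purely for brevity, with random placeholder points), Lloyd's k-means plus a per-cluster bounding circle is only a few lines; the same structure translates directly to C#.

    # Hedged sketch of Lloyd's k-means on 2-D points, plus a bounding circle
    # (centre, radius) per cluster; the points here are random placeholders.
    import random
    import math

    def kmeans(points, k, iterations=50):
        centers = random.sample(points, k)
        for _ in range(iterations):
            # assignment step: nearest centre by Euclidean distance
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k), key=lambda c: math.dist(p, centers[c]))
                clusters[i].append(p)
            # update step: move each centre to the mean of its members
            for i, members in enumerate(clusters):
                if members:
                    centers[i] = (sum(x for x, _ in members) / len(members),
                                  sum(y for _, y in members) / len(members))
        return centers, clusters

    def bounding_circle(center, members):
        # radius of the smallest centre-anchored circle containing the members
        radius = max(math.dist(center, p) for p in members) if members else 0.0
        return center, radius

    pts = [(random.uniform(0, 640), random.uniform(0, 480)) for _ in range(200)]
    centers, clusters = kmeans(pts, k=5)
    for c, members in zip(centers, clusters):
        print(bounding_circle(c, members))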

    Read the article

  • Tips on deploying Ror

    - by notnoop
    How can I go about deploying a Rails app on a cluster of Amazon EC2 servers? Any recommended guides? I maintain a RoR app (currently hosted on Heroku) that uses a DB and DelayedJobs). The app has a large footprint, and needs to be distributed on a cluster most likely. Any tips would be appreciated. Are there Amazon AMIs that replicate some of Heroku's features (especially DJ)? P.S. I'm quite a Ruby newbie.

    Read the article

  • AJAX with Web services and ASP.NET SessionState

    - by needhelp1
    We have an application which uses ScriptManager to generate a client-side proxy which makes AJAX calls to web services. The web services being invoked live in a separate appDomain(separate cluster altogether). The problem is that our application uses a State server for storing session. I want the web services to be able to access session also. First off, does anyone see anything wrong with the client making web service calls to a separate cluster(we're hoping this would be a better approach for scalability)? I was thinking that possibly anytime there is an update to the session dictionary in one appDomain, automatically update the session in the other appDomain also(referring to the web service appDomain, don't know how to do this, only theoretical). What do others think? Thanks!

    Read the article

  • complex mysql query problem

    - by Scarface
    Hey guys, I have a query that selects data and organizes it, but not in the correct order. What I want to do is select all the comments for a user in that week, group them by topic, then sort each topic cluster by the latest comment timestamp in that cluster. My current query selects the right data, but in a seemingly random order. Does anyone have any ideas?

    select * from (
        SELECT topic.topic_title, topic.topic_id
        FROM comments
        JOIN topic ON topic.topic_id = comments.topic_id
        WHERE comments.user = '$user'
        AND comments.timestamp > $week
        order by comments.timestamp desc
    ) derived_table
    group by topic_id

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels, and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array and one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count and I'm not clustering empty cells.)

    |-------|-------|------|
    | Jane  | Joe   |      |
    | Jack  | Jane  |      |
    |-------|-------|------|
    | Jane  | Jane  |      |
    |       | Joe   |      |
    |-------|-------|------|
    |       | Jack  | Jane |
    |       | Joe   |      |
    |-------|-------|------|

    In the above arrangement of labels distributed over nine elements, the largest cluster is the "Jane" cluster occupying the four upper left cells.

    What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment, it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment, it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system. I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency.

    Why I Don't Like It: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then implement a method that will both associate the element under inspection with a segment and then concatenate the two adjacent identical segments. On the whole, this method and the intricate hashing system it would require feel very inelegant. Additionally, I really only care about finding the large segments in the double array, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
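
    What is described here is essentially per-label connected-component labelling with 4-connectivity. The following is a hedged sketch (with a placeholder grid mirroring the table above) using a breadth-first flood fill; sorting the resulting segments by size gives the large ones directly.

    # Hedged sketch: 4-connected component labelling per label over a grid of
    # label sets; the grid below is a placeholder mirroring the example table.
    from collections import deque

    grid = [
        [{"Jane", "Jack"}, {"Joe", "Jane"}, set()],
        [{"Jane"},         {"Jane", "Joe"}, set()],
        [set(),            {"Jack", "Joe"}, {"Jane"}],
    ]

    def segments(grid):
        rows, cols = len(grid), len(grid[0])
        seen = set()        # (row, col, label) triples already assigned
        result = []         # list of (label, [cells]) segments
        for r in range(rows):
            for c in range(cols):
                for label in grid[r][c]:
                    if (r, c, label) in seen:
                        continue
                    # BFS flood fill over 4-adjacent cells sharing this label
                    component, queue = [], deque([(r, c)])
                    seen.add((r, c, label))
                    while queue:
                        cr, cc = queue.popleft()
                        component.append((cr, cc))
                        for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and label in grid[nr][nc]
                                    and (nr, nc, label) not in seen):
                                seen.add((nr, nc, label))
                                queue.append((nr, nc))
                    result.append((label, component))
        return result

    # Largest segment first; the example grid yields the 2x2 "Jane" block.
    for label, cells in sorted(segments(grid), key=lambda s: -len(s[1])):
        print(label, cells)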

    Read the article

  • Why do mainframe greybeards refer to DB2/zOS as "he"? [closed]

    - by Alex Nauda
    If you ask a DB2/zOS engine DBA a question about DB2's behavior, the DBA will refer to the DB2 engine as "he" much the way a sailor uses "she" to refer to his ship. For example: "Once you fill the freespace, DB2 still wants to keep those rows in cluster order in the tablespace. That's why he'll split that page in half, and you end up with lots of half-empty pages. That is, unless the cluster key of the row you've just inserted is the highest in the table, in which case he makes a new empty page, and he puts just your new row into that page. So I wouldn't have to do this REORG if you would just sort your input like I suggested." Does anyone know where this tradition comes from?

    Read the article

  • Distributed, persistent cache using EHCache

    - by Richard
    I currently have a distributed cache using EHCache via RMI that works just fine. I was wondering if you can include persistence with the caches to create a distributed, persistent cache. Alongside this, if the cache were persistent, would it load from the file store, then bootstrap from the cache cluster? Basically, what I want is:

    1. Cache starts
    2. Cache loads persistent objects from the file store
    3. Cache joins the distributed cluster and bootstraps as normal

    The use case behind this is having 2 identical components running on independent machines, distributing the cache to avoid losing data in the event that one of the components fails. The persistence would guard against losing all data on the rare occasion that both components fail. Would moving to another distribution method (such as Terracotta) support this?

    Read the article

  • Emacs/xterm color annoyance on Linux

    - by tgamblin
    I'm using emacs in a console window both on my local Linux box and on the login node of a remote cluster. I use emacs regularly, and I've got the foreground color set to white in my .emacs file like so: (set-foreground-color "white") (set-background-color "black") However, when I run emacs, the foreground isn't white; it's grey and very hard to read. On my Mac, emacs in a console window with the same settings shows up as proper white. But on both linux boxes, in konsole and xterm, it's grey. In case it matters, I've got TERM set to xterm-color, the desktop is running RHEL 5, and the cluster node is running RHEL 4 (CentOS). Is this some default with how Linux sets up terminal colors? How do I get white to be white? Note: this is with console emacs, not emacs under X. That's emacs -nw if you have DISPLAY set.
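
    One way to see what the terminal itself does with "white", independent of Emacs, is to print the normal and bright ANSI variants; this tiny illustrative snippet assumes nothing about the setup above.

    # Illustrative check: many terminals render ANSI colour 7 as grey and
    # reserve colour 15 (or bold+37) for true white.
    print("\033[37mcolour 7  (white, often rendered grey)\033[0m")
    print("\033[97mcolour 15 (bright white)\033[0m")
    print("\033[1;37mbold + colour 7 (classic 'bright' white)\033[0m")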

    Read the article

  • Optimal Sharing of heavy computation job using Snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with and one big job (resampling a large dataset) to share between them. I've set up multicore and a (socket) Snow cluster on each. As a high-level interface I'm using foreach. What is the optimal sharing of the job? Should I set up a Snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each server (i.e. split the job into two chunks, run them on each server and then stitch the results back together)? Basically, what is an easy way to: 1. Keep communication between servers down (since this is probably the slowest bit). 2. Ensure that the random numbers generated on the servers are not highly correlated.

    Read the article

  • Weblogic Apache plugin and session stickiness

    - by h4tech
    If two web servers are configured between a load balancer and a WebLogic cluster, will the two Apache servers maintain session stickiness? Say, for example, the load balancer forwards the first request to the 1st Apache, which in turn forwards it to the 1st WebLogic managed instance. Even if the second request from the same user is forwarded by the load balancer to the second Apache, will the second Apache be able to forward it to the 1st WebLogic managed instance which served the first request, rather than to the second WebLogic managed instance, which is not aware of the session information at all? What should ideally be the behaviour of the WebLogic Apache plugin? The catch is I don't want to enable session replication at the WebLogic server cluster. Please help.

    Read the article

  • Avoid the use of loops (for) with R

    - by albergali
    Hi, I'm working with R and I have code like this:

    i <- 1
    j <- 1
    for (i in 1:10)
      for (j in 1:100)
        if (data[i] == paths[j, 1])
          cluster[i, 4] <- paths[j, 2]

    where:
    - data is a vector with 100 rows and 1 column
    - paths is a matrix with 100 rows and 5 columns
    - cluster is a matrix with 100 rows and 5 columns

    My question is: how could I avoid the use of "for" loops to iterate through the matrix? I don't know whether apply functions (lapply, tapply...) are useful in this case. This is a problem when j=10000, for example, because execution time is very long. Thank you

    Read the article
