Search Results

Search found 1795 results on 72 pages for 'veritas cluster'.


  • Identify documents from results of Mahout clustering

    - by Tejas
    I am using Mahout to cluster text documents indexed using Solr. I used the "text" field in the documents to form vectors, then used the k-means driver in Mahout for clustering and the clusterdumper utility to dump the results. I am having difficulty understanding the dumper's output: I can see the clusters that were formed and the term vectors in those clusters, but how do I extract the documents from these clusters? I want the result to be the input documents grouped by the cluster they appear in.

    Read the article

  • FlockDB - What is it? And best use cases for it.

    - by Guru
    Just came across the FlockDB graph database. Details at github /flockDB. Twitter claims it uses FlockDB for the following: Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter. At first glance, setting it up and trying it doesn't look straightforward. Has anyone already set this up or used it? If so, please answer the following general queries. What kind of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means, though.) How is FlockDB better than other graph DBs / NoSQL DBs? Have you set up FlockDB and used it for an application? Any early advice? Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application for that.

    Read the article

  • MySQL with Java: Open connection only if possible

    - by emempe
    I'm running a database-heavy Java application on a cluster, using Connector/J 5.1.14, so I have up to 150 concurrent tasks accessing the same MySQL database. I get the following error: Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections This happens because the server can't handle that many connections, and I can't change anything on the database server. So my question is: can I check whether a connection is possible BEFORE I actually connect to the database? Something like this (pseudo code):

    check database for open connection slots
    if (slot is free) {
        Connection cn = DriverManager.getConnection(url, username, password);
    } else {
        wait ...
    }

    Cheers

    Read the article

  • J2EE/EJB + service locator: is it safe to cache EJB Home lookup results?

    - by Guillaume
    In a J2EE application, we are using EJB2 on WebLogic. To avoid losing time building the initial context and looking up the EJB Home interface, I'm considering the Service Locator pattern. But after a few searches on the web I found that even though this pattern is often recommended for InitialContext caching, there are some negative opinions about EJB Home caching. Questions: Is it safe to cache EJB Home lookup results? What will happen if one of my cluster nodes stops working? What will happen if I install a new version of the EJB without refreshing the service locator's cache?

    Read the article

  • How to optimize a SQL Server table for faster response?

    - by Thomas
    I found a table with 50 thousand records, and it takes one minute to fetch data from the SQL Server table just by issuing a simple SQL statement. There is one primary key, which means a clustered index is already there. I just do not understand why it takes one minute. Besides an index, what other ways are there to optimize a table so the data comes back faster? What do I need to do in this situation to get a faster response? Also, please tell me how to always write optimized SQL, and please describe all the optimization steps in detail. Thanks.

    Read the article

  • Horizontal Scaling of Tomcat in Microsoft Azure

    - by Fabe
    Hey everyone, I have been working on this for quite a while, but still have no conclusion. I want to do horizontal scaling of Tomcat instances in Microsoft Azure (1, 2, 3, ... Tomcat instances for one service). I read lots of articles about session replication, clustering, etc. with Tomcat. Since Azure does not support multicast, there is no easy way to cluster Tomcat. Sticky sessions are also not an option, because Azure does round-robin load balancing. Setting up two services - one with Terracotta or Apache mod_jk, and the other with Tomcat instances - seems like overkill to me (if it is even doable)... Is this even possible? Thanks in advance for reading and answering my question. Every comment/idea is highly appreciated.

    Read the article

  • Help regarding NoSQL databases like Hadoop, HBase, etc.

    - by user560370
    I am new to distributed NoSQL databases like Hadoop, Cassandra, etc. I have a few questions for which I seek expert advice: Can you list the problems/challenges one will generally face when shifting from a conventional database like MySQL to these large cluster-based databases? What are the difficulties, if any, when one needs to adapt to a newer version of these open source projects? Can you list the things which are generally stored/kept in memcached for fast rendering of a page? How can I understand the source code of open-source projects so that I can build on it and maybe give back to the community? The above questions may sound basic, but please, it's a request for the experts to answer them in detail and to the best of their abilities.

    Read the article

  • k-means based on MapReduce in Python

    - by user3616059
    I am going to write a mapper and reducer for the k-means algorithm. I think the best course of action is to put the distance calculation in the mapper and send each row to the reducer with the cluster id as the key and the row's coordinates as the value. In the reducer, updating the centroids would be performed. I am writing this in Python. As you know, I have to use Hadoop Streaming to transfer data between STDIN and STDOUT. To my knowledge, when we print (key + "\t" + value), it is sent to the reducer. The reducer receives the data and calculates the new centroids, but when we print the new centroids, they are not sent back to the mapper to compute new clusters; they just go to STDOUT, and as you know, k-means is an iterative algorithm. So my question is whether Hadoop Streaming suffers when running iterative programs, and whether we should use MRJob for iterative programs instead.
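
    A minimal sketch of the mapper/reducer pair described above, assuming the current centroids are shipped to each mapper as a small side file (for example via Hadoop Streaming's -files option) and that every input line holds comma-separated coordinates; the centroids.txt name and its "id,x,y,..." format are assumptions for illustration:

        # mapper.py - assign each input point to its nearest centroid
        # (assumes a local centroids.txt file with lines like "id,x,y,...")
        import sys

        def load_centroids(path="centroids.txt"):
            centroids = {}
            with open(path) as f:
                for line in f:
                    parts = line.strip().split(",")
                    centroids[parts[0]] = [float(v) for v in parts[1:]]
            return centroids

        def squared_distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        centroids = load_centroids()
        for line in sys.stdin:
            point = [float(v) for v in line.strip().split(",")]
            nearest = min(centroids, key=lambda c: squared_distance(point, centroids[c]))
            # emit: cluster id as key, the point's coordinates as value
            print(nearest + "\t" + ",".join(map(str, point)))

        # reducer.py - recompute each centroid as the mean of its assigned points
        # (Hadoop Streaming delivers mapper output sorted by key, so groupby works)
        import sys
        from itertools import groupby

        def parse(line):
            key, value = line.rstrip("\n").split("\t", 1)
            return key, [float(v) for v in value.split(",")]

        for cid, group in groupby((parse(l) for l in sys.stdin), key=lambda kv: kv[0]):
            points = [p for _, p in group]
            centroid = [sum(dim) / len(points) for dim in zip(*points)]
            print(cid + "\t" + ",".join(map(str, centroid)))

    Each pass is one Streaming job: a small driver script takes the reducer output, writes it back out as the next centroids file, and resubmits until the centroids stop moving. That outer loop is exactly the part Hadoop Streaming leaves to you, and it is what a framework like MRJob can manage.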

    Read the article

  • Rails: How can I log all requests which take more than 4s to execute?

    - by Fedyashev Nikita
    I have a web app hosted in a cloud environment which can be expanded to multiple web nodes to serve higher load. What I need to do is catch the situation when we get more and more HTTP requests (assets are stored remotely). How can I do that? The problem I see is that if we have more requests than the Mongrel cluster can handle, the queue will grow, and in our Rails app we can only start counting after Mongrel receives the request from the balancer. Any recommendations?

    Read the article

  • Python parallel computing: split a keyspace to give each node a range to work on

    - by MatToufoutu
    My question is rather complicated for me to explain, as I'm not really good at maths, but I'll try to be as clear as possible. I'm trying to code a cluster in Python which will generate words given a charset (i.e. with lowercase: aaaa, aaab, aaac, ..., zzzz) and perform various operations on them. I'm trying to work out how to calculate, given the charset and the number of nodes, what range each node should work on (i.e. node1: aaaa-azzz, node2: baaa-czzz, node3: daaa-ezzz, ...). Is it possible to make an algorithm that computes this, and if so, how could I implement it in Python? I really don't know how to do that, so any help would be much appreciated.
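
    A minimal sketch of one way to compute those ranges, assuming each node should get a contiguous, roughly equal slice of the keyspace; it treats each word as a base-N number (N being the charset length), so a range is just a pair of indices converted back into words:

        from string import ascii_lowercase

        def index_to_word(index, charset, length):
            """Convert a number in [0, len(charset)**length) to its word."""
            letters = []
            for _ in range(length):
                index, remainder = divmod(index, len(charset))
                letters.append(charset[remainder])
            return "".join(reversed(letters))

        def split_keyspace(charset, length, nodes):
            """Yield (first_word, last_word) per node, covering the whole keyspace."""
            total = len(charset) ** length
            per_node = total // nodes
            for n in range(nodes):
                start = n * per_node
                end = total - 1 if n == nodes - 1 else (n + 1) * per_node - 1
                yield index_to_word(start, charset, length), index_to_word(end, charset, length)

        # example: 4-character lowercase words split across 3 nodes
        for first, last in split_keyspace(ascii_lowercase, 4, 3):
            print(first, "-", last)

    Each node then only needs its own (first, last) pair, so no word list ever has to be shipped around the cluster; the last node simply absorbs the remainder when the keyspace does not divide evenly.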

    Read the article

  • DBAs say no to SQL Server DTC?

    - by NabilS
    I am trying to get our DBAs to enable DTC on a cluster of SQL Server 2005. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (which could take months!), as it is not a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this? Thanks

    Read the article

  • Use C function in C++ program; "multiply-defined" error

    - by eom
    I am trying to use this code for the Porter stemming algorithm in a C++ program I've already written. I followed the instructions near the end of the file for using the code as a separate module. I created a file, stem.c, that ends after the definition and has extern int stem(char * p, int i, int j) ... It worked fine in Xcode but it does not work for me on Unix with gcc 4.1.1--strange because usually I have no problem moving between the two. I get the error ld: fatal: symbol `stem(char*, int, int)' is multiply-defined: (file /var/tmp//ccrWWlnb.o type=FUNC; file /var/tmp//cc6rUXka.o type=FUNC); ld: fatal: File processing errors. No output written to cluster I've looked online and it seems like there are many things I could have wrong, but I'm not sure what combination of a header file, extern "C", etc. would work.

    Read the article

  • Python - a clean approach to this problem?

    - by Seafoid
    Hi, I am having trouble picking the best data structure for solving a problem. The problem is as below: I have a nested list of identity codes where the sublists are of varying length.

    li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]

    I have a file with two entries on each line: an identity code and a number.

    abc 2.93
    ghi 3.87
    lmn 5.96

    Each sublist represents a cluster. I wish to select from each sublist the id with the highest number associated with it, append that id to a new list, and ultimately write it to a new file. What data structure should the file of numbers be read into? Also, how would you iterate over that data structure to return, for each sublist, the id with the highest number? Thanks, S :-)
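
    A minimal sketch of one approach, assuming the scores file is whitespace-separated exactly as shown and that every id in the sublists appears in it; the scores.txt and output file names are just placeholders:

        def load_scores(path):
            """Read 'id number' lines into a dict mapping id -> float."""
            scores = {}
            with open(path) as f:
                for line in f:
                    identity, value = line.split()
                    scores[identity] = float(value)
            return scores

        li = [['abc', 'ghi', 'lmn'], ['kop'], ['hgi', 'ghy']]
        scores = load_scores("scores.txt")  # placeholder file name

        # for each cluster keep the id with the highest score; ids missing from
        # the file count as -infinity so they are never picked
        best = [max(cluster, key=lambda i: scores.get(i, float("-inf"))) for cluster in li]

        with open("best_ids.txt", "w") as out:
            out.write("\n".join(best) + "\n")

    A plain dict is enough here: lookups are O(1) and the selection is just max() with a key function, so no more elaborate structure is needed.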

    Read the article

  • Detecting touch area on Android

    - by HappyAppDeveloper
    Is it possible to detect every pixel being touched? More specifically, when the user touches the screen, is it possible to track all the x-y coordinates of the cluster of points touched by the user? How can I tell the difference between when users are drawing with their thumb and when they are drawing with the tip of a finger? I would like to reflect the brush difference depending on how users touch the screen, and would also like to track x-y coordinates of all the pixels being touched over time. Thanks so much in advance for any help.

    Read the article

  • Efficient job progress update in web application

    - by Endru6
    Hi, When creating a web application (Django in my case, but I think the question is more general) that administers a cluster of workers doing queued jobs, there is a need to track each job's progress. When I did this with database UPDATEs (PostgreSQL in this case), it severely hit database performance, because each UPDATE creates a new row version in the table and, in my case, only vacuuming the DB removes the obsolete rows. With 30 jobs running and reporting progress every minute, the DB may require vacuuming (which means huge slowdowns on the front end for all the employees working with the system) every 10 days. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you do progress updates from the jobs without the overhead a database implies? There are 30 worker servers, each doing 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.
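
    A minimal sketch of one way to keep progress out of PostgreSQL entirely, assuming a shared cache backend (memcached, for example) is reachable from both the workers and the front end; the key format and timeout are arbitrary choices for illustration:

        from django.core.cache import cache

        PROGRESS_TIMEOUT = 60 * 60  # seconds; stale entries simply expire

        def report_progress(job_id, percent):
            """Called by a worker: overwrites the previous value, nothing persists."""
            cache.set("job-progress-%s" % job_id, percent, PROGRESS_TIMEOUT)

        def get_progress(job_id):
            """Called by the front-end view: None means no report yet (or expired)."""
            return cache.get("job-progress-%s" % job_id)

    Because each value is overwritten in place and never touches the database, there is nothing for VACUUM to clean up; the trade-off is that progress is lost if the cache restarts, which is acceptable here since the information does not have to be persistent.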

    Read the article

  • ExecutorService to read data from a database in chunks and run processes on them

    - by TazMan
    I'm trying to write a process that reads data from a database and uploads it to a cloud datastore. How can I decide the partitioning strategy for the data? I want to query the table in chunks and process each chunk in 10 threads. Each thread will basically send its data to an individual node of a 10-node cluster in the cloud. Where in the multithreading code below should the data query go, so that it extracts the chunks and sends 10 concurrent upload requests to the cloud?

    public class Caller {
        public static void main(String[] args) {
            ExecutorService executor = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 10; i++) {
                Runnable worker = new DomainCDCProcessor(i);
                executor.execute(worker);
            }
            executor.shutdown();
            while (!executor.isTerminated()) {
            }
            System.out.println("Finished all threads");
        }
    }

    Read the article

  • Why is Hadoop tightly bound to Linux?

    - by user1676346
    I am new to Hadoop. What are the specific reasons why Hadoop is so tightly bound to Linux, and why must the cluster it runs on be homogeneous? I'm looking for really specific details that can tell me why Hadoop does not work well on Windows: are there particular libraries or scripts involved? My project is to deploy Hadoop without using Cygwin. I have already seen the article by Hayes Davis where he explains how to install Hadoop without Cygwin, but he said that there are some bugs. I might start from scratch to properly configure Hadoop on Windows, but if anyone can explain what, specifically, the reasons are that Hadoop doesn't work well on Windows, that would be very helpful.

    Read the article

  • Will a program installed in a folder function properly if I remove the write permission in Linux? [on hold]

    - by Kevin Powell
    I have a user account on a cluster (a server), and can only install programs like Python in my home folder. Since I might accidentally delete the bin, lib, share, and include folders that come with the Python installation in my home folder, I changed the permissions of those folders like this: chmod -w folder. But I am worried that when the program needs to write or delete files inside those folders, it might not work because of the removed write permission. Am I right? Or do the operations a program performs, including writing files in the folder, run with permissions different from the user's permissions? BTW, is there a way to hide the folders without changing their names?

    Read the article

  • How can I get Forever to write to a different log file every day?

    - by user1438940
    I have a cluster of production servers running a Node.JS app via Forever. As far as I can tell, my options for log files are as follows:

    1. Let Forever do it on its own, in which case it will log to ~/.forever/XXXX.log
    2. Specify one specific log file for the entire life of the process

    What I'd like to do, however, is have it log to a different file every day, e.g. 20121027.log, 20121028.log, etc. Is this possible? If so, how can it be done?

    Read the article

  • Older SAS1 hardware Vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open, so I have no problem addressing it publicly.

    A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun, nor did Sun own the IP for it. When Oracle took over, they had a problem with that, and I really can't blame them. The decision was made to cut off that JBOD and its manufacturer completely and use our own, where Oracle controlled both the IP and the manufacturing. So in the summer of 2010 the cut was made, and the 7410 and 7310 had a hardware refresh: they now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that's where we got today's current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since July of 2010.

    Now for the bad news. People who have a 7410 or 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there's just no way to get more of them, as they don't exist.

    There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and putting in newer SAS2 HBAs which can talk to the new trays. Hey, I didn't say it was a great option, I just said it's an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, and took the SAS1 JBOD, connected it to another server, formatted the drives, and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support. The other option is to just keep it as-is, as it works just fine, but you just can't expand it. Then you can get a newer 7x20 series and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • Oracle Service Registry 11gR1 Support for Oracle Fusion Middleware/SOA Suite 11g PatchSet 2

    - by Dave Berry
    As you might be aware, a few days back we released Patchset 2 (PS2) for several products in the Oracle Fusion Middleware 11g Release 1 stack, including WebLogic Server and SOA Suite. Though there was no patchset released for Oracle Service Registry (OSR) 11g, being an integral part of Fusion Middleware & SOA, OSR 11g R1 (11.1.1.2) is fully certified with this release. Below is some recommended reading before installing OSR 11g with the new PS2:

    OSR 11g R1 & SOA Suite 11g PS2 in a Shared WebLogic Domain

    If you intend to deploy OSR 11g in the same domain as the SOA Suite 11g, the primary recommendation is to install OSR 11g in its own Managed Server within the same WebLogic Domain as the SOA Suite, as the following diagram depicts. An important pre-requisite for this setup is to apply Patch 9499508 after installation. It basically replaces a registry library - wasp.jar - in the registry application deployed on your server, so as to enable co-deployment of OSR 11g & SOA Suite 11g in the same WLS Domain. The patch fixes a java.lang.LinkageError: loader constraint violation that appears in your OSR system log and is now available for download. The second, equally important, pre-requisite is to modify the setDomainEnv.sh/.cmd file for your WebLogic Domain to conditionally set the CLASSPATH so that the oracle.soa.fabric.jar library is not included in it for the Managed Server(s) hosting OSR 11g. Both these pre-requisites and other OSR 11g topology best practices are covered in detail in the new Knowledge Base article Oracle Service Registry 11g Topology: Best Practices.

    Architecting an OSR 11g High Availability Setup

    Typically you would want to create a High Availability (HA) OSR 11g setup, especially on your production system. The following illustrates the recommended topology. The article Hands-on Guide to Creating an Oracle Service Registry 11g High-Availability Setup on Oracle WebLogic Server 11g on OTN provides step-by-step instructions for creating such an active-active HA setup of multiple OSR 11g nodes with a Load Balancer in an Oracle WebLogic Server cluster environment.

    Additional Info

    The OSR Home Page on OTN is the hub for OSR and is regularly updated with the latest information, articles, white papers, etc. For further reading, this FAQ answers some common questions on OSR. The OSR Certification Matrix lists the Application Servers, Databases, Artifact Storage Tools, Web Browsers, IDEs, etc. that OSR 11g is certified against. If you hit any problems during OSR 11g installation, design time or runtime, the first place to look is the logs. To find more details about which logs to check when and where, take a look at Where to find Oracle Service Registry Logs? Finally, if you have any questions or problems, there are various ways to reach us - on the SOA Governance forum on OTN, on the Community Forums, or by contacting Oracle Support.

    Yogesh Sontakke and Dave Berry

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types

    HPC has 3 types of jobs http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx
    · Task Flow – vanilla sequence
    · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input
    · MPI – message passing between master & slave tasks

    But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways: manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long running tasks to specific compute clusters – or semi-automatically – set threshold params for scheduling.

    How – Control Job Scheduling

    In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface - http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx This really allows you to control how a job is run – you can access & tweak the following features:
    · max / min resource values
    · whether job resources can grow / shrink, and whether jobs can be pre-empted
    · whether the job is exclusive per node
    · the creator process id & the job pool
    · timestamp of job creation & completion
    · job priority, hold time & run time limit
    · re-queue count
    · job progress
    · max / min number of cores, nodes, sockets, RAM
    · dynamic task list – can add / cancel jobs on the fly
    · job counters

    When – Poll Perf Counters

    Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 Perf objects: Compute Clusters, Compute Nodes http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx You can monitor running jobs according to dynamic thresholds – use your own discretion:
    · percentage processor time
    · number of running jobs
    · number of running tasks
    · total number of processors
    · number of processors in use
    · number of processors idle
    · number of serial tasks
    · number of parallel tasks

    Design Your Algorithms Correctly

    Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:
    · Branching factor - http://en.wikipedia.org/wiki/Branching_factor - dynamically optimize the number of children per node
    · Cutoffs to prevent explosions - http://en.wikipedia.org/wiki/Limit_of_a_sequence - not all functions converge after n attempts; you also need a threshold of "good enough", diminishing returns
    · Heuristic shortcuts - http://en.wikipedia.org/wiki/Heuristic - sometimes an exhaustive search is impractical and shortcuts are suitable
    · Pruning - http://en.wikipedia.org/wiki/Pruning_(algorithm) - remove / de-prioritize unnecessary tree branches
    · Avoid local minima / maxima - http://en.wikipedia.org/wiki/Local_minima - sometimes an algorithm can't converge because it gets stuck in a local saddle; try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
    · Watch out for rounding errors - http://en.wikipedia.org/wiki/Round-off_error - multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon, avoid floating point errors, truncations, approximations

    Happy Coding!

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community
    Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to.

    Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel
    "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference."

    The Return of Oracle Wikis: Bigger and Better | @oracletechnet
    The Oracle Wikis are back - this time, with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform.

    Cloud Migration Lifecycle | Tom Laszewski
    Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle.

    Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14
    Architecture all day. Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited.

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    Live Webcast: New Innovations in Oracle Linux
    Date: Tuesday, November 15, 2011. Time: 9:00 AM PT / Noon ET. Speakers: Chris Mason, Elena Zannoni.

    People in glass futures should throw stones | Nicholas Carr
    "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation."

    Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod
    Jeevak Kasarkod's overview of a new paper from the OpenGroup and the SABSA Institute "which delves into the incorporation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF."

    Cloud Computing at the Tactical Edge | Grace Lewis - SEI
    Lewis describes the SEI's work with cloudlets, "lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets."

    Simplicity Is Good | James Morle
    "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability."

    Mainframe as the cloud? | Tom Laszewski
    There's nothing new about using the mainframe in the cloud, says Laszewski.

    Let Devoxx 2011 begin! | The Aquarium
    The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract:

    "… As we build the Technical Computing initiative, we will invest in three core areas:

    1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

    2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.

    3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …"

    Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.

    Read the article

  • You do not need a separate SQL Server license for a Standby or Passive server - this Microsoft White Paper explains all

    - by tonyrogerson
    If you were in any doubt at all that you need to license Standby / Passive Failover servers, then the White Paper "Do Not Pay Too Much for Your Database Licensing" will settle those doubts. I've had debates before with people thinking you can only have a single instance as a standby machine; that's just wrong. It would mean you could have a scenario where you had a 2-node active/passive cluster with database mirroring and log shipping (a total of 4 SQL Server instances) - in that setup you only need to buy one physical license, so long as the standby nodes have the same or fewer physical processors (cores are irrelevant). So next time your supplier suggests you need a license for your standby box, tell them you don't, and educate them by pointing them to the white paper. For clarity I've copied the extract below from the White Paper.

    Extract from "Do Not Pay Too Much for Your Database Licensing"

    Standby Server
    Customers often implement a standby server to make sure the application continues to function in case the primary server fails. The standby server continuously receives updates from the primary server and will take over the role of primary server in case of failure in the primary server. Following are comparisons of how each vendor supports standby server licensing.

    SQL Server
    Customers do not need to license the standby (or passive) server provided that the number of processors in the standby server is equal to or less than those in the active server.

    Oracle DB
    Oracle requires the customer to fully license both active and standby servers even though the standby server is essentially idle most of the time.

    IBM DB2
    IBM licensing on a standby server is quite complicated and is different for every edition of DB2. For Enterprise Edition, a minimum of 100 PVUs or 25 Authorized Users is needed to license a standby server.

    The following graph compares prices based on a database application with two processors (dual-core) and 25 users with one standby server.

    [chart snipped]

    Note: All prices are based on newest Intel Xeon Nehalem processor database pricing for purchases within the United States and are in United States dollars. Pricing is based on information available on vendor Web sites for Enterprise Edition.

    Microsoft SQL Server Enterprise Edition
    25 users (CALs) x $164 / CAL + $8,592 / Server = $12,692 (no need to license standby server)

    Oracle Enterprise Edition (base license without options)
    Named User Plus minimum (25 Named Users Plus per Core) = 25 x 2 = 50 Named Users Plus x $950 / Named Users Plus x 2 servers = $95,000

    IBM DB2 Enterprise Edition (base license without feature pack)
    Need to purchase 125 Authorized Users (400 PVUs / 100 PVUs = 4 x 25 = 100 Authorized Users + 25 Authorized Users for standby server) = 125 Authorized Users x $1,040 / Authorized Users = $130,000

    Read the article
