Search Results

Search found 64047 results on 2562 pages for 'oracle application architecture'.


  • Patch Set 11.2.0.2 for Win32 and Win64 now available

    - by Mike Dietrich
    Oracle Database Patch Set 11.2.0.2 for Windows (Patch: 10098816) is now available for download from support.oracle.com: Oracle Database 11.2.0.2 Patch Set for Windows 32-bit and Oracle Database 11.2.0.2 Patch Set for Windows 64-bit. Please keep in mind:
    - It's a full install - you don't have to download 11.2.0.1 first, you can start right with 11.2.0.2.
    - You'll get it only from support.oracle.com - no download from OTN or eDelivery, as this is a patch set.
    - By default the installation goes into a separate %ORACLE_HOME% - and this is our strong recommendation. If you'd like to install into your existing 11.2.0.1 %ORACLE_HOME%, you'll have to detach your 11.2.0.1 home from the OUI inventory first (runInstaller -detachHome ORACLE_HOME=c:\orahomes\11.2.0), save the contents of the existing home's \network\admin and \database directories, clean up, install 11.2.0.2, and copy the saved \network\admin and \database content back.
    Btw, Oracle Database Patch Set 10.2.0.5 for HP-UX (Patch: 8202632) is also available for download as of today.

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    - Java Magazine - November/December 2011 - by and for the Java Community: Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to.
    - Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel: "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference."
    - The Return of Oracle Wikis: Bigger and Better | @oracletechnet: The Oracle Wikis are back - this time with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform.
    - Cloud Migration Lifecycle | Tom Laszewski: Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle.
    - Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14: Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited.
    - SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles: This week on the Architect Home Page on OTN.
    - Live Webcast: New Innovations in Oracle Linux - Date: Tuesday, November 15, 2011; Time: 9:00 AM PT / Noon ET; Speakers: Chris Mason, Elena Zannoni.
    - People in glass futures should throw stones | Nicholas Carr: "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation."
    - Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod: Jeevak Kasarkod's overview of a new paper from The Open Group and the SABSA Institute, "which delves into the incorporation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF."
    - Cloud Computing at the Tactical Edge | Grace Lewis - SEI: Lewis describes the SEI's work with cloudlets, "lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets."
    - Simplicity Is Good | James Morle: "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability."
    - Mainframe as the cloud? | Tom Laszewski: There's nothing new about using the mainframe in the cloud, says Laszewski.
    - Let Devoxx 2011 begin! | The Aquarium: The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Introducing the ADF Desktop Integration Troubleshooting Guide

    - by Juan Camilo Ruiz
    Since the addition of ADF Desktop Integration to the ADF Framework, a number of customers, internal and external, have started extending their applications with use cases built around integration with MS Excel. In an effort to share the knowledge collected since the product came out, we are happy to launch the ADF Desktop Integration Troubleshooting Guide, where users can find an actively maintained collection of best practices for working through issues with ADF Desktop Integration. Be sure to bookmark the link and check it out: plenty of scenarios are covered, and more will be added as we continue identifying them.

    Read the article

  • How to instrument existing ASP.NET application?

    - by jkohlhepp
    We have several highly complex ASP.NET web applications that are used internally by hundreds of users. We are trying to figure out which areas of the applications to invest in to improve functionality, but we aren't sure which screens/features are more heavily used. So, ideally, I'd like to find a way to add a layer of instrumentation to the applications that gathers metrics on which buttons are being clicked, which text boxes are being used, etc. Are there any products / open source apps out there that will do this sort of instrumentation for ASP.NET? Obviously I could do it myself manually by going into the code and injecting logging statements everywhere, but that would be a significant amount of work and hard to accomplish.

    Read the article

  • Cumulative Feature Overview Tool

    - by Matthew Haavisto
    The popular Cumulative Feature Overview Tool now has a new column that indicates whether functionality was introduced in a bundle or maintenance pack. The CFO tool helps you plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality that have become available between your starting and target releases. You simply identify the products you own, your existing release, and your target implementation release. With a single click, the tool quickly produces a customized set of high-level, concise descriptions of features developed between your starting and target releases. The CFO is available for PeopleTools as well as PeopleSoft applications.

    Read the article

  • Oracle Technológia Fórum, May 5, 2010

    - by Fekete Zoltán
    Tomorrow, May 5, there will also be an Exadata/Database Machine presentation (by me). Among other things, I will explain how Database Machine and Exadata environments can be patched: the database, Exadata, and the other components. The Oracle Technológia Fórum event takes place on Wednesday, May 5, 2010. Please come and ask questions.

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than just having the SQL we override defined statically in the interface design, we will have it configurable via a variable….at runtime. Imagine you have a well defined interface shape that you want to be fulfilled, and that shape can be satisfied from a number of different sources - that is what this allows: the ability for one interface to consume data from many different places using variables. The cool thing about ODI’s reference API is that this can be fantastically flexible and useful. When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or, to be precise, can be prompted) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name etc. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, the following Groovy snippet first executes the scenario against the EMP datastore in the SCOTT model, and then against the OTHERS datastore in the BOB model:
        m = new Properties();
        m.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@>');
        s = new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2 = new Properties();
        m2.put("DEMO.SQLSTR", 'select empno, deptno from <@=odiRef.getObjectName("L","OTHERS", "BOB","D")@>');
        s2 = new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);
    You’ll need a patch to 11.1.1.6 for this type of capability - thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • SNEAK PEEK: New Silverlight application themes

    'Twas the week before MIX, when all through the tubes
    Not a developer was sleeping, not even the noobs.
    The laptops were paved, removed of their glitz
    In hopes that they soon will get some new bits.
    A developer was coding, building an app
    Trying to build the next greatest XAP
    Battleship gray?! Now that's obscene
    Check our designers' latest theme
    Okay, so I'm not going to win any poetry awards. Our UX design team for Silverlight has been thinking about app building a lot this past year,...

    Read the article

  • CPU and Motherboard clock speeds

    - by NZHammer
    I have been doing some reading about CPU clock speeds and how CPU clock speeds are calculated. After reading several articles, I have come to the understanding that your CPU clock speed is determined by: CPU clock speed = CPU multiplier x mobo clock speed. A few questions came up after reading this which I cannot seem to find the answer to anywhere:
    1. If the CPU clock speed is dependent upon the mobo clock speed, then how is the clock speed of the CPU predetermined upon buying the CPU (i.e. written on the box without knowing what mobo is being used)?
    2. After installation, does the CPU adjust its multiplier based upon the mobo clock speed to achieve advertised speeds? For example, if the CPU clock speed is advertised at 2.4GHz and the mobo clock speed is 100MHz, will the multiplier be automatically set to 24x?
    3. Why does mobo clock speed seem to not be very important / talked about? For example, when I search on Newegg, mobo clock speed never seems to be listed. When I search enthusiast forums and overclocking forums, mobo clock speed is rarely mentioned. To me, it seems like the mobo clock speed would be pretty important. If I am understanding things correctly, a lower mobo clock speed means that your CPU must work harder to achieve advertised clock speeds.
    I guess that I should stop there with the questions for now, as I may be asking my questions based on incorrect assumptions. Thanks!
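
    The relationship in the question is just multiplication, so a small worked example helps. The following Java sketch uses only the numbers taken from the question itself (nothing here is measured from real hardware) to show how an advertised 2.4 GHz figure falls out of a 100 MHz base clock and a 24x multiplier, and how a different base clock would shift the effective frequency:

        public class ClockSpeed {
            public static void main(String[] args) {
                double baseClockMhz = 100.0; // motherboard/base clock from the question
                int multiplier = 24;         // CPU multiplier from the question

                // Effective CPU frequency = base clock x multiplier
                double cpuMhz = baseClockMhz * multiplier;
                System.out.printf("%.0f MHz x %d = %.1f GHz%n",
                        baseClockMhz, multiplier, cpuMhz / 1000.0);

                // A different base clock (hypothetical value) changes the result,
                // which is why advertised speeds assume the standard base clock.
                double alternativeBaseMhz = 133.0;
                System.out.printf("%.0f MHz x %d = %.1f GHz%n",
                        alternativeBaseMhz, multiplier,
                        alternativeBaseMhz * multiplier / 1000.0);
            }
        }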

    Read the article

  • Feature (de)activation error “The web or site was not found” and Application Pool

    - by panjkov
    I am using the Microsoft IW Demo VM (2010-10A) for my experiments related to SharePoint, in all cases when I don't have time (read: when I'm lazy) to create a complete SharePoint dev environment. Problem: this particular time I was playing around with site-scoped features and a newly created site collection. So here is my workflow:
    1. Create a feature with a feature receiver
    2. Deploy to the site collection from Visual Studio using the "No Activation" deployment profile
    3. Activate the feature from the "Site Collection Features" interface...(read more)

    Read the article

  • 'Development dashboard' web application

    - by espais
    Hi all, I am not sure if something like this exists ready-made, out of the box. I currently have some web space that I use for various projects, and I would like to set up an area for some friends and me to develop web applications together. My ideal setup would be to create a folder, say, webdev.domain.com. We could all go to this domain, log in, and then be able to set up new applications, pick which language will be used, set up database tables, allow HTML-based file uploading, and create sub-folders to basically have a test bed for the applications. In retrospect, it seems like I'm describing a limited version of cPanel. I could come up with something in Drupal I'm sure, but I don't want to have to really spend time configuring much. Like I said, I want to install it and have minimal configuration. Does something like this exist (preferably open source)?

    Read the article

  • Launching a URL from an OOB Silverlight Application

    So I'm working on this code browser, mostly to help me with my fading memory. In the app I want to be able to launch a URL. So I do my normal thing and put in a line of code that looks like this:
        System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(ThisURI), "_blank");
    I was aghast when I realized that this didn't work - for that matter, it didn't even blow up... grr... but with a bit of research I found that the hyperlink button worked, so I ended up making a little class like this: public class MyHyperLink...

    Read the article

  • C# window application : How to validate mobile no.

    - by SAMIR BHOGAYTA
    // First: simple method
    private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
    {
        if (char.IsDigit(e.KeyChar) == true)
        {
            // The comparison operator was lost in the original post; >= 10 is
            // assumed here so an 11th digit is rejected (an Indian mobile
            // number is 10 digits long).
            if (textBox1.Text.Length >= 10)
            {
                MessageBox.Show("Invalid Indian Mobile Number !!");
                txtPhone.Focus();
            }
        }
    }

    // With the help of JavaScript
    function phone_validate(phone)
    {
        var phoneReg = /^((\+)?(\d{2}[-]))?(\d{10}){1}?$/;
        if (phoneReg.test(phone) == false)
        {
            alert("Phone number is not yet valid.");
        }
        else
        {
            alert("You have entered a valid phone number!");
        }
    }

    Read the article

  • The Oscar of Java Programming

    - by Tori Wieldt
    Why bother nominating a peer, yourself or your company for a Duke's Choice Award? I asked Duke's Choice Award winner Fabiane Nardon, whose team won in 2005 for the Healthcare Information System they created for the Brazilian government, what it was like winning the award and if it had any impact on her career. Here's what she told me:
    1) What was it like to win a Duke's Choice Award?
    For me it was like winning an Oscar or a Grammy :-) I think that for a Java developer, a Duke's Choice Award is probably the highest award you can get, so it was really an honor. We had an amazing team working on that project and the team really deserved it. We were all very happy when we got that email with the announcement. That moment was one of the most important moments of my career.
    2) What benefits have you gotten from being a "Duke's Choice Award Winner"?
    I think the most important benefit you get from winning a Duke is the fact that you become known by your peers. This opens many doors, since you are approached by more people, get invitations to speak at more conferences, you meet people with the same technical interests you have and so on. I certainly benefited a lot from it. We were lucky that in 2005, when we got our award, the winners were featured in the JavaOne keynote, with short documentaries produced about each one. So, we could be on the stage and talk a little about the project. We got lots of press at the time. We see today's winners benefiting a lot from the press coverage.
    3) How is the Brazilian Healthcare Information System project doing today?
    Still running and getting new features every year. I'm not involved in the project anymore, but there are good people taking care of it. We opened the code from the beginning, so different cities could use and add features to it. There are many new developers working on that code base right now and I hope they can take the whole system to a new level.
    4) What are you up to these days?
    I worked in the healthcare field for many years and a few years ago I decided that it was time to move on and take the experience I got designing large scale and mission critical systems to other fields. Since then I have been working with high access internet applications. I also co-founded ToolsCloud, a company that provides a development environment with open source tools in the cloud. We just launched ToolsCloud in the USA, so other companies can get the same bundle of tools, hassle free, that several companies are successfully using in Brazil. Besides that, right now I'm personally working on the coolest project I ever worked on. It combines several technical challenges with a good dose of social impact. We should launch it in the second half of the year, and I should keep it a secret for now. Hopefully it will be useful to many people and disruptive enough to maybe get us a new Duke's Choice Award. Who knows?
    Read more about Fabiane in the "Heroes of Java" series by Markus Eisele. Her Twitter handle is @FabianeNardon. The Duke's Choice Awards celebrate extreme innovation in the world of Java technology. Nominate an individual, a group or company who show the best in Java innovation. Nominate via the easy online form at www.Java.net/dukeschoice. Nominations are open until June 15, 2012.

    Read the article

  • Who Do You Turn To for Your Consumer Goods Sales and Marketing Needs

    - by ruth.donohue
    As a sales or marketing executive, you want the best software for managing your marketing, demand generation, trade promotion, customer/volume planning, and retail execution/monitoring activities and analysis. However, working with niche software vendors can result in a very disjointed user and support experience. It would be ideal to have just one end-to-end solution that could manage and optimize each of these processes...but is that just wishful thinking? Read this Gartner article to find out more!

    Read the article

  • Launch an arbitrary application with a specific icon

    - by Camilo Martin
    I'm thinking about customizing my Windows 7 application icons so that they all follow a specific theme (Token-like icons). This would involve using Resource Hacker (or similar) to patch each .exe. Not only would this be tiresome, it would also have to be redone after each update of each application (never mind that some applications could break just because they were tampered with). Instead, is there any way to launch an application with a specific icon? Ideally it would be something from the command line (so I can make a shortcut), like this:
        launchwithicon.exe --app C:\myapp.exe --icon C:\myicon.ico
    Note that while it is possible to do something similar by setting the taskbar to "always combine, hide labels", I do not like this approach and instead am looking for something that works without combining the taskbar icons.

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have. It's unlimited. In some cases, there may be content that goes through lots of changes, but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome. And going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way through the Archiver.
    The Export Query criteria in Archiver includes a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...). But you can't really use that field to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction. The very latest revision is always 0. The previous revision to that is 1. The previous revision to that is 2. And so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. Now as older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then when you run that archive export, you can choose to delete and remove those revisions.
    Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The Idc Command executable is located within the \bin\ directory:
        $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log
    In this example, our IdcCommand file to run the export and do the deletions would look like:
        IdcService=EXPORT_ARCHIVE
        aArchiveName=DeleteOlderRevisions
        aDoDelete=1
        IDC_Name=idc
        dataSource=RevisionIDs
        <<EOD>>
    You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import them if you need to recover them). For more information about Idc Command, see the Idc Command Reference Guide.
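
    If you prefer to drive the same export from a scheduled Java job rather than a shell script, a minimal sketch is shown below. It only automates what the post already describes: it writes the IdcCommand data file shown above and then invokes the IdcCommand executable. The install path used for the executable is hypothetical - adjust it to wherever your content server's \bin\ directory actually lives.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class DeleteOlderRevisionsJob {
            public static void main(String[] args) throws IOException, InterruptedException {
                // The same service call shown in the post, one entry per line
                String commandData = String.join(System.lineSeparator(),
                        "IdcService=EXPORT_ARCHIVE",
                        "aArchiveName=DeleteOlderRevisions",
                        "aDoDelete=1",
                        "IDC_Name=idc",
                        "dataSource=RevisionIDs",
                        "<<EOD>>");
                Path commandFile = Paths.get("DeleteOlderRevisions.txt");
                Files.write(commandFile, commandData.getBytes());

                // Hypothetical install location of the IdcCommand executable
                ProcessBuilder pb = new ProcessBuilder(
                        "C:\\oracle\\ucm\\server\\bin\\IdcCommand.exe",
                        "-f", commandFile.toString(),
                        "-u", "sysadmin",
                        "-l", "delete_revisions.log");
                pb.inheritIO();
                int exitCode = pb.start().waitFor();
                System.out.println("IdcCommand finished with exit code " + exitCode);
            }
        }

    Scheduling this class with the Windows Task Scheduler or cron achieves the same effect as the OS-level scheduling the post mentions.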

    Read the article

  • How-to dynamically filter model-driven LOV

    - by Frank Nimphius
    Often developers need to filter a LOV query with information obtained from an ADF Faces form or elsewhere. The sample below shows how to define a launch popup listener, configured on the launchPopupListener property of the af:inputListOfValues component, to filter a list of values.
        <af:inputListOfValues id="departmentIdId"
            value="#{bindings.DepartmentId.inputValue}"
            model="#{bindings.DepartmentId.listOfValuesModel}"
            launchPopupListener="#{PopupLauncher.onPopupLaunch}" … >
            …
        </af:inputListOfValues>
    A list of values is queried using a search binding that gets created in the PageDef file of a view when a list of values component gets added. The managed bean code below looks this search binding up and then adds a view criteria that filters the query. Note: there is no public API yet available for the FacesCtrlLOVBinding class, which is why I use the internal package class in the example.
        public void onPopupLaunch(LaunchPopupEvent launchPopupEvent) {
          BindingContext bctx = BindingContext.getCurrent();
          BindingContainer bindings = bctx.getCurrentBindingsEntry();
          FacesCtrlLOVBinding lov =
              (FacesCtrlLOVBinding) bindings.get("DepartmentId");
          ViewCriteriaManager vcm =
              lov.getListIterBinding().getViewObject().getViewCriteriaManager();
          //make sure the view criteria is cleared
          vcm.removeViewCriteria(vcm.DFLT_VIEW_CRITERIA_NAME);
          //create a new view criteria
          ViewCriteria vc =
              new ViewCriteria(lov.getListIterBinding().getViewObject());
          //use the default view criteria name "__DefaultViewCriteria__"
          vc.setName(vcm.DFLT_VIEW_CRITERIA_NAME);
          //create a view criteria row for all queryable attributes
          ViewCriteriaRow vcr = new ViewCriteriaRow(vc);
          //for this sample I set the query filter to DepartmentId 60.
          //You may determine it at runtime by reading it from a managed bean
          //or the binding layer
          vcr.setAttribute("DepartmentId", 60);
          //also note that the view criteria row consists of all attributes
          //that belong to the LOV list view object, which means that you can
          //filter on multiple attributes
          vc.addRow(vcr);
          lov.getListIterBinding().getViewObject().applyViewCriteria(vc);
        }
    Note: instead of using the vcm.DFLT_VIEW_CRITERIA_NAME name you can also define a custom name for the view criteria.
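
    As the comment in the listener points out, the hard-coded value 60 would normally be determined at runtime, for example from a managed bean. A minimal sketch of that lookup is below; it uses the standard JSF expression API, and the bean and property names (#{deptFilterBean.departmentId}) are made up for illustration - substitute whatever actually holds your filter value.

        import javax.el.ELContext;
        import javax.el.ExpressionFactory;
        import javax.el.ValueExpression;
        import javax.faces.context.FacesContext;

        // Helper inside the same managed bean as onPopupLaunch()
        private Object resolveDepartmentFilter() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ExpressionFactory exprFactory =
                fctx.getApplication().getExpressionFactory();
            // Evaluate a (hypothetical) EL expression that holds the filter value
            ValueExpression ve = exprFactory.createValueExpression(
                elctx, "#{deptFilterBean.departmentId}", Object.class);
            return ve.getValue(elctx);
        }

    The returned value would then replace the literal 60 in the vcr.setAttribute("DepartmentId", ...) call.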

    Read the article

  • Entity Framework and Plain Old CLR Objects in an ASP.Net application

    - by nikolaosk
    This is going to be the sixth post of a series of posts regarding ASP.Net and the Entity Framework and how we can use Entity Framework to access our datastore. You can find the first one here, the second one here, the third one here, the fourth one here and the fifth one here. I have a post regarding ASP.Net and EntityDataSource. You can read it here. I have 3 more posts on Profiling Entity Framework applications. You can have a look at them here, here and here. In this post I will be looking...(read more)

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary
    The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What's New in MySQL Cluster 7.2
    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper.
    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use
    One of the key enhancements delivered in MySQL Cluster 7.2 is the extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions
    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:
    1) Local Data Manager threads (ldm). Note - these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads
    Each of these thread types is discussed in more detail below.
    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.
    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.
    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release.
    To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain. The schema management thread has little load, so it is implemented with a single thread.
    The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster.
    The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 to a legacy mode that does not use any of the send threads - useful for those workloads that are most sensitive to latency.
    The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements
    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.
    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1
    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper - coming soon, so stay tuned!
    In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion
    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
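
    The thread counts and ratios above (up to 16 ldm threads, roughly 1 tc thread per 2 to 4 ldm threads, up to 8 recv and 8 send threads, and single main, rep and io threads) can be turned into a starting-point layout mechanically. The Java sketch below simply encodes those ratios from the article and prints a candidate configuration string in the style of the data node ThreadConfig parameter; the parameter name and exact syntax are an assumption here, so verify them against the MySQL Cluster documentation for your release before using anything like this.

        public class ThreadLayout {

            // Suggest a data node thread layout from the ratios described above.
            static String suggest(int ldmThreads) {
                int ldm = Math.min(Math.max(ldmThreads, 1), 16); // 7.2 allows up to 16 LDM threads
                int tc = Math.max(1, ldm / 2);                   // ~1 TC thread per 2 LDM threads
                int recv = Math.min(8, Math.max(1, ldm / 2));    // up to 8 receive threads
                int send = Math.min(8, Math.max(1, ldm / 2));    // up to 8 send threads
                // Schema management (main), replication (rep) and IO stay single-threaded here
                return String.format(
                        "ldm={count=%d},tc={count=%d},recv={count=%d},send={count=%d},"
                                + "main={count=1},rep={count=1},io={count=1}",
                        ldm, tc, recv, send);
            }

            public static void main(String[] args) {
                // Example: a data node sized for the maximum of 16 LDM threads
                System.out.println("ThreadConfig=\"" + suggest(16) + "\"");
            }
        }

    For update-heavy workloads the article suggests raising the ratio towards 3 to 4 TC threads per 4 LDM threads, which would only change the tc calculation above.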

    Read the article
