Search Results

Search found 13713 results on 549 pages for 'production environment'.


  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that were in the public domain.  As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne’s Studio for Entity Framework.  Below are my thoughts after working with the product for several weeks. My coding preference has always been maintainable code that is reusable across an enterprise’s portfolio.  Because of this, my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework. Before we get into the pros and cons, here is a summary of the main features listed for SEF: Unified Data Context, Virtual Data Access and More Powerful Data Binding. Pros The first thing that I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context.  Under RAD conditions this is set up automatically when you drop the object on your design surface.  If you are like me and want to abstract your data management into a library, it takes a little more work, but it is still acceptable and gains the same benefits. The second feature that I found beneficial is the definition of views with improved sorting and filtering.  Again, the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code. LINQ has become my friend over the last couple of years and it was great to see that ComponentOne had ensured that it remained a first-class citizen in their design.  When you look into this product yourself, I would suggest taking a dive into LiveLinq, which allows joining different data source types. As I went through discovering the features of this framework, I appreciated the number of examples that they supplied for different uses.  Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks using RAD, code-only and MVVM approaches. Cons The only area where I would really like to see improvement is the level of detail in their documentation.  Specifically, I would like to have seen some of the supporting code explained in the examples, such as what some supporting objects did, instead of having to go to the programmer’s reference. I did find some cases where existing projects had trouble determining the scope allowed to the RAD controls, but I expect this is in part end-user related. Summary Overall I found the Studio for Entity Framework capable and well thought out.  If you are already using the Entity Framework, this product will fit into your environment with little effort in return for greater flexibility and greater robustness in your solutions. Whether the $895 list price for a standard version works for you will depend on your return on investment. Smaller companies with only a small number of projects may not be able to stomach it, but you do get a full-featured product that is supported by a well-established company.  The more projects and the more code you have, the greater your return on investment will be. Personally, I intend to apply this product to some production systems and will probably have some tips and tricks in the future. del.icio.us Tags: ComponentOne,Studio for Entity Framework,Geeks With Blogs,Influencers,Product Reviews
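    For readers who have not tried code-first Entity Framework outside of the RAD designers, here is a minimal sketch of the kind of layered setup described above: a plain code-first context wrapped by a small repository class so the UI never touches the context directly. It uses only stock Entity Framework (no ComponentOne types), and the Order/StoreContext/OrderRepository names are made up purely for illustration.

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        // Hypothetical entity used only for this example.
        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
            public decimal Total { get; set; }
        }

        // Code-first context: EF infers the schema from the entity classes.
        public class StoreContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
        }

        // Thin library-level abstraction so calling code depends on the
        // repository rather than on the Entity Framework context itself.
        public class OrderRepository
        {
            public IList<Order> GetLargeOrders(decimal minimum)
            {
                using (var context = new StoreContext())
                {
                    return context.Orders
                                  .Where(o => o.Total >= minimum)
                                  .OrderByDescending(o => o.Total)
                                  .ToList();
                }
            }
        }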

    Read the article

  • Le Logiciel Libre – Omniprésent dans le secteur public

    - by gravax
    NOTE: This article served as the basis for content published in June 2011 in the magazine Acteurs Publics. Created several decades ago to meet a need to share knowledge and skills, Free Software exists under several names, originally English, of which "Free Software" and "Open Source" are the most widely used. In English, the word "free" can mean both free-as-in-freedom and free-of-charge, which created a certain confusion that does not exist in French with the word "libre". As a result, the acronym FOSS or FLOSS, for "Free, Libre, Open Source Software", is often used to remove the ambiguity. Today, free software has become omnipresent in the public sector. It answers several critical needs, including cost control; choice (of partner, of software, of features); the freedom to modify applications to adapt them to one's own needs; and the security that comes from the fact that many developers and users have been able to review the quality of the code. Another aspect very present in free software is its almost systematic adherence to industry standards, which guarantees simple, easy integration with the existing information system. There are, however, factors to take into account when selecting strategic free software. While cost is clearly a selection factor that can lead to free software, that is mainly because free software often exists in a no-cost, freely downloadable version. But this is only the tip of the iceberg. When putting software into production you will need surrounding services, including integration, where the choice of partners will be all the wider when the chosen software is popular and well known, which drives costs down thanks to healthy competition. But technical support must also be planned for. Here again, the popularity of the chosen software broadens the range of possible support providers. The choice must be made on very solid criteria, in particular the ability to commit to service levels, 24/7 availability (the country does not stop working at night or on weekends), and, possibly, geographic coverage matching the business you are in (a country like France, with its DOM and TOM overseas departments and territories, covers a large share of the planet's time zones and geographic regions). Most public services, whether in education, health, or government, already use free software. It is found on the infrastructure side, with products such as the MySQL database, much appreciated in the education world for building e-learning platforms in conjunction with other free products such as Moodle, or GlassFish, the application server prized by developers for its adherence to the Java EE 6 standard and its ease of implementation. Linux is extremely present as a free operating system in the data center, but also on the desktop. Virtualization tools such as Oracle VM, derived from Xen, are found in the data center, and VirtualBox on the developer's workstation. With such a range of solutions and tools in the free software world, Oracle brings the public sector targeted, effective answers to the needs of the market, including in terms of technical support and the associated quality of service.

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod was either failing silently, and running again later, or it was getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it renders the mobile/alternate view functionality very much broken.
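    A minimal sketch of the deferred-initialization workaround described above, assuming hypothetical type names (Bootstrapper, ErrorLoggingModule) rather than the actual POP Forums classes: the module is registered from the PreApplicationStartMethod hook, but does nothing until the first BeginRequest, by which time Application_Start() has run and the repository mappings exist.

        using System;
        using System.Web;

        [assembly: PreApplicationStartMethod(typeof(MyForums.Bootstrapper), "Start")]

        namespace MyForums
        {
            public static class Bootstrapper
            {
                public static void Start()
                {
                    // Runs before Application_Start(), so only things that do not
                    // need the repository mappings are safe to set up here.
                    HttpApplication.RegisterModule(typeof(ErrorLoggingModule));
                }
            }

            public class ErrorLoggingModule : IHttpModule
            {
                private static bool _initialized;
                private static readonly object _sync = new object();

                public void Init(HttpApplication context)
                {
                    context.BeginRequest += (sender, e) =>
                    {
                        // Deferred setup: by the first request, the repositories
                        // registered in Application_Start() are available.
                        lock (_sync)
                        {
                            if (_initialized) return;
                            _initialized = true;
                            // resolve the settings repository, wire up error logging, etc.
                        }
                    };
                }

                public void Dispose() { }
            }
        }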

    Read the article

  • April 2010 Critical Patch Update Released

    - by eric.maurice
    Hi, this is Eric Maurice. Today Oracle released the April 2010 Critical Patch Update (CPUApr2010), the first one to include security fixes for Oracle Solaris. Today's Critical Patch Update (CPU) provides 47 new security fixes across the following product families: Oracle Database Server, Oracle Fusion Middleware, Oracle Collaboration Suite, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Life Sciences, Retail, and Communications Industry Suites, and Oracle Solaris. 28 of these 47 new vulnerabilities are remotely exploitable without authentication, but the criticality of the affected components and the severity of these vulnerabilities vary greatly. Customers should, as usual, refer to the Risk Matrices in the CPU Advisory to assess the relevance of these fixes for their environment (and the urgency with which to apply the fixes). 7 of the 47 new vulnerabilities affect various versions of Oracle Database Server. None of these 7 vulnerabilities are remotely exploitable without authentication. Furthermore, none of these fixes are applicable to client-only deployments. The most severe CVSS Base Score for the Database Server vulnerabilities is 7.1. As a reminder, information about Oracle's use of the CVSS 2.0 standard can be found in Note 394487.1 (My Oracle Support subscription required). Note that this Critical Patch Update includes fixes for vulnerabilities that were publicly disclosed by David Litchfield at the BlackHat DC Conference in early February (CVE-2010-0866 and CVE-2010-0867). 5 of the 47 new vulnerabilities affect various components of the Oracle Fusion Middleware product family. The highest CVSS Base Score for these vulnerabilities is 7.5. Note that the patches for Oracle WebLogic Server are cumulative, and this Critical Patch Update therefore also includes a fix for a vulnerability (CVE-2010-0073) that was the subject of a Security Alert issued by Oracle on February 4, 2010. Customers who have not applied the previously released patch should apply today's Critical Patch Update as soon as possible. As stated at the beginning of this blog, it is also noteworthy that this Critical Patch Update provides 16 new fixes for the Sun product line. With the recent close of the Sun acquisition, both security organizations have worked diligently to align Sun's previous security practices with Oracle's. Java users know that Oracle released a Critical Patch Update for Java SE and Java For Business earlier this month (in accordance with the Java patching schedule previously published by Sun Microsystems). Please note that for the first time, the Java advisories included CVSS scores to help assess the severity of the new vulnerabilities fixed with the advisory. The rapid inclusion of the Solaris product lines in the Critical Patch Update and the extension of Oracle Software Security Assurance to Sun technologies are evidence of the flexibility of Oracle's security assurance programs. These should also result in tangible security benefits for the users of the Oracle hardware and software stack (such as a predictable patching schedule for all Oracle products).

    Read the article

  • Offre d’emploi – Job Offer - Montreal

    - by guybarrette
    I’m currently helping a client plan its management systems re-architecture and they are looking to hire a full-time .NET developer.  It’s a small 70-person company located in Old Montreal; you’ll be the sole dev there and you’ll use the latest technologies in rewriting their core systems. Here’s the job offer, translated from the original French:
    Software Designer and Senior .NET Programmer-Analyst (permanent full-time position)
    Employer: Traductions Serge Bélair inc. City: Montreal QC
    TRSB, a fast-growing translation firm with one of the most skilled and most diversified in-house teams of professionals in the Canadian translation industry, is looking to fill the following position.
    The software designer and .NET programmer-analyst will be responsible for the design, complete development and implementation of a custom turnkey solution that meets the company's needs. He or she will carry out the design, programming, documentation, testing, troubleshooting and maintenance of the company's new operations management system, which uses databases and offers great flexibility for reporting. If vendors or consultants are needed to deliver the project, he or she will be responsible for finding the required resources, handling communications with them and seeing that the work gets done. He or she will also be called upon to update and maintain the applications currently used in the company until the new application can be put to use.
    The main tasks of the senior designer and programmer-analyst we are looking for are:
    Design and develop a new operations management system based on the company's operational needs
    Find the required external and internal resources
    Handle communications and follow-up with external vendors (e.g. programmers, analysts or architects)
    Take responsibility for putting the new operations management system in place
    Resolve problems related to the new operations management system
    Provide support on weekday evenings and weekends (as needed), mainly with remote-access tools
    Keep the operations management system documentation up to date
    Perform other related tasks
    Requirements:
    Bachelor's degree in computer science or equivalent
    At least 5 years of relevant experience
    2+ years of C# programming experience
    Excellent knowledge of database-driven web application programming
    Excellent knowledge of structured development methodology and iterative programming techniques
    Ability to gather information and write analysis documents
    Technical specializations:
    Essential - Object-oriented design and programming with C#, ASP.NET, .NET Framework 3.5, AJAX
    Important - Silverlight 3, WCF, LINQ, SQL Server, Team Foundation Server
    Asset - Entity Framework, MVC, jQuery, MySQL, QuickBooks, Telerik tool suite
    Technologies used: C# 4.0, Visual Studio 2010, Team Foundation Server 2010, LINQ, ASP.NET, ASP.NET MVC, jQuery, WCF, Silverlight 4, SQL Server 2008, MySQL, QuickBooks, Telerik tool suite
    Desired qualities: bilingual (spoken and written), strong sense of responsibility, autonomy, initiative, drive to excel, leadership and decision-making skills, high motivation, thoroughness and attention to detail, good organizational skills, flexibility and the ability to adapt to change.
    Previous experience developing software with process workflows and invoicing modules, bridging databases of different types (e.g. QuickBooks and SQL) and working with computer-assisted translation tools would be a significant asset.
    Excellent working conditions: very competitive salary and benefits, and a stimulating workplace in a pleasant environment in Old Montreal. Send your resume and cover letter to [email protected]
    TRSB, 276, rue Saint-Jacques, bureau 900, Montréal (Québec) H2Y 1N3
    (In the original posting, the generic masculine is used solely to lighten the text and make it easier to read.)

    Read the article

  • CodeStock 2012 Review: Michael Eaton (@mjeaton) - 3 Simple Things for Increased Productivity

    3 Simple Things for Increased Productivity. Speaker: Michael Eaton. Twitter: @mjeaton. Blog: http://mjeaton.net/blog This was the first time I had seen Michael Eaton speak, but I had heard a lot of really good things about his speaking abilities. Needless to say, I was really looking forward to his session. He basically addressed the topic of distractions and how they can decrease or increase your productivity as a developer. He makes the case that in order to become more productive you must block/limit all distractions. For example, he covered his top distractions as a developer: social media (Twitter, Reddit, Facebook), wiki sites, phone, email, video games, and coworkers, friends and family. Michael stated that he uses various types of music to help him block out these distractions in order to get into his coding zone. While he states that music works for him, he also notes that he knows of others that cannot really work with music. I have to say I am in the latter group because I require a quiet environment in order to work. A few session attendees also recommended listening to really loud white noise or music in a language other than your own. This allows less focus to be placed on the words being sung compared to the rhythmic beats being played. I have to say that I have not tried these suggestions yet but will in the near future. However, distractions can also be beneficial to productivity in that they give your mind a chance to relax and not think about the issues at hand. He spoke highly of taking vacations and setting boundaries at work so that developers prevent the problem of burnout. One way he suggested that developers combat distractions is to use the Pomodoro technique. In his example he selects one task to do for 20 minutes and he can only do that task during that time. He ignores all other distractions until this task or time limit is complete. After it is completed he allows himself to relax and distract himself for another 5-10 minutes before his next Pomodoro. This allows him to stay completely focused on a task, and when the time is up he can then focus on other things.
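    As a toy illustration of the Pomodoro cadence described above (the 20-minute focus block and the short break are the figures from the talk; the code itself is just a made-up console sketch):

        using System;
        using System.Threading;

        class PomodoroTimer
        {
            static void Main()
            {
                TimeSpan work = TimeSpan.FromMinutes(20); // one focused block, one task only
                TimeSpan rest = TimeSpan.FromMinutes(5);  // short break before the next block

                for (int round = 1; ; round++)
                {
                    Console.WriteLine("Pomodoro " + round + ": focus on ONE task until " +
                                      DateTime.Now.Add(work).ToShortTimeString());
                    Thread.Sleep(work);
                    Console.WriteLine("Break: step away for a few minutes, then come back.");
                    Thread.Sleep(rest);
                }
            }
        }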

    Read the article

  • The 2010 Life Insurance Conference - Washington, DC

    - by [email protected]
    How ironic to be in Washington, DC on April 15 - TAX DAY! Fortunately, I avoided IRS offices and attended the much more enjoyable 2010 Life Insurance Conference, presented by LIMRA, LOMA, SOA and ACLI. This year's conference offered a variety of tracks focused on the Life Industry, including Distribution/Marketing, Administration, Actuarial/Product Development, Regulatory, Reinsurance and Strategic Management. President and CEO of the ACLI, Frank Keating, opened the event by moderating a session titled "Executive Viewpoint on New Opportunities." Guest speakers included Ted Mathas, President and CEO of NY Life, and John Walters, President and CEO of Hartford Life. Both speakers were insightful as they shared the challenges and opportunities each company faces and the key role life insurance companies play in our society and the global economy. There were several key themes that were reiterated in multiple sessions throughout the conference - the economy is on the rebound, optimism is growing, consumer spending is up and an uptick in employment is likely to follow. The threat of a double-dip recession seems to have passed. Good news for our industry, and welcomed by all in attendance. Of special interest to me, given my background, was some research shared by both The Nolan Group and Novarica in separate sessions. Both firms indicate that policy administration upgrade/replacement projects remain a top priority in 2010. Carriers continue to invest in modern technology. Modern ultra-configurable systems enable carriers to switch from a waterfall to an agile project methodology, which often entails a "culture change" within an organization. Other themes heard throughout the two-day event: Virtually all sessions focused on People, Process and Technology! Product innovation, agility and speed to market are as important as ever. Social Networks and Twitter are becoming more popular ways of communicating with both field and dispersed staff. Several sessions focused on the application, new business and underwriting process. Companies continue looking for ways to increase market agility, accelerate speed to market, address cost issues and improve service levels across the process. They recognize the need to make it easier to do business with both producers and consumers. Author and economic futurist Jeff Thredgold presented an entertaining, informative and humorous general session on Wednesday afternoon that focused on the US and global economies, financial markets and the retirement outlook. Thredgold did not disappoint anyone with his message! The Thursday morning general session was keynoted by Therese Vaughan (CEO - NAIC) and Thomas Crawford (President of C2 Group). Both speakers gave a poignant view of the recent financial crisis and discussed "Putting the Pieces Back Together." Therese spoke of the recent financial turmoil and likely changes to regulations in the financial services sector. Tom's topics focused on economic recovery and the political environment in Washington, and how that impacts our industry. Next year's event will be April 11-13, 2011 in Las Vegas. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Oracle Text query parser

    - by Roger Ford
    Oracle Text provides a rich query syntax which enables powerful text searches. However, this syntax isn't intended for use by inexperienced end-users.  If you provide a simple search box in your application, you probably want users to be able to type "Google-like" searches into the box, and have your application convert that into something that Oracle Text understands. For example, if your user types "windows nt networking" then you probably want to convert this into something like "windows ACCUM nt ACCUM networking".  But beware - "NT" is a reserved word, and needs to be escaped.  So let's escape all words: "{windows} ACCUM {nt} ACCUM {networking}".  That's fine - until you start introducing wild cards. Then you must escape only non-wildcarded searches: "win% ACCUM {nt} ACCUM {networking}".  There are quite a few other "gotchas" that you might encounter along the way. Then there's the issue of scoring.  Given a query for "oracle text query syntax", it would be nice if we could score a full phrase match higher than a hit where all four words are present but not in a phrase.  And then perhaps lower than that would be a document where three of the four terms are present.  Progressive relaxation helps you with this, but you need to code the "progression" yourself in most cases. To help with this, I've developed a query parser which will take queries in Google-like syntax, and convert them into Oracle Text queries. It's designed to be as flexible as possible, and will generate either simple queries or progressive relaxation queries. The input string will typically just be a string of words, such as "oracle text query syntax", but the grammar does allow for more complex expressions:
    word : score will be improved if word exists
    +word : word must exist
    -word : word CANNOT exist
    "phrase words" : words treated as phrase (may be preceded by + or -)
    field:(expression) : find expression (which allows +, - and phrase as above) within "field"
    So for example if I searched for +"oracle text" query +syntax -ctxcat, then the results would have to contain the phrase "oracle text" and the word syntax. Any documents mentioning ctxcat would be excluded from the results. All the instructions are in the top of the file (see "Downloads" at the bottom of this blog entry).  Please download the file, read the instructions, then try it out by running "parser.pls" in either SQL*Plus or SQL Developer. I am also uploading a test file "test.sql". You can run this and/or modify it to run your own tests or run against your own text index. test.sql is designed to be run from SQL*Plus and may not produce useful output in SQL Developer (or it may, I haven't tried it). I'm putting the code up here for testing and comments. I don't consider it "production ready" at this point, but would welcome feedback.  I'm particularly interested in comments such as "The instructions are unclear - I couldn't figure out how to do XXX", "It didn't work in my environment" (please provide as many details as possible), "We can't use it in our application" (why not?), "It needs to support XXX feature", and "It produced an invalid query output when I fed in XXXX". Downloads: parser.pls test.sql
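    As a rough illustration of the kind of conversion being automated here, the following C# sketch applies just the escaping rules mentioned above (braces around non-wildcarded words, ACCUM between terms). It is not the PL/SQL parser itself, and it ignores the +/-/phrase/field grammar and progressive relaxation.

        using System;
        using System.Linq;

        static class NaiveOracleTextQuery
        {
            // Convert "Google-like" input into a simple ACCUM query string.
            // Non-wildcarded words are wrapped in braces so reserved words like NT
            // are safe; wildcarded terms are left alone so % and _ keep their meaning.
            public static string Convert(string userInput)
            {
                var terms = userInput
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(word => word.Contains("%") || word.Contains("_")
                        ? word
                        : "{" + word + "}");
                return string.Join(" ACCUM ", terms);
            }

            static void Main()
            {
                Console.WriteLine(Convert("windows nt networking"));
                // {windows} ACCUM {nt} ACCUM {networking}
                Console.WriteLine(Convert("win% nt networking"));
                // win% ACCUM {nt} ACCUM {networking}
            }
        }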

    Read the article

  • Data validation best practices: how can I better construct user feedback?

    - by Cory Larson
    Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. I know that this topic is subjective and argumentative. I've migrated this question from StackOverflow where I originally asked it with little response. Basically, I'm looking for good resources on data validation and user feedback that results from it at a theoretical level. Topics and questions I'm interested in are: Content Should I be describing what the user did correctly or incorrectly, or simply what was expected? How much detail can the user read before they get annoyed? (e.g. Is "Username cannot exceed 20 characters." enough, or should it be described more fully, such as "The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters."?) Grammar How do I decide between phrases like "must not," "may not," or "cannot"? Delivery This can depend on the project, but how should the information be delivered to the user? Should it be obtrusive (e.g. JavaScript alerts) or friendly? Should they be displayed prominently? Immediately (i.e. without confirmation steps, etc.)? Logging Do you bother logging validation errors? Internationalization Some cultures prefer or better understand directness over subtlety and vice-versa (e.g. "Don't do that!" vs. "Please check what you've done."). How do I cater to the majority of users? I may edit this list as I think more about the topic, but I'm genuinely interested in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me.
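    For what it's worth, here is a small C# sketch of how one of the message variations from the question might be wired up with the stock DataAnnotations validator; the SignUpForm type is hypothetical and the 6/30-character limits are simply the numbers used in the example above.

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class SignUpForm
        {
            [Required(ErrorMessage = "The username cannot be empty.")]
            [StringLength(30, MinimumLength = 6,
                ErrorMessage = "The username must be at least 6 characters but cannot exceed 30.")]
            public string Username { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var form = new SignUpForm { Username = "abc" };
                var results = new List<ValidationResult>();
                var context = new ValidationContext(form, null, null);

                // validateAllProperties: true so the StringLength rule runs, not just [Required].
                Validator.TryValidateObject(form, context, results, validateAllProperties: true);

                foreach (var result in results)
                    System.Console.WriteLine(result.ErrorMessage);
            }
        }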

    Read the article

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and a look and feel familiar to those that have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis. A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address giving even more end-to-end information about a specific Tuxedo request. As well the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes monitoring the SALT gateway servers to provide detailed performance metrics about those servers. Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed on both the monitored node via environment variable or command, as well as from the console. Now all alert definitions and policy definitions are only made using the console. For alerts this means that regardless of where the alert is evaluated it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner. Finally there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application such as regions. Look for a future blog entry with more details on this as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

    Read the article

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, which is a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain-old HTML + CSS + a good helping of JQuery and JQueryUI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJB's containing the business logic. These EJB's are deployed on a cluster of Weblogic Application Servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles: Below, you can see that applications are searchable and can be accessed using the same web-based front-end as shown above. And, as can be seen below, knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform: Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Below the main FAQ page is shown, together with the About dialog: Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle Weblogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

    Read the article

  • BI Publisher - Hottest Show in Vegas

    - by mike.donohue
    Two days down, two to go. Monday was a very busy and rewarding day. Attended "XML Publisher and FSG for Beginners" given by Susan Behn and Alyssa Johnson from Solution Beacon. It was packed, standing room only ... even though it was at 8:00 am. Later in the afternoon, despite being at the same time and in conflict with other Publisher related sessions, Noelle's session, "The Reporting Platform for Applications: Oracle Business Intelligence Publisher" and my session, "Introduction to Oracle Business Intelligence Publisher" were both very well attended. Immediately following our presentations we ran the BI Publisher Hands On Lab which was great fun. The turnout was so large that unfortunately we could not accommodate everyone who came to the lab. There were as many as 5 people huddled around each of the 20 machines. All the the groups completed the 2 main exercises. Some groups even took the product for an off-road test drive. Look at all the fun we had ... For those who could not attend or want the Hands On Lab document: Hands On Lab Oracle BI Publisher Collaborate 2010.pdf Note that these lab instructions assume a specific set up and files that you may not have in your environment. You can download and install a trial license version of BI Publisher from the download page. Highly recommend taking a look at the additional Tutorials available on OTN. Big thanks to Dan Vlamis and Jonathan Clark from Vlamis Software Solutions and to the Oracle BIWA SIG for setting up these machines and getting the time and space to run this lab. It was inspiring to see all of the attendees successfully creating reports. On Tuesday morning we were up early again for a rousing session of BI Publisher Best Practices that was also, very well attended especially considering the 8 am start. Later that morning saw Ben Bruno from STR Software and two of his customers speak on the additional functionality and ROI they have achieved by using Publisher within EBS and AventX to FAX and Email Publisher generated documents. Spent the afternoon staffing the BI Technology demo pod and had a steady flow of people dropping by with questions. Having a great conference so far and looking forward to the rest of it.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.
#  cache performance  redundant operations  loop structures  performance improvement
1  x  x  15.5
2  x  2.8
3  x  x  2.5
4  x  2.1
5  x  x  2.0
6  x  5.0
7  x  5.8
8  x  6.3
9  2.2
10  x  x  3.3
Table 1 — Area of improvement and performance gains of 10 codes
The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high-level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing: do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals: do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops (from outermost to innermost) is L1: for i, for l, for k, for j; L2: for i, for l, for j, for k; L3: for i, for j, for k, for l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index

    for (j = 0; j < N; j++) {
        for (i = 0; i < M; i++) {
            r = i * hrmax;
            R = A[j];
            temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
            high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
            low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
            kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4] * r, 2.0))
                               : low * pow(R / PRM[6], PRM[5]);
            < rest of loop omitted >
        }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation of temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
        r = i * hrmax;
        TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
        R = rig[j] / 1000.;
        tmp1 = kcoeff * par[2] * beta[j] * par[4];
        tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
        tmp3 = 1.0 + (par[4] * par[4] * R * R);
        tmp4 = par[6] * par[6] / tmp2;
        tmp5 = R * R / tmp3;
        tmp6 = pow(R / par[6], par[5]);
        if ((par[3] == 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
        } else if ((par[3] == 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
        } else if ((par[3] != 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
        } else if ((par[3] != 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
        }
        for (i = 0; i < M; i++) {
            kap = KAP[i];
            r = i * hrmax;
            < rest of loop omitted >
        }
    }

Maybe not the prettiest piece of code, but it is certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (:, :, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.
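For reference, here is a minimal C sketch of the pointer-swap alternative mentioned above (and, as noted below, the approach the C codes in the study already take); the grid size, step count, and stencil update are hypothetical.

    #define N      256    /* hypothetical grid size  */
    #define NSTEPS 100    /* hypothetical step count */

    static double buf_a[N][N], buf_b[N][N];

    int main(void)
    {
        double (*old_vals)[N] = buf_a;
        double (*new_vals)[N] = buf_b;

        for (int t = 0; t < NSTEPS; t++) {
            /* compute new values from old values (placeholder update) */
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    new_vals[i][j] = 0.25 * (old_vals[i-1][j] + old_vals[i+1][j]
                                           + old_vals[i][j-1] + old_vals[i][j+1]);

            /* an O(1) pointer swap replaces the O(N*N) end-of-iteration copy */
            double (*tmp)[N] = old_vals;
            old_vals = new_vals;
            new_vals = tmp;
        }
        return 0;
    }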
The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset the data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
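A short sketch of the difference, using a hypothetical structure type: the by-value version copies the entire structure at every call, while the by-pointer version copies only an address.

    typedef struct {
        double coords[1024];   /* deliberately large payload (hypothetical) */
        int    count;
    } Particles;

    /* Pass by value: the whole structure is copied into the formal parameter. */
    double sum_by_value(Particles p)
    {
        double s = 0.0;
        for (int i = 0; i < p.count; i++) s += p.coords[i];
        return s;
    }

    /* Pass by pointer: only the address is copied; const signals read-only use. */
    double sum_by_pointer(const Particles *p)
    {
        double s = 0.0;
        for (int i = 0; i < p->count; i++) s += p->coords[i];
        return s;
    }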
Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
        do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            if (Delta > MaxDelta) MaxDelta = Delta
        enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
        do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
        enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per clock cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop

    do i = 1, n
        do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
    end do

into two disjoint loops

    do i = 1, n
        do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        end do
    end do

    do i = 1, n
        do j = 1, m
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
        end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and are not running in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is, at most, an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as for the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Silverlight Cream for March 31, 2010 -- #826

    - by Dave Campbell
In this Issue: Andrea Boschin, Radenko Zec, Andrej Tozon, Bobby Diaz, Brad Abrams, Wolf Schmidt, Colin Eberhardt, Anand Iyer, Matthias Shapiro, Jaime Rodriguez, Bill Reiss, and Lee. Shoutouts: Cigdem has a post up about her MIX10 Interviewing experiences: MIX10 SilverlightShow Interviews Ian T. Lackey has his material up from his talk Silverlight SEO at the St. Louis .Net Users Group Not Silverlight but definitely WP7 cool, Michael Klucher reports that there are New Windows Phone Samples on Creators Club Online Tim Heuer posted a survey: What tools are the minimum to get started in Silverlight? From SilverlightCream.com: A RoleManager to apply roles declaratively to user interface Andrea Boschin also has a new post at SilverlightShow discussing the use of a RoleManager in WCF RIA Services to apply user roles to elements of the UI... good stuff, Andrea. Virtualization in Silverlight 4 RC Radenko Zec has a post out at SilverlightShow where he explains UI and Data Virtualization then gives some examples of their use in Silverlight 4RC, and some issues as well. MS Word Mail Merge with Silverlight 4 COM Automation Andrej Tozon has a post up at SilverlightShow that I missed in the rush of MIX10. He's doing MailMerge with COM automation and Silverlight 4... actually pretty cool stuff and all the source! KISS and Tell - MVVM and the ViewModelLocator Bobby Diaz is blogging about a very popular subject right now: ViewModelLocator. He's not showing production code, but it's a thought... check it out. Silverlight 4 + RIA Services - Ready for Business: Validating Data I'm running behind, but Brad Abrams' next post in his series is about validating data in the business application. He also discusses setting up shared code validation. A One-stop Shopping XAML Namespace for Silverlight Client SDK Controls Wolf Schmidt at the Silverlight SDK has a post up highlighting the SL4 XAML namespace prefix. He starts with SL3 then demonstrates the feature's use in SL4. Binding a Silverlight 3 DataGrid to dynamic data via IDictionary (Updated) Colin Eberhardt has an update to his previous article of the same title. This one is a bug fix on an upgrade to SL3 and also an expansion of the previous post. Demo Apps from MIX10 on Windows Phone 7 Anand Iyer posted links to all the WP7 demos used at MIX10 and at least in the case of FourSquare, the source is on CodePlex. XAML Files for Location Visualizations in Silverlight and WPF Matthias Shapiro has graciously provided XAML for us for Silverlight and WPF for a bunch of different US maps... too cool, now we don't have to be asking 'where did you get that map?'... thanks Matthias! Theming in Windows Phone Jaime Rodriguez has a post up that deep-dives theming in general and demonstrates using it on WP7... end-user configurations and developer stuff. Space Rocks game step 7: Moving the ship It appears that in the heat of battle (blogging) I said Bill Reiss' Space Rocks game he's building is for WP7... obviously it's not, but it's a game folks... :) This is Episode 7 and he's moving the ship now. SL4(RC) RichTextBox and Access Violation Lee has some code that looks like it should work for a RichTextBox in SL4RC, and it's throwing an error... see if you have a solution for him... or is it a bug? Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • ATG Live Webcast April 5: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    - by BillSawyer
The next ATG Live Webcast covers one of the hottest topic areas in E-Business Suite Tools and Technology: Lifecycle Management. Angelo Rosado, Product Manager, ATG Development will lead you through using Oracle Enterprise Manager 12c and the latest E-Business Suite Plug-in to manage E-Business Suite systems. You can register for the Apr. 5, 2012 event at: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager
The topics covered in this webcast will be:
    Manage your EBS system configurations
    Monitor your EBS environment's performance and uptime
    Keep multiple EBS environments in sync with their patches and configurations
    Create patches for your EBS customizations and apply them with Oracle's own patching tools
Date: Thursday, April 5, 2012
Time: 8:00 AM - 9:00 AM Pacific Standard Time
Presenter: Angelo Rosado, Product Manager, ATG Development
Webcast Registration Link (Preregistration is optional but encouraged)
To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 99342
To see the presentation:
    The Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 597073984
If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And, you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests, and parses them out to “worker nodes'”. This is useful in cases such as scientific simulations, drug research, MatLab work and where other large compute loads are required. It’s not the immediate-result type computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here. The issue with HPC (from any vendor) that some organization have is the amount of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower, and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (Now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor of SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded for either improving the space, speed or throughput. Today we will look at another IO-related wait type. From Book On-Line: Occurs when a task is waiting for I/Os to finish. ASYNC_IO_COMPLETION Explanation: Any tasks are waiting for I/O to finish. If by any means your application that’s connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE or other operations can also create this wait type. Reducing ASYNC_IO_COMPLETION wait: When it is an issue related to IO, one should check for the following things associated to IO subsystem: Look at the programming and see if there is any application code which processes the data slowly (like inefficient loop, etc.). Note that it should be re-written to avoid this  wait type. Proper placing of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drive, TempDB on another separate drive, hot spot tables on separate filegroup (and on separate disk), etc. Check the File Statistics and see if there is a higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. Check event log and error log for any errors or warnings related to IO. If you are using SAN (Storage Area Network), check the throughput of the SAN system as well as configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly and so the SAN administrator did not accept it. After some investigations, he agreed to change the HBA Queue Depth on the development setup (test environment). As soon as we changed the HBA Queue Depth to quite a higher value, there was a sudden big improvement in the performance. It is very likely to happen that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can effectively reduce lots of CPU, Memory and IO (considering cover index has lesser columns than cluster table and all other; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes Drop Unused Indexes Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Read all the post in the Wait Types and Queue series. 
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Rights Expiry Options in IRM 11g

    - by martin.abrahams
    Among the many enhancements in IRM 11g, we have introduced a couple of new rights expiry options that may be applied to any role. These options were supported in previous versions, but fell into the "advanced configuration" category. In 11g, the options can be applied simply by selecting a check-box in the properties of a role, as shown by the rather extreme example below, where the role allows access for just two minutes after they are sealed. The new options are: To define a role that expires automatically some period after it is assigned To define a role that evaluates expiry relative to the time that each document is sealed These options supplement the familiar options to allow open-ended access (limited by offline access and the ever-present option to revoke rights at any time) and the option to define time windows with specific start dates and end dates. The value of these options is easiest to illustrate with some publishing examples: You might define a role with a one year expiry to be assigned to users who purchase a one year subscription. For each individual user, the year would be calculated from the time that the role was assigned to them. You might define a role that allows documents to be accessed only for 24 hours from the time that they are published - perhaps as a preview mechanism designed to tempt users to sign up for a full subscription. Upon payment of a full fee, users can simply be reassigned a role that gives them greater access to exactly the same documents. In a corporate environment, you might use such roles for fixed term contractors or for workflows that involve information with a short lifespan, or perhaps as part of a compliance process that requires rights to be formally re-approved at intervals. Being role-based, the time constraints apply to any number of documents - including documents that have not yet been created. For example, a user with a one year subscription would have access to all documents published in the relevant classification during the year without any further configuration. Crucially, unlike other solutions, it is not the documents that expire, but the rights of particular users. Whereas some solutions make documents completely inaccessible for all users after expiry, Oracle IRM can allow some users to continue using documents while other users lose access. Equally crucially, a user whose rights have expired can always be granted fresh rights at any time - for example, because they renew their subscription or because a manager confirms that they still need the rights as part of a corporate compliance process. By applying expiry to rights rather than to documents, Oracle IRM avoids the risk of locking an organization out of its own information.

    Read the article

  • SSDT - What's in a name?

    - by jamiet
    SQL Server Data Tools (SSDT) recently got released as part of SQL Server 2012 and depending on who you believe it can be described as either: a suite of tools for building SQL Server database solutions or a suite of tools for building SQL Server database, Integration Services, Analysis Services & Reporting Services solutions Certainly the SQL Server 2012 installer seems to think it is the latter because it describes SQL Server Data Tools as "the SQL server development environment, including the tool formerly named Business Intelligence Development Studio. Also installs the business intelligence tools and references to the web installers for database development tools" as you can see here: Strange then that, seemingly, there is no consensus within Microsoft about what SSDT actually is. On yesterday's blog post First Release of SSDT Power Tools reader Simon Lampen asked the quite legitimate question:I understand (rightly or wrongly) that SSDT is the replacement for BIDS for SQL 2012 and have just installed this. If this is the case can you please point me to how I can edit rdl and rdlc files from within Visual Studio 2010 and import MS Access reports.To which came the following reply:SSDT doesn't include any BIDs (sic) components. Following up with the appropriate team (Analysis Services, Reporting Services, Integration Services) via their forum or msdn page would be the best way to answer you questions about these kinds of services. That's from a Microsoft employee by the way. Simon is even more confused by this and replies with:I have done some more digging and am more confused than ever. This documentation (and many others) : msdn.microsoft.com/.../ms156280.aspx expressly states that SSDT is where report editing tools are to be foundAnd on it goes....You can see where Simon's confusion stems from. He has official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember) yet someone from Microsoft tells him "SSDT doesn't include any BIDs components".I have been close to this for a long time (all the way through the CTPs) so I can kind of understand where the confusion stems from. To my understanding SSDT was originally the name of the database dev stuff but eventually that got expanded to include all of the dev tools - I guess not everyone in Microsoft got the memo.Does this sound familiar? Have we not been down this road before? The database dev tools have had upteen names over the years (do any of datadude, TSData, VSTS for DB Pros, DBPro, VS2010 Database Projects sound familiar) and I was hoping that the SSDT moniker would put all confusion to bed - evidently its as complicated now as it has ever been.Forgive me for whinging but putting meaningful, descriptive, accurate, well-defined and easily-communicated names onto a product doesn't seem like a difficult thing to do. I guess I'm mistaken!Onwards and upwards...@Jamiet

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved World Record performance on Oracle's PeopleSoft Enterprise Financials 9.1 executing 20 Million Journals lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark. The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark that reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11. Performance Landscape Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to the the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to significant change in data model and supports only batch. PeopleSoft Financials Benchmark, Version 9.1 Solution Under Test Batch (min) SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 8.92 Results from PeopleSoft Financials Benchmark 9.0. PeopleSoft Financials Benchmark, Version 9.0 Solution Under Test Batch (min) Batch with Online (min) SPARC Enterprise M4000 (Web/App) SPARC Enterprise M5000 (DB) 33.09 34.72 SPARC T3-1 (Web/App) SPARC Enterprise M5000 (DB) 35.82 37.01 Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server 2 x SPARC T4 processors, 2.85 GHz 128 GB memory Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs) 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup) Software Configuration: Oracle Solaris 11 11/11 SRU 7.5 Oracle Database 11g Release 2 (11.2.0.3) PeopleSoft Financials 9.1 Feature Pack 2 PeopleSoft Supply Chain Management 9.1 Feature Pack 2 PeopleSoft PeopleTools 8.52 latest patch - 8.52.03 Oracle WebLogic Server 10.3.5 Java Platform, Standard Edition Development Kit 6 Update 32 Benchmark Description The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then changes the journal status to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN PeopleSoft Financial Management oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • Nokia vs. The World

    - by Michael B. McLaughlin
    I’m looking forward to the launch of the Nokia Lumia 920. Why? Well, it stacks up better than the competition for one thing. Then there’s also that security problem that certain other phones have. Mostly, though, it’s because I love my Lumia 900 and the 920, with Windows Phone 8, will be even better. Before I got my Lumia 900, I just took it as given that smart phone cameras couldn’t be good. The Lumia taught me that smart phone cameras can be good if the manufacturer treats them as an important component worth spending time and money on (rather than some thing that consumers expect such that they’d better throw one in). I’m extremely pleased with the quality of pictures that my Lumia 900 gives me as well as the range of settings it provides (you can delve in to tell it a film speed, an f-stop, and a whole range of other settings). And the image stabilization features in the Lumia 920 deliver far better results than the others. Nokia has had great maps for a long time and they continue to improve. Even better, they made a deal that puts many of their excellent maps into Windows Phone 8 itself. There are still Nokia-exclusive features such as Nokia City Lens, of course. But by giving the core OS a great set of fundamental map data and technologies, they help ensure that customers know that buying a Windows Phone 8 will give them a great map experience no matter who made the phone. I’ll be getting a 920, myself, but the HTC and Samsung devices that have been announced have some compelling features, too, and it’s great to know that people who buy one of these won’t need to worry about where their maps might lead them. I’m looking forward to the NFC capabilities and Qi wireless charging my Lumia 920 will have. With the availability of DirectX and C++ programming on Windows Phone 8, I’m also excited about all the great games that will be added to the Windows Phone environment. I love my Xbox Phone. I love my Office phone. I love my Facebook phone. I love my GPS phone. I love my camera phone. I love my SkyDrive phone. In short, I love my Windows Phone!

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
In yesterday's blog post we learned the importance of HIVE in the Big Data Story. In this article we will understand what PIG and PIG Latin are in the Big Data Story. Yahoo started working on Pig for their application deployment on Hadoop; Yahoo's goal was to manage their unstructured data. What is Pig and What is Pig Latin? Pig is a high-level platform for creating MapReduce programs used with Hadoop, and the language we use on this platform is called Pig Latin. Pig was designed to make Hadoop more user-friendly and approachable by power users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing input data through a series of transformations to produce the desired results. PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine. 2) Hadoop – in this case all the scripts run on a Hadoop cluster. Pig Latin vs SQL Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build their own MapReduce solutions for Big Data. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, and filtering data, as well as various data operations related to strings. The major difference between SQL and Pig Latin is that PIG is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan, and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science. Tomorrow In tomorrow's blog post we will discuss a very important component of the Big Data Ecosystem – Zookeeper. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Reminder: Benefits of Virtualization for ISVs - 14/Dec/10, Porto

    - by Paulo Folgado
This training session addresses the main difficulties that Independent Software Vendors (ISVs) face when they have to choose the platforms on which they will certify, install and support their applications, and how Oracle VM (and Oracle Enterprise Linux) can help them overcome those difficulties. The classic ISV business model - develop an application solution to solve a particular business problem, analyze the market to determine which operating systems and hardware the customers in the target market use, and decide to support the hardware and software platforms that 80% of those target customers use (treating other configurations requested by a few important customers as exceptions) - worked well in the 1980s and early 1990s, when there was less platform diversity. However, with the appearance in recent years of multiple operating system versions and Linux "flavors", this model has become a nightmare. Each customer has its platform of choice and expects ISVs to support those choices, which is a drain on ISV resources and costs. Oracle's virtualization technologies, by making it possible to "simulate" a given hardware configuration, so that the operating system "thinks" it is running on a predefined, standardized hardware configuration on which the applications run, are an excellent vehicle for ISVs looking for a simple, easy-to-install and easy-to-support solution for deploying their applications, allowing large cost savings in the development, testing and support of those applications. Who should attend? This training is aimed above all at those who make decisions about the technology platforms the ISV has to support, as well as those who deal with the cost structure of its operations and need a view of the costs associated with developing, certifying, installing and supporting multiple platforms. If you want to know more about Oracle VM and how it can help drastically reduce your costs, do not miss this session.
AGENDA:
09:00 Welcome & Introduction
ISV Partner View... Why Use Virtualization?
The ISV Deployment Dilemma: The Problem of Supporting Multiple Platforms
How can Virtualization Help?
The use of Templates
What is a Template?
How are Templates Created?
Customer's Point of View
Assembly Builder
Weblogic Virtual Edition
Managing Oracle VM
Best Practices for Virtualizing Oracle Database 11g
Managing Virtual Environments
Coffee Break
Oracle Complete and Integrated Virtualization Portfolio
From Datacenter to Desktop
The Next Generation Virtualization
Private Cloud with Middleware Virtualization
Benefits of Using Oracle VM (and Oracle Enterprise Linux)
Support Advantages
Production Ready Virtual Machines
Licensing Terms
Partner Resources and OPN Benefits
12:45 Q&A and Wrap-up
Date: 14 December - 09:00 / 13:00
Location: Oracle Portugal, Av. da Boavista, 1837 - Edifício Burgo - Escritório 13.4, 4100-133 PORTO
Audience: Development, Technology and Services managers of Oracle ISV partners
Training delivered by Altimate

    Read the article

  • Building Extensions Using E-Business Suite SDK for Java

    - by Sara Woodhull
    We’ve just released Version 2.0.1 of Oracle E-Business Suite SDK for Java.  This new version has several great enhancements added after I wrote about the first version of the SDK in 2010.  In addition to the AppsDataSource and Java Authentication and Authorization Service (JAAS) features that are in the first version, the Oracle E-Business Suite SDK for Java now provides: Session management APIs, so you can share session information with Oracle E-Business Suite Setup script for UNIX/Linux for AppsDataSource and JAAS on Oracle WebLogic Server APIs for Message Dictionary, User Profiles, and NLS Javadoc for the APIs (included with the patch) Enhanced documentation included with Note 974949.1 These features can be used with either Release 11i or Release 12.  References AppsDataSource, Java Authentication and Authorization Service, and Utilities for Oracle E-Business Suite (Note 974949.1) FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1) What's new in those references? Note 974949.1 is the place to look for the latest information as we come out with new versions of the SDK.  The patch number changes for each release.  Version 2.0.1 is contained in Patch 13882058, which is for both Release 11i and Release 12.  Note 974949.1 includes the following topics: Applying the latest patch Using Oracle E-Business Suite Data Sources Oracle E-Business Suite Implementation of Java Authentication and Authorization Service (JAAS) Utilities Error loggingSession management  Message Dictionary User profiles Navigation to External Applications Java EE Session Management Tutorial For those of you using the SDK with Oracle ADF, besides some Oracle ADF-specific documentation in Note 974949.1, we also updated the ADF Integration FAQ as well. EBS SDK for Java Use Cases The uses of the Oracle E-Business Suite SDK for Java fall into two general scenarios for integrating external applications with Oracle E-Business Suite: Application sharing a session with Oracle E-Business Suite Independent application (not shared session) With an independent application, the external application accesses Oracle E-Business  Suite data and server-side APIs, but it has a completely separate user interface. The external application may also launch pages from the Oracle E-Business Suite home page, but after the initial launch there is no further communication with the Oracle E-Business Suite user interface. Shared session integration means that the external application uses an Oracle E-Business Suite session (ICX session), shares session context information with Oracle E-Business Suite, and accesses Oracle E-Business Suite data. The external application may also launch pages from the Oracle E-Business Suite home page, or regions or pages from the external application may be embedded as regions within Oracle Application Framework pages. Both shared session applications and independent applications use the AppsDataSource feature of the Oracle E-Business Suite SDK for Java. Independent applications may also use the Java Authentication and Authorization (JAAS) and logging features of the SDK. Applications that are sharing the Oracle E-Business Suite session use the session management feature (instead of the JAAS feature), and they may also use the logging, profiles, and Message Dictionary features of the SDK.  
The session management APIs allow you to create, retrieve, validate and cancel an Oracle E-Business Suite session (ICX session) from your external application.  Session information and context can travel back and forth between Oracle E-Business Suite and your application, allowing you to share session context information across applications. Note: Generally you would use the Java Authentication and Authorization (JAAS) feature of the SDK or the session management feature, but not both together. Send us your feedback Since the Oracle E-Business Suite SDK for Java is still pretty new, we’d like to know about who is using it and what you are trying to do with it.  We’d like to get this type of information: customer name and brief use case configuration and technologies (Oracle WebLogic Server or OC4J, plain Java, ADF, SOA Suite, and so on) project status (proof of concept, development, production) any other feedback you have about the SDK You can send me your feedback directly at Sara dot Woodhull at Oracle dot com, or you can leave it in the comments below.  Please keep in mind that we cannot answer support questions, so if you are having specific issues, please log a service request with Oracle Support. Happy coding! Related Articles New Whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express To Customize or Not to Customize? New Whitepaper: Upgrading your Customizations to Oracle E-Business Suite Release 12 ATG Live Webcast: Upgrading your EBS 11i Customizations to Release 12

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor of SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded for either improving the space, speed or throughput. Today we will look at an IO-related wait types. From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits. IO_COMPLETION Explanation: Any tasks are waiting for I/O to finish. This is a good indication that IO needs to be looked over here. Reducing IO_COMPLETION wait: When it is an issue concerning the IO, one should look at the following things related to IO subsystem: Proper placing of the files is very important. We should check the file system for proper placement of files – LDF and MDF on a separate drive, TempDB on another separate drive, hot spot tables on separate filegroup (and on separate disk),etc. Check the File Statistics and see if there is higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. Check event log and error log for any errors or warnings related to IO. If you are using SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly so the SAN administrator did not accept it. After some investigations, he agreed to change the HBA Queue Depth on development (test environment) set up and as soon as we changed the HBA Queue Depth to quite a higher value, there was a sudden big improvement in the performance. It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can effectively reduce lots of CPU, Memory and IO (considering cover index has lesser columns than cluster table and all other; it depends upon the situation). You can refer to the two articles that I wrote; they are about how to optimize indexes: Create Missing Indexes Drop Unused Indexes Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. 
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

< Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >