Search Results

Search found 9122 results on 365 pages for 'cartesian product'.


  • Ralink rt3090 driver installed and wireless doesn't work on Ubuntu 10.04

    - by Marcus Rene
    I have an LG A-410 laptop (64-bit) with an RT3090 wireless card. Searching for the problem, I discovered that I already have rt3090-dkms installed, but my wireless still doesn't work:

        *-network UNCLAIMED
             description: Network controller
             product: RT3090 Wireless 802.11n 1T/1R PCIe
             vendor: RaLink
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:e5400000-e540ffff

    Read the article

  • Some of my favourite Visual Studio 2012 things – Teams

    - by Aaron Kowall
    Getting the number of team projects right, and deciding when to create them, has always been a bit of a balancing act. On large initiatives, there are often teams who work toward a common system. These teams often have quite a bit of autonomy, but need to roll up to some higher level initiative. In TFS 2010, people were often tempted to create separate Team Projects for each of the sub-teams and then do some magic with reporting and cross-team queries to get the consolidated view. My recommendation was always to use Areas as a means of separating work across the team, but that always resulted in a large number of queries that needed to be maintained and just seemed confusing. When doing anything, you had to remember to filter the query or view by Area in order to get correct results.

    Along with the awesome web access portal that comes in TFS 2012 (which I will cover in detail in another post), the product group has introduced the concept of Teams. A team is a sub-group within a TFS 2012 Team Project which allows us to more easily divide work along team boundaries. Technically, a Team is defined by an Area Path and a TFS Group, both of which could be created manually even before Teams existed. However, by allowing for creation of a 'Team' in TFS 2012, the web portal is able to do a bunch of 'magic' for us. We can view the project site (backlog, taskboard, etc.) for the team, we can assign items to the team, and we can view the burndown for the team. Basically, all the stuff that we had to prepare manually we now get created and managed for us, with a nice UI.

    When you create a Team Project in TFS 2012, a 'Default' team is created with the same name as the Team Project. So, if you only have one team working on the project, you are set. If you want to divide the work into additional teams, you can create teams by using the Team Web Client. Teams are created using the 'Administer Server' icon in the top right of the web site. You can select the team site by using the team chooser. Once you have selected a team, the Product Backlog, TaskBoard, Burndown Charts, etc. are all filtered to that team.

    NOTE: You always have the ability to choose the 'Default' team to see items for the entire project.

    PS: It's been a long while since I shared on this blog. To help with that, I'm in a blogging challenge with some other developer and agilist friends. Please check out their blogs as well:
    Steve Rogalsky: http://winnipegagilist.blogspot.ca
    Dylan Smith: http://www.geekswithblogs.net/optikal
    Tyler Doerkson: http://blog.tylerdoerksen.com
    David Alpert: http://www.spinthemoose.com
    Dave White: http://www.agileramblings.com

    Technorati Tags: TFS 2012, Agile, Team

    Read the article

  • ATG Live Webcast April 5: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    - by BillSawyer
    The next ATG Live Webcast covers one of the hottest topic areas in E-Business Suite Tools and Technology: Lifecycle Management. Angelo Rosado, Product Manager, ATG Development, will lead you through using Oracle Enterprise Manager 12c and the latest E-Business Suite Plug-in to manage E-Business Suite systems. You can register for the Apr. 5, 2012 event at: Managing Your Oracle E-Business Suite with Oracle Enterprise Manager

    The topics covered in this webcast will be:
    - Manage your EBS system configurations
    - Monitor your EBS environment's performance and uptime
    - Keep multiple EBS environments in sync with their patches and configurations
    - Create patches for your EBS customizations and apply them with Oracle's own patching tools

    Date: Thursday, April 5, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Angelo Rosado, Product Manager, ATG Development
    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
    - Domestic Participant Dial-In Number: 877-697-8128
    - International Participant Dial-In Number: 706-634-9568
    - Additional International Dial-In Numbers Link
    - Dial-In Passcode: 99342

    To see the presentation, the Direct Access Web Conference details are:
    - Website URL: https://ouweb.webex.com
    - Meeting Number: 597073984

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here.

    If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • What are some examples of open source software that has turned into closed source software? [on hold]

    - by Verrier
    As the title says: can anyone think of any software that has made the transition from open source to closed source/proprietary? This could include software whose owner decided to take a once open source offering and turn it into closed source... but I'm really looking for examples of companies that developed a commercial closed source product from an existing open source one (obviously one with a permissive license).

    Read the article

  • Oracle Enterprise Innovation Days

    - by Lara Ermacora
    On November 10 and 11, the Oracle-branded innovation event took place. The Oracle Enterprise Innovation Days, in its second edition, brought to Bologna all the companies that see innovation as the main lever for defending and strengthening their competitiveness. Against today's complex and heterogeneous landscape, strategic approaches and possible solutions were discussed at length, and several significant experiences were presented as examples.

    Among the event's guests, Rajan Krishnan, Vice President, Applications Product Development and Product Management for EMEA, presented Oracle's applications strategy, opening the discussion on the main topic of the plenary session: Oracle Fusion Applications. His talk was immediately followed by Enrico Pagliarini, a journalist with Il Sole 24 Ore, who interviewed three partner/customer pairs to explore the highly innovative projects their companies had collaborated on. The discussion covered: Enel Servizi Srl, which with Accenture's help brought the Siebel Energy CRM solution to its current 8.0 release for better customer management in the highly competitive deregulated market; Prysmian, which, following the acquisition of the Dutch company Draka, decided together with Reply to remodel the group's statutory and management reporting process, creating a new application that meets the requirements of the newly formed organization; and Kinexia and Waste Italia, formerly part of the Unendo group and now split between the renewables market and the waste-disposal market respectively, which with Deloitte's help adopted the full-outsourcing JDE solution, following a software selection among JDE, SAP, and other Italian solutions.

    During the dinner, two other moments caught the participants' attention: the presentation by Michele Stroligo, a very young Designer Team Member of Oracle Racing, and the Reference Customer Awards, recognizing the customers that stood out as the best references in the various markets with various products. The awards went to: FIAT, Enel, Boiron Laboratoires, Champion Europe, Mediaset, and Coeclerici. The afternoon offered various deep-dive tracks tailored to different professional roles, closing with a presentation by Lieutenant Colonel Marco Lant of the Frecce Tricolori, an example of Italian excellence known throughout the world. The day ended with a gala dinner in the famous Palazzo Re Enzo, which towers over the city's main square.

    The morning of the second day was entirely dedicated to exploring the topics of greatest interest through interactive tables and workshops run by Oracle partners. The event then closed with a series of cultural initiatives for the attendees. The event website will soon be available, with all the photos of the day and videos of the most salient talks; you will also be able to download all the presentations given during the sessions. Stay up to date on Oracle Enterprise Innovation Days 2011 by visiting the blog!

    Strategie Applicative di Oracle - Rajan Krishnan, Bologna, November 2011 (slides available from Oracle Apps - Italia).

    Read the article

  • Are You Afraid of Each Other? Study Shows CMOs/CIOs Missing Benefits of Collaboration

    - by Mike Stiles
    Remember that person in school you spent months being too scared to talk to? Then when you finally did, it led to a wonderful friendship…if not something more. New research from Oracle, Social Media Today and Leader Networks shows marketing and IT need to get over whatever's holding them back and start reaping the benefits of collaboration.

    Back in the old days of just a few years ago, marketing could stay on their side of the building, IT could stay on their side of the building, and both could refer to the other as "those guys." Today, the structure of organizations is shifting from islands to "us," one integrated body where each part knows what the other parts are doing, and all parts work together in accomplishing job one…a winning customer experience. Ignore that, and you start losing. Give your reluctance to change priority over the benefits of new collaborations, and you start losing. You're either working together and accelerating forward, or getting in the way of each other's separate agendas and grinding down…much to your competitors' delight.

    The study reveals a basic current truth: those who are collaborating in marketing and IT report being more effective, yet less than a third report collaborating even "frequently." In other words, this is obviously a good thing, so we'd better not do it. Smart.

    The white paper, "Socially Driven Collaboration," set out to explore how today's always-changing digital, social and mobile landscape is forcing change across the enterprise, whether it's welcomed or not. Part of what it found is that marketing and IT leaders are not unaware of what's going on, and see their roles evolving. Both know the ability to collaborate more effectively now exists. And of those who are collaborating, over two thirds say they're "more effective" professionally because of it.

    Even if you don't want to take the Oracle study's word for it, an August 2013 Accenture study of 400 senior marketing and 250 IT executives revealed only 10% think CMO/CIO collaboration is at the right level. There's a lot of room for improvement here, and not just around people. Collaboration is also being called for across processes and technologies.

    Business benefits of such collaboration cited in the Oracle study include stronger marketing messages, faster speed-to-market, greater product adoption, faster discovery of product and service shortcomings, and reduction in project costs. Those are the benefits you will cheat yourself out of by keeping "those guys" at arm's length and continuing to try to function in traditional roles while modern business and the consumer are changing around you.

    "Intelligence is the ability to adapt to change." –Stephen Hawking

    @mikestiles
    Photo: istockphoto

    Read the article

  • Oracle Open World 2012 preview: mark your calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco in late September and early October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I invite you to look at the topics of the keynotes that will be delivered by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to give you a foretaste.

    Oracle strategy and roadmaps

    Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those who will be on site, I urge you not to miss the deep-dive sessions taking place during the week. Here is a selection:
    - "Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday, October 1, 3:15pm-4:15pm
    - "Why Oracle Softwares Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday, October 1, 12:15pm-1:15pm
    - "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday, October 1, 1:45pm-2:45pm
    - "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm
    - "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am-11:15am
    - "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday, October 3, 10:15am-11:15am
    - "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm
    - "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm
    - "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am

    Customer experiences and testimonials

    If Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also an opportunity to exchange with customers and experts who have implemented our technologies, and to benefit from their feedback, for example:
    - "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director, Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the benefits not only for the business but also for the project and IT, Wednesday, October 3, 1:15pm-2:15pm
    - "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm
    - "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm
    - "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager, Systems Engineering at Navis, Monday, October 1, 1:45pm-2:45pm
    - "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who took part in an implementation at a large banking customer, Monday, October 1, 4:45pm-5:45pm
    - "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm
    - "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday, October 4, 12:45pm-1:45pm
    - "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, at US Cellular, Tuesday, October 2, 10:15am-11:15am
    - "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials notably from AT&T and Liberty Global, Tuesday, October 2, 11:45am-12:45pm
    - "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm
    - "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", a testimonial from Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm

    Meet the user groups and the Oracle development teams

    If you plan to arrive early enough, you can also talk with the user groups starting on Sunday, or with the Oracle development teams every evening, on topics such as:
    - "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm
    - "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am
    - "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm
    - "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30, 9:00am-10:00am
    - "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm

    Test and evaluate the solutions

    Finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS, and Virtualization section) and in the hands-on labs, such as:
    - "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am
    - "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to have completed the previous hands-on lab before taking this one)
    - "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm
    - "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm
    - "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm
    - "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm
    - "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm
    - "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday, October 1, 12:15pm-1:15pm
    - "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm
    - "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am
    - "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm
    - "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm
    - "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm

    In conclusion, a very rich week ahead, which will let you cover all the topics at the heart of your concerns, from strategy to implementation... A week that needs preparing: tailor your agenda from the more than 2,000 sessions, of which I have given you only an extract, and all of which you can find online.

    Read the article

  • Development processes, the use of version control, and unit-testing

    - by ct01
    Preface

    I've worked at quite a few "flat" organizations in my time. Most of the version control policy/process has been "only commit after it's been tested". We were constantly committing at each place to "trunk" (cvs/svn). The same was true with unit-testing - it's always been a "we need to do this" mentality, but it never really materializes in a substantive form because there is no institutional knowledge base to do it - no mentorship.

    Version Control

    The emphasis for version control management at one place was a very strict protocol for commit messages (format & content). The other places let employees just do "whatever". The branching, tagging, committing, rolling back, and merging aspects of things were always ill-defined and almost never used. This sort of seems to leave the version control system in the position of being a fancy file-storage mechanism with a meta-data component that never really gets accessed/utilized. (The same was true for unit testing and committing code to the source tree.)

    Unit tests

    It seems there's a prevailing "we must/should do this" mentality in most places I've worked. As a policy or standard operating procedure it never gets implemented, because there seems to be a very ill-defined understanding about what that means, what is going to be tested, and how to do it.

    Summary

    It seems most places I've been to think version control and unit testing are "important" because the trendy trade journals say so, but if there's very little mentorship to use these tools, or any real business policies, then the full power of version control/unit testing is never really expressed. So grunts, like myself, never really have a complete understanding of the point beyond "it's a good thing" and "we should do it".

    Question

    I was wondering if there are blogs, books, white-papers, or online journals about what one could call the business processes, "standard operating procedures", or use cases for version control and unit testing? I want to know more than the trade journals tell me, and get serious about doing these things.

    PS: @Henrik Hansen had a great comment about the lack of definition for the question. I'm not interested in a specific unit-testing/versioning product or methodology (like XP) - my interest is more about work-flow at the individual team/developer level than evangelism. This is more-or-less a by-product of the management situation I've operated under, rather than a lack of reading software engineering books or magazines about development processes. A lot of what I've seen/read is more marketing-oriented material than any specifically enumerated description of "well, this is how our shop operates".

    Read the article

  • Pure front-end JavaScript with Web API versus MVC views with AJAX

    - by eyeballpaul
    This was more a discussion about what people's thoughts are these days on how to split a web application. I am used to creating an MVC application with all its views and controllers. I would normally create a full view and pass this back to the browser on a full page request, unless there were specific areas that I did not want to populate straight away; for those I would use DOM page-load events to call the server and load other areas using AJAX.

    Also, when it came to partial page refreshing, I would call an MVC action method which would return the HTML fragment, which I could then use to populate parts of the page. This would be for areas that I did not want to slow down initial page load, or areas that fitted better with AJAX calls. One example would be table paging. If you want to move on to the next page, I would prefer it if an AJAX call got that info rather than using a full page refresh. But the AJAX call would still return an HTML fragment.

    My question is: are my thoughts on this archaic because I come from a .NET background rather than a pure front-end background? An intelligent front-end developer that I work with prefers to do more or less nothing in the MVC views, and would rather do everything on the front end, right down to Web API calls populating the page. So rather than calling an MVC action method which returns HTML, he would prefer to return a standard object and use JavaScript to create all the elements of the page.

    The front-end developer way means that any benefits that I normally get with MVC model validation, including client-side validation, would be gone. It also means that any benefits that I get from creating the views, with strongly typed HTML templates etc., would be gone. I believe this would mean I would need to write the same validation for both front-end and back-end validation. The JavaScript would also need lots of methods for creating all the different parts of the DOM. For example, when adding a new row to a table, I would normally use an MVC partial view for creating the row, and then return this as part of the AJAX call, which then gets injected into the table. By using a pure front-end way, the JavaScript would take in an object (for, say, a product) for the row from the API call, and then create a row from that object, creating each individual part of the table row. (A sketch of the two approaches follows below.)

    The website in question will have lots of different areas, from administration, forms, product searching, etc. - a website that I don't think needs to be architected as a single-page application. What are everyone's thoughts on this? I am interested to hear from front-end devs and back-end devs.
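    For illustration, here is a minimal TypeScript sketch of the two row-adding approaches described above. The endpoint URLs, element IDs, and the Product shape are hypothetical, invented only for the example:

        // Approach 1 (MVC style): the server renders a partial view and
        // returns an HTML fragment, which the client injects verbatim.
        async function addRowFromHtmlFragment(productId: number): Promise<void> {
          const response = await fetch(`/Products/Row/${productId}`); // hypothetical MVC action
          const html = await response.text();
          const tbody = document.querySelector("#products tbody");
          if (tbody) {
            tbody.insertAdjacentHTML("beforeend", html); // server-rendered <tr>...</tr>
          }
        }

        // Approach 2 (Web API style): the server returns a plain JSON object
        // and the client is responsible for building the DOM row itself.
        interface Product {
          id: number;
          name: string;
          price: number;
        }

        async function addRowFromJson(productId: number): Promise<void> {
          const response = await fetch(`/api/products/${productId}`); // hypothetical Web API route
          const product: Product = await response.json();
          const row = document.createElement("tr");
          for (const text of [String(product.id), product.name, product.price.toFixed(2)]) {
            const cell = document.createElement("td");
            cell.textContent = text; // textContent avoids HTML-injection issues
            row.appendChild(cell);
          }
          document.querySelector("#products tbody")?.appendChild(row);
        }

    The trade-off the question describes is visible even at this size: approach 1 keeps the row template (and any validation attributes it carries) on the server, while approach 2 duplicates presentation logic in the client in exchange for a reusable JSON endpoint.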

    Read the article

  • Launch Invitation: Introducing Oracle WebLogic Server 12c

    - by JuergenKress
    Introducing Oracle WebLogic Server 12c, the #1 Application Server Across Conventional and Cloud Environments

    Please join Hasan Rizvi on December 1, as he unveils the next generation of the industry's #1 application server and cornerstone of Oracle's cloud application foundation—Oracle WebLogic Server 12c. Hear, with your fellow IT managers, architects, and developers, how the new release of Oracle WebLogic Server is:
    - Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    - Built to drive higher value for your current infrastructure and significantly reduce development time and cost
    - Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE); Oracle Fusion Middleware; and Oracle Fusion Applications
    - Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder

    Don't miss this online launch event. Register now.

    Executive Overview
    Thurs., December 1, 2011, 10 a.m. PT / 1 p.m. ET
    Presented by: Hasan Rizvi, Senior Vice President, Product Development, Oracle

    Today most businesses have the ambition to move to a cloud infrastructure. However, IT needs to maintain and invest in their current infrastructure to support today's business. With Oracle WebLogic, the #1 app server in the marketplace, we provide you with the best of both worlds. The enhancements contained in WebLogic 12c provide you with significant benefits that drive higher value for your current infrastructure, while significantly reducing development time and cost. In addition, with WebLogic you are cloud-ready. You can move your existing applications as-is to a high performance engineered system, Exalogic, and instantly experience performance and scalability improvements that are orders of magnitude higher. A WebLogic-Exalogic combination may provide your private cloud infrastructure. Moreover, you can develop and test your applications on Oracle's recently announced Public Cloud offering, the Java Cloud Service, and seamlessly move them to your on-premise infrastructure for production deployments.

    Developer Deep-Dive
    Thurs., December 1, 2011, 11 a.m. PT / 2 p.m. ET
    See demos and interact with experts via live chat.
    Presented by: Will Lyons, Director, Oracle WebLogic Server Product Management, Oracle

    Modern Java development looks very different from even a few years ago. Technology innovation, the ecosystem of tools, and their integration with Java standards are changing how development is done. Cloud computing is causing developers to re-evaluate their development platforms and deployment options. Business users are demanding faster time to market, but without sacrificing application performance and reliability. Find out in this session how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Learn how you can leverage the latest development technologies, tools, and standards when deploying to Oracle WebLogic Server across both conventional and cloud environments.

    Don't miss this online launch event. Register now. For regular information, become a member of the WebLogic Partner Community: please register at http://www.oracle.com/partners/goto/wls-emea

    Technorati Tags: Hasan Rizvi, Oracle, WebLogic 12c, OPN, WebLogic Community, Jürgen Kress

    Read the article

  • JD Edwards Delivers Once Again with Significant Announcements

    Listen to Lenley Hensarling, JD Edwards Group Vice President, talk about the significant JD Edwards announcements made during Oracle OpenWorld 2008. Lenley will highlight how JD Edwards' customers can benefit from the latest product releases from EnterpriseOne and World, discuss the wave of companies who are upgrading to the most recent JD Edwards releases to take advantage of an array of industry-specific enhancements, and elaborate on JD Edwards' strategy for integrating with other Oracle solutions, bringing continuous value to customers.

    Read the article

  • Fiction to Reality Timeline Charts Introduction of Sci-Fi Concepts to Real Life

    - by Jason Fitzpatrick
    Videophones, voice-controlled computers, heads-up displays, and other technological innovations made their first appearances in sci-fi. This dual timeline charts each concept's first appearance in sci-fi against the date of commercial success for the product in the real world. Hit up the link below for the full-resolution image.

    The Fiction to Reality Timeline [via Cool Infographics]

    Read the article

  • Integrating BizTalk Server and StreamInsight paper

    - by gsusx
    With all the holiday madness, I didn't realize that my "Integrating BizTalk Server and StreamInsight" paper is now available on MSDN. This paper was originally an idea of the BizTalk product team, and it is intended to present some fundamental scenarios that can be enabled by the combination of BizTalk Server and StreamInsight. Thanks to everybody who, directly or indirectly, provided feedback about this paper: Syed Rasheed, Mark Simms, Richard Seroter, Roman Schindlauer and Torsten Grabs from the StreamInsight...(read more)

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo will start next week, on April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. The full schedule can be found here. Come by the DemoGrounds to learn more about mission-critical integration and optimization of the complete Oracle stack. Our Oracle Optimized Solutions experts will be on hand to discuss, one-on-one, several of Oracle's systems solutions and technologies.

    Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.

    Oracle Optimized Solutions, Servers, Storage, and Oracle Solaris: Sessions, Keynotes, and General Session Talks

    | Day | Time | Title | Notes | Session |
    |---|---|---|---|---|
    | Wednesday, April 4 | 9:00 - 11:15 | Keynote: ENGINEERED FOR INNOVATION - Engineered Systems (Mark Hurd, President, Oracle; Takao Endo, President & CEO, Oracle Corporation Japan; John Fowler, EVP of Systems, Oracle; Ed Screven, Chief Corporate Architect, Oracle) | English session | K1-01 |
    | Wednesday, April 4 | 11:50 - 12:35 | Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems (Robert Shimp, Group VP, Product Marketing, Oracle) | English session | S1-01 |
    | Wednesday, April 4 | 15:20 - 16:05 | Introducing Tiered Storage Solution for low cost Big Data Archiving | | S1-33 |
    | Wednesday, April 4 | 16:30 - 17:15 | Simplifying IT - IT System Consolidation that also Accelerates Business Agility | | S1-42 |
    | Thursday, April 5 | 9:30 - 11:15 | Keynote: Extreme Innovation (Larry Ellison, Chief Executive Officer, Oracle) | English session | K2-01 |
    | Thursday, April 5 | 11:50 - 13:20 | General Session: Server and Storage Systems Strategy (John Fowler, EVP of Systems, Oracle) | English session | G2-01 |
    | Thursday, April 5 | 16:30 - 17:15 | Top 5 Reasons why ZFS Storage Appliance is "The cloud storage", by SAKURA Internet Inc. | | L2-04 |
    | Thursday, April 5 | 16:30 - 17:15 | The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster | | S2-42 |
    | Thursday, April 5 | 17:40 - 18:25 | Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite, to minimize business cost while maximizing agility, performance, and availability | | S2-53 |
    | Friday, April 6 | 9:30 - 11:15 | Keynote: Oracle Fusion Applications & Cloud (Robert Shimp, Group VP, Product Marketing; Anthony Lye, Senior VP) | English session | K3-01 |
    | Friday, April 6 | 11:50 - 12:35 | IT at Oracle: The Art of IT Transformation to Enable Business Growth | English session | S3-02 |
    | Friday, April 6 | 13:00 - 13:45 | ZFS Storage Appliance: Architecture of high efficiency and high performance | | S3-13 |
    | Friday, April 6 | 14:10 - 14:55 | Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure, by DWANGO Co., Ltd. | | S3-21 |
    | Friday, April 6 | 15:20 - 16:05 | Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop, by Osaka University | | S3-33 |

    Oracle Developer Sessions with Oracle Systems and Oracle Solaris

    | Day | Time | Title | Notes | Session |
    |---|---|---|---|---|
    | Friday, April 6 | 13:00 - 13:45 | Oracle Solaris 11 Developers | | D3-03 |
    | Friday, April 6 | 13:00 - 14:30 | Oracle Solaris Tuning Contest Hands-On Lab | | D3-04 |
    | Friday, April 6 | 14:00 - 14:35 | How to build a high performance and high security Oracle Database environment with Oracle SPARC/Solaris | English session | D3-13 |
    | Friday, April 6 | 15:00 - 15:45 | IT assets preservation and constructive migration with Oracle Solaris virtualization | | D3-24 |
    | Friday, April 6 | 16:00 - 17:30 | The best packaging system for cloud environments - Creating an IPS package | | D3-34 |

    Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Read the article

  • Installation problem on my HP g4 1226se

    - by vivek Verma
    Dual booting error on an HP Pavilion g4 1226se

    Dear sir or madam, my name is Vivek Verma. I use an HP laptop, series and model name HP PAVILION G4 1226SE, purchased in February 2012. Windows 7 Home Basic 64-bit is already installed on it. Now I want to install Ubuntu 12.04 LTS or 13.10. I have tried many times to install it from a live CD or a USB installer, using several different live CDs and pen drives, but it never works, and now I am in a very big problem. When I boot from the CD or USB drive to install Ubuntu, the laptop screen goes nearly black (the brightness drops very low, with very poor visibility) and shows nothing. When I move the screen, a graphics option for the Ubuntu installation appears, and when I press the "dual boot with settings" button and continue, my laptop shuts down after 2 to 5 minutes. The HP service center people say my laptop hardware has no problem and told me to contact Ubuntu tech support. So please help me if possible.

    My laptop configuration is:
    - Product Name: g4-1226se
    - Product Number: QJ551EA
    - Microprocessor: 2.4 GHz Intel Core i5-2430M
    - Microprocessor Cache: 3 MB L3 cache
    - Memory: 4 GB DDR3 (max upgradeable to 4 GB DDR3)
    - Video Graphics: Intel HD 3000 (up to 1.65 GB)
    - Display: 35.5 cm (14.0") High-Definition LED-backlit BrightView Display (1366 x 768)
    - Hard Drive: 500 GB SATA (5400 rpm)
    - Multimedia Drive: SuperMulti DVD±R/RW with Double Layer Support
    - Network Card: Integrated 10/100 BASE-T Ethernet LAN
    - Wireless Connectivity: 802.11 b/g/n
    - Sound: Altec Lansing speakers
    - Keyboard: Full-size island-style keyboard with home roll keys
    - Pointing Device: TouchPad supporting multi-touch gestures, with on/off button
    - Card Slots: Multi-Format Digital Media Card Reader for Secure Digital cards and Multimedia cards
    - External Ports: 1 VGA, 1 headphone-out, 1 microphone-in, 3 USB 2.0, 1 RJ45
    - Dimensions: 34.1 x 23.1 x 3.56 cm
    - Weight: starting at 2.1 kg
    - Power: 65W AC power adapter, 6-cell Lithium-Ion (Li-Ion) battery
    - In the box: webcam with integrated digital microphone (VGA)
    - Operating System: Windows 7 Home Basic 64-bit, genuine

    Sir, please help me if possible.
    Name: Vivek Verma
    Contact no.: +919911146737
    Email: [email protected]

    Read the article

  • Scrum debate: What do you think of the certifications and certification processes defined by the Scrum Alliance?

    Hello. Most people know the two historical certification courses around Scrum: Certified Scrum Product Owner and Certified ScrumMaster (to which an online assessment was added as of October 1, 2009). These certifications are categorized as Foundation-Level Certifications by the Scrum Alliance. Lately, other certifications have appeared, namely: Certified Scrum Developer (Mid-Level Certifications), Certifie...

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it, because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years, until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        __iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is massive, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
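    As an aside for readers who have not seen it, here is a rough sketch of what a version 2 mapfile fragment for the __iob example from the earlier prototype might look like. This is an illustration only: the $mapfile_version and SYMBOL_VERSION constructs are part of the version 2 language, but the version name SUNW_1.1 is invented for the example, and the STUB_OBJECT directive and ASSERT attribute spellings are recalled from the later shipped Solaris/illumos stub support and should be checked against the ld documentation:

        $mapfile_version 2

        STUB_OBJECT;            # mark this object as buildable as a stub (from the shipped feature)

        SYMBOL_VERSION SUNW_1.1 {
            global:
                __iob {
                    ASSERT {
                        TYPE = DATA;    # carries what the stylized comments carried in the prototype
                        SIZE = 0x3c0;
                    };
                };
        };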
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features...

    ...and so, I backed away, put it down for a few months and did other work...

    ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    (A rough sketch of what such makefile rules might look like appears at the end of this posting.)

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the stub object support I'd added to ld.

With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and built stub objects for the entire workspace. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature was effectively free: no one was likely to notice or care about the cost of building them.

After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday.

This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was close to final form.

My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, rather than assume. From first principles, all those removed dependency rules in the library makefiles mean that the stub object version of ON gives dmake considerably more opportunities to overlap library construction, so a speedup seemed certain. Some hypotheses were proposed, and shot down:

- Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
- It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings for the stub and non-stub cases were just too suspiciously identical.
- Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. To illustrate with purely hypothetical numbers: if uts takes 60 minutes while the libraries and commands finish in 40, the wall clock reads 60 minutes no matter how much extra parallelism the library side gains. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

Conclusions and Looking Forward

Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco

    - by stephen.schleifer
    First-Global-Teach for the Oracle Imaging and Process Management 11g: Administration: San Francisco | June 23-25. This course enables participants to use Oracle Imaging and Process Management (I/PM) 11g to access, track, and annotate documents. They also get an overview of the product architecture of Oracle I/PM running on Oracle WebLogic Server. The course also delves into administration tasks such as setting security permissions, configuration tasks such as creating BPEL connections, and procedures for creating applications, searches, and input mappings. Customers and partners can register by looking up the course (#D61575GC10) on http://education.oracle.com

    Read the article

  • Google moves toward search without searching, or how to deliver results without the user having to ask

    Google moves toward search without searching. Or: how to deliver results without the user having to ask. At the "LeWeb10" conference, currently taking place in Paris (in Saint-Denis, to be exact), Marissa Mayer, formerly Google's Vice President of Search Product (of the search engine, in other words) and now in charge of the Geolocation and Local Services branch, revisited one of the company's prospects for the future. In an interview on the event's main stage, she suggested that one of Google's next major products would be "contextual discovery" ("découverte contextuelle")...

    Read the article

  • Oracle’s New Release of Primavera Contract Management

    Controlling your construction project's plan, budget, forecast costs, and deliverables is vital to the success of your projects and the future of your business. Tune into this conversation with Krista Lambert, Senior Product Manager for the Oracle Primavera Global Business Unit, to learn about the latest release of Oracle's Primavera Contract Management version 13, and how this document management, job cost, and field controls solution keeps construction projects on schedule and on budget through complete project control.

    Read the article

< Previous Page | 130 131 132 133 134 135 136 137 138 139 140 141  | Next Page >