
  • The winning combination: Oracle VM Server for x86 + Oracle Sun Fire HW

    - by Karim Berrah
    You might be wondering why OVM Server for x86 (OVM/x86 here and below) should be seriously considered as an attractive alternative, from a business point of view, to standard hypervisors if you are virtualizing Oracle software, especially if you are planning to move to Oracle x86 hardware (rackmount or blades). Well, let's look at some not-so-well-known facts that might interest you and help you save more money for your entire company (and not only the virtualization team).

    Fact 1: OVM/x86 is considered a hard partitioning technology (check page 2 of Oracle Server Partitioning Licensing Policies), so if you are buying new servers based on the latest Intel Xeon E7 CPUs (10 cores per socket) and have licensing issues in deploying further Oracle software because you are using a hypervisor not recognized as a hard partitioning technology (like VMware), then you need to check here how to do it with OVM (a hypothetical pinning sketch follows at the end of this post). This might help you continue to deploy your Oracle DB instances on new x86 hardware (12-core, 40-core or 64-core servers) in a reasonable way, without having to pay licenses for 12, 40 or 64 CPUs. You might also consider migrating your legacy Oracle databases to a virtualized environment like OVM/x86 and recovering some CPU licenses that can be reused somewhere else in production.

    Fact 2: OVM/x86 is free to use, without any extra license for any specific feature (live migration, high availability, embedded management console). If you want to use it on non-Oracle hardware, there is a support fee per system and per year that is well below VMware support (Oracle VM Premier Limited Support for systems with up to 2 CPUs, and Oracle VM Premier Support for any bigger system, independently of the number of populated sockets).

    Fact 3: Support is included with your Oracle x86 hardware support (OPS for systems), and you can re-install Oracle Linux, Oracle Solaris or Oracle VM Server for x86 on your system without being charged, while keeping the same support level.

    Fact 4: It is less expensive to virtualize Oracle Linux or Oracle Solaris on OVM/x86 with Oracle hardware than with any other similar solution based on VMware, because all the VMs are then supported and licensed when you buy Oracle hardware with OPS.

    Fact 5: Oracle VM Templates bring you many virtual machines already installed, patched and optimized for various Oracle applications. And to be more specific, those templates are fully supported by Oracle, which is not really true when it comes to other hypervisors. By an optimized VM kernel, I mean PV drivers, OVM-ready kernels in the VM, a single source clock for all the VMs, better memory management of the VM, and so on.

    Fact 6: There are no extra costs for a management console. OVM comes with a free OVM Manager package for Linux.

    More info: Latest announcement of OVM/x86 update 2.2.2 | A short flash demo of OVM Server for x86 | A short flash demo on OVM Templates and Virtual Assembly Builder | Oracle Linux Support and Oracle VM Support Global Price List | ISVs: Benefits for Independent Software Vendors (ISVs) in using OVM/x86 | Consultant services: Advanced Customer Services for OVM/x86 | Technical features: Best Practices and Guidelines for OVM with Oracle Blades | Reduce TCO and Get More Value from Your x86 Infrastructure
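
    In practice (this post does not spell it out), hard partitioning on OVM/x86 comes down to pinning a guest's virtual CPUs to specific physical cores, so that only the pinned cores have to be licensed. As a minimal, hypothetical sketch (core numbers and vCPU count are illustrative; follow Oracle's hard partitioning paper for the supported procedure), the guest's vm.cfg would contain something like:

        # Pin this 4-vCPU guest to physical cores 0-3, so that only those
        # four cores count for Oracle software licensing (hard partitioning).
        vcpus = 4
        cpus = "0-3"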

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first, a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code, and it should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub: the possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another Perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
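
    To make the prototype concrete, here is a hypothetical fragment of the kind of C glue code such a script might emit for the __iob example above (the post does not show the generated code, so the names and exact shape are assumptions): an empty body for each function symbol, and a dummy data definition sized to match the real object.

        /* Illustrative sketch of script-generated stub glue code.
         * Data symbols must carry the real object's size so that copy
         * relocations still resolve correctly; 0x3c0 is the i386 size
         * recorded in the stylized mapfile comment above. */
        char __iob[0x3c0];           /* data stub, sized like the real __iob */
        void fake_function(void) {}  /* function symbols need only exist */

    Compiling this glue code and linking it with the real object's mapfile then yields a shared object that exports the same linking interface as the real library, with none of its code.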
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were several issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product, and the idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is massive, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. (A rough sketch of these rules appears in a postscript at the end of this article.)

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and it needed to go back soon, before changes in the gate made merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it; plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
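
    Postscript: for readers trying to picture the stub and stubinstall rules described earlier, here is a rough sketch under assumed, simplified macro names (the real ON makefile machinery is considerably more involved):

        # Sketch only: build the stub with the same command line as the
        # real object, plus -z stub; only the mapfiles are consumed.
        stub: $(MAPFILES)
                $(LD) $(DYNFLAGS) -z stub -o stubs/$(DYNLIB) $(MAPFILES)

        # Sketch only: deliver the stub into the stub proto (STUBROOT)
        # rather than the real proto (ROOT).
        stubinstall: stub
                cp stubs/$(DYNLIB) $(STUBROOT)/usr/lib/

    Because the stub rule has no object-file prerequisites, dmake can run every library's stub target in parallel before any real object exists.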

    Read the article

  • Purging SOA Instances

    - by angelo.santagata
    All, if you're running Oracle SOA Suite in a high-throughput environment, you'll probably discover that your dehydration store gets large, and larger still if you don't manage it properly. There are many strategies for managing how to purge it, and this document recently released by Oracle details them quite nicely. The document is Oracle SOA Suite 10g based; however, an 11g version is in the works and should be out soon. Enjoy the read.

    Read the article

  • Pimp my Silverlight Firestarter

    - by mbcrump
    So Silverlight Firestarter is over and you're sitting on your couch thinking… what now? Well, it's time to pimp it. How exactly can you pimp the Silverlight Firestarter? Read below and you will find out:

    1) Pimp the videos: First we are going to use a program named Juice to download all of the Silverlight Firestarter videos. Go ahead and point your browser to http://juicereceiver.sourceforge.net/ and download the application. It works on Mac, Linux and PC. After it is downloaded, you are going to want to add an RSS feed by clicking the button highlighted below. At this point you are going to want to add the following URL inside the textbox and hit Save: http://channel9.msdn.com/Series/Silverlight-Firestarter/RSS

    This RSS feed includes all the Silverlight Firestarter labs and presentations listed below:
    The Future of Silverlight
    Data Binding Strategies with Silverlight and WP7
    Building Compelling Apps with WCF using REST and LINQ
    Building Feature Rich Business Apps Today with RIA Services
    MVVM: Why and How? Tips and Patterns using MVVM and Service Patterns with Silverlight and WP7
    Tips and Tricks for a Great Installation Experience
    Tune Your Application: Profiling and Performance Tips
    Performance Tips for Silverlight Windows Phone 7

    Select all the videos and click the Download button (the one with the blue arrow). Once all the videos are downloaded you will have about 4.64 GB of Silverlight fun. You can now move these videos to your media server and watch them with whatever device you want. Put it on an iPad, iPhone.. emm wait, I mean WP7 or WMC7.

    2) Pimp the training material: Download the offline installer for the labs here. This will give you almost a gig of free training materials. Here are the topics covered:
    Level 100: Getting Started: Lab 01 - WinForms and Silverlight; Lab 02 - ASP.NET and Silverlight; Lab 03 - XAML and Controls; Lab 04 - Data Binding
    Level 200: Ready for More: Lab 05 - Migrating Apps to Out-of-Browser; Lab 06 - Great UX with Blend; Lab 07 - Web Services and Silverlight; Lab 08 - Using WCF RIA Services
    Level 300: Take Me Further: Lab 09 - Deep Dive into Out-of-Browser; Lab 10 - Silverlight Patterns: Using MVVM; Lab 11 - Silverlight and Windows Phone 7

    You will notice that it installs Firestarter to the default location of C:\Firestarter, so you will have to navigate to that folder and double-click Default.htm to get started. Now, if you followed part one of the pimping guide, then you will already have all the videos on your PC. Once you go into a lab, you will get a Lab Document and Source at the bottom of the article. Instead of opening the Source folder in a web browser, you can just copy the folder C:\Firestarter\Labs into your Visual Studio 2010 project folder. This will save a lot of time later.

    3) Pimp my Silverlight 5 knowledge: Always keep reading as much as possible, and remember that the Silverlight 5 beta should come in Q1 of 2011 and the final release at the end of 2011. Here are 5 great blog posts on Silverlight 5:
    Scott Gu's Blog
    Mary Jo's article on Silverlight 5
    The Future of Silverlight (official)
    Kunal Chowdhury's Blog
    Tim Heuer's Blog

    That's all I've got for now. Have fun with all the new Silverlight content. Subscribe to my feed.

    Read the article

  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. With Entity Framework 1 you didn't have lazy loading, so this problem didn't surface. Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by using either the Include or the Load method:

        var people = context.People.Include("Addresses");

    or

        people.Addresses.Load();

    Lazy loading kicks in the first time the Person.Addresses collection is accessed:

        var people = context.People.ToList();

        // only person data is currently in memory

        foreach (var person in people)
        {
            // EF determines that no Address data has been loaded and lazy loads
            int count = person.Addresses.Count();
        }

    Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested. It can make your life easier or it can make it a big pain. So what does this have to do with WCF? One word: serialization. When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. Well, if I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then the Address is serialized, its Person property is accessed and also serialized, and then the Addresses collection is accessed again. The second time through, lazy loading doesn't kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization, but I personally found it while trying to use WCF.

    The fix for this is to simply turn off lazy loading. This can be done on each call by using the context options:

        context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly. Note, this is if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework will by default create proxy classes that allow things like lazy loading to work with POCO. The proxy is a full Entity Framework object that sits between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it. You also have to turn off proxy creation to ensure that your classes will serialize properly:

        context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn lazy loading or proxy creation on and off as needed.
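
    Putting both settings together, here is a minimal sketch of what a WCF operation returning these entities might look like (the PeopleEntities context name and the service method itself are assumptions for illustration, not code from the original tip):

        // Hypothetical WCF operation returning Person entities with EF4.
        public List<Person> GetPeople()
        {
            using (var context = new PeopleEntities())  // assumed context name
            {
                // Prevent the serializer from triggering lazy loads (the
                // infinite loop above) and from emitting proxy types (POCO case).
                context.ContextOptions.LazyLoadingEnabled = false;
                context.ContextOptions.ProxyCreationEnabled = false;

                // Eagerly load exactly what the client needs.
                return context.People.Include("Addresses").ToList();
            }
        }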

    Read the article

  • Focus on Oracle Data Profiling and Data Quality 11g - 24/Feb/11

    - by Claudia Costa
    Thursday 24th February, 11am GMT. Oracle offers an integrated suite of data quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough.

    Agenda:
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator:
      - Oracle Data Profiling
      - Oracle Data Quality for Data Integrator
    - Live demo
    - Q&A

    This FREE online LIVE eSeminar will be delivered over the Web and conference call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • Now Live: New Java Enterprise Edition 6 Certification Exams

    - by Paul Sorensen
    The new Java Enterprise Edition 6 (EE6) exams are being released into production, effective today (February 21, 2011). If you participated in the beta exams, we appreciate your patience in awaiting your beta scores; there were some initial technical difficulties with these exams that delayed beta review and scoring, particularly for two of them. While most of the production exams are currently available and most of the beta scores have been mailed, they are not 100% completed. We expect all production exams to be available and all scores to be mailed tentatively by March 31, 2011. Beta candidates can expect to receive their printed score reports in the mail from Prometric. Please allow 5 business days from the 'date mailed' below to receive your score report. If you have not received it within 5 business days, please contact Prometric.

    CX-311-093 Java Platform, Enterprise Edition 6 Enterprise JavaBeans Developer Certified Expert Exam: production January 12, 2011; beta scores mailed February 4, 2011
    CX-311-094 Java Platform, Enterprise Edition 6 Java Persistence API Developer Certified Expert Exam: production February 1, 2011; beta scores mailed February 11, 2011
    CX-311-232 Java Platform, Enterprise Edition 6 Web Services Developer Certified Expert Exam: production February 8, 2011; beta scores mailed by March 31, 2011 (tentative*)
    CX-311-085 Java Platform, Enterprise Edition 6 JavaServer Pages and Servlet Developer Certified Expert Exam: production by March 31, 2011 (tentative*); beta scores mailed by March 31, 2011 (tentative*)

    *Dates are subject to change without notice. Register now at prometric.com/oracle.

    QUICK LINKS
    Help with beta exam score report
    Oracle Certified Professional, Java Platform, Enterprise Edition 6 JavaServer Pages and Servlet Developer
    Oracle Certified Professional, Java Platform, Enterprise Edition 6 Enterprise JavaBeans Developer
    Oracle Certified Professional, Java Platform, Enterprise Edition 6 Java Persistence API Developer
    Oracle Certified Professional, Java Platform, Enterprise Edition 6 Web Services Developer

    Read the article

  • Silverlight Cream for April 21, 2010 -- #843

    - by Dave Campbell
    In this issue: Alan Beasley, Roboblob, SilverLaw, Mike Snow, and Chris Koenig.

    Shoutouts: Ozymandias has a discussion up: The Three Pillars of Xbox Live on Windows Phone. John Papa announced that Silverlight 4 is now on WebPI: Get Silverlight 4 – Simplified! Dan Wahlin posted the code and material from DevConnections: Code from my DevConnections Talks and Workshop. Tim Heuer has a good deal posted from GoDaddy: Get a Silverlight XAP signing certificate for cheap thanks to GoDaddy.

    From SilverlightCream.com:

    ListBox Styling (Part2-ControlTemplate) in Expression Blend & Silverlight: Alan Beasley is back with part 2 of his ListBox styling tutorial adventure in Expression Blend... this looks like some of the stuff I was getting close to in Win32 a bunch of years back... great stuff... thanks Alan!

    Unit Testing Modal Dialogs in MVVM and Silverlight 4: Roboblob responds to some feedback with an expansion of his previous post with the addition of some unit testing.

    ChildWindowResizeBehavior - Silverlight 4 Blend 4 RC design time support: SilverLaw has a short post about a behavior he has available at the Expression Gallery that resizes a child window with the mouse wheel, and also has design-time support in Blend.

    Tip of the Day #111 – How to Configure your Silverlight App to run in Elevated Trust Mode: Mike Snow has his latest tip up, and this one is on both ends of the Elevated Trust Mode of OOB... how to set it, and what your user experience is like.

    WP7 Part 2 – Working with Data: Chris Koenig has part 2 of his WP7 exploration up... he's tackling Nerd Dinner and pulling down OData.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group.

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • VirtualBox

    - by DesigningCode
    I was wanting to play around with something in a VM the other day. I was curious what was available for free, if anything, for Windows. I quickly came across VirtualBox (http://www.virtualbox.org/). Downloaded, installed. No problem! It works really nicely. It was commercial software (by Sun, now Oracle) that turned open source. In terms of a license, it says:

        In summary, the VirtualBox PUEL allows you to use VirtualBox free of charge for personal use or, alternatively, for product evaluation.

    An interesting feature it has is built-in RDP, which is useful if you have a guest OS that doesn't support RDP. Speaking of RDP… which I will get to in my next blog post… I learned something REALLY useful the other day.

    Read the article

  • SilverlightShow for Jan 3-9, 2011

    - by Dave Campbell
    Check out the top five most popular news items at SilverlightShow for Jan 3-9, 2011. SilverlightShow's review of their top 10 visited articles in 2010 got the most hits last week. MicrosoftFeed's review of the Facebook application Picturize.me took second place. Among the top 5 is also an interesting review of the top Silverlight books for 2010 by Michael Crump. Here is SilverlightShow's weekly top 5:

    Top 10 SilverlightShow Articles for Year 2010
    Picturize.me - a Silverlight Based Facebook application
    Face detection in Windows Phone 7
    MVVM Navigation with MEF
    What is the best book on Silverlight 4

    Visit and bookmark SilverlightShow. Stay in the 'Light!

    Read the article

  • Leverage Location Data with Maps in Your Apps

    - by stephen.garth
    Free Webinar: "Add Maps to Your Java Applications - The Easy Way" Wednesday May 26 at 9:00am Pacific Time. Putting maps in your apps is a great way to put your apps on the map! Maps provide a location context that can trigger those "aha" moments leading to better business decisions. Tune in to this free webinar to find out how easy it is to leverage spatial and location data, much of which is already in your Oracle Database. NAVTEQ's Dan Abugov and Oracle's Shay Shmeltzer combine their considerable experience with Oracle Spatial and Java application development to demonstrate how you can quickly and easily add maps to your Java applications, leveraging Oracle Spatial 11g, Oracle Fusion Middleware MapViewer, Oracle JDeveloper and ADF 11g. Register here. Learn more about Oracle spatial and location technology.

    Read the article

  • ‘Unleash the Power of Oracle WebLogic 12c: Architect, Deploy, Monitor and Tune JEE6’: Free Hands On Technical Workshop

    - by JuergenKress
    Come to our workshop and get bootstrapped in the use of Oracle WebLogic 12c for high performance systems. The workshop, organised by Oracle Gold Partner C2B2 Consulting and run by Oracle Application Grid Certified Specialist Steve Milldge, will start with a simple WebLogic 12c system which will scale up to a distributed, reliable system designed to give zero downtime and support extreme throughput.

    When? Wednesday, 25th of July
    Where? Oracle Corporation UK Ltd. One South Place, London EC2M 2RB

    Visit www.c2b2.co.uk/weblogic and join us for this unique technical event to learn, network and play with some cool technology!

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki

    Technorati Tags: c2b2, ias to WebLogic, WebLogic basic, ias upgrade, C2B2, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Kicking yourself because you missed the Oracle OpenWorld and Oracle Develop Call for Papers?

    - by charlie.berger
    Here's a great opportunity! If you missed the Oracle OpenWorld and Oracle Develop Call for Papers, here is another opportunity to submit a paper to present. Submit a paper and ask your colleagues, the Oracle Mix community, friends and anyone else you know to vote for your session. As applications of data mining and predictive analytics are always interesting, your chances of getting accepted by votes are higher. Note: only Oracle Mix members are allowed to vote. Voting is open from the end of May through June 20. For the most part, the top-voted sessions will be selected for the program (although we may choose sessions in order to balance the content across the program). Please note that Oracle reserves the right to decline sessions that are not appropriate for the conference, such as subjects that are competitive in nature or sessions that cover outdated versions of products.

    Oracle OpenWorld and Oracle Develop Suggest-a-Session: https://mix.oracle.com/oow10/proposals
    FAQ: https://mix.oracle.com/oow10/faq

    Read the article

  • Database Version Support

    - by Lajos Sárecz
    A question that comes up often is how long each database version is supported. A year or two ago, Oracle introduced the Lifetime Support Policy, which in effect regulates the support levels across the entire lifecycle of all products. Accordingly, there are three support levels:
    - Premier Support: full support for 5 years from the release date.
    - Extended Support: 3 further years of support, at a higher price. It provides nearly the same service as Premier Support, except that it does not certify against other vendors' new products or new versions.
    - Sustaining Support: technical support for as long as the system is running. It no longer provides fixes for new bugs and does not certify against new Oracle products.

    Read the article

  • Do You Want "Normal?" Good luck!

    - by steve.diamond
    Much has been written about "The New Normal." One thing is for sure: whatever THAT is, economically speaking we won't be experiencing it anytime soon. Sure, we're well beyond the "no floor" perception of 18 months ago--which is certainly comforting--but ask any senior executive and they'll tell you of the constant rigor necessary to continually adapt to an ever-changing macro environment. This brings me to a suggestion that you tune in to a Deloitte webinar titled "The New Normal: Embrace Complexity or Seek to Simplify." It features the perspectives on this very topic of Jessica Blume, a principal at Deloitte, and Kirk Mosher, VP of CRM Marketing at Oracle.

    Read the article

  • 11gR2 Data Guard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy when databases get larger and larger these days is RMAN's capability to duplicate from an active database, and even to restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait 6 hours or so before all datafiles were transferred. At the end of the process an error occurred because of the syntax. While this error was easy to solve, I was afraid I would have to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command, with the following output:

        Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
        Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823

    Above I show only a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this:

        RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command.

    Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command. A small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! (See the P.S. below for a minimal sketch of the command itself.)

    Rene Kundersma
    Technical Architect Oracle Technology Services
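
    P.S. For readers who have not used active duplication before: the whole operation is driven by a single RMAN command, and restarting after a failure simply means re-running that same command. A minimal sketch (the connection strings and the NOFILENAMECHECK choice are illustrative assumptions, not taken from the post):

        # Hypothetical RMAN run block: duplicate a running primary to a
        # standby over the network; on failure, re-run the same command
        # and RMAN reuses the datafiles it already transferred.
        CONNECT TARGET sys@primdb
        CONNECT AUXILIARY sys@stbydb
        DUPLICATE TARGET DATABASE FOR STANDBY
          FROM ACTIVE DATABASE
          NOFILENAMECHECK;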

    Read the article

  • Copy TFS Build Definitions between Projects and Collections

    - by Jakob Ehn
    Originally posted on: http://geekswithblogs.net/jakob/archive/2014/06/05/copy-tfs-build-definitions-between-projects-and-collections.aspx

    Over the last couple of years it has become apparent that using multiple team projects in TFS is generally a bad idea. There are of course exceptions to this, but there are a lot of things that become much easier to do when you put all of your projects and teams in the same team project. Fellow ALM MVP Martin Hinshelwood has blogged about this several times, as have other people in the community. In particular, using the backlog and portfolio management tools makes much more sense when everything is located in the same team project. Consolidating multiple team projects into one is unfortunately not that easy; it involves migrating source code, work items, reports, etc. Another thing that also needs to be migrated is build definitions. It is possible to clone build definitions within the same team project using the TFS Power Tools. The Community TFS Build Manager also lets you clone build definitions to other team projects. But there is no tool that allows you to clone/copy a build definition to another collection. So, I whipped up a simple console application that lets you do this. The tool can be downloaded from https://onedrive.live.com/redir?resid=EE034C9F620CD58D!8162&authkey=!ACTr56v1QVowzuE&ithint=file%2c.zip

    Using CopyTFSBuildDefinitions

    You use the tool like this:

        CopyTFSBuildDefinitions SourceCollectionUrl SourceTeamProject BuildDefinitionName DestinationCollectionUrl DestinationTeamProject [NewDefinitionName]

    Arguments:
    SourceCollectionUrl: the URL to the TFS collection that contains the team project with the build definition that you want to copy
    SourceTeamProject: the name of the team project that contains the build definition
    BuildDefinitionName: name of the build definition
    DestinationCollectionUrl: the URL to the TFS collection that contains the team project that you want to copy your build definition to
    DestinationTeamProject: the name of the team project in the destination collection
    NewDefinitionName (optional): use this to override the name of the new build definition. If you don't specify this, the name will be the same as the original one.

    Example:

        CopyTFSBuildDefinitions https://jakob.visualstudio.com DemoProject WebApplication.CI https://anotheraccount.visualstudio.com

    Notes

    Since we are (potentially) creating a build definition in a new collection, there is no guarantee that the various paths that are defined in the build definition exist in the new collection. For example, a build definition refers to server paths in TFVC or repos + branches in TFGit. It also refers to build controllers that definitely don't exist in the new collection. So there will be some cleanup to do after you copy your build definitions. You can fix some of these using the Community TFS Build Manager; for example, it is very easy to apply the correct build controller to a set of build definitions.

    The problem stated above also applies to build process templates. However, the tool tries to find a build process template in the new team project with the same file name as the one that existed in the old team project. If it finds one, it will be used for the new build definition. Otherwise it will use the default build template.

    If you want to run the tool for many build definitions, you can use the following SQL script, compliments of Mr. Scrum/ALM MVP Richard Hundhausen, to generate the necessary commands:

        USE Tfs_Collection
        GO
        SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "'
               + REPLACE(BD.DefinitionName,'\','') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT'
        FROM tbl_Project P
             INNER JOIN tbl_BuildGroup BG ON BG.TeamProject = P.ProjectUri
             INNER JOIN tbl_BuildDefinition BD ON BD.GroupId = BG.GroupId
        ORDER BY P.ProjectName, BD.DefinitionName

    Hope that helps. Let me know if you have any problems with the tool or if you find it useful.

    Read the article

  • Silverlight Cream for May 22, 2010 -- #867

    - by Dave Campbell
    In this issue: Michael Washington, Xianzhong Zhu, Jim Lynn, Laurent Bugnion, and Kyle McClellan.

    A ton of shoutouts this time: Cigdem Patlak (CrocusGirl) is interviewed about Silverlight 4 on Channel 9: Silverlight discussion with Cigdem Patlak. Timmy Kokke has material up from a presentation he did, and check out the SilverAmp project he's got going: Code & Slides – SDE – What's new in Silverlight 4. Graham Odds at ScottLogic has an interesting post up: Contextual cues in user interface design. Einar Ingebrigtsen is discussing Balder licensing and is asking for input: Balder - Licensing. SilverLaw has updated two of his stylings at the Expression Gallery to Silverlight 4: ChildWindow and Accordion Styling Silverlight 4. Keep this page bookmarked -- it's the only page you'll need for Silverlight and Expression links... well, that and my blog :) ... from Adam Kinney: Silverlight and Expression Blend. Jeremy Boyd and John-Daniel Trask have some sweet-looking controls in their new release: Introducing Silverlight Elements 1.1. Matthias Shapiro entered the Design for America competition with his Recovery Review: A Silverlight Sunlight Foundation Visualization Project -- be sure to check out his blog post about it; there's a link at the bottom. Koen Zwikstra announced a new release: Document Toolkit 2 Beta 1, built for SL4 with lots of features -- check out the blog post.

    From SilverlightCream.com:

    Simple Example To Secure WCF Data Service OData Methods: Michael Washington has a follow-on tutorial up on WCF Data Security with OData -- essentially this is the 'securing the data' part; the Silverlight part was in the previous post. All code is available.

    Developing Freecell Game Using Silverlight 3 Part 1: Xianzhong Zhu has the first of a two-part tutorial up on building Freecell in Silverlight 3 ... yeah... SL3 -- oh, can you say WP7?? :)

    Silverlight Top Tip: Startup page for Navigation Apps: Jim Lynn has detailed how to go straight to a specific page you're working on in a complex Silverlight app, say for debug purposes, rather than page/page/page... I was just thinking yesterday about putting a shortcut on my taskbar for something similar in .NET :)

    Handling DataGrid.SelectedItems in an MVVM-friendly manner: Laurent Bugnion responded with code to a question about getting a DataGrid's SelectedItems into the ViewModel in MVVM Light. Demo code available too.

    RIA Services and Windows Live ID: Kyle McClellan has a post up discussing using Live ID with RIA Services and Silverlight. Lots of external links sprinkled around.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group.

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • AJAX Control Toolkit - Globalization (Language)

    - by Guilherme Cardoso
    For those who use AJAX Control Toolkit controls that present language-dependent data (the CalendarExtender with the names of the months and weeks, for example), you can change the language presented in a simple way. In Web.Config, let's define the primary culture as follows:

        <system.web>
          <globalization uiCulture="pt-pt" culture="pt-pt" />
          ...
        </system.web>

    In this example I'm using Portuguese. To finish, it is necessary to change our ScriptManager. Whether it is the AjaxToolKit's ToolkitScriptManager or the .NET Framework's own ScriptManager, the properties to set to true are EnableScriptGlobalization and EnableScriptLocalization:

        <cc1:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server"
            EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </cc1:ToolkitScriptManager>

    or

        <asp:ScriptManager ID="ScriptManager1" runat="server"
            EnableScriptGlobalization="true" EnableScriptLocalization="true">
        </asp:ScriptManager>

    This way we can use the AJAX Control Toolkit controls always in Portuguese. It is important to keep the AjaxToolKit updated to avoid missing or erroneous translations; apart from an error in the ModalPopup that I have not seen fixed in the latest version, all the controls that I have used are translated correctly.

    Read the article

  • Find the best OpenWorld sessions for learning about UX highlights

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience

    Have you clicked through the Oracle OpenWorld 2012 catalog? It’s amazingly dense, as usual. But one thing we noticed this year is that nearly half of the sessions mention some component of user experience, which is a sea change in our world. It means that more people understand, appreciate, and desire an effective user experience, and it also means that Oracle’s investment in its next-generation applications user experience, such as Oracle Fusion Applications, is increasingly apparent and interesting to its customers. So how do you choose the user experience sessions that make the most sense for you and your organization? Read our list to find out which sessions we think offer the most value for those interested in finding out more about the Oracle Applications user experience.

    If you’re interested in Oracle’s strategy for its user experience:
    CON9438: Oracle Fusion Applications: Transforming Insight into Action
    10:15 - 11:15 a.m. Tuesday, Oct. 2; Moscone West – 2007
    CON9467: Oracle’s Roadmap to a Simple, Modern User Experience
    3:30 - 4:30 p.m. Wednesday, Oct. 3; Moscone West - 3002/3004
    CON8718: Oracle Fusion Applications: Customizing and Extending with Oracle Composers
    11:15 a.m. - 12:15 p.m. Thursday, Oct. 4; Moscone West – 2008
    GEN9663: General Session: A Panel of Masterminds—Where Are Oracle Applications Headed?
    1:45 - 2:45 p.m. Monday, Oct. 1; Moscone North - Hall D

    If you’re interested in PeopleSoft/PeopleTools:
    GEN8928: General Session: PeopleSoft Update and Product Roadmap
    3:15 - 4:15 p.m. Monday, Oct. 1; Moscone West - 3002/3004
    CON9183: PeopleSoft PeopleTools Technology Roadmap
    4:45 - 5:45 p.m. Monday, Oct. 1; Moscone West - 3002/3004
    CON8932: New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business User
    5:00 - 6:00 p.m. Tuesday, Oct. 2; Moscone West – 3007

    If you’re interested in E-Business Suite:
    GEN8474: General Session: Oracle E-Business Suite—Strategy, Update, and Roadmap
    12:15 - 1:15 p.m. Monday, Oct. 1; Moscone West - 2002/2004
    CON9026: Latest Oracle E-Business Suite 12.1 User Interface and Usability Enhancements
    1:15 - 2:15 p.m. Tuesday, Oct. 2; Moscone West – 2016

    If you’re interested in Siebel:
    CON9700: Siebel CRM Overview, Strategy, and Roadmap
    12:15 - 1:15 p.m. Monday, Oct. 1; Moscone West – 2009
    CON9703: User Interface Innovations with the New Siebel “Open UI”
    10:15 - 11:15 a.m. Tuesday, Oct. 2; Moscone West – 2009

    If you’re interested in JD Edwards EnterpriseOne:
    HOL10452: JD Edwards EnterpriseOne 9.1 User Interface Changes
    10:15 - 11:15 a.m. Wednesday, Oct. 3; Marriott Marquis - Nob Hill AB
    CON9160: Showcase of the JD Edwards EnterpriseOne User Experience
    1:15 - 2:15 p.m. Wednesday, Oct. 3; InterContinental - Grand Ballroom B
    CON9159: Euphoria with the JD Edwards EnterpriseOne User Experience
    11:45 a.m. - 12:45 p.m. Wednesday, Oct. 3; InterContinental - Grand Ballroom B

    If you’re interested in Oracle Fusion Applications user experience design patterns: Functional design patterns that helped create the Oracle Fusion Applications user experience are now available. Learn more about these new, reusable usability solutions and best practices at the Oracle JDeveloper and Oracle ADF demopods during Oracle OpenWorld 2012. Or visit the OTN Lounge between 4:30 p.m. and 6 p.m. on Wednesday, Oct. 3, to talk to Ultan O'Broin from the Oracle Applications User Experience team.

    Demopod location: Moscone Center, South Exhibition Hall Level 1, S-207
    OTN (Oracle Technology Network) Lounge: Howard Street tent

    On the demogrounds: Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located.

    [Photo by Martin Taylor, Oracle Applications User Experience: A tour takes place in one of the usability labs at Oracle’s headquarters in Redwood Shores, Calif.]

    At our labs, on-site and at HQ: We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action.

    For more information on any OpenWorld sessions, check the content catalog, also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page.

    Read the article

  • BAM Data Control in multiple ADF Faces Components

    - by [email protected]
    As we know, Oracle BAM data control instance sharing is not supported. When two or more ADF Faces components must display the same data and are bound to the same Oracle BAM data control definition, we have to make sure that we wrap each ADF Faces component in an ADF task flow and set the Data Control Scope to isolated. This blog shows a small sample to demonstrate this. In this sample we will create a Pie graph and a Bar graph from the same BAM data control, such that both components use the same data control but have isolated scope. This sample can be downloaded from Sample1.zip.

    Set-up: Create a BAM data control using the Employees DO (sample).

    Steps:
    1. Right click on the View Controller project and select "New -> ADF Task Flow".
    2. Check "Create Bounded Task Flow", give the Task Flow (TF) a meaningful name (ex: EmpPieTF.xml), and click "OK".
    3. From the "Components Palette", drag and drop "View" into the task flow diagram. Give the view a meaningful name.
    4. Double click the view, and click "OK" in "Create New JSF Page Fragment".
    5. From "Data Controls", drag and drop "Employees -> Query" into this jsff page as "Graph -> Pie" (Pie: Sales_Number and Slices: Salesperson).
    6. Repeat steps 1 through 4 for another Task Flow (ex: EmpBarTF), then from "Data Controls" drag and drop "Employees -> Query" into its jsff page as "Graph -> Bar" (Bars: Sales_Number and X-axis: Salesperson).
    7. Open the Task Flow created in step 2. In the Structure Pane, right click on "Task Flow Definition - EmpPieTF".
    8. Click "Insert inside Task Flow Definition - EmpPieTF -> ADF Task Flow -> Data Control Scope" and click "OK".
    9. For the "Data Control Scope", in the Property Inspector -> General section, change the data control scope from Shared to Isolated.
    10. Repeat steps 7 through 9 for the second Task Flow.
    11. Create a new jspx page (ex: Main.jspx) and drag and drop both Task Flows ("EmpPieTF" and "EmpBarTF") onto it as regions. Surround them with panel components as needed.
    12. Run the page Main.jspx.

    Now when the page runs, although both components are created using the same data control, the bindings are not shared and each component gets a separate instance of the data control.
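    For reference, the bounded task flow metadata behind steps 8 and 9 looks roughly like the sketch below. This is an assumption of what JDeveloper generates rather than a copy of the sample's file -- the id, activity, and page names are taken from this sample, and the exact namespace and version attributes depend on your JDeveloper release -- but the key piece is the <data-control-scope> element with an <isolated/> child.

    <?xml version="1.0" encoding="UTF-8" ?>
    <!-- WEB-INF/EmpPieTF.xml (sketch): bounded task flow wrapping the pie view.
         <isolated/> keeps this region from sharing the BAM data control
         instance with other regions on the same page. -->
    <adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
      <task-flow-definition id="EmpPieTF">
        <default-activity>EmpPieView</default-activity>
        <data-control-scope>
          <isolated/>
        </data-control-scope>
        <view id="EmpPieView">
          <page>/EmpPieView.jsff</page>
        </view>
      </task-flow-definition>
    </adfc-config>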

    Read the article

  • Silverlight Cream for April 08, 2010 -- #834

    - by Dave Campbell
    In this Issue: Michael Washington, Phil Middlemiss, Yochay Kiriaty, Giorgetti Alessandro, Mike Snow, John Papa, SilverLaw, smartyP, and Pete Brown.

    Shoutouts:

    Steve Wortham sent me a link to his RegEx tool that is written in Silverlight... definitely worth a look: Introducing Code Hinting for Regular Expressions

    Joshua Blake posted his MIX10 materials: MIX10 NUI session sample code

    From SilverlightCream.com:

    Silverlight MVVM: An (Overly) Simplified Explanation
    Michael Washington has a tutorial up for getting your arms (and head) around MVVM and Silverlight, and Blend too.

    A Chrome and Glass Theme - Part 3
    Phil Middlemiss has part 3 up of his tutorial series on building an awesome theme for Silverlight... he's styling the textbox and checkbox this time around, and improving the button too.

    Automatic Rotation Support or Automatic Multi-Orientation Layout Support for Windows Phone
    Yochay Kiriaty is giving up some WP7 goodness with his post on Multi-Orientation Layout Support... yeah, I had to say it twice myself :) Good links and all the code in addition to the good blog post.

    Silverlight Navigation Framework: resolve the pages using an IoC container
    Giorgetti Alessandro has some pretty cool code up as a proof of concept of using an IoC container with the Navigation Framework of Silverlight 4.

    Silverlight Tip of the Day No. 109 – Attach to Process Debugging
    Mike Snow is back doing Tips of the Day... and number 109 shows how to attach the debugger to a running Silverlight app.

    Silverlight TV 20: Community Driven Development with WCF RIA Services
    In his latest Silverlight TV episode, John Papa talks with Jeff Handley about RIA Services, and how feedback from the community helped shape the product.

    ChildWindowMouseScrollResizeBehavior - Silverlight 3
    SilverLaw has a new Behavior up at the Expression Gallery that gives you resizing on a ChildWindow using the mouse wheel.

    Creating a Windows Phone 7 Metro Style Pivot Application [Part 3]
    smartyP has the 3rd and final episode of his WP7 Pivot series up, and this one includes not only the source but a video tutorial.

    Layout Rounding
    Pete Brown talks about Layout Rounding, and it has nothing to do with rounding corners... it has to do with rounding off where your objects get placed pixel-wise... I've blogged about this anti-aliasing-like effect more than once... Pete has the real answer.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Enterprise Data Quality - New and Improved on Oracle Technology Network

    - by Mala Narasimharajan
    Looking for Enterprise Data Quality technical and developer resources for your projects? Wondering where the best place is to find the latest documentation, downloads, and even code samples and libraries? Check out the new and improved Oracle Technology Network pages for Oracle Enterprise Data Quality. This section also features developer forums for EDQ and Master Data Management, so that you can connect with other technical professionals who have raised issues or posted tips and tricks, and learn from them. Here are the links to bookmark:

    Oracle Technology Network website
    * NEW * Installation Guide for Enterprise Data Quality Address Verification
    Enterprise Data Quality Forum

    For more information on Oracle's software offerings for data quality and master data management, visit:
    http://www.oracle.com/us/products/applications/master-data-management/index.html
    http://www.oracle.com/us/products/middleware/data-integration/enterprise-data-quality/overview/index.html

    Read the article

  • Promoting Organizational Visibility for SOA and SOA Governance Initiatives – Part I by Manuel Rosa and André Sampaio

    - by JuergenKress
    The costs of technology assets can become significant, and the need to centralize, monitor, and control the contribution of each technology asset becomes a paramount responsibility for many organizations. Through the implementation of various mechanisms, it is possible to obtain a holistic vision and develop synergies between different assets, empowering their re-utilization and analyzing the impact on the organization caused by IT changes.

    When the SOA domain is considered, the issue of governance should therefore always come into play. Although SOA governance is mandatory to achieve any measure of SOA success, its value still goes unnoticed in most organizations, mostly due to the lack of visibility and the detached view of the SOA initiatives. There are a number of problems that jeopardize the visibility of these initiatives:

    Understanding and measuring the value of SOA governance and its contribution – SOA governance tools are too technical and isolated from other systems. They are inadequate for anyone outside of the domain (Business Analysts, Project Managers, or even some Enterprise Architects), and are especially harsh at the CxO level.

    Lack of information exchange with the business, other operational areas, and project management – It is not only a matter of lack of dialog but also the question of using a common vocabulary (textual or graphic) that is adequate for all the stakeholders. We need to generate information that is useful for a wider range of stakeholders, such as business and enterprise architects.

    In this article we describe how an organization can leverage existing best practices and, with the help of adequate exploration and communication tools, achieve and maintain the level of quality and visibility that is required for SOA and SOA governance initiatives.

    Introduction
    Understanding and implementing effective SOA governance has become a corporate imperative in order to ensure coherence and the attainment of the basic objectives of SOA initiatives:
    develop the correct services
    control costs and risks bound to the development process
    reduce time-to-market

    Read the full article here.

    SOA & BPM Partner Community – For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA Governance,Link Consulting,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

< Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >