Search Results

Search found 21914 results on 877 pages for 'core services'.


  • What is meant by "Repeat Business" ?

    - by vinoth
    Repeat Business obviously happens because the company has a great product or a great service. In the software industry, do companies make the code base complex enough so that the maintenance comes back to them? I have heard of cases where companies say "ya this code base has minor errors, let's ship them anyway, and let the customers come back for another change request on these". Then they would sometimes charge the customer for that. This question is specific to the software services industry. Do these things happen in the real world? I am trying to understand the business process.

    Read the article

  • How Curiosity Took Its Self Portrait [Video]

    - by Jason Fitzpatrick
    There was enough confusion among the public as to how exactly the Curiosity Rover was able to photograph itself without the camera arm intruding into the photo that NASA released this video detailing the process. For those readers familiar with photograph blending and stitching using multiple photo sources, this should come as no surprise. For the unfamiliar, it’s an interesting look at how dozens of photos can be blended together so effectively that the arm–robotic or otherwise–of the photographer can be taken right out. Hit up the link below to read more about how NASA practiced on Earth for the shot and to see a high-res copy of the actual self portrait. Mars Rover Self-Portrait Shoot Uses Arm Choreography [NASA]

    Read the article

  • SQL Server in the Evening - 19th Jan in Frimley, Surrey

    - by JustinL
    Just a short note to mention, Gavin Payne (blog and twitter) is organising an event shortly in Frimley, Surrey - SQL Server in the Evening. The agenda focuses on Infrastructure DBAs, with the following sessions planned:
    Getting the most for SQL Server from VMware – VMware Sales Engineer
    SQL Server Transparent Data Encryption – Gavin Payne, Solution Architect, Attenda
    Understanding where cloud services really fit within your data centre – Matt Mould, Advisory Practice Consultant, EMC Consulting
    If it sounds like it might float your boat and/or you fancy meeting some fellow SQL Server DBAs, it's free to register here: http://www.eventbrite.com/event/1125559579
    Regards, Justin Langford - Coeo Ltd
    SQL Server Consultants | SQL Server Remote DBA

    Read the article

  • Looking for WAMP Benchmarking (my current WAMP is very slow, so are other solutions)

    - by therobyouknow
    I'm running ZWAMP WAMP stack on my local development machine. However I have found it to be very slow at serving pages from a Drupal site I have setup. By contrast, my live production site on shared hosting is reasonably quick. For me the goal with a local WAMP stack was to develop offline and send completed work to the live production site. I liked ZWAMP because it didn't require adjustments to User Access Control or other permissions. I've looked at Drupal Acquia Development Stack but found this too restrictive: only one site instance/doc root can be installed. I've looked at other DAMP stacks and heard reports of them being slow. My local development machine that I am running the WAMP stack on is a Dual Core 2.6Ghz hyperthreaded Intel i7, 4Gb RAM, 7200rpm hard disk, running Windows 64bit professional. Surely this is fast enough. So I'm looking for: Causes of the slowness of the WAMP and how to improve the speed Benchmark data of various WAMP stacks
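    If you want numbers rather than impressions, a rough benchmarking sketch like the one below (plain Python, standard library only) can time repeated full-page fetches against both the local ZWAMP site and the production site. The URLs are placeholders to replace with your own.

        # Crude page-load benchmark: fetch each URL several times and report
        # the best and average wall-clock time. Replace the URLs with your
        # local Drupal site and your production site.
        import time
        import urllib.request

        def time_fetch(url, runs=10):
            timings = []
            for _ in range(runs):
                start = time.perf_counter()
                with urllib.request.urlopen(url) as resp:
                    resp.read()
                timings.append(time.perf_counter() - start)
            return min(timings), sum(timings) / len(timings)

        for url in ("http://localhost/", "http://www.example.com/"):
            best, avg = time_fetch(url)
            print(f"{url}: best {best:.3f}s, average {avg:.3f}s")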

    Read the article

  • Exam 71-516: Accessing Data with Microsoft .NET Framework 4

    - by Ricardo Peres
    I had the chance to take the beta version of exam 71-516 today. Here are my thoughts on it: first, I was rather annoyed to discover that I will only know whether I passed about 8 weeks after the beta period expires (July 02), which probably means September. It was a difficult exam, especially since I don't have any practice with some of the new Entity Framework options. The items covered, from most covered to least covered, were:
    Entity Framework (50-50 for POCO/Non-POCO)
    LINQ to SQL
    WCF Data Services
    Classic ADO.NET (DataSets, DataTables, DataAdapters, TableAdapters, Connections and Commands)
    LINQ to XML
    Sync Framework (surprise!)
    All added up, I think it was a difficult exam. My advice is that you practice a lot! I will post the result as soon as I know it.

    Read the article

  • Chrome Apps Office Hours: Networking APIs

    Ask and vote for questions at goo.gl Writing a web server in a web browser is a little meta, but it's one of the many new scenarios unlocked by the new APIs available in Chrome Apps! Join Paul Kinlan and Pete LePage as they explore the networking APIs available to Chrome Apps. They'll show you how you can write your own web server, and connect to other devices via the network. While the web server in a browser may not be a common scenario, accessing the TCP and UDP stack is extremely powerful and will allow your apps to connect to other hardware or services like never before. From: GoogleDevelopers
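    The snippet below is not the Chrome Apps networking API itself; it is a minimal Python sketch of the kind of raw TCP access the video describes, shown as a tiny one-shot echo server so the "server inside an app" idea is concrete. The address and port are arbitrary.

        # Minimal TCP echo server: accept one connection and echo what it sends.
        # Illustrative only -- Chrome Apps expose similar functionality through
        # their own socket APIs rather than Python's socket module.
        import socket

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", 8080))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(data)  # send the received bytes straight back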

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get wrong and never know it, because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first — a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code, and it should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. And by definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
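    As a rough illustration of what that second validation script has to do (this is not the actual perl tooling from the post), the Python sketch below uses pyelftools to pull the globally bound, defined symbols out of a real object and its stub and flag any name, type, or size mismatch. A real validator would also have to cover ELF versioning, aliasing, and bss placement; the paths and library names are illustrative.

        # Compare the exported dynamic symbols of a real shared object and its
        # stub. Assumes pyelftools is installed; paths are placeholders.
        from elftools.elf.elffile import ELFFile

        def exported_symbols(path):
            """Return {name: (type, size)} for defined global symbols in .dynsym."""
            with open(path, "rb") as f:
                dynsym = ELFFile(f).get_section_by_name(".dynsym")
                syms = {}
                for sym in dynsym.iter_symbols():
                    info = sym["st_info"]
                    # Keep globally bound symbols that are actually defined here.
                    if info["bind"] == "STB_GLOBAL" and sym["st_shndx"] != "SHN_UNDEF":
                        syms[sym.name] = (info["type"], sym["st_size"])
                return syms

        real = exported_symbols("libc.so.1")
        stub = exported_symbols("stubs/libc.so.1")
        for name in sorted(set(real) | set(stub)):
            if real.get(name) != stub.get(name):
                print(f"mismatch: {name}: real={real.get(name)} stub={stub.get(name)}")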
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:
    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:
    6916796 OSnet mapfiles should use version 2 link-editor syntax
    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...
        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with
    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects
    followed by the work to convert the ON consolidation in snv_161 (February 2011) with
    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced
    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • ETPM/OUAF 2.3.1 Framework Overview - Session 2

    - by Rick Finley
    A number of sessions are planned to review the ETPM (OUAF) 2.3.1 Framework. These sessions will include an overview of the Navigation, Portals, Zones, Business Objects, Business Services, Algorithms, Scripts, etc. Session 2 includes a more in-depth discussion of Business Objects (BO), specifically covering BO Schema, BO Options, and BO inheritance. Click on the link below for Session 2 (52 minutes).
    To stream the recording: https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=70624122&rKey=8a16e59ed3736f1c
    To download the recording: https://oracletalk.webex.com/oracletalk/lsr.php?AT=dw&SP=MC&rID=70624122&rKey=140a83f63b8fa22a
    For additional questions, please contact [email protected].

    Read the article

  • Corrupt Plymouth boot screen when using proprietary Nvidia drivers

    - by Tejas Hosangadi
    I recently got the final version of Ubuntu Natty Narwhal x64 and everything is working just fine. The only thing I am having trouble with is the boot screen. After I installed the proprietary nVidia drivers, the boot screen becomes corrupt and only shows text (as it has been doing since Ubuntu 10.04). When I followed the instructions to fix the boot screen in 10.04 and 10.10, the boot screen still remains the same and the problem isn't resolved. Please give a full guide on how to fix this problem.
    Machine Details
    Computer: HP Pavilion p6240in
    Processor: Core 2 duo
    Memory: 4096 megabytes
    Graphics: nVidia G210 3d graphics
    Monitor: Dell 17 inch lcd
    Another question I would like to ask is whether it is possible to use this script (http://www.webupd8.org/2010/10/script-to-fix-ubuntu-plymouth-for.html) with 11.04. I am asking this question because it says that the script works only with 10.04 and 10.10.

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies. When I had a few particularly challenging honors courses in college, my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top IT issue among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs--whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon:
    Underwriting and service desktops
    New business automation
    Convergence of ultra-configurable and domain content-rich systems
    Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition.
    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:
    Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    Deliver scalability, efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system--before the economic downturn occurred--has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, with only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article

  • Is SOA really dead?

    - by Ahsan Alam
    I have come across many articles and blogs where authors have strongly hailed the death of Service Oriented Architecture (SOA). I could almost hear the laughter pouring out of their writings. Being a big supporter of SOA, I have found myself wondering – have I been following the wrong path all along? Do I need to change the way I think? Then I started to look around. Many newer technologies and concepts have evolved in the past few years. People are starting to take advantage of cloud computing, SaaS (Software as a Service), multitudes of on-demand platforms and many more. Now I started thinking – is SOA really dead? In order to effectively utilize these newer concepts, I believe we need SOA more than ever, because it gives us loose coupling. People often forget that the key principle behind SOA is loose coupling. We cannot achieve SOA just by throwing services around (WCF, Web Services); we need loosely coupled systems.
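    As a toy sketch of the loose coupling being argued for (the service and class names here are made up for illustration): the consumer depends only on a small contract, so either backend can be swapped in without touching the consumer.

        # Loose coupling via an explicit contract: checkout() never names a
        # concrete implementation, so swapping backends is a wiring decision.
        from abc import ABC, abstractmethod

        class PaymentService(ABC):
            @abstractmethod
            def charge(self, account_id: str, amount: float) -> bool: ...

        class LegacyPaymentService(PaymentService):
            def charge(self, account_id, amount):
                print(f"legacy system charging {amount} to {account_id}")
                return True

        class CloudPaymentService(PaymentService):
            def charge(self, account_id, amount):
                print(f"SaaS provider charging {amount} to {account_id}")
                return True

        def checkout(service: PaymentService, account_id: str, total: float) -> bool:
            return service.charge(account_id, total)

        checkout(CloudPaymentService(), "A-1001", 49.95)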

    Read the article

  • Portal 11.1.1.4 Certified with E-Business Suite

    - by Steven Chan
    Oracle Portal 11g allows you to build, deploy, and manage enterprise portals running on Oracle WebLogic Server. Oracle Portal 11g includes integration with Oracle WebCenter Services 11g and BPEL, and support for the open portlet standards JSR 168, WSRP 2.0, and JSR 301.
    Portal 11g (11.1.1.4) is now certified with Oracle E-Business Suite. Portal 11.1.1.4 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.4.0, also known as FMW 11g Patchset 3. Certified E-Business Suite releases are:
    EBS Release 11i 11.5.10.2 + ATG RUP 7 and higher
    EBS Release 12.0.6 and higher
    EBS Release 12.1.1 and higher
    If you're running a previous version of Portal, there are a number of certified and supported upgrade paths to Portal 11g (11.1.1.4):

    Read the article

  • Codeur.com: more than 550 open calls for tender right now on the leading marketplace for freelance service providers in France

    Codeur.com: more than 550 open calls for tender right now on the leading marketplace for freelance service providers in France. Codeur.com is the leading marketplace for freelance providers in France. It counts 26,189 registered providers and more than 21,893 calls for tender published since its creation in 2006. As its name suggests, Codeur lets you find, or offer, services as a developer, but also as a webmaster, graphic designer, SEO specialist, translator, managed-hosting provider, copywriter, community manager or web marketer. Every web profession is represented. Since last year, the site has been free for project owners and very affordable for pro...

    Read the article

  • Upgrading in java web development

    - by Vladimir Ivanov
    I've been a Java web developer for nearly 3 years. I'm always trying to learn more and get better, but I still feel that my knowledge is not as good as I want it to be. In some areas it still seems unsystematic and doesn't provide a strong enough base to solve problems as well as I would like. The example I have is my senior developer, whose solutions are always more efficient and beautiful. So the question is simple and hard at the same time: what is the right way to make my knowledge more systematic and therefore improve its quality? I understand that there is no practically good answer for all of Java programming, so let's focus on modern Java web (or nearly web) technologies: JSF 2.0, JPA 2 and Hibernate as the persistence provider, web services, and Java SE as a core. What methodologies, books or learning techniques lead to a strong knowledge base within the given area?

    Read the article

  • Ubuntu Server 11.04 installs fine then gets stuck on USBHID

    - by SZetta
    I have installed Server 11.04 many, many times from this disc and it has always worked perfectly (installed and operated fine). I wanted to reinstall my server again on the same hardware, so I threw the disc in and installed, but once it finished installing the boot loader and ejected the disc, it went unresponsive. I restarted it and it seemed to load through GRUB perfectly, but then it either says that an IPv6 router is unavailable (even though I have it hardwired to my network) or it says "usbhid: USB HID core driver" and goes unresponsive there. I am very confused as this never happened before with this install. I want to see if this is just me or not before I go ahead and download the newest version of Server. Any advice would be appreciated. Thanks

    Read the article

  • VirtualBox 4.0 Rocks Extensions and a Simplified GUI

    - by Jason Fitzpatrick
    If you’re a fan of VirtualBox you’ll definitely want to grab the new 4.0 update; it comes packed with an extension manager, a fresh and user-friendly GUI, live virtual machine previews, and more. Check out our screenshot tour for a closer look.

    Read the article

  • Tech-Ed 2012 North America – meet me there!

    - by Nikita Polyakov
    I’m excited to be near home in Orlando, FL next week for Tech-Ed 2012 NA. Here are the times you can come chat with me about Windows Phone topics at the TLC stations in the main expo hall:
    Monday 7:00pm - 9:00pm
    Tuesday 12:30pm - 3:30pm
    Wednesday 12:30pm - 3:30pm
    Thursday 10:30am - 12:30pm
    This year I will be attending many new sessions on topics such as System Center 2012 and SharePoint. These topics are exciting and useful for me as part of my new role at Tribridge; in fact, this trip is sponsored by Concerto Cloud Services, where I am a Cloud Applications Engineer. Also, grab the Nikita application; it should be updated just in time so you can have this schedule in your pocket :)

    Read the article

  • How to cancel Update Manager downloading flashplugin-installer?

    - by kev
    Update Manager has frozen for about 60 minutes while downloading flashplugin-installer, and the Cancel button of the Applying Changes dialog is disabled. When I click the top-left x, it doesn't respond. How do I cancel downloading flashplugin-installer?
    ...SKIP...
    Setting up libxatracker1 (8.0.2-0ubuntu3.1) ...
    Setting up update-manager-core (1:0.156.14.5) ...
    Setting up update-manager (1:0.156.14.5) ...
    Setting up update-notifier-common (0.119ubuntu8.4) ...
    flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz

    Read the article

  • DVI+HDMI out, at startup only HDMI is available

    - by Alasjo
    I have my computer next to my HDTV. The main screen is connected via DVI while the TV is connected via HDMI. If I start the computer without the HDMI plugged in, everything is OK: I see the login screen and sound is output through analog out. But if the HDMI is plugged in before I start the computer, only the TV gets an image (the login screen); the main screen is black, or at times purple, and even after login the main screen stays black. Sound is also still output through analog out. I'm not sure whether it's a hardware issue, an Ubuntu issue, or a combined hardware/Ubuntu compatibility issue (Sandy Bridge). This is my setup:
    Ubuntu 11.10 Oneiric Ocelot (64bit)
    ASUS P8H67-M LE
    Intel Core i3-2100
    I don't have any custom video settings, and my main screen is recognized properly when HDMI is not plugged in at startup. Cheers.

    Read the article

  • state: pending & public-address: null :: Juju setup on local machine

    - by Danny Kopping
    I've set up Juju on a local VM inside VMware running on Mac OS X. Everything seems to be working fine, except when I deployed MySQL & WordPress from the examples, I get the following when I run juju status:

        danny@ubuntu:~$ juju status
        machines:
          0:
            dns-name: localhost
            instance-id: local
            instance-state: running
            state: down
        services:
          mysql:
            charm: local:oneiric/mysql-11
            relations:
              db: wordpress
            units:
              mysql/0:
                machine: 0
                public-address: 192.168.122.107
                relations:
                  db:
                    state: up
                state: started
          wordpress:
            charm: local:oneiric/wordpress-31
            exposed: true
            relations:
              db: mysql
            units:
              wordpress/0:
                machine: 0
                open-ports: []
                public-address: null
                relations: {}
                state: pending

    Note the "state: pending" and "public-address: null". Can't find any documentation relating to this issue. Any help very much appreciated - wonderful idea for a project!

    Read the article

  • Temporarily share/deploy a python (flask) application

    - by Jeff
    Goal
    Temporarily (1 month?) deploy/share a python (flask) web app without expensive/complex hosting.
    More info
    I've developed a basic mobile web app for the non-profit I work for. It's written in python and uses flask as its framework. I'd like to share this with other employees and beta testers (<25 people). Ideally, I could get some sort of simple hosting space/service and push regular updates to it while we test and iterate on this app. Think something along the lines of dropbox, which of course would not work for this purpose. We do have a website, and hosting services for it, but I'm concerned about using this resource as our website is mission critical and this app is very much pre-alpha at this point.
    Options I've researched / considered
    Self host from local machine/network (slow, unreliable)
    Purchase hosting space (with limited non-profit resources, I'm concerned this is overkill)
    Using our current web server / hosting (not appropriate for testing)
    Thanks very much for your time!
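    For reference, a minimal Flask sketch of the kind of app in question is below; running it with host set to 0.0.0.0 makes Flask's built-in development server reachable by testers on the same network, which is the quick-and-dirty form of the self-hosting option listed above (workable for a handful of beta testers, not for anything mission critical).

        # Minimal Flask app served to the local network with the built-in
        # development server. Suitable only for a small group of testers.
        from flask import Flask

        app = Flask(__name__)

        @app.route("/")
        def index():
            return "Hello, beta testers!"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=5000)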

    Read the article

  • GPL code allowing non-GPL local copies of nondistributed code

    - by Jason Posit
    I have come across a book that claims that alterations and augmentations to GPL works can be kept closed-source as long as they are not redistributed into the wild. Therefore, customizations of websites deriving from GPL packages need not be released under the GPL, and developers can earn a profit on them by offering their services to their clients while keeping their GPL-based code closed source at the same time (cf. Chapter 17 of WordPress Plugin Development by Wrox Press). I had never realized this, but essentially, by putting restrictions only on code that is redistributed, the GPL says nothing about what can and cannot be done with code that is kept private, in terms of the licensing model. Have I understood this correctly?

    Read the article

  • Boston: Free Java Developer Event March 3rd!

    - by Jacob Lehrbaum
    Attention Boston area developers! Oracle has been running a series of free one-day Java Developer events in the US, Europe, and Asia since last November, and on March 3rd this highly popular series is coming to the Westin Copley Place in Boston. The Java Developer Day will include four tracks of sessions and hands-on labs designed for developers interested in Server, Desktop, Embedded, and core Java SE platform topics. Technologies covered include Java EE, Java ME and Java SE (including the JDK). From the event page:
    Come to this free event if you are interested in:
    Evaluating the Java platform
    Using other languages on the JVM
    Building server side Java
    Constructing Rich Web or Desktop Applications
    Understanding the JVM and its built in diagnostics
    Making Smart Devices even smarter
    Check out the event page to read more and/or register. The event is free, but space is limited so register today!

    Read the article

  • How to reduce the tearing while watching videos?

    - by Leonardo TM
    I'm using Ubuntu 10.10 with both the VLC player and MPlayer, and I have already installed the ATI drivers. I watched the same videos on Windows 7 and Ubuntu, but on Ubuntu the images have a lot of tearing. I tried some newbie-level tweaks in my ATI config tool, but nothing changed. I tried videos in mkv, avi, rmvb... and in all kinds of resolutions. I would love to see some tips or maybe a solution to this problem. (I have searched for similar questions but I didn't find any :/ ) English is not my first language, so sorry for my mistakes. Thanks in advance! []'s Leonardo
    My HW Config (Acer Aspire 5740G-6979)
    - ATI Mobility Radeon 5650
    - Intel Core i5-430M
    - 4GB RAM

    Read the article

  • Oracle Database Appliance Now Certified by SAP

    - by Bandari Huang
    All SAP products based on SAP NetWeaver 7.x that are also certified for Oracle Database 11g Release 2 (single node or RAC) can now be used with the Oracle Database Appliance. Notes:
    RAC One Node is NOT supported.
    Only three-tier SAP installations are supported: only the Oracle database can run on the Oracle Database Appliance. No SAP instance can be deployed on the Oracle Database Appliance; SAP instances have to run on different middle-tier machines of any hardware architecture and operating system.
    Central Services (ASCS and/or SCS) can be configured to run on the Oracle Database Appliance for Unicode installations of SAP.
    SAP BR*Tools support is now available for the Oracle Database Appliance.
    For more information about SAP on ODA, please refer to:
    Using SAP NetWeaver with the Oracle Database Appliance (New Nov 2012)
    Note 1760737 - SAP Software and Oracle Database Appliance (ODA)
    Note 1785353 - ODA 11.2.0: Patches for 11.2.0.3

    Read the article
