Search Results

Search found 32439 results on 1298 pages for 'oracle service'.

Page 483/1298

  • Range partition skip check

    - by user289429
    We have a large amount of data in Oracle, range-partitioned on a year column so that each partition holds data for exactly one year. When we write a query targeting a specific year, Oracle prunes down to that partition but still checks that the year matches the value we specified. Since the year column is not part of the index, Oracle fetches it from the table to do the comparison, and we have seen that any query which has to visit the table data in this way becomes very slow. Can we somehow stop Oracle from comparing the year values, given that we know for sure each partition contains data for only one year?

    Read the article

  • How much data can we store in a file in Salesforce?

    - by Ritesh Mehandiratta
    I searched a little for the file size limit in Salesforce and found this link: http://help.salesforce.com/HTViewHelpDoc?id=collab_files_size_limits.htm&language=en_US. It shows that a file can be up to 2 GB. I have to store IDs in a text file and want this to scale to roughly one million records, which works out to a file size of about 15 MB. Can anyone point me to a good tutorial on how to create such files and use them in Apex to retrieve and update data?

    Read the article

  • Unit testing installation of services

    - by skiphoppy
    Our installer program is going to be installing a number of system services, under both Windows and UNIX, using JavaServiceWrapper. There will be a class responsible for creating JavaServiceWrapper config files, installing the services, etc. Can I have some suggestions on how to unit-test this class?
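
    One way to make such a class testable - a sketch only, with hypothetical names (WrapperConfigurator, ServiceInstaller) that are not from the question - is to have it write its JavaServiceWrapper config files into a directory you pass in, and to hide the actual OS-level service registration behind a small interface that tests can replace with a recording fake. The config-file generation then becomes ordinary, fast, unit-testable file I/O, and the platform-specific install step can be verified separately or in an integration test:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;
        import org.junit.Test;

        public class WrapperConfiguratorTest {

            // Hypothetical seam: the piece that really registers the service with the OS.
            // In production it would invoke the wrapper binary; tests use a recording fake.
            interface ServiceInstaller {
                void install(Path configFile);
            }

            // Hypothetical class under test: writes a wrapper config, then delegates installation.
            static class WrapperConfigurator {
                private final ServiceInstaller installer;
                WrapperConfigurator(ServiceInstaller installer) { this.installer = installer; }

                Path configureAndInstall(Path dir, String serviceName, String mainClass) throws IOException {
                    Path config = dir.resolve(serviceName + ".conf");
                    Files.write(config, List.of(
                            "wrapper.ntservice.name=" + serviceName,
                            "wrapper.java.mainclass=" + mainClass));
                    installer.install(config);
                    return config;
                }
            }

            @Test
            public void writesConfigAndDelegatesInstall() throws IOException {
                Path tempDir = Files.createTempDirectory("wrapper-test");
                final Path[] installed = new Path[1];
                WrapperConfigurator configurator = new WrapperConfigurator(path -> installed[0] = path);

                Path config = configurator.configureAndInstall(tempDir, "my-service", "com.example.Main");

                assertTrue(Files.readString(config).contains("wrapper.java.mainclass=com.example.Main"));
                assertEquals(config, installed[0]);
            }
        }

    The same pattern lets the Windows versus UNIX differences be covered by asserting on the generated file contents per platform, without ever touching a real service registry.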

    Read the article

  • How to index a date column with null values?

    - by Heinz Z.
    How should I index a date column when some rows have null values? We have to select rows within a date range as well as rows where the date is null. We use Oracle 9.2 and higher. The options I have found:
    1. a bitmap index on the date column,
    2. an index on the date column plus an index on a state field whose value is 1 when the date is null,
    3. an index on the date column combined with another column that is guaranteed to be not null.
    My thoughts on these options: (1) the column has too many different values to use a bitmap index; (2) I would have to add a field just for this purpose and change the query whenever I want to retrieve the null-date rows; (3) it looks tricky to add a column to an index when that column is not really needed. What is the best practice for this case? Thanks in advance. Some material I have read: Oracle Date Index; When does Oracle index null column values?

    Read the article

  • Ideas Needed for a Base Code System

    - by Tegan Snyder
    I've developed a PHP web application that is currently in need of a strategic restructuring. Currently when we set up new clients we give them the entire code base on a subdomain of our main domain and create a new table for them in the database. This means each client has the entire codebase, so whenever we fix bugs we have to go back and apply the changes independently for every client, which is a pain. What I'd like to create is a base code server that holds all the core PHP files: base.domain.com. Then all of our clients (client.domain.com) would only need a few files: config.php would have the database connection information, and index.php would display the login box if no session exists, otherwise loading the baseline code via remote includes from base.domain.com. My question is: does my logic seem feasible? How do other people handle similar situations with a shared base code? Also, is it even possible to remotely include PHP files from base.domain.com and include them in client.domain.com? Thanks, Tegan

    Read the article

  • Java library suggestions for implementing a custom web server

    - by dexter
    I would like to create a web server that will serve/accept JSON files through REST. The JSON being served will come from a database query, with the results formatted as JSON. Any suggestions for a good Java library? I have tried using Apache HttpComponents. I could perhaps just write a servlet, but I am not really allowed to install a servlet container on the server machine.
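
    For what it is worth, the JDK itself ships a small embedded HTTP server (com.sun.net.httpserver, available since Java 6) that needs no servlet container. A minimal sketch of a JSON endpoint with it might look like the following - the path, port and hard-coded payload are placeholders, and in practice the handler would run the database query and serialize the results:

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;

        public class JsonServer {
            public static void main(String[] args) throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

                // GET /items returns a JSON document; replace the literal with the
                // serialized result of the database query.
                server.createContext("/items", exchange -> {
                    byte[] body = "{\"items\":[]}".getBytes(StandardCharsets.UTF_8);
                    exchange.getResponseHeaders().add("Content-Type", "application/json");
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(body);
                    }
                });

                server.setExecutor(null); // default executor; a thread pool can be supplied instead
                server.start();
            }
        }

    Embedded alternatives such as Jetty, or Jersey running on its bundled Grizzly container, would likewise avoid installing a separate servlet container.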

    Read the article

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well - can BITS tell the difference? If I was looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?
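
    BITS's exact algorithm is not documented here and I cannot speak for it authoritatively, and the poster is working against .NET/Win32, but the half of the problem that is fully under your control - capping your own transfer rate to whatever budget you decide is "under the radar" - is a plain token-bucket throttle. A rough Java sketch of that building block follows; the sensing side (measuring what other programs are using) would still need OS facilities such as Windows network performance counters and is not shown:

        // Token-bucket throttle: the budget refills continuously at `bytesPerSecond`;
        // the sender calls acquire() before each write and blocks until enough budget
        // is available. Lowering the rate at runtime (setRate) is how an adaptive
        // policy would back off when it detects competing traffic.
        public class BandwidthThrottle {
            private volatile long bytesPerSecond;
            private double availableBytes = 0;
            private long lastRefillNanos = System.nanoTime();

            public BandwidthThrottle(long bytesPerSecond) {
                this.bytesPerSecond = bytesPerSecond;
            }

            public void setRate(long bytesPerSecond) {
                this.bytesPerSecond = bytesPerSecond;
            }

            public synchronized void acquire(int chunkBytes) throws InterruptedException {
                while (true) {
                    long now = System.nanoTime();
                    availableBytes += (now - lastRefillNanos) / 1_000_000_000.0 * bytesPerSecond;
                    availableBytes = Math.min(availableBytes, bytesPerSecond); // at most one second of burst
                    lastRefillNanos = now;
                    if (availableBytes >= chunkBytes) {
                        availableBytes -= chunkBytes;
                        return;
                    }
                    long deficitMillis = (long) ((chunkBytes - availableBytes) * 1000 / bytesPerSecond);
                    Thread.sleep(Math.max(1, deficitMillis));
                }
            }
        }

    A file-transfer loop would call throttle.acquire(buffer.length) before each write; the open question from the post - how aggressively to raise or lower the rate based on what the rest of the machine is doing - is policy layered on top of this mechanism.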

    Read the article

  • Need help writing a recurring task scheduler.

    - by Sisiutl
    I need to write a tool that will run a recurring task on a user configurable schedule. I'll write it in C# 3.5 and it will run on XP, Windows 7, or Windows Server 2008. The tasks take about 20 minutes to complete. The users will probably want to set up several configurations: e.g, daily, weekly, and monthly cycles. Using Task Scheduler is not an option. The user will schedule recurrences through an interface similar to Outlook's recurring appointment dialog. Once they set up the schedule they will start it up and it should sit in the system tray and kick off its tasks at the appointed times, then send mail to indicate it has finished. What is the best way to write this so that it doesn't eat up resources, lock up the host, or otherwise misbehave?
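
    The question targets C# 3.5 (where System.Threading.Timer or a tray application's message loop would play the same role), but the resource-friendly shape of the solution is language-neutral: keep one scheduler thread, compute the next occurrence from the recurrence rule, schedule a single one-shot task, and only reschedule after the run completes so overlapping 20-minute runs cannot pile up. Below is a Java sketch of that pattern, with a hypothetical Recurrence interface standing in for the Outlook-style rules:

        import java.time.Duration;
        import java.time.ZonedDateTime;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class RecurringRunner {
            // Hypothetical recurrence rule: given "now", return the next time the task should run.
            interface Recurrence {
                ZonedDateTime nextAfter(ZonedDateTime now);
            }

            private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            private final Recurrence recurrence;
            private final Runnable task;

            public RecurringRunner(Recurrence recurrence, Runnable task) {
                this.recurrence = recurrence;
                this.task = task;
            }

            public void start() {
                scheduleNext();
            }

            private void scheduleNext() {
                ZonedDateTime now = ZonedDateTime.now();
                long delayMillis = Duration.between(now, recurrence.nextAfter(now)).toMillis();
                scheduler.schedule(() -> {
                    try {
                        task.run();       // the ~20 minute job, run on the single scheduler thread
                    } finally {
                        scheduleNext();   // compute the next occurrence only after this run finishes
                    }
                }, Math.max(delayMillis, 0), TimeUnit.MILLISECONDS);
            }

            public void stop() {
                scheduler.shutdownNow();
            }
        }

    Because the only standing cost between runs is a single sleeping thread, this stays idle-cheap; the completion email would simply be sent at the end of task.run().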

    Read the article

  • Real time location tracking - windows program or browser based?

    - by mawg
    I want to track a few hundred, maybe a few thousand people in real time. Let's say that the hardware aspects are sorted out and I can get the data into a database. Now, I want to get it out and show it, in real-time. Weeeell ... "real-enough" time. Let's say that I want to draw a floorplan of a building and plot everyone every 1 to 5 seconds. (I might want to show only certain "kinds" of people at the click of a button; I will need datamining, etc., but let's stick with the worst-case scenario.) I am comfortable enough with PHP, though not with this sort of thing. I personally would be happier with a Windows app coded in Delphi, but the trend seems to be to make everything browser-based. So the question, I guess, is whether a browser can handle this and whether there are compelling arguments for a Windows-based or browser-based solution. If a browser-based solution can handle this (displaying a few thousand data points a second), and there are no overwhelming arguments for Windows, then I guess I will go browser-based and learn a few new tricks. The obvious advantage is that I could also re-use a large part of my code for (vehicle) tracking on Google Maps.

    Read the article

  • Jersey, JAXB and getting an object extending an abstract class as a parameter

    - by krajol
    I want to get an object as a parameter of a POST request. I have an abstract superclass called Promotion and subclasses Product and Percent. Here's how I try to handle the request:

        @POST
        @Consumes(MediaType.APPLICATION_XML)
        @Produces(MediaType.APPLICATION_XML)
        @Path("promotion/")
        public Promotion createPromotion(Promotion promotion) {
            Product p = (Product) promotion;
            System.out.println(p.getPriceAfter());
            return promotion;
        }

    and here's how I use JAXB in the class definitions:

        @XmlRootElement(name="promotion")
        @XmlSeeAlso({Product.class, Percent.class})
        public abstract class Promotion {
            //body
        }

        @XmlRootElement(name="promotion")
        public class Product extends Promotion {
            //body
        }

        @XmlRootElement(name="promotion")
        public class Percent extends Promotion {
            //body
        }

    The problem is that when I send a POST request with a body like this:

        <promotion>
          <priceBefore>34.5</priceBefore>
          <marked>false</marked>
          <distance>44</distance>
        </promotion>

    and try to cast it to Product (in this case, the fields 'marked' and 'distance' come from the Promotion class and 'priceBefore' from the Product class), I get an exception: java.lang.ClassCastException: Percent cannot be cast to Product. It seems like Percent is chosen as a 'default' subclass. Why is that, and how can I get an object that is a Product?
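
    A note on why this tends to happen (not from the question itself): all three classes declare the same root element name, so when JAXB unmarshals <promotion> it has to pick one concrete mapping, and which subclass wins is effectively arbitrary - here, Percent. One common way to disambiguate, sketched below rather than taken from the asker's code, is to give each concrete subclass its own root element name and post that element, keeping @XmlSeeAlso on the superclass; an alternative is to keep <promotion> and add an xsi:type attribute to the payload.

        @XmlRootElement(name = "product")   // distinct root element name per concrete type
        public class Product extends Promotion {
            //body
        }

        @XmlRootElement(name = "percent")
        public class Percent extends Promotion {
            //body
        }

        // The request body then names the concrete type directly:
        // <product>
        //   <priceBefore>34.5</priceBefore>
        //   <marked>false</marked>
        //   <distance>44</distance>
        // </product>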

    Read the article

  • Looking for a list of free data APIs and web services

    - by darren
    I'm wondering if anybody has come across a comprehensive list of free sources of data (as a web API) or web services. I'm looking to start a new project to tinker with in my spare time and am wondering what interesting data is available to play with. It seems like many API services, such as last.fm or Google search, no longer exist or are no longer free. Possible examples of what I am looking for:
    - information about a given IP address
    - mapping APIs
    - information about books, movies, music
    - information about places, businesses, attractions
    - meteorological, financial or other scientific data
    - shopping, products
    I would appreciate any suggestions you may have about interesting data freely available through the web. Thanks.

    Read the article

  • Future of web services

    - by Landon Ashes
    I want to know what the possible future research areas regarding "Web Services" are, and in what direction "Web Services" are moving. I am not talking about "Microsoft Web Services"; I am talking about "Web Services" in general. I did Google this, but whatever I found was a couple of years old and obsolete, and I couldn't get any direction from IEEE either. Could some expert in this field please guide me? I would be very much obliged. Thanks in advance.

    Read the article

  • Passing custom info to mongrel_rails start

    - by whaka
    One thing I really don't understand is how I can pass custom start-up options to a mongrel instance. I see that a common approach is to use environment variables, but in my environment this is not going to work because my Rails application serves many different clients. Much code is shared between clients, but there are also many differences, which I implement by subclassing controllers and views to overload or extend existing features or introduce new ones. To make this all work, I simply add the paths to client-specific modules to the module load path ($:). In order to start the application for a particular client, I could now use an environment variable such as, say, TARGET=AMAZONE. Unfortunately, on some systems I'm running multiple mongrel clusters, each cluster serving a different client. Some of these systems run under Windows, and to start mongrel I installed mongrel_services. Clearly, this makes my environment variable unsuitable. Passing this extra bit of data to the application is proving to be a real challenge. For a start, mongrel_rails service_install will reject any [custom] command line parameters that aren't documented. I'm not too concerned, as installing the services using the install program is trivial. However, even if I manage to install mongrel_services such that when run it passes the custom command line option --target to mongrel_rails start, I get an error because mongrel_rails doesn't recognize the switch. So here are the things I looked at:
    1. Pass an extra parameter: mongrel_rails start --target XYZ ...
    2. Use a config file and add target:XYZ, then do: mongrel_rails start -C x:\myapp\myconfig.yml
    3. Modify the file: Ruby\lib\ruby\gems\1.8\gems\mongrel-1.1.5-x86-mswin32-60\lib\mongrel\command.rb
    4. Perhaps I can use the --script option, but all the docs I found on it were for Unix.
    Options 1 and 2 simply don't work. I played with 4 but never managed to get it to do anything. So I had no choice but to go with 3. While it is relatively simple, I hate changing Ruby library code. Particularly disappointing is that 2 doesn't work - I mean, what is so unreasonable about adding other [custom] options in the config file? Actually I think this is a fundamental piece that is missing in Rails: somehow, the application should be able to register and access the command line arguments it expects. If anybody has a good idea how to do this more elegantly using the current infrastructure, I have a chocolate fish to give away!!!

    Read the article

  • How to back up using the backup APIs in C++

    - by user1603185
    I am writing an application that backs up specified files using the backup API calls, i.e. CreateFile, BackupRead and WriteFile. I am getting an "Access violation reading location" error. I have attached the code below.

        #include <windows.h>

        int main()
        {
            HANDLE hInput, hOutput;

            //m_filename is a variable holding the file path to read from
            hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING,
                                FILE_FLAG_BACKUP_SEMANTICS, NULL);

            //strLocation contains the path of the file I want to create.
            hOutput = CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL);

            DWORD dwBytesToRead = 1024 * 1024 * 10;
            BYTE *buffer;
            buffer = new BYTE[dwBytesToRead];

            BOOL bReadSuccess = false, bWriteSuccess = false;
            DWORD dwBytesRead, dwBytesWritten;
            LPVOID lpContext;

            //Now comes the important bit:
            do
            {
                bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) * dwBytesToRead,
                                          &dwBytesRead, false, true, &lpContext);
                bWriteSuccess = WriteFile(hOutput, buffer, sizeof(BYTE) * dwBytesRead,
                                          &dwBytesWritten, NULL);
            } while (dwBytesRead == dwBytesToRead);

            return 0;
        }

    Can anyone suggest how to use these APIs? Thanks.

    Read the article

  • Securing Web Services approach valid?

    - by NBrowne
    Currently I am looking at securing our web services. At the moment we are not using WCF, so that is not an option. One approach I have seen, and implemented locally fairly easily, is the one described in this article: http://www.codeproject.com/KB/aspnet/wsFormsAuthentication.aspx. It describes adding an HttpModule which prompts for user credentials if the user browses to any pages (web services) contained in a services folder. Does anyone see any way that this security could fall down or be bypassed? I'm really just trying to decide whether this is a valid approach to take or not. Thanks.

    Read the article

  • How to Audit Database Activity without Performance and Scalability Issues?

    - by GotoError
    I need to audit all database activity, regardless of whether it came from the application or from someone issuing SQL by other means, so the auditing must be done at the database level. The database in question is Oracle. I looked at doing it via triggers and also via the Fine Grained Auditing feature that Oracle provides. In both cases, we turned on auditing for specific tables and specific columns. However, we found that performance really sucks when we use either of these methods. Since auditing is an absolute must due to regulations around data privacy, I am wondering what the best way is to do this without significant performance degradation. Oracle-specific experience would be helpful, but general practices around auditing database activity are welcome as well.

    Read the article

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?

    Read the article

  • Problem with index server talking to remote server names with dashes or dots in them

    - by Aim Kai
    Hi, I am having a problem accessing a remote index server catalog. The name of the server has a dash in it, so I put the index catalog name as, e.g., num.num.num.num\name of catalog or an-example-server. I get the following error when using an OLE DB connection to pull results from the index: "Format of the initialization string does not conform to specification starting at index 39". I tried putting single quotes and &quot; around it with no luck - does anyone have an idea? PS. This is a Microsoft Index Server question!

    Read the article

  • How to prevent the symbol "&" from being replaced by "&amp;"

    - by tonsils
    Hi, hoping someone could please let me know how to prevent the symbol "&" from being replaced by "&amp;" within my URL, specifically within JavaScript. To expand on the requirement: I am getting my URL from an Oracle database table, and I then use it within Oracle Application Express to set the src attribute of an iframe to this URL. FYI, the URL stored in the Oracle table is actually stored correctly, i.e. http://mydomain.com/xml/getInfo?s=pvalue1&f=mydir/Summary.xml. What appears when I try to pass it into the iframe src using JavaScript is: http://mydomain.com/xml/getInfo?s=pvalue1&amp;f=mydir/Summary.xml, which basically returns a "page cannot be found". Hope this clarifies my issue further. Thanks.

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    - Service Bus Management and Monitoring
    - Support for Managing Co-administrators
    - Import/Export support for SQL Databases
    - Virtual Machine Experience Enhancements
    - Improved Cloud Service Status Notifications
    - Media Services Monitoring Support
    - Storage Container Creation and Access Control Support

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Service Bus Management and Monitoring

    The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions.

    To create a Service Bus namespace, you can now select the "Service Bus" tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new "Create a Namespace" dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials.

    It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer. Simply click the NEW command and navigate to the "App Services" category to create a new Service Bus entity. Once you provision a new Queue or Topic it can be managed in the portal. Clicking on a namespace will display all queues and topics within it, and clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, the dashboard view of an "orders" queue surfaces both the incoming and outgoing message flow rate, as well as the total queue length and queue size. To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-Administrators

    You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except that a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.

    Import/Export Support for SQL Databases

    The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also now import and create a new SQL Database by using the NEW command, which will prompt you to select the storage container and file to import the database from. The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server - this shows your entire import and export history and the status (success/fail) of each.

    Enhancements to the Virtual Machine Experience

    One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today's release the VHDs would continue to sit in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk plus the associated .VHD file (completely clearing it from storage); alternatively you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications

    The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view, giving you better visibility into the health of your app.

    Monitoring Support for Media Services

    Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support

    You can now create Windows Azure Storage containers from within the Windows Azure Portal. After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access setting as well as the container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command; this will bring up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

    Summary

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We'll have even more new features and enhancements coming later this month, including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rain, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

    Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. The CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT.

    Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

    Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies will have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systemic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times - especially the end customer - from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage.

    One case study that stood out was how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week.

    Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies.

    When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually, he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top IT issue among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs - whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon:

    - Underwriting and service desktops
    - New business automation
    - Convergence of ultra-configurable and domain content-rich systems
    - Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition.

    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:

    - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    - Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    - Deliver upon business requirements - enabling the ability to drive time to market for new products and the flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system - before the economic downturn occurred - has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products.

    Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

    Read the article
