Search Results

Search found 36032 results on 1442 pages for 'oracle enterprise service automation'.


  • Making an AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request during an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It runs correctly before the async postback is initiated, but it stops running during the async postback and doesn't run again until the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF web service that correctly fetches data and presents it on the page, but, as I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service. The problem is that I can debug and step through the web service and see that the status updates are set correctly, but the updates aren't retrieved by the client script until the async postback completes. It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes.

    Is there a way to get the ScriptManager, or anything else, to run the script that fetches data from the WCF web service during the async postback? I've tried various ways of using the PageRequestManager to run the script in the client-side BeginRequest event of the async postback, but it runs the script once and then stops processing the code that should be running via setInterval() while the page request executes.
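
    For illustration, a minimal sketch of the kind of server-side plumbing this setup implies, assuming a shared status store between the postback handler and the AJAX-enabled WCF service. All names here (StatusStore, StatusService, taskId) are illustrative, not from the question:

        using System;
        using System.Collections.Concurrent;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        // Hypothetical shared store: the long-running task (running in the async
        // postback) writes progress here, and the AJAX-enabled WCF service reads
        // it back for the polling script.
        public static class StatusStore
        {
            private static readonly ConcurrentDictionary<string, string> statuses =
                new ConcurrentDictionary<string, string>();

            public static void Set(string taskId, string status)
            {
                statuses[taskId] = status;
            }

            public static string Get(string taskId)
            {
                string status;
                return statuses.TryGetValue(taskId, out status) ? status : "pending";
            }
        }

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(
            RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class StatusService
        {
            [OperationContract]
            public string GetStatus(string taskId)
            {
                return StatusStore.Get(taskId);
            }
        }

    One thing worth checking with a setup like this: if both the page and the AJAX service use ASP.NET session state, the per-session lock serializes their requests, which would produce exactly the symptom described regardless of what the ScriptManager is doing.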

    Read the article

  • IIS error hosting WCF Data Service on shared web host

    - by jkohlhepp
    My client has a website hosted on a shared web server. I don't have access to IIS. I am trying to deploy a WCF Data Service onto his site, and I am getting this error:

        IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used.

    I have searched SO and other sites quite a bit but can't seem to find someone with my exact situation. I cannot change the IIS settings, because this is a third party's server and it is a shared web server. So my only option is to change things in code or in the service config. My service config looks like this:

        <system.serviceModel xdt:Transform="Insert">
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://www.somewebsite.com"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <bindings>
            <webHttpBinding>
              <binding name="{Binding Name}">
                <security mode="None" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <service name="{Namespace to Service}">
              <endpoint address="" binding="webHttpBinding"
                        bindingConfiguration="{Binding Name}"
                        contract="System.Data.Services.IRequestHandler" />
            </service>
          </services>
        </system.serviceModel>

    As you can see, I tried to set the security mode to "None", but that didn't seem to help. What should I change to resolve this error?
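
    One avenue that doesn't require touching IIS settings is a custom host factory that rebuilds the endpoint's HTTP transport so it claims exactly one authentication scheme. This is only a sketch of that idea, not a verified fix: the class name is mine, the scheme choice (Anonymous) is an assumption, and it presumes the config-defined endpoint is already present on the host when the factory creates it.

        using System;
        using System.Data.Services;
        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Description;

        public class SingleAuthSchemeHostFactory : DataServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                ServiceHost host = base.CreateServiceHost(serviceType, baseAddresses);
                foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
                {
                    // Rebuild the binding so its HTTP transport names one scheme.
                    var binding = new CustomBinding(endpoint.Binding);
                    var transport = binding.Elements.Find<HttpTransportBindingElement>();
                    if (transport != null)
                    {
                        transport.AuthenticationScheme = AuthenticationSchemes.Anonymous;
                        endpoint.Binding = binding;
                    }
                }
                return host;
            }
        }

    The factory would then be referenced from the .svc file's Factory attribute in place of the default DataServiceHostFactory.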

    Read the article

  • Silverlight client never calls WCF Service

    - by Doug Nelson
    Hi all, this one has me completely stumped. I have developed a Silverlight application that calls back to WCF services (Silverlight with basicHttpBinding). The site works perfectly fine from my development machine, but when it is deployed to the development server, the application is delivered with the XAP just fine, but it never attempts to talk to the service. I have a service call in the bootstrapper, so it should be calling this when the client starts up. The services are healthy: they can be browsed to and show the standard WCF service display. We have been through the bindings many times and everything seems to be OK. I have added an extensive amount of error handling to display any errors, but on this dev server there are no service calls and no errors are raised. Fiddler shows the page being loaded, but my client never issues a call to the service. The service is in the same folder as the default.aspx that hosts the Silverlight client. This is a Silverlight 3.0 app. Has anybody ever seen anything similar?

    Read the article

  • How to make dynamically generated .net service client read configuration from another location than

    - by Bryan
    Hi, I've written code that uses the ServiceContractGenerator to generate web service client code based on a WSDL, and then compiles it into an in-memory assembly using the CodeDOM. I then use reflection to set up the binding, endpoint, and service values/types, and ultimately invoke the web service method, based on XML configuration that can be altered at run time. This all currently works fine. However, the problem I'm now running into is that I'm hitting several exotic web services that require lots of custom binding/security settings. This forces me to add more and more configuration into my custom XML configuration, along with the corresponding code to interpret and apply those binding/security settings. Ultimately, this makes adding these 'exotic' services slower, and I can see myself eventually reimplementing the 'system.serviceModel' section of the web.config or app.config file, which is never a good thing. My question, and this is where my lack of experience with .NET and C# shows, is: is there a way to define the configuration normally found in the 'system.serviceModel' section of web.config or app.config somewhere else, and at run time supply this configuration to the web service client? Is there a way to attach an app.config directly to an assembly as a resource, or any other way to supply this configuration to the client? Basically, I'd like to attach an app.config containing only a 'system.serviceModel' section to the assembly containing a web service client, so that the client can use its configuration. That way I wouldn't need to handle every configuration under the sun; I could let .NET do it for me. FYI, it's not an option for me to put the configuration for every service in the app.config of the running application. Any help would be greatly appreciated. Thanks in advance! Bryan
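
    A sketch of one way to do this on .NET 4: ConfigurationChannelFactory<TChannel> accepts a Configuration object loaded from an arbitrary file, so a config file containing only a system.serviceModel section can live anywhere. The contract, file path, and endpoint name below are placeholders:

        using System;
        using System.Configuration;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;

        // Placeholder contract standing in for the generated one.
        [ServiceContract]
        public interface IEchoService
        {
            [OperationContract]
            string Echo(string message);
        }

        public static class ExternalConfigDemo
        {
            public static void Main()
            {
                // Point at a config file that contains only a
                // system.serviceModel section; the path is hypothetical.
                var fileMap = new ExeConfigurationFileMap
                {
                    ExeConfigFilename = @"C:\config\serviceclient.config"
                };
                Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
                    fileMap, ConfigurationUserLevel.None);

                // ConfigurationChannelFactory (new in .NET 4) reads bindings and
                // endpoints from the supplied Configuration instead of app.config.
                var factory = new ConfigurationChannelFactory<IEchoService>(
                    "EchoEndpoint", config, null); // null: use the address in the file

                IEchoService client = factory.CreateChannel();
                Console.WriteLine(client.Echo("hello"));
            }
        }

    Since the question's contract types are generated at run time, the generic factory would have to be closed over the generated type via MakeGenericType and invoked through reflection, just like the rest of the generated client.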

    Read the article

  • 11.2 Upgrade Companion has been updated!

    - by roy.swonger
    The long-awaited update of the 11.2 Upgrade Companion is now available in My Oracle Support, in the usual location (Note 785351.1). This comprehensive update incorporates lessons learned from adoption of the 11.2.0.2 patchset release. We have also included many more links for customers in a RAC/ASM (Grid Infrastructure) environment, information about the GoldenGate 11g release, and more! As always, the Upgrade Companion is available in PDF and HTML format in addition to the web-viewable Java-based document.

    Read the article

  • Fusion Applications Enablement Toolkit: the Partner's single place of information for all OPN Fusion Apps resources

    - by Richard Lefebvre
    Take a look, and then come back regularly, at https://blogs.oracle.com/opnenablement/resource/fusion_applications.html ... a micro site designed to give our EMEA Fusion Partners all the critical Fusion enablement information (key links, events, materials, etc.) that they need to achieve specialization. This site will be updated on a regular basis, especially for OPN events and training sessions.

    Read the article

  • 466 ADF sample applications and growing - ADF EMG Kaleidoscope announcement

    - by Chris Muir
    Interested in finding more ADF sample applications? How do 466 applications take your fancy? Today at ODTUG's Kaleidoscope conference in San Antonio, the ADF EMG announced the launch of a new ADF Samples website, an index of 466 ADF applications gathered from expert ADF bloggers, including customers and Oracle staff. For more details on this great ADF community resource, head over to the ADF EMG announcement.

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:
    • Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html
    • Find the Forms and Reports (and Disco and Portal) zip files there. Download them to the desktop (or some other temporary directory of your choosing). Unzip both zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work. If you use WinZip and have a right-click menu option that says "Extract to here", use that by right-click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option; that will get you into the trouble that was reported early in that thread.
    • Free up as much memory as you can. Stop services, background processes, virus scanners, databases (you don't need a DB to install Forms) and other things lurking about on your machine; you can restart them when the install is done. Around 1.5 GB of free real memory should do it. If it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing; let Windows handle it. A 1 GB machine will likely not be enough. You will likely need at least 2 GB of RAM.
    • Start the install with setup.exe from the 'Disk1' directory.
    • Choose the Install and Configure option unless you have a good reason not to.
    • Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format, with an incrementing number at the end if you install more than once on a particular day).
    • Unselect Portal and Discoverer and select the Builders you want.
    • Unselect WebCache.
    • Unselect OHS.
    • Unselect the single sign-on option.
    • Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.

    Read the article

  • Discover How to Deliver Measurable Business Value from your HCM Strategy

    - by Jay Richey, HCM Product Marketing
    Join our live Webcast on Wednesday, July 13 to learn how to fine-tune your HCM strategy and better utilize your Oracle HCM investment. In this session you'll learn how to access, analyze, and act on information from multiple sources to ensure that all workforce decisions are focused on meeting overall business objectives. Date: Wednesday, July 13, 2011. Time: 10:00 a.m. PT / 1:00 p.m. ET. Register now!

    Read the article

  • How to fill DataGridView from nested table oracle

    - by arkadiusz85
    I want to create my type:

        CREATE TYPE t_read AS OBJECT (
          id_worker NUMBER(20),
          how_much NUMBER(5,2),
          adddate_r DATE,
          date_from DATE,
          date_to DATE
        );

    I create a table type of my type:

        CREATE TYPE t_tab_read AS TABLE OF t_read;

    The next step is to create a table with my type:

        CREATE TABLE Reading (
          id_watermeter NUMBER(20) constraint Watermeter_fk1 references Watermeters(id_watermeter),
          read t_tab_read
        ) NESTED TABLE read STORE AS store_read;

    Microsoft Visual Studio cannot display this type in a DataGridView. I use OracleCommand:

        using Oracle.DataAccess;
        using Oracle.DataAccess.Client;

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                // my working class to connect to the database
                ConnectionClass.BeginConnection();
                OracleDataAdapter tmp = ConnectionClass.ReadCommand(ReadClass.test());
                DataSet dataset4 = new DataSet();
                tmp.Fill(dataset4, "Read1");
                dataGridView4.DataSource = dataset4.Tables["Read1"];
            }
            catch (Exception o)
            {
                MessageBox.Show(o.Message);
            }
        }

        public class ReadClass
        {
            public static OracleCommand test()
            {
                string sql = "select c.id_watermeter, a.* from reading c, table (c.read) a where id_watermeter=1";
                ConnectionClass.Command1 = new OracleCommand(sql, ConnectionClass.Connection);
                ConnectionClass.Command1.CommandType = CommandType.Text;
                return ConnectionClass.Command1;
            }
        }

    I tried:

        select r.id_watermeter, o.id_worker, o.how_much, o.adddate_r, o.date_from, o.date_to from reading r, table (r.read) o where r.id_watermeter=1
        select a.* from reading c, table (c.read) a where id_watermeter=1
        select a.id_worker, a.how_much, a.adddate_r, a.date_from, a.date_to from reading c, table (c.read) a where id_watermeter=1
        select c.id_watermeter, a.* from reading c, table (c.read) a where id_watermeter=1

    Error: Unsupported Oracle data type USERDEFINED encountered.

    Can somebody help me fill a DataGridView using data from a nested table? I am using Oracle 10g XE.
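
    For what it's worth, a sketch of a reader-based route that keeps the select list to scalar columns only, so no user-defined type reaches a data adapter. ConnectionClass and dataGridView4 are the poster's own members; whether this sidesteps the USERDEFINED error depends on the provider version, so treat it as something to try rather than a confirmed fix:

        using System.Data;
        using Oracle.DataAccess.Client;

        public static class NestedTableLoader
        {
            // Flatten the nested table with a TABLE() join and load only
            // scalar columns through an OracleDataReader into a DataTable.
            public static DataTable LoadReadings(OracleConnection connection)
            {
                string sql =
                    "select r.id_watermeter, o.id_worker, o.how_much, o.adddate_r, " +
                    "o.date_from, o.date_to " +
                    "from reading r, table(r.read) o where r.id_watermeter = 1";

                var table = new DataTable("Read1");
                using (var cmd = new OracleCommand(sql, connection))
                using (OracleDataReader reader = cmd.ExecuteReader())
                {
                    table.Load(reader); // copies schema and rows from the reader
                }
                return table;
            }
        }

        // usage:
        // dataGridView4.DataSource = NestedTableLoader.LoadReadings(ConnectionClass.Connection);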

    Read the article

  • JavaServer Faces 2.0 for the Cloud

    - by Janice J. Heiss
    A new article now up on otn/java by Deepak Vohra, titled "JSF 2.0 for the Cloud, Part One," shows how JavaServer Faces 2.0 provides features ideally suited for the virtualized computing resources of the cloud. The article focuses on the @ManagedBean annotation, implicit navigation, and resource handling. Vohra illustrates how the container-based model found in Java EE 7, which allows portable applications to target single machines as well as large clusters, is well suited to the cloud architecture. From the article:
    "Cloud services might not have been a factor when JavaServer Faces 2.0 (JSF 2.0) was developed, but JSF 2.0 provides features ideally suited for the cloud, for example:
    • The path-based resource handling in JSF 2.0 makes handling virtualized resources much easier and provides scalability with composite components.
    • REST-style GET requests and bookmarkable URLs in JSF 2.0 support the cloud architecture. Representational State Transfer (REST) software architecture is based on transferring the representation of resources identified by URIs. A RESTful resource or service is made available as a URI path. Resources can be accessed in various formats, such as XML, HTML, plain text, PDF, JPEG, and JSON, among others. REST offers the advantages of being simple, lightweight, and fast.
    • Ajax support in JSF 2.0 is integrable with Software as a Service (SaaS) by providing interactive browser-based Web applications."
    In Part Two of the series, Vohra will examine features such as Ajax support, view parameters, preemptive navigation, event handling, and bookmarkable URLs. Have a look at the article here.

    Read the article

  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using Silverlight + Bing Maps. The thing is that I want to add a reference to the reverse geocoding service provided on MSDN (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to add the reference in VS2010, I get the following error:

        The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem:
        Report from 'XML Schema' is "'', hexadecimal value 0x1F, is an invalid character. Line 1, position 1."
        Report from 'DISCO Document' is "'', hexadecimal value 0x1F, is an invalid character. Line 1, position 1."
        Report from 'WSDL Document' is "There is an error in XML document (1, 1). '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1."
        Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'.
        Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched.
        The remote server returned an error: (415) Unsupported Media Type.
        If the service is defined in the current solution, try building the solution and adding the service reference again.

    It is good to mention that I can access the service URL from the browser (with a "no style information" warning). I am aware that there are other reverse geocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)
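
    For what it's worth, 0x1F at position 1 is the first byte of a gzip stream, so one possible reading of those errors is that the metadata endpoint is serving compressed content that the importer isn't decompressing. Under that assumption, a sketch of a workaround: download and decompress the WSDL by hand, save it locally, and point "Add Service Reference" at the local file (the output path here is mine):

        using System;
        using System.IO;
        using System.Net;

        public static class WsdlFetcher
        {
            public static void Main()
            {
                // Ask HttpWebRequest to transparently decompress the response.
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl");
                request.AutomaticDecompression =
                    DecompressionMethods.GZip | DecompressionMethods.Deflate;

                using (WebResponse response = request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                using (FileStream file = File.Create(@"C:\temp\geocodeservice.wsdl"))
                {
                    stream.CopyTo(file); // Stream.CopyTo requires .NET 4
                }
            }
        }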

    Read the article

  • Getting a WCF service hosted in IIS 7.0

    - by gregarobinson
    This was not as easy as I thought it would be... lots of errors. These links saved me:
    http://blah.winsmarts.com/2008-4-Host_a_WCF_Service_in_IIS_7_-and-amp;_Windows_2008_-_The_right_way.aspx
    http://blog.donnfelker.com/2007/03/26/iis-7-this-configuration-section-cannot-be-used-at-this-path/

    Read the article

  • WCF - remote service without using IIS - base address?

    - by Mark Pim
    I'm trying to get my head around the addressing of WCF services. We have a client-server setup where the server occasionally (maybe once a day) needs to push data to each client. I want to have a lightweight WCF listener service on each client, hosted in an NT service, to receive that data. We already have such an NT service hosting some local WCF services for other tasks, so the overhead of this is minimal. Because of existing legacy code on the server, I believe the service needs to be exposed as ASMX and use basicHttpBinding to allow it to connect. Each client is registered on the server by the user (they need to configure them individually), so discovery is not the issue. My question is: how does the addressing work? I imagine the user entering the client's address on the server in the form http://0.0.0.0/MyService or even http://hostname/MyService. If so, how do I configure the client service in its App.config? Do I use localhost? If not, what is the recommended way of exposing the service to the server? Note: I don't want to host in IIS, as that adds extra requirements to the hardware required for the client. The clients will almost certainly be located on LANs, not over the public internet.
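
    To make the addressing concrete, here is a minimal sketch of the client-side listener, hosted from the NT service's OnStart; the contract, class names, and port are illustrative, not prescribed:

        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IDataSink
        {
            [OperationContract]
            void PushData(string payload);
        }

        public class DataSink : IDataSink
        {
            public void PushData(string payload)
            {
                // handle the server's daily push here
            }
        }

        public static class ClientListener
        {
            private static ServiceHost host;

            // Call from the NT service's OnStart. The host name in the base
            // address is largely cosmetic: self-hosted HTTP endpoints use
            // strong-wildcard matching by default, so a request addressed to
            // the machine's real host name still reaches this service.
            public static void Start()
            {
                host = new ServiceHost(typeof(DataSink),
                    new Uri("http://localhost:8735/MyService"));
                host.AddServiceEndpoint(typeof(IDataSink), new BasicHttpBinding(), "");
                host.Open();
            }
        }

    The server would then be registered with http://<client-host-name>:8735/MyService for each client. Note that on Vista/Server 2008 the account running the NT service may need an HTTP namespace reservation (netsh http add urlacl) for that URL.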

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn't the end of the issue. The dependencies algorithm I described works when you're querying live databases and you can get dependencies for a particular schema direct from the server, and that's all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn't have direct access to the database. So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue, as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you're only populating SchemaA:

    Source:
        CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

    Target (using snapshot):
        CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you're not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm, as the snapshot has not got any information on SchemaB! We've now hit quite a big problem: we're trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it's the same as the target, whether it's different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can't query the original database, as it may not be accessible, and we cannot assume any default state, as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI, and ask the user what we should do with them: assume they don't exist, assume they're the same as the target, or specify a definition. Unfortunately, such functionality didn't make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what's going on and edit the sync script as appropriate.
    There are some things we do to alleviate this rather unhappy situation somewhat; if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required and everything will be good. Fortunately, this problem comes up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article

  • Visual Studio 2010, Entity Framework, and Oracle

    - by Tobias Gunn
    While I was working on a Silverlight 4 demo, I found out that Entity Framework is not supported directly through the .NET provider or ODP tools. To make them work, you need to either write a wrapper of your own (wouldn't chance it) or else use a provider like DataDirect or Quest's upcoming tool. So far, I've been very happy with the DataDirect tool (found at http://www.datadirect.com/products/net/index.ssp). As I get a little farther along, I'll post more on SL4, RIA, and EF.

    Read the article

  • UPK for Testing Webinar Recording Now Available!

    - by Karen Rihs
    For anyone who missed last week's event, a recording of the UPK for Testing webinar is now available. As an implementation and enablement tool, Oracle's User Productivity Kit (UPK) provides value throughout the software lifecycle. Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems. Hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate application testing.

    Read the article

  • Using CTAS & Exchange Partition to Replace IAS for Copying a Partition on Exadata

    - by Bandari Huang
    Usage scenario: copy data and indexes from one partition to another partition of a partitioned table.

    Solution:
    • Create the partition definition.
    • Copy data from one partition to another partition by 'Insert as select (IAS)'; or:
    • Create a nonpartitioned table by 'Create table as select (CTAS)',
    • Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments, and
    • Rebuild any unusable indexes.
    (A scripted sketch of the CTAS-and-exchange steps follows the notes below.)

    Exchange partition conversions:
    • Mutual conversion between a partition (or subpartition) and a nonpartitioned table
    • Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table
    • Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table

    Exchange partition usage scenarios:
    • High-speed loading of new, incremental data into an existing partitioned table in a DW environment
    • Exchanging old data partitions out of a partitioned table: the data is purged from the partitioned table without actually being deleted, and can be archived separately

    Exchange partition syntax:

        ALTER TABLE schema.table
          EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition]
          WITH TABLE schema.table
          [INCLUDING|EXCLUDING] INDEXES
          [WITH|WITHOUT] VALIDATION
          UPDATE [INDEXES|GLOBAL INDEXES]

    INCLUDING | EXCLUDING INDEXES: Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.

    WITH | WITHOUT VALIDATION: Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.

    UPDATE INDEXES | GLOBAL INDEXES: Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)

    Notes on exchanging partitions and subpartitions:
    • Both tables involved in the exchange must have the same primary key, and no validated foreign keys can reference either of the tables unless the referenced table is empty.
    • When exchanging partitioned index-organized tables:
      – The source and target table or partition must have their primary key set on the same columns, in the same order.
      – If key compression is enabled, it must be enabled for both the source and the target, with the same prefix length.
      – Both the source and target must be index-organized.
      – Both the source and target must have overflow segments, or neither can have overflow segments. Also, both must have mapping tables, or neither can have a mapping table.
      – Both the source and target must have identical storage attributes for any LOB columns.
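
    For concreteness, a sketch of the CTAS-and-exchange steps driven from C# with ODP.NET, in the same style as the other code on this page. The table, partition, and index names (SALES, P_2010_Q1, SALES_STAGE, SALES_IDX), the connection string, and the source of the staged rows are all illustrative assumptions; the staging table must match the target partition's structure exactly.

        using Oracle.DataAccess.Client;

        public static class PartitionCopy
        {
            public static void Run()
            {
                string[] steps =
                {
                    // 1. Build the staging table with CTAS from the source partition.
                    "CREATE TABLE sales_stage AS SELECT * FROM sales PARTITION (p_2009_q4)",

                    // 2. Swap data segments; no rows are physically moved.
                    "ALTER TABLE sales EXCHANGE PARTITION p_2010_q1 WITH TABLE sales_stage " +
                    "EXCLUDING INDEXES WITHOUT VALIDATION",

                    // 3. Rebuild the local index partition left UNUSABLE by the exchange.
                    "ALTER INDEX sales_idx REBUILD PARTITION p_2010_q1"
                };

                using (var conn = new OracleConnection("User Id=scott;Password=tiger;Data Source=orcl"))
                {
                    conn.Open();
                    foreach (string sql in steps)
                    {
                        using (var cmd = new OracleCommand(sql, conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }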

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise as she found out that I had spent the better part of a decade managing content within the Health Insurance industry, and I discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries? The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction intensive. They are the ones who manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes. We are all aware of the significant impact that the healthcare reform bill has had on the Health Insurance industry. They are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, administrative costs of health insurance include the insurer's cost to administer the health plan, the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanations of benefits are some manifestations of technological advancements in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • WCF Data Service Pipeline

    - by Daniel Cazzulino
    For documentation purposes, I just drew the following UML sequence diagrams for the "Astoria" pipeline, using Visual Studio 2010 Ultimate: one for a single-entity (non-batched) request, and one for a batch request. [Sequence diagrams shown in the original post.] The DataService component is your own DataService<T>-derived class, and DataService.ProcessingPipeline refers to the events on its ProcessingPipeline property.

    /kzu
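
    For reference, a minimal sketch of where those pipeline events surface in code; the NorthwindEntities context is a placeholder for your own model:

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;

        public class NorthwindService : DataService<NorthwindEntities>
        {
            public NorthwindService()
            {
                // The four events the processing pipeline exposes.
                ProcessingPipeline.ProcessingRequest +=
                    (s, e) => Console.WriteLine("ProcessingRequest");
                ProcessingPipeline.ProcessedRequest +=
                    (s, e) => Console.WriteLine("ProcessedRequest");
                ProcessingPipeline.ProcessingChangeset +=
                    (s, e) => Console.WriteLine("ProcessingChangeset");
                ProcessingPipeline.ProcessedChangeset +=
                    (s, e) => Console.WriteLine("ProcessedChangeset");
            }

            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion =
                    DataServiceProtocolVersion.V2;
            }
        }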

    Read the article
