Search Results

Search found 8083 results on 324 pages for 'technical documents'.

Page 20 of 324

  • Should I use a unit testing framework to validate XML documents?

    - by christofr
    From http://www.w3.org/XML/Schema: "[XML Schemas] provide a means for defining the structure, content and semantics of XML documents." I'm using an XML Schema (XSD) to validate several large XML documents. While I'm finding plenty of support within XSD for checking the structure of my documents, there are no procedural if/else features that allow me to say, for instance, "If Country is USA, then Zipcode cannot be empty." I'm comfortable using unit testing frameworks, and could quite happily use a framework to test content integrity. Am I asking for trouble doing it this way rather than taking an alternative approach? Has anybody tried this, with good or bad results? Edit: I originally left this out to keep the question technology-agnostic, but I would be using C# / LINQ / xUnit for deserialization and testing.
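
    A minimal sketch of the idea (my illustration, not the poster's code: it uses Python with lxml and plain assertions rather than C#/LINQ/xUnit, and the schema/element names are made up for the example):

        from lxml import etree

        # Structural validation against the XSD (file names are hypothetical).
        schema = etree.XMLSchema(etree.parse("orders.xsd"))
        doc = etree.parse("orders.xml")
        schema.assertValid(doc)  # raises DocumentInvalid if the structure is wrong

        # Content rules that XSD 1.0 cannot express, written as test-style assertions.
        for order in doc.findall(".//Order"):
            country = order.findtext("Country")
            zipcode = order.findtext("Zipcode")
            if country == "USA":
                assert zipcode, "Zipcode must not be empty when Country is USA"

    In an xUnit project the same rule would simply live inside a test method that deserializes the document and asserts on the resulting objects.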

    Read the article

  • InstallShield: WINDIR returns C:\Documents and Settings\fcuser\windows instead of C:\Windows

    - by Sakhawat Ali
    We have a setup developed in InstallShield version 6.3. It is a single self-extracting setup. It works fine on most Windows versions, but on Windows Server 2003 64-bit, in execution mode, when doing RD it returns the user's windows directory for WINDIR, i.e. C:\Documents and Settings\fcuser\windows instead of C:\Windows. According to http://support.microsoft.com/?kbid=186499 it should work correctly when I change the compatibility bit of Setup, but it didn't. I tried changing the compatibility bit of these keys too (INSTRUN, SETUP and SETUP1), but that didn't work either. However, when I extract the self-extracting package and run the setup from inside it, it works fine.

    Read the article

  • gdata API + JavaScript library for accessing documents?

    - by user246114
    Hi, I want to do the following with the gdata JavaScript library: get a list of a user's documents, read the contents of a chosen document, then edit its contents and update the document. If I'm reading the documentation correctly, this is not supported by the JavaScript library? I got this working with the Java library, but would like to push this functionality to the client. Is it possible? If not, are there any ways around it? Thank you
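
    As a point of reference, here is a rough sketch of the first step (listing a user's documents) at the protocol level that the client libraries wrap. This is my own illustration in Python, not from the question; the feed URL, headers and JSON shape reflect the old Documents List API as I recall it and should be treated as assumptions to verify against the current documentation:

        import requests

        # Assumed endpoint of the (legacy) Documents List feed.
        FEED_URL = "https://docs.google.com/feeds/default/private/full"

        def list_documents(auth_token):
            # auth_token is a placeholder for whatever your auth flow yields.
            resp = requests.get(
                FEED_URL,
                params={"alt": "json"},  # ask for the JSON rendering of the Atom feed
                headers={
                    "GData-Version": "3.0",
                    "Authorization": "Bearer %s" % auth_token,
                },
            )
            resp.raise_for_status()
            feed = resp.json().get("feed", {})
            return [entry["title"]["$t"] for entry in feed.get("entry", [])]

    Reading and updating a document's content would follow the same pattern against the export and edit-media links of each entry.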

    Read the article

  • Embedding PDF documents into websites

    - by papermate
    I need to embed some PDF documents into a website. The last time I did this, I used a jQuery lightbox to pop up an iframe with the PDF document as the URL. The client's PDF viewer would then take care of the rest. Apparently, though, that was a bit buggy in some other people's browsers. I guess it was due to the large PDF file sizes and the effort it took for their computers to fire up Adobe. So I'm after ideas on how to go about this. How do you guys embed your PDFs into websites? Or do you just stick to adding a download link?

    Read the article

  • Retrieving documents from SharePoint via web services using jQuery

    - by femi
    Hi, I am trying to develop a mobile application that can interact with a MOSS site via web services. I expect it to be able to: 1) retrieve documents (PDF, doc, docx, Excel); 2) retrieve Reporting Services reports in PDF or Excel form. I will be using either PhoneGap or Rhomobile to develop this app, and I know that I can consume web services using jQuery. My question revolves around MOSS web services security: how will I handle authentication? Thanks

    Read the article

  • Document Similarity: Comparing two documents efficiently

    - by seanieb
    I have a loop that calculates the similarity between two documents. It collects all the tokens in a document and their scores, and places them in a dictionary. It then compares the dictionaries. This is what I have so far; it works, but is super slow:

        # Doc A
        cursor1.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[i][0]))
        doca = cursor1.fetchall()
        # convert the tuples to a dictionary
        doca_dic = dict((row[0], row[1]) for row in doca)

        # Doc B
        cursor2.execute("SELECT token, tfidf_norm FROM index WHERE doc_id = %s", (docid[j][0]))
        docb = cursor2.fetchall()
        # convert the tuples to a dictionary
        docb_dic = dict((row[0], row[1]) for row in docb)

        # loop through each token in doca and see if one matches in docb
        for x in doca_dic:
            if docb_dic.has_key(x):
                # calculate the similarity by summing the products of the tf-idf_norm
                similarity += doca_dic[x] * docb_dic[x]

        print "similarity"
        print similarity

    I'm pretty new to Python, hence this mess. I need to speed it up; any help would be appreciated. Thanks.
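
    For comparison, a small sketch of my own (not the poster's code; the token weights below are made up) computing the same sum over the key intersection, which avoids the per-key lookups and is usually noticeably faster in pure Python. Fetching both documents' rows in a single SQL query would likely help even more:

        def similarity(doca_dic, docb_dic):
            # Iterate over the smaller dictionary and intersect the key sets once.
            if len(doca_dic) > len(docb_dic):
                doca_dic, docb_dic = docb_dic, doca_dic
            shared = set(doca_dic) & set(docb_dic)
            return sum(doca_dic[t] * docb_dic[t] for t in shared)

        # Illustrative token -> tf-idf_norm weights; real values come from the index table.
        doc_a = {"cat": 0.12, "sat": 0.05, "mat": 0.40}
        doc_b = {"cat": 0.20, "dog": 0.33, "mat": 0.10}
        print(similarity(doc_a, doc_b))  # = 0.12*0.20 + 0.40*0.10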

    Read the article

  • TFS Security and Documents Folder

    - by pm_2
    I'm getting an issue with TFS where the documents folder is marked with a red cross. As far as I can tell, this seems to be a security issue; however, I am set up as a project admin on the relevant projects. I've come to the conclusion that it's a security issue from running the TFS Project Admin tool (available here). When I run this, it tells me that I don't have sufficient access rights to open the project. I've checked, and I'm not included in any groups that are denied access. Can anyone shed any light on why I may not have sufficient access to these projects?

    Read the article

  • Lucene.NET - Find documents that do not contain a specified field

    - by Brandon
    Let's say I have two instances of a class called 'Animal'. Animal has 3 fields: Name, Age, and Type. The Name field is nullable, so before I insert an instance of Animal as a Lucene indexed document, I check whether Animal.Name == null, and if it is, I do not insert it as a field in my document. If I were to retrieve all animals, I would see that the Name field does not exist and I can set its value to null. However, there may be situations where I want to say "Get me all animals that do not have a name specified yet." In this situation I want to retrieve all Lucene.NET documents from my animal index that do not contain the Name field. Is there an easy way to do this with Lucene.NET? I want to stay away from having to perform some sort of hack to check if my Name field has a value of 'null'.

    Read the article

  • How do I index documents in SOLR?

    - by Shane
    Hi there, I'm running Solr 1.4 on Ubuntu 10.04 (installed via apt-get solr-tomcat) and it seems to be working fine. I'm having some difficulty finding any coherent info on how to index documents, though. I'm new to Solr, so bear with me! I have a folder (/mnt/folder) that is a mounted Windows share containing Word and PDF files that I would like indexed. What's the easiest way to get Solr to index the entire folder? The documentation for Solr is pretty poor, and it's impossible to find any decent tutorials on getting things done with it, so any help is greatly appreciated! S
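
    One common route is Solr Cell (the ExtractingRequestHandler that ships with Solr 1.4). The sketch below is my illustration, not from the question; it assumes the handler is enabled at /update/extract in solrconfig.xml and that Solr is on the default host and port:

        import os
        import requests  # plain HTTP client; curl would work just as well

        SOLR_EXTRACT_URL = "http://localhost:8983/solr/update/extract"  # assumed handler path
        FOLDER = "/mnt/folder"

        for root, _dirs, files in os.walk(FOLDER):
            for name in files:
                if not name.lower().endswith((".doc", ".docx", ".pdf")):
                    continue
                path = os.path.join(root, name)
                with open(path, "rb") as fh:
                    # literal.id supplies the unique key; commit=true makes it searchable immediately
                    resp = requests.post(
                        SOLR_EXTRACT_URL,
                        params={"literal.id": path, "commit": "true"},
                        files={"file": (name, fh)},
                    )
                resp.raise_for_status()

    For a large folder it is usually better to commit once at the end rather than on every file.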

    Read the article

  • Add/remove the default documents of IIS7 using a VB.NET 2.0 Windows application

    - by Senthil S R
    Hi, I wrote code to add/remove the default documents of IIS7 in a VB.NET 2.0 Windows application using Microsoft.Web.Administration.dll. The code appears to work: after adding a new document, a message box shows the newly added document name, and similarly, if I remove a document, the message box no longer shows that document name. That is why I say the code is working. The problem is that I can't see the newly added document name in the IIS7 Manager GUI, and my website does not use the newly added default document. Can anybody suggest what mistake I made in the VB.NET code and what modifications I need to make? Thanks in advance.

    Read the article

  • Google Translation API not working even for one-page documents

    - by Saubhagya
    I'm using the Google Translation API to translate text from Simplified Chinese to English in my C# program. The problem is that if the text is small (around one line) the API is able to translate it, but if the text is larger (more than three lines) it gives an exception saying "The remote server returned an unexpected response: (414) Request-URI Too Large." However, if I use translate.google.com in my browser, it works fine. Please tell me how I can process large documents using the Google Translate API in my desktop application written in C#.
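
    For what it's worth, a 414 usually means the text is being sent on the query string. The old AJAX Language API accepted a POST with an override header so the text could travel in the request body instead; the same idea applies to an HttpWebRequest in C#. A rough Python sketch (my illustration - the endpoint, parameters and response layout are from memory of the v1 API and should be verified against Google's current documentation):

        import requests

        TRANSLATE_URL = "http://ajax.googleapis.com/ajax/services/language/translate"  # assumed v1 endpoint

        def translate(text, source="zh-CN", target="en"):
            # Put the text in the POST body so long documents do not exceed the URI length limit.
            resp = requests.post(
                TRANSLATE_URL,
                data={"v": "1.0", "q": text, "langpair": "%s|%s" % (source, target)},
                headers={"X-HTTP-Method-Override": "GET"},  # tells the API to treat the POST as a GET
            )
            resp.raise_for_status()
            return resp.json()["responseData"]["translatedText"]

    Splitting a long document into paragraph-sized chunks and translating them in sequence is another way to stay under the service's per-request size limits.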

    Read the article

  • jQuery: Calling functions from different documents

    - by Tom
    Hi, I've got some jQuery functions that I keep in a "custom.js" file. On some pages, I need to pass PHP variables to the jQuery code, so some jQuery bits need to remain in the HTML documents. However, as I'm now trying to refactor things to the minimum, I'm tripping over the following. If I put this in my custom.js:

        $(document).ready(function() {
            function sayHello() {
                alert("hello");
            }
        });

    And this in an HTML document:

        <script type="text/javascript">
            $(document).ready(function() {
                sayHello();
            });
        </script>

    ...the function doesn't get called. However, if both are placed in the HTML document, the function works fine. Is there some kind of public property I need to declare for the function, or how do I get jQuery functions in my HTML to talk to external .js files? They're correctly included and work fine otherwise. Thanks.

    Read the article

  • SharePoint Documents Library - Change "Document Created By" field

    - by yellowblood
    Hello, I have code that changes the username in various SharePoint lists, mostly via the "Author" column. It all works fine on normal lists, but it doesn't seem to work on the "Shared Documents" list, which is a document library. Whether I change the username in "Created By" or "Document Created By", the change doesn't seem to take effect. The item.Update command doesn't throw any exception, but it clearly doesn't update the field(s). What can I do if I want to change this field through code? Thanks

    Read the article

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? Also, for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I get that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently we have a problem storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

    Read the article

  • "Error opening associated documents" message when loading VS

    - by kumar
    When loading up a solution in VS2008 I get this message: "An error was encountered while opening associated documents the last time this solution was loaded. Document load is being skipped during this solution load in order to avoid that error." It shut down immediately the first time I opened it. The next time I opened it, VS popped up a message box but did not shut down at first; however, it did shut down when I clicked a user control or ASPX page. How can I find which document is causing the problem? Thanks...

    Read the article

  • Design Documents for Python/Django?

    - by british_trader
    After working on a Django project for a while, I now have to do some design documents for it (UML-type stuff). However, the code doesn't have classes, but instead uses views.py modules. What would be the best way to show the design of my application, from the initial __init__.py, to the urls.py where the HTTP requests are filtered down to the specific urls.py in each of the packages and then handled by views.py? i.e.

        django-app/
            urls.py
            views.py
            settings.py
            manager.py
            __init__.py
            django-package/
                urls.py
                views.py
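
    As a reading aid for such a document, here is a tiny sketch of the routing chain being described. This is my own example - the project and package names are hypothetical, and the patterns()/string-view style matches the Django 1.x era the question appears to target:

        # myproject/urls.py - the initial URLconf named by settings.ROOT_URLCONF
        from django.conf.urls.defaults import patterns, include, url

        urlpatterns = patterns('',
            # requests under /reports/ are handed on to the package's own urls.py
            url(r'^reports/', include('myproject.reports.urls')),
        )

        # myproject/reports/urls.py - the package-level URLconf
        urlpatterns = patterns('',
            # ...which finally maps the URL to a function in views.py
            url(r'^$', 'myproject.reports.views.index'),
        )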

    Read the article

  • How to integrate technical line/functional manager into Scrum team?

    - by thegreendroid
    We have recently had a new line manager start who is managing our Scrum team. He is immensely experienced in our field but is relatively inexperienced at Agile/Scrum. He has extensive technical expertise in embedded software (the team's domain) that would go to waste if not utilised properly. However, the team is wary of making a line manager part of the Scrum team. The general consensus is that the line manager should not be part of the Scrum team at all. There are a number of issues that may crop up, e.g. the team may start "reporting" to the manager (i.e. a daily status update!), the manager may start to micro-manage team members, etc. As it currently stands, he has already said that he feels like an outsider within the team. We really want to make use of his technical skills; we'd be foolish if we didn't, because we are a relatively inexperienced and young team of twenty-somethings. What would be the best approach to integrate a senior "technical" line manager into a Scrum team and make him feel like he is part of the team?

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly, as a Mapping Developer...

    1.1 Control when uniqueness is enabled
    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution
    Provided that all temporary objects that are created have a corresponding drop statement, then all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution
    If an execution failed in ODI 11g then temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, then there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer - it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly, as a Procedure or KM Developer...

    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7). For example, below in the code type I selected 'Pre-executed Code'; this lets you see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names would always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):
    - UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    - UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    - IS_CONCURRENT - Returns info about the current mapping; will return 0 or 1 (only in % phase)
    - GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)
    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping; it will return 0 or 1.

    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which will allow KM developers to construct a name from fixed text or ODI symbols that can be optionally truncated to a max length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]). This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. E.g.

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology and truncation may need to be applied. When a unique tag is included in the generated string it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to ensure uniqueness of the final resultant name.

    SUMMARY
    To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases and provides APIs to help develop any procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • What markup languages are good for programming articles/tutorials?

    - by Vilx-
    I very much wish to write a programming tutorial in my native language (Latvian). There are far too few of those. I am, however, unsure of what markup language to use for writing it. Here are a few things I would like to achieve:
    - The same source can be compiled to both HTML for online viewing and printed form (PDF?). In HTML form it would allow superior interaction and appearance (see below), while the print form would look good on paper (layout etc.).
    - I have the idea that the tutorial could be multi-language. Different students have different requirements in their schools. For example, some schools teach Java, some teach C#. You could choose the language at the top of the HTML page and the relevant code snippets (and occasionally pieces of text) would swap out. Most of the text is the same anyway, only the language syntax is a bit different.
    - The text would occasionally contain images too, of course, and these would need to be included in both the HTML and the printed version.
    - In the HTML version the code snippets should get automatic syntax coloring, which should ideally be the same as in the recommended IDE for the tutorial. In case there are ambiguities, hints for the syntax colorer should be possible, but I don't want to do the whole coloring by hand.
    - "Output" syntax coloring which would emulate a standard 80x25 text console (since many of the initial programs would be console applications).
    - Collapsible sections for answers to questions (aka "spoiler tags").
    - Automatically generated index/table of contents.
    - Links to other parts of the tutorial (rendered as links in HTML and as references in the print version).
    - "Side note" sections, rendered as separate blocks on the side.
    - Other functions useful in publications that I'm not aware of :)
    I know this is a bit much to ask, but is there something close enough that I could take it as a starting point and add the necessary features myself? Or is there something in the whole list (like the desire to have both HTML and print versions from the same source) that makes it all fundamentally infeasible?

    Read the article

  • SOA Suite 11gR1 Patch Set 2 (PS2) released today!

    - by Demed L'Her
    We just released SOA Suite 11gR1 Patch Set 2 (PS2) this morning! You can download it as usual from:
    - OTN (main platforms only)
    - eDelivery (all platforms)
    11gR1 PS2 is delivered as a sparse installer, that is to say it is meant to be applied on the latest full release (11gR1 PS1). The good part is that it's great for existing PS1 users, who simply need to apply the patch and run the patch assistant - the not so good part is that new users will first need to download PS1. What's in this release? Bug fixes of course, but also several significant new features. Here is a short selection of the most significant features in PS2:
    - Spring component (for native Java extensibility and integration)
    - SOA Partitions (to organize and manage your composites)
    - Direct Binding (for transactional invocations to and from Oracle Service Bus)
    - HTTP binding (for those of you trying to do away with SOAP and looking for simple GET and POST)
    - Resequencer (for ordering out-of-order messages)
    - WS Atomic Transactions (WS-AT) support (for propagation of transactions across heterogeneous environments)
    Check out the complete list of new features in PS2 for more (including links to the documentation for the above)! But maybe even more importantly, we are also releasing Oracle Service Bus 11gR1 and BPM Suite 11gR1 at the same time - all on the same base platform (WebLogic Server 10.3.3)! (NB: it might take a while for all pages and caches to be updated with the new content, so if you don't find what you need today, try again soon!)
    Technorati Tags: ps1, 11gr1ps2, new release, oracle soa suite, oracle

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    Post by David Allan.

    ODI is renowned for its declarative designer and minimal expression-based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow-based designer, while retaining ODI's key differentiators of:
    - Minimal expression-based definition,
    - The ability to incrementally design an interface and to extract/load data from any combination of sources, and most importantly
    - Being backed by ODI's extensible knowledge module framework.
    The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements through consistent interactions of components and concepts, all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster.

    So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross-referencing so you can easily see what's used where within the design.

    The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules; in ODI 12c you can pick multiple nodes (on logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge module settings on the access points between execution units side by side; in the example one table is retrieved via database links and the other is an external table.

    In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement.

    In ODI 12c, the mapper is just that, a mapper. Design your mapping and write to multiple targets; the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle and writing of XML.

    Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions, you can decompose it into smaller snippets; below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - it's used in TOTAL_TAX_AMOUNT and in UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected the TOTAL_TAX_AMOUNT in the target datastore and the UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time without needlessly clicking around and memorizing information.

    Our mantra during development was to keep it simple and make the tool more powerful and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA AquaLogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI in 11g to building the flow-based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic, and this is just a preview of much more that you will see of the mapper in ODI 12c.

    Read the article

  • 1 Million IOPS

    - by GrumpyOldDBA
    As a keen follower of storage performance I couldn't help but be drawn to this article in The Register this morning: http://www.theregister.co.uk/2010/04/14/lsi_million_iops/. I gave my 5-year-old laptop a new lease of life with an SSD and, in combination with the old drive made external, managed to reduce the time of a demo query from 50-odd minutes down to 6 minutes. I also have 4 Silicon Power 32GB SSDs set up as RAID 0 in my home server, an overblown PC. http://www.futurestorage.co.uk/index.asp?selmanuf...(read more)

    Read the article

  • Bare Metal Restore Part 2

    - by GrumpyOldDBA
    I blogged previously about how Windows 2008 R2 has native "bare metal restore": http://sqlblogcasts.com/blogs/grumpyolddba/archive/2011/05/13/windows-2008-r2-bare-metal-restore.aspx - see the Core Team's blog post here: http://blogs.technet.com/b/askcore/archive/2011/05/12/bare-metal-restore.aspx. Well, since then I've actually had the chance not only to put the process to the test but to see if I could go one step further. I have six identical IBM servers, part of...(read more)

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is "yes". Connecting on-premise and cloud applications "in minutes" is true... provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled "Simplifying Integration - The Cloud & Mobile Pre-requisite", Oracle's Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long-term application and service integration strategy.

    To understand the challenges and opportunities faced by cloud integration, the session started off with a slide that describes how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows:

    [Figure: Increased complexity puts increased demand on the integration platform]

    As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor, as shown in the image below:

    [Figure: Incomplete and overlapping offerings from a patchwork of niche vendors]

    Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy on how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta's slides:
    - Interfaces are dictated by SaaS vendors; wide variety (SOAP, REST, Socket, HTTP/POX, SFTP); flexibility of Oracle Service Bus/SOA Suite helps to support
    - Every vendor has their own way to handle security (WS-Security, custom header); support in Oracle Service Bus helps to adhere to disparate requirements
    At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements as mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle's strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events...), service virtualization and more in a single unified interface.

    [Figure: Oracle SOA Suite - Unified and comprehensive]

    To summarize, yes, you can achieve "cloud integration in minutes" when considering the connectivity subset of integration, but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity, such as service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications... many of which are unknown to you now.

    Read the article
