Search Results

Search found 27161 results on 1087 pages for 'information schema'.


  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    - With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    - The Windows Azure subscription is charged like any other Azure service - for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    - Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    - Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    - This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    - Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure plus an availability policy for the worker nodes.
    - After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    - A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    - Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    - An Azure subscription account and an HPC head node installed and configured are assumed.
    - Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    - Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    - Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    - Note: HPC Server uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    - Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate & upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    - Import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server
    - Required for each worker node template.
    - Copy it from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    - Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    - Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. (A minimal C# sketch for reading the thumbprint from the local certificate store follows this excerpt.)
    - Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    - Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    - This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    - Use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password.
    - See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    - To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    - To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    - After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node
    - The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    - Edit HPC node templates in HPC Node Template Editor.
    - Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) a worker node availability policy - rules for deploying / removing worker role instances in Windows Azure.
      - HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New -> Create Node Template Wizard, or Edit -> Node Template Editor
      - Choose Node Template Type page - Windows Azure worker node template
      - Specify Template Name page - template name & description
      - Provide Connection Information page - Azure Subscription ID (text) & Subscription certificate (browse)
      - Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get the list of what is available from Azure - possible LRT)
      - Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance - add / remove) - manual or automatic
      - For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select days and hours for worker nodes to start / stop.
    - To validate the Windows Azure connection information, on the template's Connection Information tab choose Validate connection information.
    - You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    - Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    - To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    - All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    - To add Windows Azure worker nodes:
      - In HPC Cluster Manager: Node Management > Actions pane > Add Node -> Add Node Wizard
      - Select Deployment Method page - Add Azure Worker nodes
      - Specify New Nodes page - select a worker node template, then specify the number and size of the worker nodes.
    - After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    - Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

    Deploying Windows Azure Worker Nodes
    - To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    - The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    - Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    - To start worker nodes manually and bring them online:
      - In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view, select one or more worker nodes.
      - Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
      - The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
      - If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      - After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    - Troubleshooting:
      - Check the node template.
      - Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      - Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    - When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    - To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    - Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
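
    The certificate bullets above come down to finding the right certificate and copying its 40-character thumbprint with the spaces removed. As a rough illustration, here is a minimal C# sketch (not part of the HPC documentation) that locates a certificate in the Trusted Root store of the local computer account and prints its thumbprint; the subject-name filter "AzureHpcSubscription" is only a placeholder for your own certificate's subject.

      using System;
      using System.Security.Cryptography.X509Certificates;

      class ThumbprintLookup
      {
          static void Main()
          {
              // Trusted Root Certification Authorities store of the local computer account,
              // which is where the excerpt says the subscription certificate gets imported.
              var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
              store.Open(OpenFlags.ReadOnly);

              foreach (X509Certificate2 cert in store.Certificates)
              {
                  // "AzureHpcSubscription" is a placeholder - match on your certificate's real subject.
                  if (cert.Subject.Contains("AzureHpcSubscription"))
                  {
                      // Thumbprint is already a 40-char hex string with no spaces,
                      // which is the form the worker node template expects.
                      Console.WriteLine("{0} -> {1}", cert.Subject, cert.Thumbprint);
                  }
              }

              store.Close();
          }
      }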

    Read the article

  • Tip #104: Did you know … How to view text for the ‘hint’ buttons on the Publish Web Dialog?

    - by The Official Microsoft IIS Site
    After the Beta 2 release of Visual Studio 2010, the Publish Web Dialog was modified to include two information buttons associated with the Service URL and Site/application text boxes. (See Figure 1) Figure 1 – New information (‘hint’) buttons (see circled question marks) There are two keys to remember when trying to view the ‘help’ text associated with these buttons: 1) patience, and 2) hover – don’t click. In order to reveal the valuable information that these help icons can unlock, simply move your mouse to...(read more)

    Read the article

  • Don’t be a dinosaur. Use Calendar Tree!

    - by jamiet
    If one spends long enough in my company one will likely eventually have to listen to me bark on about subscribable calendars. I was banging on about them way back in 2009, I’ve cajoled SQLBits into providing one, provided one myself for the World Cup, and opined that they could be transformative for the delivery of BI. I believe subscribable calendars can change the world but have never been good at elucidating why I thought so, for that reason I always direct people to read a blog by Scott Adams (yes, the guy who draws Dilbert) entitled Calendar as Filter. In that blog post Scott writes: I think the family calendar is the organizing principle into which all external information should flow. I want the kids' school schedules for sports and plays and even lunch choices to automatically flow into the home calendar. Everything you do has a time dimension. If you are looking for a new home, the open houses are on certain dates, and certain houses that fit your needs are open at certain times. If you are shopping for some particular good, you often need to know the store hours. Your calendar needs to know your shopping list and preferences so it can suggest good times to do certain things I think the biggest software revolution of the future is that the calendar will be the organizing filter for most of the information flowing into your life. You think you are bombarded with too much information every day, but in reality it is just the timing of the information that is wrong. Once the calendar becomes the organizing paradigm and filter, it won't seem as if there is so much. I wholly agree and hence was delighted to discover (via the Hanselminutes podcast) that Scott has a startup called CalendarTree.com whose raison d’etre is to solve this very problem. What better way to describe a Scott Adams startup than with a Scott Adams comic: I implore you to check out Calendar Tree and make the world a tiny bit better by using it to share any information that has a time dimension to it. Don’t be a dinosaur, use Calendar tree! @Jamiet

    Read the article

  • An XEvent a Day (5 of 31) - Targets Week – ring_buffer

    - by Jonathan Kehayias
    Yesterday’s post, Querying the Session Definition and Active Session DMV’s, showed how to find information about the Event Sessions that exist inside a SQL Server and how to find information about the Active Event Sessions that are running inside a SQL Server using the Session Definition and Active Session DMV’s. With the background information now out of the way, and since this post falls on the start of a new week, I’ve decided to make this Targets Week, where each day we’ll look at a different...(read more)

    Read the article

  • An XEvent a Day (4 of 31) – Querying the Session Definition and Active Session DMV’s

    - by Jonathan Kehayias
    Yesterday’s post, Managing Event Sessions, showed how to manage Event Sessions inside the Extended Events framework in SQL Server. In today's post, we’ll take a look at how to find information about the defined Event Sessions that already exist inside a SQL Server using the Session Definition DMV’s and how to find information about the Active Event Sessions that exist using the Active Session DMV’s. Session Definition DMV’s The Session Definition DMV’s provide information...(read more)
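
    As a quick illustration of the kind of metadata the post describes, here is a minimal C# sketch that reads the session definition catalog view (sys.server_event_sessions) and the active session DMV (sys.dm_xe_sessions) through SqlClient; the connection string is a placeholder, and the query is reduced to just the session names.

      using System;
      using System.Data.SqlClient;

      class XeSessionLookup
      {
          static void Main()
          {
              // Placeholder connection string - point it at the instance you want to inspect.
              const string connectionString = "Server=localhost;Database=master;Integrated Security=true";

              const string query = @"
                  SELECT s.name AS defined_session,
                         CASE WHEN a.name IS NULL THEN 'stopped' ELSE 'running' END AS state
                  FROM sys.server_event_sessions AS s
                  LEFT JOIN sys.dm_xe_sessions AS a ON a.name = s.name;";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(query, connection))
              {
                  connection.Open();
                  using (SqlDataReader reader = command.ExecuteReader())
                  {
                      // One row per defined Event Session, flagged as running when it also
                      // appears in the active session DMV.
                      while (reader.Read())
                      {
                          Console.WriteLine("{0} ({1})", reader.GetString(0), reader.GetString(1));
                      }
                  }
              }
          }
      }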

    Read the article

  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow-on from my previous post, where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 installations wanting to upgrade to 11.1.1.4. In this example I’m just upgrading SOA Suite on OEL 64-bit, but the steps will be the same; some of the downloads may be different based on your environment. To upgrade to 11.1.1.4 you need to have access to http://support.oracle.com, as this is where the downloads reside. Oracle provides 11.1.1.4 as a standalone download, so you can do a fresh install if required using OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html).

    The high-level steps to upgrade are as follows:
    - Download the software
    - Shut down your SOA environment
    - Upgrade WLS to 11.1.1.4
    - Upgrade SOA Suite to 11.1.1.4
    - Upgrade OSB to 11.1.1.4
    - Upgrade the MDS schemas

    Identify the downloads you require for your install. You will need the WebLogic Server upgrade and the additional product downloads. If you are using 64-bit then use the generic version. The downloads are found at the following location: http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC. For the purpose of this post I downloaded the following patches:
    - 11060985 – WLS Server Generic
    - 11060960 – SOA Suite
    - 11061005 – OSB Suite

    You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN or Oracle Support; I have provided the link from Oracle Support.
    - 11060956 – RCU

    Make sure you have set the Java executable in your PATH, e.g. export PATH=$JAVA_HOME/bin:$PATH. Make sure your whole WebLogic environment has been shut down before performing the upgrade.

    Extract the WLS patch 11060985 to a temporary directory and start the installer: java -jar wls1034_upgrade_generic.jar. Please note that if you are not running 64-bit, the upgrade executable will be just a bin file which you can execute directly. Choose the right Oracle home for your WebLogic Server install. In the Register for Security Updates step you can enter your details or just click Next; if you do not enter details, confirm that you don’t want to receive these updates. Select the products you want to upgrade and select Next. It is recommended that you accept the defaults. Confirm the directories that will be upgraded. The upgrade of WLS has now been completed.

    Extract both your SOA downloads to a temporary directory and run the installer found in Disk1: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre. Please note that the Java location and version may be different for your environment. Skip the Software Updates. Ensure your system meets the prerequisites. Set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade; click Yes. Choose your application server - since you are upgrading from 11.1.1.x you will be on WebLogic. Start the install. The upgrade of SOA Suite is completed; accept the defaults to finish.

    In my environment I have OSB installed, so I need to upgrade this next. If you don’t have OSB you can go straight to completing the DB schema updates at step 24. Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder: ./runInstaller -jreLoc /java/jdk1.6.0_20/jre. Skip the software updates. Select the Oracle home for your environment. Accept the warning to continue the upgrade. Point to the location of your WebLogic Server installation. Install the OSB upgrade. The upgrade has been completed; accept the defaults.

    Change directory to $MW_HOME/oracle_common/bin, where the Patch Set Assistant is installed. Execute the following command to update the MDS schema. Please note that for my examples I have the context set to DEV (yours may be different); this means that all my schemas are prefixed by DEV.

      ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS

    You will be asked for the passwords for sys and the schema:
      Enter the database administrator password for "sys":
      Enter the schema password for schema user "DEV_MDS":

    Change directory to $MW_HOME/Oracle_SOA1/bin, where the Patch Set Assistant is installed for SOA Suite. Execute the following command to update the SOA and BAM schemas:

      ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA

    To check that everything has installed correctly, run the following SQL as sysdba:

      SELECT owner, version, status FROM schema_version_registry;

      OWNER          VERSION      STATUS
      -------------- ------------ -----------
      DEV_MDS        11.1.1.4.0   VALID
      DEV_SOAINFRA   11.1.1.4.0   VALID

    Don’t stress if the versions are not all sitting at version 11.1.1.4, as not all schemas need to be updated. The key ones are MDS and SOAINFRA.
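
    If you prefer to script the post-upgrade check instead of running it by hand, the following C# sketch issues the same schema_version_registry query through ODP.NET; it assumes the Oracle.DataAccess client is installed, and the connection details are placeholders.

      using System;
      using Oracle.DataAccess.Client;   // ODP.NET - assumes the Oracle data access client is installed

      class SchemaVersionCheck
      {
          static void Main()
          {
              // Placeholder connection details - point this at the database holding the DEV_* schemas.
              const string connect =
                  "Data Source=localhost:1521/xe;User Id=sys;Password=your_password;DBA Privilege=SYSDBA";

              using (var conn = new OracleConnection(connect))
              using (var cmd = new OracleCommand(
                  "SELECT owner, version, status FROM schema_version_registry", conn))
              {
                  conn.Open();
                  using (OracleDataReader reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          // Expect MDS and SOAINFRA to report 11.1.1.4.0 / VALID after the upgrade.
                          Console.WriteLine("{0,-15} {1,-12} {2}",
                              reader.GetString(0), reader.GetString(1), reader.GetString(2));
                      }
                  }
              }
          }
      }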

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and which areas of NuoDB are important to learn. As we have already installed NuoDB, we will quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works. In the next blog post we will learn how the Explorer section works.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/. It will bring you to the following screen. On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to the following screen, where you will find very important information about Domain and Database Settings. It is our habit that we do not read what is written on the screen and keep clicking Continue without reading. While we are familiar with most wizards, we can often miss a very important message on the screen. Please note the Domain Settings and Database Settings information from the following screen before clicking on Create Database.

    Domain Settings
      User: quickstart
      Password: quickstart
    Database Settings
      User: dba
      Password: goalie
      Database: test
      Schema: HOCKEY

    Once you click on the Create Database button it will immediately start creating the sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On success of creating the sample database it will show the following screen. Now is the time when we can explore the NuoDB Admin or NuoDB Explorer.

    If you click on Admin, it will first show the following login screen. Enter "domain" for the username and "bird" for the password. Alternatively, you can enter "quickstart" twice for username and password; that works as well. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:
    - Create a view of the entire domain
    - Add and remove databases
    - Start and stop NuoDB Transaction Engines and Storage Managers
    - Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out information about the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see various processes, CPU usage and other information.

    In the Processes section you can see that there are two different types of processes. The first process (where you can see the floppy drive icon) represents a running Storage Manager process and the second process a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.
    I hope today's tutorial gives you enough confidence to try out NuoDB and check out various administrative activities with the database. I am personally impressed with their dashboard related to various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
    Tagged: NuoDB

    Read the article

  • New User of UPK?

    - by [email protected]
    The UPK Developer comes with a variety of manuals to help support your organization in the development and deployment of content. The Developer manuals can be found in the \Documentation\Language Code\Reference folder where the Developer has been installed. As of 3.5.x the documentation can also be accessed via the Start menu: Start\Programs\User Productivity Kit\Documentation\Reference.
    - Content Deployment.pdf: This manual provides information on how to deploy content to your audience.
    - Content Development.pdf: This manual provides information on how to create, maintain, and publish content using the Developer. The content of this manual also appears in the Developer help system.
    - Content Player.pdf: This manual provides instructions on how to view content using the Player. The content of this manual also appears in the Player help system.
    - In-Application Support Guide.pdf: This manual provides information on how to implement context-sensitive, in-application support for enterprise applications using Player content.
    - Installation & Administration.pdf: This manual provides instructions for installing the Developer in a single-user or multi-user environment, as well as information on how to add and manage users and content in a multi-user installation. An Administration help system also appears in the Developer for authors configured as administrators. This manual also provides instructions for installing and configuring Usage Tracking.
    - Upgrade.pdf: This manual provides information on how to upgrade from a previous version to the current version.
    - Usage Tracking Administration & Reporting.pdf: This manual provides instructions on how to manage users and usage tracking reports.
    - Kathryn Lustenberger, Oracle UPK Outbound Product Management

    Read the article

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best-practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover some key information such as:
    - Pros and cons of the different approaches to installing and configuring SharePoint 2010
    - Configuration best practices for SharePoint 2010 farms
    - Services architecture: dependencies, licensing, and topologies
    - Information Architecture guidance for sizing, multilingual support, multi-tenancy, and more
    - Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
    - And, most of all, avoiding common pitfalls for installation and deployment
    What is better than all of that? Well, the even more exciting thing is that the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are then you should definitely check out their blogs and their contributions to the SharePoint community. To get more information and register click here: REGISTER. Other great links to information in this post: ShareSquared, Inc; Gary Lapointe's Blog; Paul Stork's Blog; SharePoint Composer. Check it out and get up to speed from some of the best in the industry. Michael

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.

    Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

    Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

    Figure 1: Memcached API Implementation for InnoDB

    With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations.

    Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second.

    Figure 2: Over 9x Faster INSERT Operations

    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache.

    Figure 3: Memcached API Implementation with MySQL Cluster

    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached Driver for NDB (which is part of the same process).
    3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care - it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every key/value is written to the same table, with each key/value pair stored in a single row - thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
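
    Because the point of the Memcached API is that any standard Memcached client keeps working, a plain client-side example is enough to exercise it. The C# sketch below assumes the open-source Enyim.Caching client and an endpoint listening on 127.0.0.1:11211 (host, port and key are placeholders); the same calls apply whether the endpoint is memcached itself, the InnoDB plug-in, or the MySQL Cluster driver.

      using System;
      using Enyim.Caching;                       // assumes the Enyim Memcached client library
      using Enyim.Caching.Configuration;
      using Enyim.Caching.Memcached;

      class MemcachedApiDemo
      {
          static void Main()
          {
              // Point the client at the Memcached endpoint exposed by the MySQL server or cluster.
              var config = new MemcachedClientConfiguration();
              config.AddServer("127.0.0.1", 11211);    // placeholder host and port

              using (var client = new MemcachedClient(config))
              {
                  // Writes go straight through to the storage engine - no SQL parsing involved.
                  client.Store(StoreMode.Set, "town:maidenhead", "SL6");

                  // Reads come back through the same key/value interface.
                  string postcode = client.Get<string>("town:maidenhead");
                  Console.WriteLine("town:maidenhead -> " + postcode);
              }
          }
      }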

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian … Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit. What database software, design or model would best fit this? Are PHP and MySQL good technologies to handle it?
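
    One common answer is a self-referencing many-to-many design: one table for terms and one for typed links between pairs of terms, using the classic thesaurus relation types (broader, narrower, related, used-for). The C# sketch below is only an illustrative entity model of that idea - the class and property names are invented for the example - and it maps naturally onto two MySQL tables.

      using System;
      using System.Collections.Generic;

      // Standard thesaurus relationship types (BT / NT / RT / UF in library-science notation).
      enum TermRelationType { Broader, Narrower, Related, UsedFor }

      // One row per word or concept: maps to a "term" table.
      class Term
      {
          public int Id { get; set; }
          public string Word { get; set; }
          public string ScopeNote { get; set; }   // free-text attribute describing usage
          public List<TermRelation> Relations { get; } = new List<TermRelation>();
      }

      // One row per directed, typed link between two terms: maps to a "term_relation" table
      // with (FromTermId, ToTermId, RelationType) and foreign keys back to "term".
      class TermRelation
      {
          public int FromTermId { get; set; }
          public int ToTermId { get; set; }
          public TermRelationType RelationType { get; set; }
      }

      class ThesaurusDemo
      {
          static void Main()
          {
              var vehicle = new Term { Id = 1, Word = "vehicle" };
              var car = new Term { Id = 2, Word = "car", ScopeNote = "Use for 'automobile'." };

              // "car" has the broader term "vehicle"; the inverse (narrower) link can be stored or derived.
              car.Relations.Add(new TermRelation
              {
                  FromTermId = car.Id,
                  ToTermId = vehicle.Id,
                  RelationType = TermRelationType.Broader
              });

              Console.WriteLine("{0} -> {1}: {2}", car.Word, vehicle.Word, car.Relations[0].RelationType);
          }
      }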

    Read the article

  • mappoint 2013 randomly crashes on import

    - by ErocM
    We are sending routes to MapPoint 2013 from our application using an Access database. It seems to happen with MapPoint 2010 and 2011 also. It doesn't happen on all of our clients, and it happens randomly on those where it does happen. This is the message:

    Problem signature:
      Problem Event Name: BEX
      Application Name: MapPoint.exe
      Application Version: 19.0.18.1100
      Application Timestamp: 4fd664bb
      Fault Module Name: StackHash_94b0
      Fault Module Version: 0.0.0.0
      Fault Module Timestamp: 00000000
      Exception Offset: 7f82c94f
      Exception Code: c0000005
      Exception Data: 00000008
      OS Version: 6.0.6002.2.2.0.18.10
      Locale ID: 1033
      Additional Information 1: 94b0
      Additional Information 2: 30950b6006304277980cdff17dfbd104
      Additional Information 3: 098a
      Additional Information 4: 31c80150ac0b74b2dcb7884aa8fa1dac

    Does anyone know where I'd find out more information on this or how to resolve it? If this is not the correct exchange, please point me to the right one and I'll delete and repost it. Thanks!

    Read the article

  • Handling Trailing Delimiters in HL7 Messages

    - by Thomas Canter
    Applies to: BizTalk Server 2006 with the HL7 1.3 Accelerator

    Outline of the problem

    Trailing delimiters are empty values at the end of an object in an HL7 ER7-formatted message. Examples:
    - Empty field: NTE|P| or NTE|P||
    - Empty component: ORC|1|725^
    - Empty subcomponent: ORC|1|||||27&
    - Empty repeat: OBR|1||||||||027~

    Trailing delimiters indicate that the following object exists and is empty, which is quite different from null; null is an explicit value indicated by a pair of double quotes -> "". The BizTalk HL7 Accelerator by default does not allow trailing delimiters. There are three methods to allow trailing delimiters.

    NOTE: All schemas always allow trailing delimiters in the MSH segment.

    1. Using party identifiers
    - MSH3.1 – Receive/inbound processing; using this value as a party allows you to configure the system to allow inbound trailing delimiters.
    - MSH5.1 – Send/outbound processing; using this value as a party allows you to configure the system to allow outbound trailing delimiters.
    Generally, if you allow inbound trailing delimiters, then unless you are willing to programmatically remove all trailing delimiters (a C# sketch of such a cleanup follows this excerpt), you need to configure the send side to allow trailing delimiters as well. Add the appropriate parties to the BizTalk Parties list from these two fields in your message stream. Open the BizTalk HL7 Configuration tool and, for each party, check the "Allow trailing delimiters (separators)" check box on the Validation tab.
    Disadvantage – each MSH3.1 and MSH5.1 value must be represented in the parties list and configured.
    Advantage – granular control over system behavior for each inbound/outbound system.

    2. Using instance properties of a pipeline used in a send port or receive location
    - Open the BizTalk Server Administration console and locate the send port or receive location that contains the BTAHL72XReceivePipeline or BTAHL72XSendPipeline pipeline.
    - Open the properties. To the right of the selected pipeline, locate the […] ellipsis button.
    - In the property list, locate the "TrailingDelimiterAllowed" property and set it to True.
    Advantage – all messages through a particular send port or receive location will allow trailing delimiters.
    Disadvantage – must configure each send port or receive location. No granular control over which remote parties will send or receive messages with trailing delimiters.

    3. Using a custom pipeline that uses a pre-configured BTA HL7 pipeline component
    - Use Visual Studio to construct a custom receive and send pipeline using the appropriate assembler or disassembler.
    - Set the component property "TrailingDelimitersAllowed" to True.
    - Compile and deploy the custom pipeline.
    - Use the custom pipeline instead of the standard pipeline for all HL7 message processing.
    Advantage – all messages using the custom pipeline will automatically allow trailing delimiters.
    Disadvantage – requires custom coding and development to create and deploy the custom pipeline. No granular control over which remote parties will send or receive messages with trailing delimiters.

    What does a trailing delimiter do to the XML schema?

    Allowing trailing delimiters does not have the impact often expected on the actual XML schema. The schema reproduces the message with no data loss; thus, the message, when represented in XML, must contain the extra fields in order to reproduce the outbound message. A trailing delimiter therefore results in an empty XML field. Trailing delimiters are not stripped from the inbound message.
    Example:
    <PID_21>44172</PID_21><PID_21>9257</PID_21> -> the original maximum number of repeats
    <PID_21></PID_21> -> the empty repeated field

    Allowing trailing delimiters does not remove the trailing delimiters from the message; it simply suppresses the check that would cause the message to fail parsing when trailing delimiters are present.

    When can you not fix the problem by enabling trailing delimiters?

    Each object in a message must have a location in the target BTAHL7 schema for its content to reside. If you have more objects in the message than are contained at that location, then enabling trailing delimiters will not resolve the problem. The schema must be extended to accommodate the empty message content. Examples:
    - Extra field: NTE|P|||| – only 4 fields in the NTE segment; the 4th field exists, but is empty.
    - Extra component: PID|1|1523|47^^^^^^^ – only 5 components in a CX data type; the 5th component exists, but is empty.
    - Extra subcomponent: ORC|1|||||27&& – only 2 subcomponents in a CQ data type; the 3rd subcomponent is empty, but exists.
    - Extra repeat: PID|1||||||||||||||||||||4419~5217~ – only 2 repeats allowed for the field "Mother's identifier"; the repeat is empty, but exists.

    In each of these cases, you must locate the failing object and extend the type to allow an additional object of that type.
    - Field: add a field of ST to the end of the segment with a suitable name in the segments_nnn.xsd.
    - Component: create a new custom CX data type (e.g. CX_XtraComp) in the datatypes_nnn.xsd and add a new component to the custom CX data type. Update the field in the segments_nnn.xsd file to use the custom data type instead of the standard data type.
    - Subcomponent: create a new custom CQ data type that accepts an additional TS value at the end of the data type. Create a custom TQ data type that uses the new custom CQ data type as the first subcomponent. Modify the ORC segment to use the new CQ data type at ORC.7 instead of the standard CQ data type.
    - Repeat: modify the field definition for PID.21 in the segments_nnn.xsd to allow more repeats in the field.
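
    For cases where you would rather clean the message than relax validation, here is one possible C# sketch for stripping trailing delimiters from an ER7 message. It assumes the default HL7 encoding characters (| field, ^ component, ~ repeat, & subcomponent), leaves the MSH segment alone because MSH.1/MSH.2 carry the delimiter characters themselves, and is illustrative only - it is not part of the BizTalk accelerator.

      using System;

      static class Hl7TrailingDelimiters
      {
          // Removes trailing field / component / repeat / subcomponent delimiters from each segment.
          // Assumes the default HL7 encoding characters; adjust if MSH.2 defines different ones.
          public static string Strip(string er7Message)
          {
              char[] innerDelimiters = { '^', '~', '&' };
              string[] segments = er7Message.Split(new[] { '\r', '\n' },
                                                   StringSplitOptions.RemoveEmptyEntries);

              for (int i = 0; i < segments.Length; i++)
              {
                  string segment = segments[i];
                  if (segment.StartsWith("MSH"))
                      continue;                  // MSH.1 / MSH.2 hold the delimiter characters themselves

                  // Drop empty trailing fields, then trailing separators inside each remaining field.
                  string[] fields = segment.TrimEnd('|').Split('|');
                  for (int f = 0; f < fields.Length; f++)
                      fields[f] = fields[f].TrimEnd(innerDelimiters);

                  segments[i] = string.Join("|", fields);
              }

              return string.Join("\r", segments);
          }
      }

      class Demo
      {
          static void Main()
          {
              // "ORC|1|725^|" and "NTE|P||" become "ORC|1|725" and "NTE|P".
              Console.WriteLine(Hl7TrailingDelimiters.Strip("ORC|1|725^|\rNTE|P||"));
          }
      }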

    Read the article

  • July, the 31 Days of SQL Server DMO’s - Intro

    - by Tamarick Hill
    DMOs burst onto the SQL Server scene in 2005, and when they did, they unlocked a wealth of information. I've become a major fan of DMOs as they tend to simplify my troubleshooting as well as provide me with valuable information about what is going on within the SQL Server engine. I would recommend that those of you who are not familiar with DMOs take the time to really learn more about them. For those of you who may not be familiar with DMOs, for the month of July I will be writing about one DMO per day. Don't get me wrong, I'm no DMO expert or anything like that, but I've worked with them enough to feel that I can give you some good information about DMOs to help you get started with using them. During these blog sessions, I will not be providing you with any complicated queries to solve all of the SQL Server problems that you may or may not have. I will simply be introducing you to various DMOs and illustrating what type of information they provide. After you learn more about these individually, you will be able to join whatever DMOs you need to pull back the information you are seeking. I hope that you all benefit in some form or fashion from my next 31 DMO postings!!! Enjoy!

    Read the article

  • No search data in Google Analytics or Webmasters

    - by cjk
    I have a domain that has been registered in Google Webmasters and using Google Analytics for over 4 months. I get lots of analytics data, but am getting no information on Google searches in Webmasters, or Queries in Search Engine Optimisation in Analytics, even though I am getting keywords for traffic coming to my site from search engines. I have a test sub-domain with the same setup (except not HTTPS) that is getting some of this information through, even with much less data and visits. What could be wrong to stop me getting this information?

    Read the article

  • Blog Posts from Prepping for Last Year's Summit

    - by RickHeiges
    Last year, I had a series of blog posts that matched up with a webcast I did targeting first-timers to the PASS Summit 2011. Here is a link to the final blog post, which is a summary of those posts and links to the main points in the series. A good deal of the information in those posts is still relevant. I am in the process of updating the webcast and will be presenting the information again this year on Oct 25, 2012 at 11am ET. There is a lot of great information out there for first timers that...(read more)

    Read the article

  • SQL SERVER – SmallDateTime and Precision – A Continuous Confusion

    - by pinaldave
    Some kinds of confusion never go away. Here is one of the ancient confusing things in SQL. The precision of SmallDateTime is one concept that confuses a lot of people, proven by the many messages I receive every day relating to this subject. Let me start with the question: What is the precision of the SMALLDATETIME datatype? What is your answer? Write it down on your notepad. Now if you do not want to continue reading the blog post, head to my previous blog post over here: SQL SERVER – Precision of SMALLDATETIME.

    A Social Media Question

    Since the increase of social media conversations, I have noticed that the number of comments I receive on this blog is a bit staggering. I receive lots of questions on Facebook, Twitter or Google+. One of the very interesting questions yesterday was asked on Facebook by Raghavendra. I am re-organizing his script and asking all of the questions he has asked me. Let us see if we can help him with his question:

      CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test=GETDATE()
      INSERT INTO #temp VALUES ('Value1',@test)
      INSERT INTO #temp VALUES ('Value2',@test)
      GO
      SELECT * FROM #temp
      ORDER BY registered DESC
      GO
      DROP TABLE #temp
      GO

    Now when the above script is run, we will get the following result. Well, the expectation of the query was to have the following result: the row which was inserted last was expected to be returned as the first row in the result set, as the ORDER BY is descending.

    Side note: Because the requirement is to get the latest data, we can't use any column other than the smalldatetime column in the ORDER BY. If we use the name column in the ORDER BY, we will get an incorrect result, as it can be any name.

    My Initial Reaction

    My initial reaction was as follows:

    1) DataType DateTime2: If finer precision is expected from the column which stores date and time, it should not be smalldatetime. The precision of the smalldatetime column is one minute (read here); for finer precision use the DateTime or DateTime2 data type. Here is the code which includes the above suggestion:

      CREATE TABLE #temp (name VARCHAR(100), registered datetime2)
      GO
      DECLARE @test datetime2
      SET @test=GETDATE()
      INSERT INTO #temp VALUES ('Value1',@test)
      INSERT INTO #temp VALUES ('Value2',@test)
      GO
      SELECT * FROM #temp
      ORDER BY registered DESC
      GO
      DROP TABLE #temp
      GO

    2) Tie Breaker Identity: There is always a possibility that two rows were inserted at the same time. In that case, you may need a tie breaker. If you have an increasing identity column, you can use that as a tie breaker as well.

      CREATE TABLE #temp (ID INT IDENTITY(1,1), name VARCHAR(100), registered datetime2)
      GO
      DECLARE @test datetime2
      SET @test=GETDATE()
      INSERT INTO #temp VALUES ('Value1',@test)
      INSERT INTO #temp VALUES ('Value2',@test)
      GO
      SELECT * FROM #temp
      ORDER BY ID DESC
      GO
      DROP TABLE #temp
      GO

    Those two were the quick suggestions I provided. It is not necessary that you use both pieces of advice. It is possible that one can use only the DATETIME datatype, or the identity column can have a datatype of BIGINT, or have another tie breaker.
    An Alternate NO Solution

    In the Facebook thread this was also discussed as one of the solutions:

      CREATE TABLE #temp (name VARCHAR(100), registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test=GETDATE()
      INSERT INTO #temp VALUES ('Value1',@test)
      INSERT INTO #temp VALUES ('Value2',@test)
      GO
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      DROP TABLE #temp
      GO

    However, I believe it is not the solution and can be further misleading if used on a production server. Here is an example of why it is not a good solution:

      CREATE TABLE #temp (name VARCHAR(100) NOT NULL, registered smalldatetime)
      GO
      DECLARE @test smalldatetime
      SET @test=GETDATE()
      INSERT INTO #temp VALUES ('Value1',@test)
      INSERT INTO #temp VALUES ('Value2',@test)
      GO
      -- Before Index
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      -- Create Index
      ALTER TABLE #temp ADD CONSTRAINT [PK_#temp] PRIMARY KEY CLUSTERED (name DESC)
      GO
      -- After Index
      SELECT name, registered,
      ROW_NUMBER() OVER(ORDER BY registered DESC) AS "Row Number"
      FROM #temp
      ORDER BY 3 DESC
      GO
      DROP TABLE #temp
      GO

    Now let us examine the result set. You will notice that an index created on the base table is (indeed) a schema change, and it can affect the result set. As you can see, an index can change the result set, so this method is not yet perfect for getting the latest inserted rows.

    No Schema Change Requirement

    After giving these two suggestions, I was waiting for the feedback of the asker. However, the requirement of the asker was that there can't be any schema change, because the application is used by many other applications. I validated again, and of course, the requirement is no schema change at all: no addition of columns and no change of datatypes of any other columns. There is no further help as well. This is indeed an interesting question. I personally can't think of any solution which I could provide him given the requirement of no schema change. Can you think of any other solution to this?

    Need of Database Designer

    This question once again brings up another ancient question: "Do we need a database designer?" I often come across databases which are facing major performance problems or have redundant data. Normalization is often ignored when a database is built fast under a very tight deadline. Often I come across a database which has tables with unnecessary columns and performance problems. While working as Developer Lead in my earlier jobs, I have seen developers adding columns to tables without anybody's consent and retrieving them as SELECT *. There is a lot to discuss on this subject in detail, but for now, let's discuss the question first. Do you have any suggestions for the above question?

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: CodeProject, Developer Training, PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
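
    To see why the two rows tie on ORDER BY, it helps to simulate what a smalldatetime column keeps. The C# sketch below is a rough illustration that truncates to the minute (SQL Server actually rounds smalldatetime to the nearest minute, so boundary behavior differs slightly); it is not SQL Server code, just a way to visualize the one-minute precision.

      using System;

      class SmallDateTimePrecisionDemo
      {
          // Approximate what a smalldatetime column retains: date, hours and minutes only.
          // (SQL Server rounds to the nearest minute; this sketch truncates for simplicity.)
          static DateTime ToMinutePrecision(DateTime value)
          {
              return new DateTime(value.Year, value.Month, value.Day, value.Hour, value.Minute, 0);
          }

          static void Main()
          {
              var first  = new DateTime(2012, 7, 1, 10, 30, 5);   // 'Value1' inserted
              var second = first.AddSeconds(15);                  // 'Value2' inserted moments later

              DateTime storedFirst  = ToMinutePrecision(first);
              DateTime storedSecond = ToMinutePrecision(second);

              // Both values land on the same minute, so ORDER BY registered DESC has no way
              // to tell which row was inserted last - hence the tie-breaker suggestion above.
              Console.WriteLine(storedFirst == storedSecond
                  ? "Same smalldatetime value - ordering is ambiguous"
                  : "Different values - ordering is deterministic");
          }
      }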

    Read the article

  • Visual Studio 2013 - Express for Web vs Professional [duplicate]

    - by TimS
    This question already has an answer here: Visual Studio 2012 - Express vs Professional 2 answers What are the main differences and limitations between Visual Studio 2013 Express and Visual Studio 2013 Professional? I'm specifically interested in information related to the Web edition. I need to be able to develop ASP.Net applications, Windows Services and console applications - not Desktop or Phone apps. Microsoft seems to hide this information well and I can only seem to find information relating to 2012 products and earlier.

    Read the article

  • how to send trackback and pingback using c# script

    - by anirudha
    This is a very interesting topic, because if you search for it you find a lot of useless material, even if you use C# as a prefix.

    1. How does a trackback work? Every blog that supports trackbacks includes, in every post, a comment block like <rdf:rdf></rdf:rdf>. Inside this tag, the attribute “trackback:ping” holds the URL where we can send the trackback.

    2. You need some information about your blog (or site, if you use a site rather than a blog) to make the trackback post:
    - the URL where you want to send the trackback;
    - your post title (or page title);
    - your post URL (or page URL);
    - an excerpt: the information you want to send;
    - your blog name (or site name).

    Format this information like a query string, just as we use in ASP.NET, for example: title=pingpost&url=pingurl&excerpt=it's me&blog=myblog. The information looks like an ASP.NET query string; if you are unsure about the values, you can URL-encode the information you use in the parameters. You also need to be sure that your post contains the URL of the post you are sending the trackback to.

    Make a request to the ping URL and set the following properties:
    - request.Method = "POST" (because only POST is supported);
    - request.ContentLength = the length of the parameters we created for sending the ping;
    - request.ContentType = "application/x-www-form-urlencoded" (required).

    When you send the request, the server responds with something about your request. Check the response status code to verify whether it worked or not:

      if (response.StatusCode < HttpStatusCode.OK || response.StatusCode >= HttpStatusCode.Ambiguous)
          throw new Exception(string.Format(response.StatusCode.ToString()));

    Because the response is in XML format, you can parse it to see whether it has an Error tag inside it or not. I put the information here rather than code because I have seen other blogs on the topic over the past week and found that the code they posted was useless and did not work; to be more declarative, I post here the definition, not the code.
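
    Since the post deliberately stops at the description, here is a minimal C# sketch of the approach it outlines, built on HttpWebRequest. The URLs and post details are placeholders; the parameter name blog_name follows the common trackback convention, and the success check simply looks for <error>0</error> in the returned XML, which is how trackback endpoints typically report status.

      using System;
      using System.IO;
      using System.Net;
      using System.Text;

      class TrackbackSender
      {
          // Sends a trackback ping and returns true when the endpoint reports no error.
          // All argument values used in Main below are illustrative placeholders.
          static bool SendTrackback(string pingUrl, string title, string postUrl,
                                    string excerpt, string blogName)
          {
              string body =
                  "title=" + Uri.EscapeDataString(title) +
                  "&url=" + Uri.EscapeDataString(postUrl) +
                  "&excerpt=" + Uri.EscapeDataString(excerpt) +
                  "&blog_name=" + Uri.EscapeDataString(blogName);   // most endpoints expect "blog_name"

              byte[] payload = Encoding.UTF8.GetBytes(body);

              var request = (HttpWebRequest)WebRequest.Create(pingUrl);
              request.Method = "POST";                                    // trackback endpoints accept POST only
              request.ContentType = "application/x-www-form-urlencoded";
              request.ContentLength = payload.Length;

              using (Stream requestStream = request.GetRequestStream())
              {
                  requestStream.Write(payload, 0, payload.Length);
              }

              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  if (response.StatusCode < HttpStatusCode.OK ||
                      response.StatusCode >= HttpStatusCode.Ambiguous)
                  {
                      throw new Exception("Trackback failed: " + response.StatusCode);
                  }

                  // The endpoint answers with a small XML document; <error>0</error> means success.
                  string xml = reader.ReadToEnd();
                  return xml.Contains("<error>0</error>");
              }
          }

          static void Main()
          {
              // Placeholder values - use the trackback:ping URL and your own post details.
              bool ok = SendTrackback("http://example.com/trackback/123",
                                      "My post title",
                                      "http://myblog.example.com/my-post",
                                      "A short excerpt of the post",
                                      "myblog");
              Console.WriteLine(ok ? "Trackback accepted" : "Trackback rejected");
          }
      }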

    Read the article

  • What is a Linux device name for RAID of sas drives?

    - by flashnik
    I have a RAID 1 using a Promise FastTrack TX2650, consisting of 2 SAS drives. What is the Linux device name for them? For example, sda is the name for the first SATA drive. I have a Windows server, so I can't look at it directly, but I need this information for smartctl usage.

    UPDATE. I found how to access the RAID: smartctl -d scsi sdb (because I also have a SATA drive). But in this case I just get information about the RAID controller, though I want to get information about the drives themselves. Is that possible? Promise's control panel provides information only about their health status (boolean), and I want more. Mostly I now need information about temperature.

    Read the article

  • Discover the solution for exploring structured and unstructured data

    - by David lefranc
    Explore and discover information… We are offering a discovery workshop to let you explore any type of data using the Oracle Endeca solution.

    When: 7 December 2012, from 9:30 am to 12:30 pm
    Where: Oracle, 15 Boulevard Charles de Gaulle, 92715 Colombes
    To register: [email protected]

    Designed for business users, this half-day workshop lets you discover Oracle Endeca Information Discovery so that you can:
    - Understand and explore any information coming from a variety of sources (big data, social networks, forums, surveys, blogs...)
    - See why and how OEID complements classic BI solutions

    Through simple, fast navigation, you will discover how easy it is to find answers to unanticipated questions using OEID, without any prior training. Use search and guided navigation to see how structured and unstructured information can quickly be brought together to reveal hidden value. Explore all your data in any format and from any source, including social media, documents and files. Discover and explore your data without a predefined repository, so that users are autonomous and can analyze their own data quickly. Build a strategy to increase the value of enterprise data while reducing the total cost of ownership. And experience the incredible performance of Endeca on Oracle Exalytics, the in-memory machine.

    Agenda: After an introduction to the Oracle Endeca Information Discovery solution, followed by a hands-on workshop, you will see how easy it is to:
    - Use guided navigation and the search engine to explore structured and unstructured data
    - Quickly integrate new data sources such as social media
    - Build new user interfaces while discovering the information
    - Respond quickly to the changing needs of businesses and data environments

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around, and I have found conflicting information on the web. I would have thought that the gmond process would either send info to gmetad at a regular interval, or gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand this is incorrect. It appears that you configure all gmond processes to send their info to a central gmond process, and gmetad will request info from that central gmond. Is that correct? In my case I have 6 servers sending their information to 1 central server. If I set gmetad to get its information from the central server then I get information from all 6 servers - all good. The weird thing is that if I point gmetad to one of the 6 servers then I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?

    Read the article

  • WebCenter Customer Spotlight: spectrumK Holding GmbH

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary

    spectrumK Holding GmbH was founded in 2007 by various German health insurance funds and national insurance associations and is a service provider for the healthcare market, covering patient care management, financial management, and information management, as well as payment services and legal counseling. spectrumK Holding GmbH's business objective was to implement innovative new Web-based services and solution systems for health insurance funds by integrating a multitude of isolated solutions from different organizations. Using Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio, the customer created a multiple-portal environment and deployed the first three applications for patient receipt, a medication navigator, and disability information. spectrumK Holding GmbH accelerated time-to-market for new features by reducing the development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.

    Company Overview

    spectrumK Holding GmbH was founded in 2007 by various company health insurance funds and national insurance associations. A service provider for the healthcare market, spectrumK consists of one holding company and four operative subsidiaries. Its broad product portfolio of compulsory health funds covers patient care management, financial management, and information management, as well as payment services and legal counseling.

    Business Challenges

    spectrumK Holding GmbH's business objectives were to implement innovative new Web-based services and solution systems for the health insurance funds by integrating a multitude of isolated solutions from different organizations. Specifically, spectrumK was looking to:
    - Establish a portal-based environment to provide health coverage information services to the insured, with the option to integrate a multitude of isolated solutions from different organizations
    - Implement innovative new Web-based spectrumK service products and solution systems for health insurance funds
    - Lower costs while improving services for the health fund's clients
    - Find an infrastructure that supports the small development team in efficient implementation and operation of the solution
    - Reuse standard modules while enabling easy, inexpensive adaptations to customer-specific corporate requirements

    Solution Deployed

    spectrumK Holding GmbH created a multiple-portal environment, called "KundenCenter+", which is based on the integration of Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio. They initiated and launched the first three of the company's KundenCenter+ Oracle-based modules, for patient receipt, a medication navigator, and disability information, with numerous successful deployments and individual customer environment adaptations.

    Business Results

    spectrumK Holding GmbH accelerated time-to-market for new features by reducing the development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.

    Additional Information
    - spectrumK Holding GmbH Snapshot
    - Oracle WebCenter Suite
    - Oracle Customer Support
    - Oracle Consulting
    - Oracle WebCenter Content

    Read the article
