Search Results

Search found 22170 results on 887 pages for 'multiple schema'.


  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service.
    Today there are multiple channels through which customers can communicate with you – phone, web, chat and social to name a few – but regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.
    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if-then-else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.
    Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.
    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.
    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you can design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.
    Five Top Tips to take away:
    1. Start by working out “why” and “how” customers are contacting you.
    2. Implement a clean and relevant agent desktop to support your agents.
    3. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    4. Enhance your Knowledgebase with Guides. Agents can access them proactively and they can be published on your web pages for customers to help themselves.
    5. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.
    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the 2-day Oracle RightNow Customer Portal Design and Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.
    Useful resources: Review the Best Practice Guide. Review the tune-up guide.
    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    - RightNow Customer Service Administration (3 days)
    - RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    - RightNow Analytics (2 days)
    - RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.
    ClusterJ and ClusterJPA
    ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from object model
    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java:
    - Memcached for a persistent, high performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency
    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.
    Q & A
    Q. Why would I use Connector/J vs. ClusterJ?
    A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) – ClusterJPA extends that capability.
    Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
    A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.
    Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
    A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.
    Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
    A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.
    Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
    A. ClusterJ will send identical PK lookups to the same data node.
    Q. How is distributed query processing handled by MySQL Cluster?
    A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node – typically by hashing the primary key – which then interacts with its neighboring nodes to collect the data needed to fulfil the query.
    Q. Can I use Foreign Keys with MySQL Cluster?
    A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.
    Summary
    The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!
    Key Resources
    - MySQL Cluster on-line demo
    - MySQL ClusterJ and JPA on-demand webinar
    - MySQL ClusterJ and JPA documentation
    - MySQL ClusterJ and JPA whitepaper and tutorial
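    For readers who want to see the shape of the API before watching the webinar, below is a minimal ClusterJ sketch. The connect string, database name, employee table and Employee interface are assumptions made for illustration; the annotations and Session calls shown are the standard ClusterJ ones.
    import com.mysql.clusterj.ClusterJHelper;
    import com.mysql.clusterj.Session;
    import com.mysql.clusterj.SessionFactory;
    import com.mysql.clusterj.annotation.PersistenceCapable;
    import com.mysql.clusterj.annotation.PrimaryKey;
    import java.util.Properties;

    public class ClusterJSketch {

        // Domain interface mapped to an assumed "employee" table (id INT PRIMARY KEY, name VARCHAR).
        @PersistenceCapable(table = "employee")
        public interface Employee {
            @PrimaryKey
            int getId();
            void setId(int id);
            String getName();
            void setName(String name);
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            // Assumed management node address and target database.
            props.put("com.mysql.clusterj.connectstring", "localhost:1186");
            props.put("com.mysql.clusterj.database", "test");

            // One SessionFactory per cluster connection; Sessions are lightweight.
            SessionFactory factory = ClusterJHelper.getSessionFactory(props);
            Session session = factory.getSession();

            // Insert - no SQL involved, the operation goes straight to the data nodes.
            Employee emp = session.newInstance(Employee.class);
            emp.setId(1);
            emp.setName("Alice");
            session.makePersistent(emp);

            // Primary key lookup.
            Employee found = session.find(Employee.class, 1);
            System.out.println(found.getName());

            session.close();
        }
    }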

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series:
    1. What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    2. Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    3. DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3
    Statistics are considered one of the most important aspects of SQL Server performance tuning. You might have often heard the phrase related to performance tuning: “Update statistics before you take any other steps to tune performance.” Honestly, I have said the above statement many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with this feature, I decided to write about this subject here.
    New in SQL Server 2014 – Incremental Statistics
    Well, it seems like lots of people want to start using SQL Server 2014’s new feature of Incremental Statistics. However, let us understand what this feature actually does and how it can help. I will try to explain this feature first before I start working on the demo code.
    Code for all versions of SQL Server
    Here is the code which you can execute on all versions of SQL Server and it will update the statistics of your table. The keyword which you should pay attention to is WITH FULLSCAN. It will scan the entire table and build brand new statistics for you, which the SQL Server query optimizer can use for better estimation of your execution plan.
    UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN
    Who should learn about this? Why?
    If you are using partitions in your database, you should consider implementing this feature. Otherwise, this feature is pretty much not applicable to you. Well, if you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition will have data which belongs to itself. Now it is very common that each partition is populated separately in SQL Server.
    Real World Example
    For example, if your table contains data which is related to sales, you will have plenty of entries in your table. It would be a good idea to divide the partitions into multiple filegroups; for example, you can divide this table into 3 semesters or 4 quarters or even 12 months. Let us assume that we have divided our table into 12 different partitions. Now for the month of January, our first partition will be populated, and for the month of February our second partition will be populated. Now assume that you have plenty of data in your first and second partitions. Now the month of March has just started and your third partition has started to populate. Due to some reason, if you want to update your statistics, what will you do?
    In SQL Server 2012 and earlier versions
    You will just use the code WITH FULLSCAN and update the entire table. That means even though you have new data only in the third partition, you will still update the entire table. This will be a VERY resource intensive process, as you will be updating the statistics of partitions 1 and 2 where data has not changed at all.
    In SQL Server 2014
    You will just update the statistics of Partition 3. There is a special syntax (sketched below) where you can now specify which partition you want to update. The impact of this is that SQL Server smartly merges the new data with the old statistics and updates the entire statistics object without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one feature which is indeed attractive. Previously, a statistics update was triggered when 20% of the table data had changed. Now the same threshold is applicable to a single partition. That means if your partition faces a 20% data change, it will also trigger a partition-level statistics update which, when merged into your final statistics, will give you better performance.
    In summary
    If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you.
    Tomorrow: We will see working code of SQL Server 2014 Incremental Statistics.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SQL Statistics, Statistics
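    To make the difference concrete, here is a hedged sketch of the syntax involved. The table name Sales, the column SalesDate and the statistics name Stat_Sales are made up for illustration; the INCREMENTAL and ON PARTITIONS options are the SQL Server 2014 additions described above.
    -- All versions: rescan the whole table and rebuild the statistics.
    UPDATE STATISTICS Sales (Stat_Sales) WITH FULLSCAN;

    -- SQL Server 2014: create the statistics object as incremental...
    CREATE STATISTICS Stat_Sales ON Sales (SalesDate) WITH INCREMENTAL = ON;

    -- ...and later refresh only the partition that changed (partition 3 here);
    -- SQL Server merges the fresh histogram into the existing statistics.
    UPDATE STATISTICS Sales (Stat_Sales) WITH RESAMPLE ON PARTITIONS (3);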

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, and a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism such as FTP.
    In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.
    Figure 1: Target instance importing multiple remote models for local scoring
    In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctor offices, which can score patient data locally.
    Figure 2: Multiple target instances importing the same model from a source instance for local scoring
    Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post (a sketch is also included below). To create a database link to the source database from the target, we can use, for example:
    create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';
    Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension. This is because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.
    connect <source_schema>/<password>@SOURCE1_LINK;
    BEGIN
      DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                     'MY_SOURCE_DIR_OBJECT',
                                     'name =''MY_MINING_MODEL''');
    END;

    connect <target_schema>/<password>;
    BEGIN
      DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                                   'EXPORT_FILE_NAME' || '01.dmp',
                                   'SOURCE1_LINK',
                                   'MY_TARGET_DIR_OBJECT',
                                   'EXPORT_FILE_NAME' || '.dmp');
      DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                     'MY_TARGET_DIR_OBJECT');
    END;
    To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. For example:
    utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
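    As a reminder, here is a sketch of the one-time setup the script above assumes: a directory object on each database, with privileges granted to the schemas involved. The object names and file system paths are illustrative.
    -- On the source database (run as a privileged user):
    CREATE OR REPLACE DIRECTORY MY_SOURCE_DIR_OBJECT AS '/u01/app/dm_export';
    GRANT READ, WRITE ON DIRECTORY MY_SOURCE_DIR_OBJECT TO <source_schema>;

    -- On the target database:
    CREATE OR REPLACE DIRECTORY MY_TARGET_DIR_OBJECT AS '/u01/app/dm_import';
    GRANT READ, WRITE ON DIRECTORY MY_TARGET_DIR_OBJECT TO <target_schema>;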

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.
    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.
    Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution in the format “[Client].[Product].[ProductArea].[Assembly]” and they will automatically be picked up and built when you set up Automated Builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out-of-the-box, and be named for the assembly they are testing with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”.
    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.
    Figure: The Team Project level – at this level there should be a folder for each of the products that you are building if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. “Code Auditor” should be “CodeAuditor”.
    Figure: Product level – at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.
    Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the “Main” branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release, and feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.
    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.
    Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment (a command-line sketch of this step follows at the end of this post). This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.
    Figure: All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate out our development from our production ready code.
    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.
    Figure: This allows us to reference any particular version of our application that was ever shipped.
    In addition, I am an advocate of having a single solution with all the project folders directly under the “Trunk”/”Main” folder and using the full name for the project folders.
    Figure: The ideal solution
    If you must have multiple solutions because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes it easier for automated builds and improves the discoverability of your code and its dependencies. Send me your feedback!
    Technorati Tags: VS ALM, VSTS Developing, VS 2010, VS 2008, TFS 2010, TFS 2008, TFBS
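    Here is the promised command-line sketch of the release-branch step, using the tf.exe client. The client, product and version names are placeholders, and this is only one way to script it; the same operations can be performed from Source Control Explorer.
    REM Branch Main into a new release branch and check it in:
    tf branch $/Client/Product/DEV/Main $/Client/Product/RELEASE/1.0
    tf checkin /comment:"Create release branch 1.0"

    REM After stabilisation, merge (Reverse Integration) bug fixes back into Main:
    tf merge $/Client/Product/RELEASE/1.0 $/Client/Product/DEV/Main /recursive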

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example of this is General Electric's acquisitions and divestiture strategy (plan) designed to propel GE to the number 1 or 2 place (goal) in every business segment that it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case, execution would be the steps followed for mergers/acquisitions or divestiture. The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best: "vision without execution is hallucination". Conversely, it can be said that "execution without vision" may well be "wishful thinking". Research overwhelmingly points towards a wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap.
    Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include:
    - Lack of clearly defined goals
    - Lack of consistent measure of success
    - Lack of ownership
    - Lack of alignment
    - Lack of communication
    - Lack of proper execution
    - Lack of monitoring
    There are multiple approaches to solving the problem, including organizational development practices, technology enablement, etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology. Imagine an integrated closed-loop technology platform that automates the entire management cycle, from defining strategy to assigning ownership to communicating goals to achieving alignment to collaboration to taking actions to monitoring progress and achieving mid-course corrections. Besides, for best ROI and lowest TCO such a system should also have characteristics like:
    - Complete: full functionality, rich end user access
    - Open: any data source, any business application, any technology stack
    - Integrated: common metadata, common security, common system management
    From a capabilities perspective the system should provide the following:
    - Define: strategy, objectives, ownership, KPIs
    - Communicate: pervasive, collaborative, role based, secure
    - Execute: integrated, intuitive, secure, ubiquitous
    - Monitor: multiple styles and formats, exception based, push and pull
    Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best in class BI platform is based on an architecturally integrated technology foundation that provides a unified end user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.
    The BI platform is:
    - Complete – meaning it delivers all modes and styles of BI including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards with a rich end user experience that includes visualization, collaboration, alerts and notifications, search and mobile access.
    - Open – meaning the BI platform integrates with any data source, ETL tool, business application, application server, security infrastructure, portal technology as well as any ODBC compliant third party analytical tool. The suite accesses data from multiple heterogeneous sources, including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP.
    - Integrated – meaning the BI platform is based on an architecturally integrated technology foundation built on an open, standards based service oriented architecture. The platform features a common enterprise information model, common security model and a common configuration, deployment and systems management framework.
    To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and do course corrections all from a single interface and a single system. The platform's integrated metadata model and task based design ensure that the entire workflow from defining strategy to execution to monitoring is completely integrated, delivering end to end visibility, transparency and agility. Click here to learn more about Oracle BI 11g.

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files
    Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment. Below is some of the information and details regarding the form I created for the session. There are many blogs and Microsoft articles on how to create a basic form, so I won’t repeat that information here. This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach. The above link contains the zipped package files of the two InfoPath forms (no-code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck. If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.). Also, you must have the SharePoint Enterprise version, with InfoPath Services configured, in order to use the web browser enabled forms.
    So what are the requirements for this template? The Business Card Request form template design plan:
    - Gather user information and requirements for the card.
    - Pull in as much user information as possible, using data from the user profile web services as a data source.
    - Show and hide fields as necessary for requirements.
    - Create multiple views – one for those submitting the form and another view for the executive assistants placing the orders.
    - Browser based form integrated into a SharePoint team site.
    - Submitted directly to a form library.
    The base form was created using the blank template. The table and rows were added using the Insert tab and selecting Custom Table. The use of tables is a great way to make sure everything lines up. You do have to split the tables from time to time. If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why. Here is what the base form looks like in InfoPath.
    Show and hide fields as necessary for requirements
    You will notice I also used Sections within the form. These show or hide depending on options selected or whether or not fields are blank. This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply). Although not used in this one, you can also use various views with a tab interface. I’ll show that in another post.
    Gather user information, pulling in as much as possible from the user profile web services
    Utilizing rules, you can load data when the form initiates (Data tab, Form Load). Anything you can automate is always appreciated by the user, as that is data they don’t have to enter – for example, loading their user id or other user information on load. Always keep in mind, though, how much data you load and the method for loading that data (through rules, code, etc.). They have an impact on form performance. The form will take longer to load if you bring in a ton of data from external sources. Laura Rogers has a great blog post on using the User Information List to load user information. If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit. What I have found is that using the User Profile service via code behind or the web service “GetUserProfileByName” (as above) can take more time to load the user data. Just food for thought.
    You must add the data connection in order for the above rules to work. You can connect to the data connection through the Data tab, Data Connections, or select the Manage Data Connections link which appears under the main data source. The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.
    Create multiple views – one for those submitting the form and another for the executive assistants placing the orders
    You can also create multiple views for the users to enhance their experience. Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more. They just need to view what they entered. From the Page Design tab, select New View and give the view a name. To review the existing views, click the down arrow under View. The ReviewView shows just what the user needs and nothing more. Once you have everything configured, the form should be tested within a test SharePoint environment before final deployment to production. This validates that you don’t have any rules or code that could impact the server negatively.
    Submitted directly to a form library
    You will need to know the form library that you will be submitting to when publishing the template. Configure the Submit data connection to connect to this library. There is already one configured in the sample, but it will need to be updated to your environment prior to publishing. The design template is different from the published template. While both have the .XSN extension, the published template contains all the “package” information for the form. The published form is what is loaded into Central Admin, not the design template.
    Browser based form integrated into a SharePoint team site
    In Central Admin, under General Settings, select Manage Form Templates. Upload the published form template and activate it to a site collection. Now it is available as a content type to select in the form library. Some documentation on publishing form templates: Technet – Manage administrator approved form templates. And that’s all our base requirements. Hope this helps to give a good start.

    Read the article

  • The Sound of Two Toilets Flushing: Constructive Criticism for Virgin Atlantic Complaints Department

    - by Geertjan
    I recently had the experience of flying from London to Johannesburg and back with Virgin Atlantic. The good news was that it was the cheapest flight available and that the take off and landing were absolutely perfect. Hence I really have no reason to complain. Instead, I'd like to offer some constructive criticism which hopefully Richard Branson will find sometime while googling his name. Or maybe someone from the Virgin Atlantic Complaints Department will find it, whatever, just want to put this information out there. Arrangement of restroom facilities. Maybe next time you design an airplane, consider not putting your toilets at a right angle right next to your rows of seats. Being able to reach, without even needing to stretch your arm, from your seat to close, yet again, a toilet door that someone, someone obviously sitting very far from the toilets, carelessly forgot to close is not an indicator of quality interior design. Have you noticed how all other airplanes have their toilets in a cubicle separated from the rows of seats? On those airplanes, people sitting in the seats near the toilets are not constantly being woken up throughout the night whenever someone enters/exits the toilet, whenever the light in the toilet is suddenly switched on, and whenever one of the toilets flushes. Bonus points for Virgin Atlantic passengers in the seats adjoining the toilets is when multiple toilets are flushed simultaneously and multiple passengers enter/exit them at the same time, a bit like an unasked for low budget musical of suddenly illuminated grumpy people in crumpled clothes. What joy that brings at 3 AM is hard to describe. Seats with extra leg room. You know how other airplanes have the seats with the extra leg room? You know what those seats tend to have? Extra leg room. It's really interesting how Virgin Atlantic's seats with extra leg room actually have no extra leg room at all. It should have been a give away, the fact that these special seats are found in the same rows as the standard seats, rather than on the cusp of real glory which is where most airlines put their extra leg room seats, with the only actual difference being that they have a slightly different color. Had you called them "seats with a different color" (i.e., almost not quite green, rather than something vaguely hinting at blue), at least I'd have known what I was getting. Picture the joy at 3 AM, rudely awakened from nightmarish slumber, partly grateful to have been released from a grayish dream of faceless zombies resembling one or two of those in a recent toilet line, by multiple adjoining toilets flushing simultaneously, while you're sitting in a seat with extra leg room that has exactly as much leg room as the seats in neighboring rows. You then have a choice of things to be sincerely annoyed about. Food from the '80's. In the '80's, airplane food came in soggy containers and even breakfast, the most important meal of the day, was a sad heap of vaguely gray colors. The culinary highlight tended to be a squashed tomato, which must have been mashed to a pulp with a brick prior to being regurgitated by a small furry animal, and there was also always a piece of immensely horrid pumpkin, as well as a slice of spongy something you'd never seen before. Sausages and mash at 6 AM on an airplane was always a heavy lump of horribleness. Thankfully, all airlines throughout the world changed from this puke inducing strategy around 1987 sometime. Not Virgin Atlantic, of course. 
The fatty sausages and mash are still there, bringing you flashbacks to Duran Duran, which is what you were listening to (on your walkman) the last time you saw it in an airplane. Even the golden oldie "squashed tomato attached by slime to three wet peas" is on the menu. How wonderful to have all this in a cramped seat with a long row of early morning bleariness lined up for the toilets, right at your side, bumping into your elbow, groggily, one by one, one after another, more and more, fumble-open-door-silence-flush-fumble-open-door, and on and on, while you tentatively push your fork through a soggy pile of colorless mush, fighting the urge to throw up on the stinky socks of whatever nightmarish zombie is bumping into your elbow at the time. But, then again, the plane landed without a hitch, in fact, extremely smoothly, so I'm certainly not blaming the pilots.

    Read the article

  • Silverlight Relay Commands

    - by George Evjen
    I am fairly new at Silverlight development and I usually have an issue that needs research every day, which I enjoy, since I like the idea of going into a day knowing that I am going to learn something new. The issue that I am currently working on centers around relay commands. I have a pretty good handle on relay commands and how we use them within our applications.
    <Button Command="{Binding ButtonCommand}" CommandParameter="NewRecruit" Content="New Recruit" />
    Here in our XAML we have a button. The button has a Command and a CommandParameter. The command binds to the ButtonCommand that we have in our ViewModel.
    RelayCommand _buttonCommand;
    /// <summary>
    /// Gets the button command.
    /// </summary>
    /// <value>The button command.</value>
    public RelayCommand ButtonCommand
    {
        get
        {
            if (_buttonCommand == null)
            {
                _buttonCommand = new RelayCommand(
                    x => x != null && x.ToString().Length > 0 && CheckCommandAvailable(x.ToString()),
                    x => ExecuteCommand(x.ToString()));
            }
            return _buttonCommand;
        }
    }
    In our relay command we then do some checks with a lambda expression. We check if the command parameter is null, whether its length is greater than 0, and we have a CheckCommandAvailable method that will tell us if the button is even enabled. After we check on these three items we then pass the command parameter to an action method. This is all pretty straightforward. The issue that we solved a few days ago centered around having a control that needed to use a relay command, where this control was a nested control using a different DataContext. The example below illustrates how we handled this scenario. In our XAML user control we had to set a name for the control.
    <Controls3:RadTileViewItem x:Class="RecruitStatusTileView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:Controls1="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
        xmlns:Controls2="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Input"
        xmlns:Controls3="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation"
        mc:Ignorable="d" d:DesignHeight="400" d:DesignWidth="800" Header="{Binding Title, Mode=TwoWay}" MinimizedHeight="100"
        x:Name="StatusView">
    Here we are using a Telerik RadTileViewItem. We set the name of this control to “StatusView”. In our button control we set our command parameters and commands differently than in the example above.
    <HyperlinkButton Content="{Binding BigBoardButtonText, Mode=TwoWay}" CommandParameter="{Binding 'Position.PositionName'}" Command="{Binding ElementName=StatusView, Path=DataContext.BigBoardCommand, Mode=TwoWay}" />
    This hyperlink button lives in a ListBox control, and this listbox has an ItemsSource of PositionSelectors. The CommandParameter is binding to the Position.PositionName property of that PositionSelectors object. This again is pretty straightforward. What gets a bit tricky is the Command property in the hyperlink. It is binding to the element name we created in the user control (StatusView). Because this hyperlink is in a listbox and is in the item template, it doesn’t have a direct handle on the DataContext that the RadTileViewItem has, so we have to make sure it does. We do that by binding to the element name StatusView and then setting the path to DataContext.BigBoardCommand. BigBoardCommand is the name of the relay command in the view model.
    private RelayCommand _bigBoardCommand = null;
    /// <summary>
    /// Gets the big board command.
    /// </summary>
    /// <value>The big board command.</value>
    public RelayCommand BigBoardCommand
    {
        get
        {
            if (_bigBoardCommand == null)
            {
                _bigBoardCommand = new RelayCommand(x => true, x => AddToBigBoard(x.ToString()));
            }
            return _bigBoardCommand;
        }
    }
    From there we check for true again and then call the action, passing in the parameter that we had as the command parameter. What we are working on now is a bit trickier than this second example. In the above example we are only creating this TileViewItem with the name “StatusView” once. In another part of our application we are generating multiple TileViewItems, so we cannot set the name in the control, as we can’t have multiple controls with the same name. When we run the application we get an error that reads that the value is out of the expected range. My searching has led me to think we cannot have multiple controls with the same name. This is today’s problem and I’ll post the solution once it is found.
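    For readers new to the pattern, here is a minimal sketch of a RelayCommand class consistent with the (canExecute, execute) argument order used in the snippets above. This is an assumption about the shape of our helper class, not MVVM Light's RelayCommand, whose constructor takes the arguments in the opposite order.
    using System;
    using System.Windows.Input;

    // Minimal ICommand wrapper: the first delegate decides whether the command
    // can run for a given parameter, the second performs the work.
    public class RelayCommand : ICommand
    {
        private readonly Predicate<object> _canExecute;
        private readonly Action<object> _execute;

        public RelayCommand(Predicate<object> canExecute, Action<object> execute)
        {
            if (canExecute == null) throw new ArgumentNullException("canExecute");
            if (execute == null) throw new ArgumentNullException("execute");
            _canExecute = canExecute;
            _execute = execute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

        // Ask bound controls to re-query CanExecute, e.g. after state changes.
        public void RaiseCanExecuteChanged()
        {
            var handler = CanExecuteChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }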

    Read the article

  • await, WhenAll, WaitAll, oh my!!

    - by cibrax
    If you are dealing with asynchronous work in .NET, you might know that the Task class has become the main driver for wrapping asynchronous calls. Although this class was officially introduced in .NET 4.0, the programming model for consuming tasks was much more simplified in C# 5.0 in .NET 4.5 with the addition of the new async/await keywords. In a nutshell, you can use these keywords to make asynchronous calls as if they were sequential, avoiding in that way any fork or callback in the code. The compiler takes care of the rest. Yesterday I was writing some code for making multiple asynchronous calls to backend services in parallel. The code looked as follows:
    var allResults = new List<Result>();
    foreach (var provider in providers)
    {
        var results = await provider.GetResults();
        allResults.AddRange(results);
    }
    return allResults;
    You see, I was using the await keyword intending to make multiple calls in parallel. Something I did not consider was the overhead this code implied after being compiled. I started an interesting discussion with some smart folks on Twitter. One of them, Tugberk Ugurlu, had the brilliant idea of actually writing some code to make a performance comparison with another approach using Task.WhenAll. There are two additional methods you can use to wait for the results of multiple calls in parallel, WhenAll and WaitAll. WhenAll creates a new task and waits for results in that new task, so it does not block the calling thread. WaitAll, on the other hand, blocks the calling thread. This is the code Tugberk initially wrote, which I modified afterwards to also show the results of WaitAll.
    class Program
    {
        private static Func<Stopwatch, Task>[] funcs = new Func<Stopwatch, Task>[]
        {
            async (watch) =>
            {
                watch.Start();
                await Task.Delay(1000);
                Console.WriteLine("1000 one has been completed.");
            },
            async (watch) =>
            {
                await Task.Delay(1500);
                Console.WriteLine("1500 one has been completed.");
            },
            async (watch) =>
            {
                await Task.Delay(2000);
                Console.WriteLine("2000 one has been completed.");
                watch.Stop();
                Console.WriteLine(watch.ElapsedMilliseconds + "ms has been elapsed.");
            }
        };

        static void Main(string[] args)
        {
            Console.WriteLine("Await in loop work starts...");
            DoWorkAsync().ContinueWith(task =>
            {
                Console.WriteLine("Parallel work starts...");
                DoWorkInParallelAsync().ContinueWith(t =>
                {
                    Console.WriteLine("WaitAll work starts...");
                    WaitForAll();
                });
            });
            Console.ReadLine();
        }

        static async Task DoWorkAsync()
        {
            Stopwatch watch = new Stopwatch();
            foreach (var func in funcs)
            {
                await func(watch);
            }
        }

        static async Task DoWorkInParallelAsync()
        {
            Stopwatch watch = new Stopwatch();
            await Task.WhenAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
        }

        static void WaitForAll()
        {
            Stopwatch watch = new Stopwatch();
            Task.WaitAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
        }
    }
    After running this code, the results were very conclusive:
    Await in loop work starts...
    1000 one has been completed.
    1500 one has been completed.
    2000 one has been completed.
    4532ms has been elapsed.
    Parallel work starts...
    1000 one has been completed.
    1500 one has been completed.
    2000 one has been completed.
    2007ms has been elapsed.
    WaitAll work starts...
    1000 one has been completed.
    1500 one has been completed.
    2000 one has been completed.
    2009ms has been elapsed.
    The await keyword in a loop does not really make the calls in parallel.
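    Based on those numbers, here is a sketch of how the original loop could be rewritten to actually run the backend calls in parallel. It assumes the providers collection and GetResults method from the opening snippet, with GetResults returning a Task of a collection of Result, and it requires a using directive for System.Linq.
    // Start every call first, then await them all; total time is roughly the slowest call.
    var tasks = providers.Select(provider => provider.GetResults());
    var resultsPerProvider = await Task.WhenAll(tasks);

    // Flatten the per-provider collections into a single list.
    var allResults = resultsPerProvider.SelectMany(results => results).ToList();
    return allResults;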

    Read the article

  • Computer Networks UNISA - Chap 8 – Wireless Networking

    - by MarkPearl
    After reading this section you should be able to:
    - Explain how nodes exchange wireless signals
    - Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
    - Understand WLAN architecture
    - Specify the characteristics of popular WLAN transmission methods including 802.11 a/b/g/n
    - Install and configure wireless access points and their clients
    - Describe wireless MAN and WAN technologies, including 802.16 and satellite communications
    The Wireless Spectrum
    All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 KHz and 300 GHz.
    Characteristics of Wireless Transmission
    Antennas: Each type of wireless service requires an antenna specifically designed for that service. The service’s specification determines the antenna’s power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.
    Signal Propagation: LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object or be absorbed by the object, or it may be subject to reflection, diffraction or scattering.
    - Reflection – the wave encounters an object and bounces off it.
    - Diffraction – the signal splits into secondary waves when it encounters an obstruction.
    - Scattering – the diffusion, or reflection in multiple different directions, of a signal.
    Signal Degradation: Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.
    Frequency Ranges: Older wireless devices used the 2.4 GHz band to send and receive signals. This band has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.
    Narrowband, Broadband, and Spread Spectrum Signals:
    - Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies.
    - Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies.
    The use of multiple frequencies to transmit a signal is known as spread-spectrum technology. In other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum). Another type is known as DSSS (direct sequence spread spectrum).
    Fixed vs. Mobile: Each type of wireless communication falls into one of two categories:
    - Fixed – the locations of the transmitter and receiver do not move (this saves energy because weaker signal strength is possible with directional antennas).
    - Mobile – the location can change.
    WLAN (Wireless LAN) Architecture
    There are two main types of arrangements:
    - Ad hoc – data is sent directly between devices – good for small local networks.
    - Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it.
    802.11 WLANs
    The most popular wireless standards used on contemporary LANs are those developed by IEEE’s 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best known standards, also referred to as Wi-Fi, are 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics, e.g. all four use half-duplex signalling and follow the same access method.
    Access Method: 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again.
    Association: Two types of scanning:
    - Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response.
    - Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point; the beacon frame contains the information necessary to connect to the point.
    Re-association occurs when a mobile user moves out of one access point’s range and into the range of another.
    Frames: Read pages 378–381 about frames and specific 802.11 protocols.
    Bluetooth Networks
    Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc.
    Wireless WANs and Internet Access
    Refer to pages 396–402 of the textbook for details.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS
    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application is protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to determine what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location (a minimal sketch of such a watchdog appears at the end of this post). You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS
    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. You also need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS
    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or they may have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
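    To make the "detect the failure and re-route" step from the IaaS section concrete, here is a minimal watchdog sketch. It is only an illustration, not part of any vendor's tooling: the health URL is a hypothetical endpoint on your primary deployment, and the actual re-route action (DNS flip, load-balancer update) is stubbed out because it is always vendor-specific.

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal failover watchdog sketch. The URL and the re-route action are
    // assumptions for illustration; substitute whatever your vendor provides.
    public class FailoverWatchdog {

        private static final String PRIMARY_HEALTH_URL = "https://primary.example.com/health";

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                if (!isHealthy(PRIMARY_HEALTH_URL)) {
                    // Re-point clients at the secondary geo, e.g. by flipping a
                    // DNS entry or updating a load-balancer - vendor-specific.
                    System.out.println("Primary unreachable - switching traffic to secondary");
                    break;
                }
                Thread.sleep(30_000); // probe every 30 seconds
            }
        }

        // Treat any HTTP 200 within the timeout as healthy.
        private static boolean isHealthy(String url) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(5_000);
                conn.setReadTimeout(5_000);
                return conn.getResponseCode() == 200;
            } catch (Exception e) {
                return false;
            }
        }
    }

    In practice you would require several consecutive failed probes before failing over, so a transient network blip doesn't flap traffic between the geos.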

    Read the article

  • Adaptive Connections For ADFBC

    - by Duncan Mills
    Some time ago I wrote an article on Adaptive Bindings showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. That article has proved pretty popular, so as a follow-up I wanted to cover another "adaptive" feature of your ADF applications: the ability to make multiple different connections from an Application Module at runtime. Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality is that this single connection is effectively fixed within the application, right? Well no - this, it turns out, is a common misconception. To be clear, yes, a single instance of an ADF Application Module is associated with a single connection, but there is nothing to stop you from creating multiple instances of the same Application Module within the application, all pointing at different connections. In fact this has been possible for a long time using a custom extension point: code that extends oracle.jbo.http.HttpSessionCookieFactory. That approach, however, involves writing code, and no-one likes to write any more code than they need to. So, is there an easier way? Yes indeed - a little-publicized feature that's available in all versions of 11g: the ELEnvInfoProvider.
    What Does it Do?
    The ELEnvInfoProvider is a pre-existing class (the full path is oracle.jbo.client.ELEnvInfoProvider) which you can plug into your Application Module configuration using the jbo.envinfoprovider property. Visually you can set this in the editor, or you can set it directly in the bc4j.xcfg (see below for an example). Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can plug in an EL expression for the connection to use. So what's the benefit of that? Well, it allows you to defer the selection of a connection until the point in time that you instantiate the AM. To define the expression itself you'll need to do a couple of things: First of all you'll need a managed bean of some sort - e.g. a sessionScoped bean defined in your ViewController project. This will need a getter method that returns the name of the connection. Now this connection itself needs to be defined in your Application Server, and can be managed through Enterprise Manager, WLST or through MBeans. (You may need to read the documentation [http://docs.oracle.com/cd/E28280_01/web.1111/b31974/deployment_topics.htm#CHDJGBDD] here on how to configure connections at runtime if you're not familiar with this.) The EL expression (e.g. ${connectionManager.connection}) is then defined in the configuration by editing the bc4j.xcfg file (there is a hyperlink directly to this file on the configuration editing screen in the Application Module editor). You simply replace the hardcoded JDBCName value with the expression.
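    By way of illustration, the bean behind that ${connectionManager.connection} expression can be as simple as the sketch below. This is a minimal sketch, not code from the article: the class body, the data-source names and the setter-driven choice are all assumptions for illustration - only the bean name and the getter are dictated by the expression above.

    import java.io.Serializable;

    // Sketch only: register this class as a session-scoped managed bean named
    // "connectionManager" (for example in adfc-config.xml) so that the EL
    // expression ${connectionManager.connection} resolves to getConnection().
    public class ConnectionManager implements Serializable {

        // JNDI name of a data-source that must already be defined on the
        // application server; "jdbc/DefaultDS" is a placeholder.
        private String connection = "jdbc/DefaultDS";

        // Evaluated when the Application Module instance is created.
        public String getConnection() {
            return connection;
        }

        // Your application decides when to change this - e.g. a login listener
        // or tenant picker calling setConnection("jdbc/TenantB_DS").
        public void setConnection(String connection) {
            this.connection = connection;
        }
    }

    Remember that the choice only takes effect when a new AM instance is created, so set the value before the first reference to the AM.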
    So your cfg file would end up looking something like this (notice the reference to the ELEnvInfoProvider that I talked about earlier):
    <BC4JConfig version="11.1" xmlns="http://xmlns.oracle.com/bc4j/configuration">
      <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule">
        <AppModuleConfig DeployPlatform="LOCAL"
                         JDBCName="${connectionManager.connection}"
                         jbo.project="oracle.demo.model.Model"
                         name="TargetAppModuleLocal"
                         ApplicationName="oracle.demo.model.TargetAppModule">
          <AM-Pooling jbo.doconnectionpooling="true"/>
          <Database jbo.locking.mode="optimistic"/>
          <Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/>
          <Custom jbo.envinfoprovider="oracle.jbo.client.ELEnvInfoProvider"/>
        </AppModuleConfig>
      </AppModuleConfigBag>
    </BC4JConfig>
    Still Don't Quite Get It?
    So far you might be thinking: well, that's fine, but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager? Well, a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account. However, think about it for a second: under what circumstances does a new AM get instantiated? At the first reference of the AM within the application, yes, but also whenever a Task Flow is entered - if the data control scope for the Task Flow is ISOLATED. So the reality is that on a single screen you can embed multiple Task Flows, all of which are pointing at different database connections concurrently. Hopefully you'll find this feature useful, let me know...

    Read the article

  • Spring & hibernate configuration (using maven): java.lang.ClassNotFoundException: org.hibernate.cfg.

    - by Marcos Carceles
    Hi, I am trying to include Spring and Hibernate in an application running on a WebLogic 10.3 server. When I run the application on the server and access a TestServlet to check my configuration, I get the following exception:
    org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mySessionFactory' defined in class path resource [spring-config/HorizonModelPeopleConnectionsSpringContext.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.orm.hibernate3.LocalSessionFactoryBean]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: org.hibernate.cfg.Configuration
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:448)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:251)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:156)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:248)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:160)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:284)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:91)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:75)
    at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:65)
    at view.com.horizon.test.SpringHibernateServlet.doGet(SpringHibernateServlet.java:27)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.security.wls.filter.SSOSessionSynchronizationFilter.doFilter(SSOSessionSynchronizationFilter.java:279)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at oracle.dms.wls.DMSServletFilter.doFilter(DMSServletFilter.java:326)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.RequestEventsFilter.doFilter(RequestEventsFilter.java:27)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3592)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2202)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2108)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1432)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.orm.hibernate3.LocalSessionFactoryBean]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: org.hibernate.cfg.Configuration
    at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:100)
    at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:61)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:756)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:721)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:384)
    ... 31 more
    Caused by: java.lang.NoClassDefFoundError: org.hibernate.cfg.Configuration
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.class$(LocalSessionFactoryBean.java:158)
    at org.springframework.orm.hibernate3.LocalSessionFactoryBean.<init>(LocalSessionFactoryBean.java:158)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:85)
    ... 35 more
    I have checked my application and the hibernate jar file is included, and it contains the class it says is missing: org.hibernate.cfg.Configuration. The application is built with Maven. These are the dependencies of the JAR file using Spring and Hibernate:
    <!-- Frameworks -->
    <!-- Hibernate framework -->
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate</artifactId>
      <version>3.2.7.ga</version>
    </dependency>
    <!-- Hibernate uses slf4j for logging, for our purposes here use the simple backend -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.5.2</version>
    </dependency>
    <!-- Hibernate gives you a choice of bytecode providers between cglib and javassist -->
    <dependency>
      <groupId>javassist</groupId>
      <artifactId>javassist</artifactId>
      <version>3.4.GA</version>
    </dependency>
    <!-- Spring framework -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-orm</artifactId>
      <version>2.5.6</version>
    </dependency>
    At first I thought it could be an issue with the versions of the Spring and Hibernate libraries, so I have tried different ones, but I couldn't find anywhere that says which library versions are compatible with each other; I just gathered that Spring 2.5.x needs Hibernate >= 3.1. And this is my Spring config file:
    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">
      <bean id="myDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName">
          <value>jdbc/WebCenterDS</value>
        </property>
        <!--property name="resourceRef">
          <value>true</value>
        </property>
        <property name="jndiEnvironment">
          <props>
            <prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
            <prop key="java.naming.provider.url">t3://localhost:7001</prop>
          </props>
        </property-->
      </bean>
      <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
        <property name="dataSource" ref="myDataSource"/>
        <property name="configLocation">
          <value>classpath:hibernate-config/hibernate.cfg.xml</value>
        </property>
        <property name="mappingResources">
          <list>
            <value>classpath:com/horizon/model/peopleconnections/profile/internal/bean/CustomAttribute.hbm.xml</value>
          </list>
        </property>
        <property name="hibernateProperties">
          <value>hibernate.dialect=org.hibernate.dialect.HSQLDialect</value>
        </property>
      </bean>
      <bean id="profileExtensionDAO" class="com.horizon.model.peopleconnections.profile.internal.dao.ProfileExtensionDAOImpl">
        <property name="sessionFactory" ref="mySessionFactory"/>
      </bean>
    </beans>
    The WAR structure I get is the following:
    J2EETestApplication
    │   springhibernate.jsp
    │
    +---WEB-INF
        │   faces-config.xml
        │   web.xml
        │   weblogic.xml
        │
        +---classes
        │   +---view
        │       +---com
        │           +---horizon
        │               +---test
        │                       SpringHibernateServlet.class
        │
        +---lib
                activation-1.1.jar
                antlr-2.7.6.jar
                aopalliance-1.0.jar
                asm-1.5.3.jar
                asm-attrs-1.5.3.jar
                cglib-2.1_3.jar
                commons-codec-1.3.jar
                commons-collections-2.1.1.jar
                commons-logging-1.1.1.jar
                dom4j-1.6.1.jar
                ehcache-1.2.3.jar
                hibernate-3.2.7.ga.jar
                horizon-model-commons-1.0-SNAPSHOT.jar
                horizon-model-peopleconnections-1.0-SNAPSHOT.jar
                horizon-shared-commons-1.0-SNAPSHOT.jar
                horizon-shared-logging-1.0-SNAPSHOT.jar
                horizon-shared-util-1.0-SNAPSHOT.jar
                horizon-shared-webcenter-1.0-SNAPSHOT.jar
                horizon-shared-webcenter.jar
                httpclient-4.0.1.jar
                httpcore-4.0.1.jar
                javassist-3.4.GA.jar
                jta-1.0.1B.jar
                log4j-1.2.14.jar
                mail-1.4.1.jar
                peopleconnections-profile-model-11.1.1.2.0.jar
                saxon-9.1.0.8.jar
                serviceframework-11.1.1.2.0.jar
                slf4j-api-1.5.2.jar
                slf4j-log4j12-1.5.2.jar
                spring-beans-2.5.6.jar
                spring-context-2.5.6.jar
                spring-core-2.5.6.jar
                spring-orm-2.5.6.jar
                spring-tx-2.5.6.jar
    Is there any dependency or configuration I am missing? If I use Hibernate without Spring I don't get the NoClassDefFoundError.

    Read the article

  • Prevent Custom Entity being deleted from Entity Framework during Update Model Wizard

    - by rbg
    In Entity Framework v1, if you create a Custom Entity to map it to a Stored Procedure during Function Import and then select "Update Model from Database", the Update Model Wizard removes the Custom Entity (as added manually in the SSDL XML prior to the Function Import) from the SSDL. Does anyone know if this limitation has been dealt with in Entity Framework v4? I mean, is there a way to prevent the Wizard from removing the Custom Entity from the Storage Schema (SSDL) during Model Update from Database?

    Read the article

  • RIA Services query: Nested "from" and include children

    - by Stéphane Bebrone
    Hi guys, I have the following schema. What I'd like to do is to retrieve all customers (and their child properties) from a selected StaffAssignment by StaffId. So I've written the following query:
    from c in ObjectContext.Customers.Include("Assignments.Activities")
    from a in c.Assignments
    from sa in a.StaffAssignments
    where sa.StaffId == staffId
    select c
    But the child properties aren't loaded (I added the [Include] in the service metadata file as well). What did I do wrong? Gtz, Stéphane.

    Read the article

  • Fluent Nhibernate - Mapping two entities to same table

    - by Andy
    Hi, I'm trying to map two domain entities to the same table. We're using a smart-entity approach for our domain model, so we have the concept of an editable Address and a read-only Address. I have both mapped using ClassMaps, and everything seems to go fine until we try to export the schema using the SchemaExport class from NHibernate. It errors out saying the table already exists. I assume it's something simple that I'm just not seeing. Any ideas? Thanks

    Read the article

  • Drupal: Exposing a module's data to Views2 using its API

    - by Sepehr Lajevardi
    I'm forking the filefield_stats module to give it the ability to expose its data to the Views module via the Views API. The filefield_stats module db table schema is as follows:
    <?php
    function filefield_stats_schema() {
      $schema['filefield_stats'] = array(
        'fields' => array(
          'fid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {files}.fid'),
          'vid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {node}.vid'),
          'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The {users}.uid of the downloader'),
          'timestamp' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The timestamp of the download'),
          'hostname' => array('type' => 'varchar', 'length' => 128, 'not null' => TRUE, 'default' => '', 'description' => 'The hostname downloading the file (usually IP)'),
          'referer' => array('type' => 'text', 'not null' => FALSE, 'description' => 'Referer for the download'),
        ),
        'indexes' => array('fid_vid' => array('fid', 'vid')),
      );
      return $schema;
    }
    ?>
    Well, so I implemented hook_views_api() in filefield_stats.module and added a filefield_stats.views.inc file in the module's root directory; here it is:
    <?php
    // $Id$
    /**
     * @file
     * Provide the ability of exposing data to Views2, for filefield_stats module.
     */
    function filefield_stats_views_data() {
      $data = array();
      $data['filefield_stats']['table']['group'] = t('FilefieldStats');
      // Referencing the {node_revisions} table.
      $data['filefield_stats']['table']['join'] = array(
        'node_revisions' => array(
          'left_field' => 'vid',
          'field' => 'vid',
        ),
        /*'files' => array(
          'left_field' => 'fid',
          'field' => 'fid',
        ),
        'users' => array(
          'left_field' => 'uid',
          'field' => 'uid',
        ),*/
      );
      // Introducing filefield_stats table fields to Views2.
      // vid: The node's revision ID which wrapped the downloaded file.
      $data['filefield_stats']['vid'] = array(
        'title' => t('Node revision ID'),
        'help' => t('The node\'s revision ID which wrapped the downloaded file'),
        'relationship' => array(
          'base' => 'node_revisions',
          'field' => 'vid',
          'handler' => 'views_handler_relationship',
          'label' => t('Node Revision Reference.'),
        ),
      );
      // uid: The ID of the user who downloaded the file.
      $data['filefield_stats']['uid'] = array(
        'title' => t('User ID'),
        'help' => t('The ID of the user who downloaded the file.'),
        'relationship' => array(
          'base' => 'users',
          'field' => 'uid',
          'handler' => 'views_handler_relationship',
          'label' => t('User Reference.'),
        ),
      );
      // fid: The ID of the downloaded file.
      $data['filefield_stats']['fid'] = array(
        'title' => t('File ID'),
        'help' => t('The ID of the downloaded file.'),
        'relationship' => array(
          'base' => 'files',
          'field' => 'fid',
          'handler' => 'views_handler_relationship',
          'label' => t('File Reference.'),
        ),
      );
      // hostname: The hostname which the file has been downloaded from.
      $data['filefield_stats']['hostname'] = array(
        'title' => t('The Hostname'),
        'help' => t('The hostname which the file has been downloaded from.'),
        'field' => array(
          'handler' => 'views_handler_field',
          'click sortable' => TRUE,
        ),
        'sort' => array(
          'handler' => 'views_handler_sort',
        ),
        'filter' => array(
          'handler' => 'views_handler_filter_string',
        ),
        'argument' => array(
          'handler' => 'views_handler_argument_string',
        ),
      );
      // referer: The referer address which the file download link has been triggered from.
      $data['filefield_stats']['referer'] = array(
        'title' => t('The Referer'),
        'help' => t('The referer which the file download link has been triggered from.'),
        'field' => array(
          'handler' => 'views_handler_field',
          'click sortable' => TRUE,
        ),
        'sort' => array(
          'handler' => 'views_handler_sort',
        ),
        'filter' => array(
          'handler' => 'views_handler_filter_string',
        ),
        'argument' => array(
          'handler' => 'views_handler_argument_string',
        ),
      );
      // timestamp: The time of the download.
      $data['filefield_stats']['timestamp'] = array(
        'title' => t('Download Time'),
        'help' => t('The time of the download.'),
        'field' => array(
          'handler' => 'views_handler_field_date',
          'click sortable' => TRUE,
        ),
        'sort' => array(
          'handler' => 'views_handler_sort_date',
        ),
        'filter' => array(
          'handler' => 'views_handler_filter_date',
        ),
      );
      return $data;
    } // filefield_stats_views_data()
    ?>
    According to the Views2 documentation this should work as a minimum, I think. But it doesn't! Also, there is no error of any kind; when I go through the Views UI there's nothing about the filefield_stats data. Any idea?

    Read the article

  • What's better way to build NSPredicate with to-many deep relationships?

    - by Victor
    Hello, I have three entities: EntityA, EntityB and EntityC, connected with to-many relationships. See the schema for details: For getting all instances of EntityA which depend on EntityB.name I use a predicate like this: NSPredicate *predicate = [NSPredicate predicateWithFormat:@"ANY EntityB.name like 'SomeName'"]; What should the predicate be for getting all instances of EntityA which depend on EntityC.name? I tried a query like @"ANY EntityB.entitiesC.name like 'SomeName'" but got the exception "multiple to-many keys not allowed here". Best regards, Victor

    Read the article

  • Default Database Collations got messed up

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (Lampp: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up my testing environment to debug my scripts locally, and when I did so a problem arose: I used Firefox's add-on SQL Inject Me to test for weaknesses, and doing so caused MySQL to change the default local collations:
    character sets dir: /opt/lampp/share/mysql/charsets/
    collation connection: latin1_general_ci (Global value) latin1_swedish_ci
    collation database: latin1_swedish_ci
    collation server: latin1_swedish_ci
    I have searched for quite some time for a solution to this problem, looking for the db.opt file which stores this information, without success. Upon not finding a solution I removed lampp with the "sudo rm -fR /opt" command and reinstalled, and the problem still persists. I have tried to change the collations manually, and the database still displays latin1_swedish_ci as the default. Why is this a problem? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since this combination of frameworks detects the locales from the database defaults, I keep getting errors saying there is no language file for Swedish. Of course I could add a Swedish language file to work around this, but I do not feel the need to make that workaround permanent, because as time passes: A) when importing database files, backups etc., it will default to importing such databases with the Swedish collation; B) I might completely forget about this error and be back to square one. In my searches for a fix I have found code along these lines, which switches the collation of every table (reconstructing here the loop that was cut off when I pasted it):
    $result = mysql_query('SHOW TABLES');
    while ($row = mysql_fetch_array($result)) {
        foreach ($row as $key => $value) {
            mysql_query("ALTER TABLE $value COLLATE latin1_general_ci");
        }
    }
    echo "The collation of your database has been successfully changed!";
    Which is handy to switch collations one schema at a time; however, this is not a fix when a framework doesn't care that the said database is in one language - it tests for the default of the entire server. Anyone with knowledge of a purge or fix for this, I would greatly appreciate the help. One final note: when I was testing, I only thought to back up the application's database and not the entire schema of the install. No matter whether I uninstall or reinstall, the database still seems to carry these problems.

    Read the article

  • Symfony "cannot fetch TableMap" error when using propel:build-all

    - by James Skidmore
    Cannot fetch TableMap for undefined table: order_product. Make sure you have the static MapBuilder registration code after your peer stub class definition. Even if I erase the entire schema, clear the cache, and delete everything but "vendor" in the lib folder, I still get the error. I also get the error when doing propel:build-filters or propel:build-forms. Thanks for your help in advance!

    Read the article

  • Fluent NHibernate - How to map a non nullable foreign key that exists in two joined tables

    - by vakman
    I'm mapping a set of membership classes for my application using Fluent NHibernate. I'm mapping the classes to the ASP.NET membership database structure. The database schema relevant to the problem looks like this:
    ASPNET_USERS
    UserId PK
    ApplicationId FK NOT NULL
    other user columns ...
    ASPNET_MEMBERSHIP
    UserId PK, FK
    ApplicationId FK NOT NULL
    other membership columns ...
    There is a one-to-one relationship between these two tables. I'm attempting to join the two tables together and map data from both tables to a single 'User' entity, which looks like this:
    public class User
    {
        public virtual Guid Id { get; set; }
        public virtual Guid ApplicationId { get; set; }
        // other properties to be mapped from aspnetuser/membership tables
        ...
    My mapping file is as follows:
    public class UserMap : ClassMap<User>
    {
        public UserMap()
        {
            Table("aspnet_Users");
            Id(user => user.Id).Column("UserId").GeneratedBy.GuidComb();
            Map(user => user.ApplicationId);
            // other user mappings
            Join("aspnet_Membership", join =>
            {
                join.KeyColumn("UserId");
                join.Map(user => user.ApplicationId);
                // Map other things from membership to 'User' class
            });
        }
    }
    If I try to run with the code above I get a FluentConfiguration exception: Tried to add property 'ApplicationId' when already added. If I remove the line "Map(user => user.ApplicationId);" or change it to "Map(user => user.ApplicationId).Not.Update().Not.Insert();" then the application runs, but I get the following exception when trying to insert a new user: Cannot insert the value NULL into column 'ApplicationId', table 'ASPNETUsers_Dev.dbo.aspnet_Users'; column does not allow nulls. INSERT fails. The statement has been terminated. And if I leave .Map(user => user.ApplicationId) as it originally was and make either of those changes to join.Map(user => user.ApplicationId), then I get the same exception as above, except of course the exception is related to an insert into the aspnet_Membership table. So... how do I do this kind of mapping, assuming I can't change my database schema?

    Read the article

< Previous Page | 376 377 378 379 380 381 382 383 384 385 386 387  | Next Page >