Search Results

Search found 1115 results on 45 pages for 'relationships'.

Page 12 of 45

  • Codifying a natural language requirements spec

    - by ProfK
    I have a fairly technical functional requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.: a) Must be alphanumeric, max length 20. b) Must be a dropdown, with values from table x. c) Is mandatory. d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value. Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc. e) One such complex rule says a status field can only be assigned a certain value if a many-side record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record. I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.
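    As a starting point, here is a minimal sketch in C# of one way such rules could be codified as structured data rather than prose. Every name in it is hypothetical (this is not a known off-the-shelf system); the point is that each rule becomes a record tied to a table and field, so the conditional-mandatory rules (d) and the related-record rules (e) get explicit, machine-checkable structure:

        // Hypothetical rule catalogue: each business rule from the spec becomes
        // a data record instead of a sentence (a sketch, all names invented).
        public enum RuleKind
        {
            Alphanumeric, MaxLength, Dropdown, Mandatory, MandatoryIf, RequiresRelatedRecord
        }

        public class FieldRule
        {
            public string Table { get; set; }           // e.g. "Applicant"
            public string Field { get; set; }           // e.g. "Status"
            public RuleKind Kind { get; set; }
            public int? MaxLength { get; set; }         // rule (a)
            public string LookupTable { get; set; }     // rule (b): dropdown values come from here
            public string ConditionField { get; set; }  // rule (d): the field the condition inspects
            public string ConditionValue { get; set; }  // rule (d): null means "is populated"
            public string[] RelatedTables { get; set; } // rule (e): at least one row in any of these
        }

        public static class ExampleRules
        {
            // Rule (e) from the spec, expressed as data:
            public static readonly FieldRule StatusRule = new FieldRule
            {
                Table = "Applicant",
                Field = "Status",
                Kind = RuleKind.RequiresRelatedRecord,
                RelatedTables = new[] { "HighSchool", "Diploma" }
            };
        }

    A generic validator can then walk the catalogue and apply each rule to the data, which is what makes the spec testable rather than just readable.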

    Read the article

  • Taking our Friendships to the next level.

    - by RedAndTheCommunity
    Red Gate have been running the Friends of Red Gate program for years now, and over that time we've built some great relationships with some truly awesome members of the SQL and .NET communities. When I took over the running of the program from Annabel in 2011, I was overwhelmed by the enthusiasm and commitment of our Friends. There were just so many of them, however, that it was hard to make the most of the relationships we had with people, and I wanted to fix that. I decided to survey all our Friends, to find out what they wanted to get out of, and put into, being in the Friends of Red Gate (FoRG) program. From the results of that survey, I identified 30 FoRGs that were really willing and able to go that step further to help Red Gate improve their tools, improve their relationship with the community, and improve the Friends of Red Gate program. Those 30 Friends of Red Gate have been awarded 'FoRG+' status. That means they'll: Have a closer relationship with the product teams, by getting involved in projects Have even more access to the inside track about the tools they're interested in Get the opportunity to come visit us at the Red Gate office and really influence the development of the tools. Plus more, depending on how the individual FoRG+ wants to work with us. This doesn't mean I've forgotten our other Friends; I'm working on ways to improve their experience of the Friends of Red Gate program. I'll write about them in another post. If you're an existing Friend of Red Gate, and you're interested in finding out how to get involved in the FoRG+ program, then I'd love to chat to you. For anyone that's interested in joining the Friend of Red Gate program, take a look at the web page dedicated to the program, and get in touch at [email protected] to be put on the waiting list for our 2013 program.

    Read the article

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "Micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to introduce at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL - perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you didn't have to write SQL; it could be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>(
            "select * from people where PersonId = @personId",
            new { PersonId = 42 });

    How is that any different from using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters and, I suppose, map the entity back using a Builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here, where a Micro ORM makes sense to use? I'm not really seeing how it's saving anything over the "old" way of using ADO.NET, except maybe a few lines of code - you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).
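    For comparison, below is roughly what the same lookup costs in hand-rolled ADO.NET; a sketch assuming a Person class with PersonId and Name properties (the connection string is a placeholder). The Dapper call above collapses the command setup and the column-by-column mapping into one line:

        using System.Data.SqlClient;

        // Hand-rolled ADO.NET version of the Dapper one-liner above (a sketch).
        var connectionString = "..."; // placeholder
        Person person = null;
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "select * from people where PersonId = @personId", connection))
        {
            command.Parameters.AddWithValue("@personId", 42);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    person = new Person
                    {
                        PersonId = (int)reader["PersonId"],
                        Name = (string)reader["Name"] // one assignment per column, by hand
                    };
                }
            }
        }

        public class Person
        {
            public int PersonId { get; set; }
            public string Name { get; set; }
        }

    The boilerplate is real, but it is mechanical; the hard parts the question identifies (choosing the SQL, mapping relationships) are indeed still yours in both versions.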

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements, and back down through Test Cases to the Test Results. Figure: The only thing missing is Build. In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing. If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing. Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge. Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model. Figure: If the centre of the world was a requirement. We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when. There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements: it might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved. As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format. In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
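    For illustration, a hypothetical query in that form (the [System.*] field reference names are the standard WIQL ones; the work item type and date are invented for the example):

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        asof '2010-06-01'

    Run against the operational store, this returns the Requirements as they stood on that date, which is exactly the sort of point-in-time evidence an auditor asks for.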
    Figure: Saving Queries locally can be useful. All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done. Tasks are the workhorse of the development team, but they are only as useful as an Excel list if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships. Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group, or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement. Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose. Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets.

    Changesets – Who wrote this crap. Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds. Figure: Changesets tell us what happened to the files in Version Control. Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task, which automatically associates it. Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to. Figure: Changes are tracked at the File level. What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system. Figure: Annotate shows the lines. The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them. Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to audit releases. The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method is to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together. Builds are the glue that enables the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box, but can be enabled. When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build. The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build. It will then use that information to identify the Work Items that are associated with all of those Changesets, associate those Work Items with the Build, and update the "Integrated In" field of those Work Items. Figure: Find all of the Work Items to associate with. The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build. Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done. The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement. It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test, and the test results, of a unit test designed first to prove the existence of a Bug and then to guard against its reoccurrence. Figure: Associating a Test Case with an automated Test. Although it is possible, it may not make sense to track the execution of every unit test in your system; there are, however, many integration and regression tests that may be automated and that it would make sense to track in this way.

    Bug – Let's not have regressions. In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world. If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations. This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature. In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only affect Requirements and do not rewrite them.

    Conclusion. Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member: Business Analysts manage Requirements and Change Requests; programmers manage Tasks and check in against Change Requests and Bugs; testers manage Bugs and Test Cases; Build Masters manage Builds. Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How to show or direct a business analyst to a data modelling subject?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess, and then going through the effort of creating interfaces for a user to match the imported data to the destination. I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names: no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project
          has_many Nodes
          has_many GlobalConditions
        Node
          has_one Parent
          has_many Nodes
          has_many WeightingFactors through NodeFactors
          has_many Tags through NodeTags
        GlobalCondition
          has_many Nodes (referenced by Id, rather than replicating tree)
        WeightingFactor
          has_many Nodes through NodeFactors
        Tag
          has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building up a Stored Procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
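    One standard first step, sketched below on the assumption that the Ruby association names match the model above (global_conditions, nodes, weighting_factors, tags), is to tell ActiveRecord up front what to fetch by eager-loading with includes, which collapses the per-node lookups into a fixed number of queries:

        # Eager-load the tree and its satellite records in a handful of queries,
        # instead of one query per association per node (a sketch; the
        # association names are assumed from the model described above).
        project = Project.includes(
          :global_conditions,
          :nodes => [:weighting_factors, :tags]
        ).find(project_id)

    With thirty optional satellite types this will not solve everything, but it turns the non-deterministic query pattern the question describes into an explicit, inspectable one.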

    Read the article

  • Example of persisting an inheritance relationship using ORM

    - by Schemer
    I have some experience with OOP and RDBs, but very little exposure to web programming. I am trying to understand what non-trivial types of problems are solved by ORM. Of course, I am familiar with the need for data persistence, but I have never encountered a need for persisting relationships between objects, a situation which is indicated in many online articles about ORM. I am not asking about the process of persisting a POJO to a database and restoring it later. Nor am I asking about why ORM frameworks are useful -- or a pain in the butt -- for doing so. I am particularly interested in how the need arises to persist and restore relationships between objects. In various documentation, I have seen many examples of persisting POJOs to a database, but the examples have all been for only very simple objects that are essentially nothing more than records anyway: a constructor, some private fields, and getter/setter methods. The motivation for persisting such an "object-record" seems obvious and trivial. This example: Hibernate ORM Tutorial offers such an example, but goes on to discuss mismatch issues of granularity, inheritance, identity, associations, and navigation that are not motivated by the example. If someone could offer a toy example of an instance where, say, the need arises to persist an inheritance relationship, I would be grateful. This might be blindingly obvious for anyone who has already encountered this situation but I have not and a great deal of searching and reading have not turned up any examples.
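    By way of illustration, here is one toy example of the kind requested: a Person/Employee hierarchy persisted through JPA, the API Hibernate implements. This is a sketch using the single-table strategy, with invented class and column names; the point is that loading by the base type can hand back instances of the subtype, so the inheritance relationship itself must survive the round trip to the database:

        import javax.persistence.*;

        // Person.java - base entity. All rows live in one table, with a
        // discriminator column recording which class each row belongs to.
        @Entity
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name = "person_type")
        public class Person {
            @Id @GeneratedValue
            private Long id;
            private String name;
        }

        // Employee.java - subclass entity. Its extra field becomes a nullable
        // column in the same table; its rows carry person_type = 'EMPLOYEE'.
        @Entity
        @DiscriminatorValue("EMPLOYEE")
        public class Employee extends Person {
            private String badgeNumber;
        }

        // Loading by the base type restores the subtype:
        // Person p = entityManager.find(Person.class, someId); // may be an Employee

    This is where the granularity and identity mismatches the Hibernate tutorial mentions become concrete: the database has no notion of "extends", so the ORM has to encode it (here, via the discriminator column).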

    Read the article

  • Modular Database Structures

    - by John D
    I have been examining the code base we use in work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down to create sub-domains centred around the procedures' relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity: 1. Re-usability; 2. Asynchronous development; 3. Maintainability. Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

    Read the article

  • What is a Relational Database Management System (RDBMS)?

    A Relational Database Management System (RDBMS) can also be called a traditional database: one that uses Structured Query Language (SQL) to provide access to stored data while ensuring the integrity of the data. The data is stored in a collection of tables defined by the relationships between data items. In addition, data is permitted to be joined in new relationships. Traditional databases primarily process data through transactions, called transaction processing. Transaction processing is the methodology of grouping related business operations based on predefined business events. An example of this can be seen when a person attempts to purchase an item from an online e-tailer. The business must execute specific operations for the related business event. In this case, the business must store the following information: Customer Info, Order Info, Order Item Info, Customer Payment Data, Payment Results, and Current Order Status.

    Example: Pseudo-SQL operations needed for processing an online e-tailer sale:
      Insert Customer into Customers
      Insert New Order into Orders
      Insert Each New Order Item into OrderItems
      Insert Customer Payment Info into PaymentInfo
      Insert Payment Processing Result into PaymentDetails
      Update Customer for Current Order Status

    Common Relational Database Management Systems: Microsoft SQL Server, Microsoft Access, Oracle, MySQL, DB2. It is important to note that no current RDBMS has fully implemented all of the relational principles.

    Common RDBMS traits: volatile data; supports transaction processing; optimized for updates and simple queries.
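    To make the pseudo-SQL concrete, here is roughly what those operations look like as a single SQL Server transaction. This is a sketch: every table, column, and value below is illustrative, and the transaction wrapper is what makes the related operations of the business event succeed or fail as a unit:

        -- All six operations commit together or not at all.
        BEGIN TRANSACTION;

        INSERT INTO Customers (Name, Email)
            VALUES ('Jane Doe', 'jane@example.com');
        INSERT INTO Orders (CustomerId, OrderDate)
            VALUES (42, GETDATE());
        INSERT INTO OrderItems (OrderId, ProductId, Quantity)
            VALUES (1001, 7, 2);
        INSERT INTO PaymentInfo (OrderId, CardToken)
            VALUES (1001, 'tok_abc123');
        INSERT INTO PaymentDetails (OrderId, Result)
            VALUES (1001, 'APPROVED');
        UPDATE Customers
            SET CurrentOrderStatus = 'PLACED'
            WHERE CustomerId = 42;

        COMMIT TRANSACTION;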

    Read the article

  • OOP Design: relationship between entity classes

    - by beginner_
    I have what at first sight looks like a simple issue, but I can't wrap my head around how to solve it. I have an abstract class Compound. A Compound is made up of Structures. Then there is also a Container, which holds 1 Compound. A "special" implementation of Compound has Versions. For that type of Compound I want the Container to hold the Version of the Compound and not the Compound itself. You could say "just create an interface Containable" and have a Container hold 1 Containable. However, that won't work. The reason is that I'm creating a framework, and the main part of that framework is to simplify storing, and especially searching for, a special data type held by Structure objects. Hence, searching for Containers which contain a Compound made up of a specific Structure requires that the "path" from Container to Structure is well defined (number of relationships or joins). I hope this was understandable. My question is how to design the classes and relationships to be able to do what I outlined.

    Read the article

  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow related sessions at OpenWorld. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. No better way to start off than hearing where Oracle RightNow is going! Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Interested in the Cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com: CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM.   And of course there are a number of sessions that drill down into more specific areas. Here are just a few: Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels. Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience’s Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost. Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. 
Today’s organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution. Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies. Unifying the Agent’s Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences. Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle’s integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses. Greater Insights from Customer Engagements (CON9773) Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements. Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!

    Read the article

  • How to update correctly an Entity Model after changes of database structure?

    - by Slauma
    I've made some changes to the table structure, and especially the relationships between tables, in my SQL Server database. Now I want to update my Entity model based on this new database structure. Right-clicking on the edmx file, I find the option "Update Model from Database". But when I do this I get kind of a 50% update: the new columns appear in the entity classes, but I am confused by a lot of navigation properties which are still there in the model, although the corresponding foreign key relationships no longer exist in the database. Am I doing something wrong? Or is there another option to update the model, including deletion of navigation properties? Or do I have to delete those navigation properties manually in the model files? I am using Entity Framework Version 1 (VS 2008 SP1). Thanks for help in advance!

    Read the article

  • MS SQL Bridge Table Constraints

    - by greg
    Greetings - I have a table of Articles and a table of Categories. An Article can be used in many Categories, so I have created a table of ArticleCategories like this:

        BridgeID int (PK)
        ArticleID int
        CategoryID int

    Now, I want to create constraints/relationships such that the ArticleID-CategoryID combinations are unique AND the IDs must exist in the respective primary key tables (Articles and Categories). I have tried using both VS2008 Server Explorer and Enterprise Manager (SQL 2005) to create the FK relationships, but the results always prevent duplicate ArticleIDs in the bridge table, even though the CategoryID is different. I am pretty sure I am doing something obviously wrong, but I appear to have a mental block at this point. Can anyone tell me please how this should be done? Greatly appreciated!
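    For what it's worth, here is one way the requirement reads in T-SQL; a sketch that follows the names in the question and assumes the parent keys are called ArticleID and CategoryID. The key point is that the uniqueness constraint must span both columns together, while each foreign key only enforces existence in its parent table:

        CREATE TABLE ArticleCategories (
            BridgeID   int IDENTITY(1,1) PRIMARY KEY,
            ArticleID  int NOT NULL
                REFERENCES Articles (ArticleID),    -- must exist in Articles
            CategoryID int NOT NULL
                REFERENCES Categories (CategoryID), -- must exist in Categories
            CONSTRAINT UQ_Article_Category
                UNIQUE (ArticleID, CategoryID)      -- the pair is unique,
        );                                          -- not each column alone

    The symptom described above (duplicate ArticleIDs rejected even with different CategoryIDs) is what you get when a unique index ends up on ArticleID alone rather than on the (ArticleID, CategoryID) pair.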

    Read the article

  • Entity Relationship Diagram - Relationship Strength?

    - by 01010011
    Hi, I am trying to figure out under what circumstances I should use weak (non-identifying) relationships (where the primary key of the related entity does not contain a primary key component of the parent entity), versus when I should use strong (identifying) relationships (where the primary key of the related entity contains a primary key component of the parent entity). For example, when designing an Entity Relationship Diagram, if I have two entities (e.g. Book and Purchaser), how do I know when to choose the solid Crow's Foot line or the dashed Crow's Foot line to connect the two entities? Any assistance will be appreciated. Thanks in advance.
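    The distinction shows up directly in the DDL. A sketch loosely based on the Book/Purchaser example (all names invented): in the identifying case the child's primary key includes the parent's key, so a child row cannot even be identified without its parent; in the non-identifying case the child has its own key and the foreign key is just an ordinary column:

        -- Identifying (strong, solid line): the child's PK contains the parent's PK.
        -- A BookCopy is meaningless without knowing which Book it belongs to.
        CREATE TABLE BookCopy (
            BookID int NOT NULL REFERENCES Book (BookID),
            CopyNo int NOT NULL,
            PRIMARY KEY (BookID, CopyNo)
        );

        -- Non-identifying (weak link, dashed line): the child has its own PK;
        -- the parent keys appear only as foreign key columns.
        CREATE TABLE Purchase (
            PurchaseID  int PRIMARY KEY,
            BookID      int NOT NULL REFERENCES Book (BookID),
            PurchaserID int NOT NULL REFERENCES Purchaser (PurchaserID)
        );

    The choice of solid versus dashed then reduces to one question: does the child make sense as an independent entity with its own identity? If yes, use the dashed (non-identifying) line; if it only exists within the context of its parent, use the solid (identifying) line.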

    Read the article

  • Database structure - To join or not to join

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to produce a listing of the data is increasing drastically as the many-to-many relationships increase. The application will be quite read-heavy and will have a couple of hundred thousand rows per table. The questions: Is it really that bad to merge tables where needed, thereby reducing joins? Should we start looking at horizontal partitioning (in conjunction with merging tables)? Is there a better way than pivot tables to take care of many-to-many relationships? We discussed storing all the data in serialized text columns instead and having the application do the sorting rather than the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!

    Read the article

  • Is it worthwhile to implement observer pattern in PHP?

    - by Extrakun
    I have been meaning to make use of design patterns in PHP, such as the observer pattern, but it pains me that I have to recreate the observers' relationships each time the page is loaded. As references are saved as new concrete objects in the session, there is no way to preserve relationships between subscribers and their observers unless you use a GUID or some other property to form a lookup, and store that property instead. With the cost of recreating the relationships each time a page is loaded, is it worthwhile to use design patterns such as observer in PHP for the sake of a clean design? Any real-world experience to share?

    Read the article

  • Database Instance

    - by Sam
    I read a statement from an exercise: construct a database instance which conforms to diagram 1 but not to diagram 2. The diagrams are n-ary relationships with different cardinalities: diagram 1 has a many-to-one-to-many-to-one relationship, while diagram 2 has a many-to-many-to-many-to-one relationship. So, to really understand this problem, what does a database instance mean? Does it mean making up an example with abstract entities like a1, a2, and a3? Thanks for your time.

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships. A Detail contains any number of POIs (points of interest). When I fetch a set of POIs from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows: query the ManagedObjectContext for the detailID; if that Detail exists, add the POI to it; if it doesn't, create the Detail (it has other properties that will be populated lazily). The problem with this is performance. Performing constant queries against Core Data is slow, to the point where adding a list of 150 POIs takes a minute, thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast: look up a key in a dictionary, then create it if it doesn't exist. I have more relationships than just this one, but pretty much every one has to do this check (some are many-to-many, and they have a real problem). Does anyone have any suggestions for how I can help this? I could perform fewer queries (by searching for a number of different IDs), but I'm not sure how much this will help. Some code:

        POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI"
            inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
        poi.POIid = [attributeDict objectForKey:kAttributeID];
        poi.detailId = [attributeDict objectForKey:kAttributeDetailID];
        Detail *detail = [self findDetailForID:poi.POIid];
        if (detail == nil) {
            detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail"
                inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
            detail.title = poi.POIid;
            detail.subtitle = @"";
            detail.detailType = [attributeDict objectForKey:kAttributeType];
        }

        - (Detail*)findDetailForID:(NSString*)detailID {
            NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail"
                                                                 inManagedObjectContext:moc];
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:entityDescription];
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"detailid == %@", detailID];
            [request setPredicate:predicate];
            NSLog(@"%@", [predicate description]);
            NSError *error;
            NSArray *array = [moc executeFetchRequest:request error:&error];
            if (array == nil || [array count] != 1) {
                // Deal with error...
                return nil;
            }
            return [array objectAtIndex:0];
        }
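    One common mitigation, assuming the detailid attribute used in the predicate above, is the batch find-or-create pattern from Apple's Core Data documentation: gather all of the incoming detail IDs first, fetch every matching Detail in a single query with an IN predicate, and index the results in a dictionary so each POI is matched without another round trip. A sketch:

        // Fetch all existing Details for the whole batch in one query (a sketch;
        // incomingDetailIDs is assumed to be an NSArray of IDs from the server).
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Detail"
                                       inManagedObjectContext:moc]];
        [request setPredicate:[NSPredicate predicateWithFormat:@"detailid IN %@",
                               incomingDetailIDs]];
        NSError *error = nil;
        NSArray *existingDetails = [moc executeFetchRequest:request error:&error];

        // Index by ID; each POI is then matched with a dictionary lookup,
        // just like the old NSDictionary-based model.
        NSMutableDictionary *detailsByID = [NSMutableDictionary dictionary];
        for (NSManagedObject *detail in existingDetails) {
            [detailsByID setObject:detail forKey:[detail valueForKey:@"detailid"]];
        }

    This turns 150 fetch requests into one per relationship type, which is usually the difference between minutes and a fraction of a second for imports of this size.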

    Read the article

  • Is there a Drupal module for Forms with powerful CRUD style behaviour?

    - by Petras
    We are building a website that contains a large number of database tables that need to be edited by the CMS administrator. Some of the tables are fed by form submissions from users on the front end of the website. Some of the tables are purely in the CMS and are used to create custom modules on the front end of the website. Although there is a forms module in Drupal, I think our requirements cannot be met by it. Does anyone know of a system that allows forms to be saved to a CRUD-style database with the following features? Export of all database fields; a filterable summary view of the records, with paging; one-to-many relationships in records. To code this manually for 10 forms is A LOT of work, particularly the one-to-many relationships. If there is a powerful module available it would save us writing one.

    Read the article

  • Sqlalchemy+elixir: How query with a ManyToMany relationship?

    - by Hugo
    Hi, I'm using SQLAlchemy with Elixir and am having some trouble trying to make a query. I have 2 entities, Customer and CustomerList, with a many-to-many relationship:

        customer_lists_customers_table = Table('customer_lists_customers', metadata,
            Column('id', Integer, primary_key=True),
            Column('customer_list_id', Integer, ForeignKey("customer_lists.id")),
            Column('customer_id', Integer, ForeignKey("customers.id")))

        class Customer(Entity):
            [...]
            customer_lists = ManyToMany('CustomerList', table=customer_lists_customers_table)

        class CustomerList(Entity):
            [...]
            customers = ManyToMany('Customer', table=customer_lists_customers_table)

    I'm trying to find the CustomerLists containing a given customer:

        customer = [...]
        CustomerList.query.filter_by(customers.contains(customer)).all()

    But I get the error: NameError: global name 'customers' is not defined. customers seems to be unrelated to the entity fields. Is there a special query form for working with relationships (or ManyToMany relationships)? Thanks
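    A likely fix, based on standard SQLAlchemy query semantics: filter_by only accepts keyword equality tests, so a bare customers name is resolved as an ordinary Python variable, hence the NameError. Relationship operators such as contains() need filter() with the attribute qualified through the class:

        # filter() takes full criterion expressions; qualifying the relationship
        # through the class lets Python resolve the name (a sketch).
        lists_with_customer = CustomerList.query.filter(
            CustomerList.customers.contains(customer)
        ).all()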

    Read the article

  • Database designing for a bug tracking system in SQL

    - by Yasser
    I am an ASP.NET developer and I really don't do much database work. But for my open source project on CodePlex I am required to set up a database schema for the project. So, reading from here and there, I have managed to do the following. Being new to database schema design, I wanted someone who has a better grasp of this topic to help me identify any issues with the design. Most of the relationships are self-explanatory, I think, but I will still jot each one down. The two keys between UserProfile and Issues are for the relationships between UserId and IssueCreatedBy and IssueClosedBy. Thanks

    Read the article

  • Using Multiple Foreign Keys to the same table in LINQ

    - by Graeme
    I have a table Users and a table Items. In the Items table, I have fields such as ModifiedBy, CreatedBy and AssignedTo, which all hold a user ID integer. The database is set up to have these as foreign keys back to the Users table. When using LINQ to SQL, the relationships which are automatically built from the dbml end up giving me names like User, User1 and User2, e.g. myItem.User1.Name or myItem.User2.Name. Obviously this isn't very readable, and I'd like it to be along the lines of myItem.CreatedByUser.Name or myItem.ModifiedByUser.Name, etc. I could change the names of the relationships, but that means I have to redo that every time I change the db schema and refresh the dbml. Is there any way round this?
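    One workaround, sketched under the assumption that the generated associations are named User, User1 and User2 as above (which generated property maps to which foreign key has to be checked in the dbml), is to add readable aliases in a partial class. The designer-generated classes are partial, so these survive a dbml refresh:

        // Readable aliases over the generated association properties (a sketch;
        // must live in the same namespace as the generated Item class, and the
        // User/User1/User2-to-FK mapping should be verified before relying on it).
        public partial class Item
        {
            public User CreatedByUser  { get { return this.User;  } }
            public User ModifiedByUser { get { return this.User1; } }
            public User AssignedToUser { get { return this.User2; } }
        }

    The aliases do need updating if a new User foreign key is added, but unlike renames made in the designer they are never overwritten by a refresh.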

    Read the article

  • Entity Framework: Data Centric vs. Object Centric

    - by Eric J.
    I'm having a look at Entity Framework, and everything I'm reading takes a data-centric approach to explaining EF. By that I mean that the fundamental relationships of the system are first defined in the database, and objects are generated that reflect those relationships. Examples: "Quickstart (Entity Framework)" and "Using Entity Framework entities as business objects?". The EF documentation implies that it's not necessary to start from the database layer, e.g. "Developers can work with a consistent application object model that can be mapped to various storage schemas". When designing a new system (simplified version), I tend to first create a class model, then generate business objects from the model, code the business layer stuff that can't be generated, and then worry about persistence (or rather work with a DBA and let him worry about the most efficient persistence strategy). That object-centric approach is well supported by ORM technologies such as (n)Hibernate. Is there a reasonable path to an object-centric approach with EF? Will I be swimming upstream going that route? Any good starting points?

    Read the article