Search Results

Search found 1334 results on 54 pages for 'constraint satisfaction'.

  • The type of field isn't supported by declared persistence strategy "OneToMany"

    - by Robert
    We are new to JPA and trying to setup a very simple one to many relationship where a pojo called Message can have a list of integer group id's defined by a join table called GROUP_ASSOC. Here is the DDL: CREATE TABLE "APP"."MESSAGE" ( "MESSAGE_ID" INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1) ); ALTER TABLE "APP"."MESSAGE" ADD CONSTRAINT "MESSAGE_PK" PRIMARY KEY ("MESSAGE_ID"); CREATE TABLE "APP"."GROUP_ASSOC" ( "GROUP_ID" INTEGER NOT NULL, "MESSAGE_ID" INTEGER NOT NULL ); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_PK" PRIMARY KEY ("MESSAGE_ID", "GROUP_ID"); ALTER TABLE "APP"."GROUP_ASSOC" ADD CONSTRAINT "GROUP_ASSOC_FK" FOREIGN KEY ("MESSAGE_ID") REFERENCES "APP"."MESSAGE" ("MESSAGE_ID"); Here is the pojo: @Entity @Table(name = "MESSAGE") public class Message { @Id @Column(name = "MESSAGE_ID") @GeneratedValue(strategy = GenerationType.IDENTITY) private int messageId; @OneToMany(fetch=FetchType.LAZY, cascade=CascadeType.PERSIST) private List groupIds; public int getMessageId() { return messageId; } public void setMessageId(int messageId) { this.messageId = messageId; } public List getGroupIds() { return groupIds; } public void setGroupIds(List groupIds) { this.groupIds = groupIds; } } When we try to execute the following test code we get <openjpa-1.2.3-SNAPSHOT-r422266:907835 fatal user error> org.apache.openjpa.util.MetaDataException: The type of field "pojo.Message.groupIds" isn't supported by declared persistence strategy "OneToMany". Please choose a different strategy. Message msg = new Message(); List groups = new ArrayList(); groups.add(101); groups.add(102); EntityManager em = Persistence.createEntityManagerFactory("TestDBWeb").createEntityManager(); em.getTransaction().begin(); em.persist(msg); em.getTransaction().commit(); Help!

    Read the article

  • Symfony fk issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem but it could not be resolved. I create a relational database of users and groups but for some reason I cannot insert test data with fixtures properly. Here is a sample of the schema: User: actAs: { Timestampable: ~ } columns: name: { type: string(255), notnull: true } email: { type: string(255), notnull: true, unique: true } nickname: { type: string(255), unique: true } password: { type: string(300), notnull: true } image: { type: string(255) } Group: actAs: { Timestampable: ~ } columns: name: { type: string(500), notnull: true } image: { type: string(255) } type: { type: string(255), notnull: true } created_by_id: { type: integer } relations: User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created } FanOf: actAs: { Timestampable: ~ } columns: user_id: { type: integer, primary: true } group_id: { type: integer, primary: true } relations: User: { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood } Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood } And this is the data i try to input: User: user1: name: Danny email: [email protected] nickname: danny password: f05050400c5e586fa6629ef497be Group: group1: name: Mets type: sports FanOf: fans1: user_id: user1 group_id: group1 I keep getting this error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id` FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE) The users and groups are clearly being created before the "fanhood" is so why am I getting this error?? Thanks!

    Read the article

  • Entity Framework 4 and SYSUTCDATETIME ()

    - by GIbboK
    Hi, I use EF4 and C#. I have a Table in my DataBase (MS SQL 2008) with a default value for a column SYSUTCDATETIME (). The Idea is to automatically add Date and Time as soon as a new record is Created. I create my Conceptual Model using EF4, and I have created an ASP.PAGE with a DetailsView Control in INSERT MODE. My problems: When I create a new Record. EF is not able to insert the actual Date and Time value but it inserts instead this value 0001-01-01 00:00:00.00. I suppose the EF is not able to use SYSUTCDATETIME () defined in my DataBase Any idea how to solve it? Thanks Here my SQL script CREATE TABLE dbo.CmsAdvertisers ( AdvertiserId int NOT NULL IDENTITY CONSTRAINT PK_CmsAdvertisers_AdvertiserId PRIMARY KEY, DateCreated dateTime2(2) NOT NULL CONSTRAINT DF_CmsAdvertisers_DateCreated DEFAULT sysutcdatetime (), ReferenceAdvertiser varchar(64) NOT NULL, NoteInternal nvarchar(256) NOT NULL CONSTRAINT DF_CmsAdvertisers_NoteInternal DEFAULT '' ); My Temporary solution: Please guys help me on this e.Values["DateCreated"] = DateTime.UtcNow; More info here: http://msdn.microsoft.com/en-us/library/bb387157.aspx How to use the default Entity Framework and default date values http://msdn.microsoft.com/en-us/library/dd296755.aspx

    Read the article

  • How should I use BIT in MS SQL 2005

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, True or False, and I do not know how I should fill the @BIT parameter. If I write set @bit = convert(bit,1) or set @bit = 1 or set @bit='true' the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert is about 4K records into that table. Now I want to add a CONSTRAINT on that table with the scalar-valued function that I mentioned already, something like this: ALTER TABLE fin_stavke ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id)=convert(bit,1)) where field "id" is the primary key of table fin_stavke and dbo.fn_ado_chk_fin looks like create FUNCTION fn_ado_chk_fin ( @stavka_id int ) RETURNS bit AS BEGIN declare @bit bit if exists (select * from fin_stavke where id=@stavka_id and doc_id is null and protocol_id is null) begin set @bit=0 end else begin set @bit=1 end return @bit; END GO Will this type of check constraint badly affect performance on my table and SQL in general? If there is a better way to add this control on the table, please let me know.
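
    All three assignments are legal T-SQL; a minimal sketch (assuming SQL Server 2005 and the table and function from the question, with WITH NOCHECK added only to skip validating the existing rows):

        DECLARE @bit bit;
        SET @bit = 1;                 -- implicit int-to-bit conversion; the usual idiom
        SET @bit = CONVERT(bit, 1);   -- explicit conversion, same result
        SET @bit = 'true';            -- the strings 'true'/'false' also convert to 1/0

        -- The CHECK constraint can compare the function result to 1 directly;
        -- WITH NOCHECK skips validating the 4 million existing rows when the constraint is added.
        ALTER TABLE fin_stavke WITH NOCHECK
            ADD CONSTRAINT fin_stavke_knjizenje CHECK (dbo.fn_ado_chk_fin(id) = 1);

    Either way, the scalar UDF still runs once per inserted or updated row, which is where most of the cost of such a constraint comes from; at roughly 4K inserts a day that is usually tolerable, but it is worth measuring.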

    Read the article

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables: CREATE TABLE [dbo].[Tracks]( [Id] [uniqueidentifier] NOT NULL, [Artist_Id] [uniqueidentifier] NOT NULL, [Album_Id] [uniqueidentifier] NOT NULL, [Title] [nvarchar](255) NOT NULL, [Length] [int] NOT NULL, CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TrackHistory]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [Datetime] [datetime] NOT NULL, CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id] ,[Datetime]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,GETDATE()) CREATE TABLE [dbo].[Ratings]( [Id] [int] IDENTITY(1,1) NOT NULL, [Track_Id] [uniqueidentifier] NOT NULL, [User_Id] [uniqueidentifier] NOT NULL, [Rating] [tinyint] NOT NULL, CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id] ,[User_Id] ,[Rating]) VALUES ("335294B0-735E-4E2C-8389-8326B17CE813" ,"C7D62450-8BE6-40F6-80F1-A539DA301772" ,1) Users User_Id|Guid Other fields Links between the tables are pretty obvious. TrackHistory has each track added to it as a row whenever it is played ie. a track will appear in there many times. Ratings value will either be 1 or -1. What I'm trying to do is select the Track with the highest rating, that is more than 2 hours old, and if there is a duplicate rating for a track (ie a track receives 6 +1 ratings and 1 - rating, giving that track a total rating of 5, another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!
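
    One way to put the pieces together is to pre-aggregate the ratings and the play history per track, then apply the two-hour filter and the tie-break in the outer query. A sketch against the tables above (assuming GETDATE() as the reference time and that Rating sums as intended):

        SELECT TOP 1
               t.Id,
               rt.TotalRating,
               ph.LastPlayed
        FROM   dbo.Tracks t
               JOIN (SELECT Track_Id, SUM(CAST(Rating AS int)) AS TotalRating
                     FROM dbo.Ratings
                     GROUP BY Track_Id) rt ON rt.Track_Id = t.Id
               JOIN (SELECT Track_Id, MAX([Datetime]) AS LastPlayed
                     FROM dbo.TrackHistory
                     GROUP BY Track_Id) ph ON ph.Track_Id = t.Id
        WHERE  ph.LastPlayed < DATEADD(hour, -2, GETDATE())  -- tracks played in the last 2 hours drop out
        ORDER BY rt.TotalRating DESC,                        -- highest total rating first
                 ph.LastPlayed ASC;                          -- ties go to the track played longest ago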

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from: create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit null, BitField2 bit null, ... BitFieldN bit null ) to create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit not null, BitField2 bit not null, ... BitFieldN bit not null ) alter table MyTable add constraint DF_BitField1 default 0 for BitField1 alter table MyTable add constraint DF_BitField2 default 0 for BitField2 alter table MyTable add constraint DF_BitField3 default 0 for BitField3 So I've just gone in through the SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Mgmt studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form: update MyTable set BitField1 = 0 where BitField1 is null update MyTable set BitField2 = 0 where BitField2 is null but as I said before, there are n fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
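
    The table rebuild can be avoided entirely: back-fill the NULLs, add the default, then tighten the column in place with ALTER COLUMN. A sketch for one column, plus a generator for the rest (assuming SQL Server 2005 or later; table and column names as in the question):

        UPDATE MyTable SET BitField1 = 0 WHERE BitField1 IS NULL;
        ALTER TABLE MyTable ADD CONSTRAINT DF_BitField1 DEFAULT 0 FOR BitField1;
        ALTER TABLE MyTable ALTER COLUMN BitField1 bit NOT NULL;

        -- Generate the same three statements for every nullable bit column,
        -- then run the resulting script against each database.
        SELECT 'UPDATE MyTable SET ' + c.name + ' = 0 WHERE ' + c.name + ' IS NULL; '
             + 'ALTER TABLE MyTable ADD CONSTRAINT DF_' + c.name + ' DEFAULT 0 FOR ' + c.name + '; '
             + 'ALTER TABLE MyTable ALTER COLUMN ' + c.name + ' bit NOT NULL;'
        FROM   sys.columns c
        WHERE  c.object_id = OBJECT_ID('MyTable')
          AND  c.system_type_id = TYPE_ID('bit')
          AND  c.is_nullable = 1;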

    Read the article

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column: CREATE TABLE Things ( ... CreatedDate datetime DEFAULT getdate(), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate) ) But I can't guarantee that two Things won't have the same time. So my requirements can't really be achieved by a datetime column. I could add a dummy identity int column, and cluster on that: CREATE TABLE Things ( ... RowID int IDENTITY(1,1), [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID) ) But you'll notice that my table already contains a timestamp column; a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want for a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column: CREATE TABLE Things ( ... [timestamp] timestamp, CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp) ) Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: Community wiki, since the answers are subjective.

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, Here is the scenario: SQL Server 2000 (8.0.2055) Table currently has 478 million rows of data. The Primary Key column is an INT with IDENTITY. There is an Unique Constraint imposed on two other columns with a Non-Clustered Index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance" Drop the PK and Clustered Index Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT Recreate the PK, with a NON-CLUSTERED index Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT I am not convinced that this is the right thing to do. I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and new indexing scheme and then copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled. I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj
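
    For reference, the vendor's recommendation boils down to DDL along these lines (object names here are illustrative, not the real schema); step 3 is the one that has to sort and rewrite all 478 million rows:

        ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;   -- 1. clustered PK gone; the table becomes a heap
        ALTER TABLE dbo.BigTable DROP CONSTRAINT UQ_BigTable;   -- 2. drop the nonclustered unique constraint
        ALTER TABLE dbo.BigTable ADD CONSTRAINT UQ_BigTable
            UNIQUE CLUSTERED (ColA, ColB);                      -- 3. rebuilds the whole table in the new clustered order
        ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable
            PRIMARY KEY NONCLUSTERED (ID);                      -- 4. recreate the PK as a nonclustered index

    If there are any other nonclustered indexes on the table, they will be rebuilt when the clustered index is dropped and again when the new one is created, so the copy-into-a-new-table approach is a legitimate alternative if the disk space is available.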

    Read the article

  • How to reference using Entity Framework and Asp.Net Mvc 2

    - by Picflight
    Tables CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [BirthDate] [smalldatetime] NULL, [CountryId] [int] NULL, CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TeamMember]( [UserId] [int] NOT NULL, [TeamMemberUserId] [int] NOT NULL, [CreateDate] [smalldatetime] NOT NULL CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()), CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED ([UserId] ASC, [TeamMemberUserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] dbo.TeamMember has both UserId and TeamMemberUserId as the index key. My goal is to show a list of Users on my View. In the list I want to flag, or highlight the Users that are Team Members of the LoggedIn user. My ViewModel public class UserViewModel { public int UserId { get; private set; } public string UserName { get; private set; } public bool HighLight { get; private set; } public UserViewModel(Users users, bool highlight) { this.UserId = users.UserId; this.UserName = users.UserName; this.HighLight = highlight; } } View <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %> <% foreach (var item in Model) { %> <%= item.UserId %> <%= item.UserName %> <%if (item.HighLight) { %> Team Member <% } else { %> Not Team Member <% } %> How do I toggle the TeamMember or Not If I add dbo.TeamMember to the EDM, there are no relationships on this table, how will I wire it to Users object? So I am comparing the LoggedIn UserId with this list(SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId)
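
    Whatever the EDM mapping ends up looking like, the query it needs to express is a left join from Users to TeamMember filtered on the logged-in user. A sketch in T-SQL (assuming @LoggedInUserId holds the current user's id):

        SELECT u.UserId,
               u.UserName,
               CASE WHEN tm.TeamMemberUserId IS NULL THEN 0 ELSE 1 END AS HighLight
        FROM   dbo.Users u
               LEFT JOIN dbo.TeamMember tm
                      ON  tm.TeamMemberUserId = u.UserId
                      AND tm.UserId = @LoggedInUserId;

    In the Entity Framework the same shape can usually be produced by adding foreign keys from TeamMember.UserId and TeamMember.TeamMemberUserId to Users.UserId so that the association is generated, then projecting the flag into UserViewModel.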

    Read the article

  • how to use a parameterized function for the Default Binding of a Sql Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table (“primary_duplicate”) to record each entry as ‘P’ (primary) or ‘D’ (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see “GetPrimaryDuplicate” below). But I don’t know how to bind this function which requires three parameters to the table’s “primary_duplicate” column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check. But I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted? USE [MyDatabase] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[FileCatalog]( [id] [uniqueidentifier] NOT NULL, [catalog_timestamp] [datetime] NOT NULL, [primary_duplicate] nchar NOT NULL, [name] nvarchar NULL, [length] [bigint] NULL, [timestamp] [datetime] NULL ) ON [PRIMARY] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id] DEFAULT (newid()) FOR [id] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp] DEFAULT (getdate()) FOR [catalog_timestamp] GO ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate] DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate] GO USE [MyDatabase] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[GetPrimaryDuplicate] ( @name nvarchar(255), @length bigint, @timestamp datetime ) RETURNS nchar(1) AS BEGIN DECLARE @c int SELECT @c = COUNT(*) FROM FileCatalog WHERE name=@name and length=@length and timestamp=@timestamp and primary_duplicate = 'P' IF @c > 0 RETURN 'D' -- Duplicate RETURN 'P' -- Primary END GO
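
    A DEFAULT definition can invoke a scalar function, but it cannot reference other columns of the row being inserted, so a default binding will never see name, length and timestamp. If inserts really can bypass the stored procedure, the usual fallback is a trigger; a minimal sketch, in case the no-trigger requirement can be relaxed:

        CREATE TRIGGER trg_FileCatalog_SetPrimaryDuplicate
        ON dbo.FileCatalog
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Re-evaluate the flag for the rows just inserted, using the function from the question.
            UPDATE fc
            SET    primary_duplicate = dbo.GetPrimaryDuplicate(fc.name, fc.length, fc.[timestamp])
            FROM   dbo.FileCatalog fc
                   JOIN inserted i ON i.id = fc.id;
        END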

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web based front end to a database (PHP/Postgresql) in which I need to store various dates/times. The times are meant to be always be entered on the client side in the local time, and displayed in the local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) and normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried this with a database constraint... CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC')) The -300 is to give 5 minutes leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done testing, and found the following. In PostgreSQL client: SELECT now() -- returns correct local time SELECT date_part('epoch', now()) -- returns a unix timestamp at UTC (tested by feeding the value into the date function in PHP correcting for its compensation to my time zone) SELECT date_part('epoch', now() at time zone 'UTC') -- returns a unix timestamp at two time zone offsets west, e.g. I am at GMT+2, I get a GMT-2 timestamp. I've figured out obviously that dropping the "at time zone 'UTC'" will solve my problem, but my question is if 'epoch' is meant to return a unix timestamp which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or I am I missing something about the defined/normal behaviour here.
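
    This is expected behaviour rather than a bug: date_part('epoch', ...) applied to a timestamp with time zone is already measured from the UTC epoch, whereas now() AT TIME ZONE 'UTC' produces a plain timestamp that is then re-interpreted in the session's time zone when the epoch is computed, which is where the extra offset comes from. A sketch of the corrected constraint (table name illustrative, column name as in the question):

        -- date_part('epoch', <timestamptz>) is already UTC-based, so the zone conversion can simply be dropped:
        ALTER TABLE my_table
            ADD CONSTRAINT not_future
            CHECK ("timestamp" - 300 <= date_part('epoch', now()));

        -- Comparing the two expressions side by side shows the shift:
        SELECT date_part('epoch', now())                       AS epoch_ok,
               date_part('epoch', now() AT TIME ZONE 'UTC')    AS epoch_shifted;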

    Read the article

  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right sql depending on the vendor. Hopefully it makes sense. #include "sql.h" create table client_identification ( ci_id TYPE_ID IDENTITY, ci_name varchar(64) not null, constraint ci_pk primary key (ci_name) ); go CREATE_SEQUENCE(ci_id) There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. This will not happen very frequently, but there is still a possibility that something will be added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the user name cannot be created twice - primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze - you loose." :) I would like to leverage the pk constraint instead of doing some extra sql gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception and parse out the text and look for what Sybase, Oracle, and MSSQL would give me back. Oracle is a little different. We actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise: Vret_val out number, Vtran_count in out number, Vmessage_count in out number, Thanks. General helpful tips and comments are welcome, except for naming convention ones ( I do not have a choice here, plus I mangled the actual names a bit).

    Read the article

  • Question on SQL Grouping

    - by Lijo
    Hi Team, I am trying to achieve the following without using sub query. For a funding, I would like to select the latest Letter created date and the ‘earliest worklist created since letter created’ date for a funding. FundingId Leter (1, 1/1/2009 )(1, 5/5/2009) (1, 8/8/2009) (2, 3/3/2009) FundingId WorkList (1, 5/5/2009 ) (1, 9/9/2009) (1, 10/10/2009) (2, 2/2/2009) Expected Result - FundingId Leter WorkList (1, 8/8/2009, 9/9/2009) I wrote a query as follows. It has a bug. It will omit those FundingId for which the minimum WorkList date is less than latest Letter date (even though it has another worklist with greater than letter created date). CREATE TABLE #Funding( [Funding_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_No] [int] NOT NULL, CONSTRAINT [PK_Center_Center_ID] PRIMARY KEY NONCLUSTERED ([Funding_ID] ASC) ) ON [PRIMARY] CREATE TABLE #Letter( [Letter_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_ID] [int] NOT NULL, [CreatedDt] [SMALLDATETIME], CONSTRAINT [PK_Letter_Letter_ID] PRIMARY KEY NONCLUSTERED ([Letter_ID] ASC) ) ON [PRIMARY] CREATE TABLE #WorkList( [WorkList_ID] [int] IDENTITY(1,1) NOT NULL, [Funding_ID] [int] NOT NULL, [CreatedDt] [SMALLDATETIME], CONSTRAINT [PK_WorkList_WorkList_ID] PRIMARY KEY NONCLUSTERED ([WorkList_ID] ASC) ) ON [PRIMARY] SELECT F.Funding_ID, Funding_No, MAX (L.CreatedDt), MIN(W.CreatedDt) FROM #Funding F INNER JOIN #Letter L ON L.Funding_ID = F.Funding_ID LEFT OUTER JOIN #WorkList W ON W.Funding_ID = F.Funding_ID GROUP BY F.Funding_ID,Funding_No HAVING MIN(W.CreatedDt) MAX (L.CreatedDt) How can I write a correct query without using subquery? Please help Thanks Lijo
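
    One way to stay subquery-free is to turn "on or after every letter" into an anti-join: join each worklist row against any later letter and keep only the worklist rows for which no such letter exists. A sketch against the temp tables above:

        SELECT F.Funding_ID,
               F.Funding_No,
               MAX(L.CreatedDt) AS LatestLetter,
               MIN(W.CreatedDt) AS EarliestWorkListAfterLetter
        FROM   #Funding F
               INNER JOIN #Letter   L  ON L.Funding_ID  = F.Funding_ID
               INNER JOIN #WorkList W  ON W.Funding_ID  = F.Funding_ID
               LEFT  JOIN #Letter   L2 ON L2.Funding_ID = F.Funding_ID
                                      AND L2.CreatedDt  > W.CreatedDt
        WHERE  L2.Letter_ID IS NULL     -- keep only worklist rows dated on or after every letter
        GROUP BY F.Funding_ID, F.Funding_No;

    With the sample data this returns (1, 8/8/2009, 9/9/2009) and omits funding 2, whose only worklist entry predates its letter.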

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
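
    Because nearly every row has to be rewritten, batching the update (and vacuuming between batches) tends to help more here than any index on the updated column would; a partial index just makes it cheap for each batch to find the remaining rows. A sketch, with an arbitrary batch size and a helper index you would drop afterwards:

        CREATE INDEX idx_pages_lasterror_pending
            ON indexer.pages (id)
            WHERE lasterror IS NOT NULL;

        -- Repeat until zero rows are affected, vacuuming between runs:
        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (SELECT id
                      FROM   indexer.pages
                      WHERE  lasterror IS NOT NULL
                      LIMIT  10000);

        VACUUM indexer.pages;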

    Read the article

  • Bunny Inc. – Episode 2. Mr. CIO meets Mrs. Sales Manager

    - by kellsey.ruppel(at)oracle.com
    How can you take advantage of a modern customer experience in your sales cycle? What can Mr. CIO come up with to improve customer interaction and satisfaction? See how Enterprise 2.0 solutions can help Bunny Inc. improve business responsiveness to market requests, sell more and simplify post sales support!

    Read the article

  • What's new in the RightNow November 2012 release?

    - by Richard Lefebvre
    What new in the RightNow November 2012? In order to find out, please watch this tutorial with imbedded demonstration or read the November 2012 Release notes.   News Facts The November 2012 release of     Oracle’s RightNow CX Cloud Service marks the completion of development efforts for 2012 and continues Oracle’s commitment to enhancing the Oracle RightNow offering following the acquisition. New release delivers key capabilities designed to help organizations improve customer experiences in order to increase customer acquisition and retention, while reducing total cost of ownership. Part of the Oracle Cloud, Oracle RightNow CX Cloud Service now integrates Oracle RightNow Chat Cloud Service with Oracle Engagement Engine Cloud Service, helping organizations intelligently and proactively engage with customers through the right channel at the right time. Chat solutions have emerged as an important component of a cross-channel customer experience strategy. According to Forrester Research, Inc., chat adoption has risen dramatically between 2009 and 2011 from 19% to 37%, and it has the highest satisfaction level of all customer service channels at 62% satisfaction. (*) To help companies deliver enhanced customer experiences, Oracle has made significant investments in Oracle RightNow Chat Cloud Service throughout 2012. With the addition of rules-based engagement to existing capabilities such as co-browse, mobile chat, and cross-channel knowledge integration with the contact center, all delivered via the cloud, Oracle RightNow Chat Cloud Service is differentiated as the industry-leading chat solution. The Oracle Cloud offers a broad portfolio of software as-a-service applications, including Oracle Customer Service and Support Cloud Service, which is based on the Oracle RightNow CX Cloud Service. New Capabilities Key Oracle RightNow Chat Cloud Service and other cross-channel capabilities include: Chat Business Rules, with over 70 built-in rule conditions, leverage the Oracle Engagement Engine to help enable organizations capture rich visitor data and invoke complex actions and triggers. Chat Business Rules allow granular control over when to engage a customer via the chat channel based on customer behavior, customer profile information and operational information. Click-to-Call provides the option for a customer to engage with a live agent over the phone during the Web browsing experience. Chat Availability Controls provide organizations with the ability to throttle volume through the chat channel based on real-time agent availability and wait time thresholds. This ability to manage the channel more efficiently allows organizations to provide a better experience to customers using the chat channel. Strategic and Operational Chat Channel Analytics provide better insight into channel and agent productivity and utilization and effectiveness with both out-of-the-box reports and ad hoc reports. New chat channel analytics provide comprehensive metrics with full data transparency. Background Service Updates improve high availability metrics for Oracle RightNow Chat Cloud Service during service update periods, setting the industry leading standard for sales and service delivery to customers via the chat channel. 
Additional Capabilities include: Improved Web developer tools for more efficient self-service user interface design Improved administration for enhanced user sessions management Increased cross-channel community collaboration Enhanced extensibility widgets and syndication management Streamlined content management and analytics capabilities Read the full announcement here

    Read the article

  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
         PARTNER FOCUS Oracle ExadataMarketing Campaign Steve McNickleVP Europe, cVidya Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telecom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade. RESOURCES -- Oracle PartnerNetwork (OPN) Oracle Exastack Program Oracle Exastack Optimized Oracle Exastack Labs and Enablement Resources Oracle Engineered Systems Oracle Communications cVidya SUBSCRIBE FEEDBACK PREVIOUS ISSUES Are you ready for Oracle OpenWorld this October? -- -- Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations? "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our Revenue Intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core Revenue Assurance solution. Very soon we also expect to be Oracle Exastack Optimized DRMap, our Data Retention solution." What unique capabilities and customer benefits does Oracle Exastack add to your applications? "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from say, the six or seven cabinets that they needed in the past, down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads." "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics, and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention. How has the Oracle Exastack Program transformed your customer business? 
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success." How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme." How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build-in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented at an on-premise environment. 
But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry." What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle." What opportunities are immediately opened to new ISV partners joining the OPN? "As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation." Finally, are there any other messages that you would like to share with the Oracle ISV community? "The crucial message that I always like to reinforce is architecture, architecture and architecture! 
The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: “I will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud”. The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them." -- Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an OracleValue Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel. "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands." "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations—end users need to see the power and benefits for themselves." "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'. -- Back to the welcome page

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Normal 0 false false false EN-US X-NONE X-NONE Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it’s no surprise that the accounting standard boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IFRS and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts on May 28, 2014 the FASB and IFRS released the IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers) converged accounting standard. This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations. Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate, after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs. In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer’s satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period. Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution,” said Rondy Ng, Senior Vice President, Applications Development. “With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud,” said Rondy. Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.

    Read the article

  • Supercharging the Performance of Your Front-Office Applications @ OOW'12

    - by Sanjeev Sharma
    You can increase customer satisfaction, brand equity, and ultimately top-line revenue by deploying  Oracle ATG Web Commerce, Oracle WebCenter Sites, Oracle Endeca applications, Oracle’s  Siebel applications, and other front-office applications on Oracle Exalogic, Oracle’s combination  of hardware and software for applications and middleware. Join me (Sanjeev Sharma) and my colleague, Kelly Goetsch, at the following conference session at Oracle Open World to find out how Customer Experience can be transformed with Oracle Exalogic: Session:  CON9421 - Supercharging the Performance of Your Front-Office Applications with Oracle ExalogicDate: Wednesday, 3 Oct, 2012Time: 10:15 am - 11:15 am (PST)Venue: Moscone South (309)

    Read the article

  • Value Chain Execution E-Book

    - by John Murphy
    Taking a smart approach to logistics – from streamlining transport networks and global trade management, to optimizing everyday warehouse operations – can simultaneously reduce costs and maximize competitive advantage.Download your exclusive Oracle e-book, Oracle Value Chain Execution: Reinventing Logistics Excellence, to learn why our world-leading, unified solution is relied on by market-leading companies across the planet.Discover how it can help you: Drive business agility, scalability and innovation Reduce costs and increase efficiency Enhance visibility, productivity and inventory accuracy Simplify compliance and mitigate risk Measure and boost customer satisfaction See what reinventing logistics excellence could mean for your organization.

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser Editor’s Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects. Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems.  At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve “problems” anymore; it is, “I want our products to delight and excite!” The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I’m not sure where I first heard “delight and excite” (A book? blog post? Facebook  status? Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, “Make it like Apple.” In past days, it was “make it like Yahoo (or Amazon or Google“) but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs’ passing and Apple becoming the world’s most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users’ expectations have gone way up. Being an enterprise company is no shield to the general expectations that users now have, for all products. Designing a “Minimum Viable Product” The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup) , to create a “minimum viable product”: the proverbial, “make it good enough”. But, in our profession, the “minimum viable” part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is, how are we going to evaluate these new emerging “delight and excite” experiences, which are further customized to each particular domain? How to Measure “Delight and Excite” Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences “Satisfaction” is usually assessed in user testing, in roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses. 
We have had success assessing custom experiences, including “delight and excite”, by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we’ve been using for a long time, Magnitude Estimation Technique (MET). But it’s not necessarily about MET as a measure, rather how it is created.

Caption: For one client, EchoUser did two rounds of testing. Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements.

MET is based on a definition of the desired experience, which users will then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks; “delight and excite”; or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: “User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks.”

Articulating the User Experience

We’ve helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: “User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. ‘Safe’ is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury.” Another example is from a company that is pushing hard to incorporate “delight” into their enterprise business line: “User experience is your perception of a product’s ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives.” I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products.

We have gone further than the above, including “sexy” and “cool” where decision-makers insisted they were part of the desired experience. We have also applied it to completely different experiences where the “interface” was, for example, riding public transit, the “tasks” were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: “A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time.”

To construct these definitions, we’ve employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., “delight and excite”). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns.
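The article does not spell out the arithmetic behind MET, but magnitude estimation is commonly analyzed by normalizing each rater’s free-scale numbers against that rater’s own geometric mean and then summarizing each task with a geometric mean across raters. The Python sketch below illustrates that style of analysis under those assumptions, with made-up participants, tasks, and ratings; it is not EchoUser’s actual tooling.

    # Assumed analysis: magnitude estimates rated against an experience definition,
    # normalized per participant so free-scale ratings become comparable,
    # then summarized per task across participants.
    import math

    def geometric_mean(values):
        values = list(values)
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # ratings[participant][task] = that participant's magnitude estimate (hypothetical data)
    ratings = {
        "p1": {"search": 40, "checkout": 10, "profile": 25},
        "p2": {"search": 90, "checkout": 30, "profile": 60},
        "p3": {"search": 55, "checkout": 15, "profile": 35},
    }

    # Normalize each participant's scores by their own geometric mean.
    normalized = {}
    for person, scores in ratings.items():
        gm = geometric_mean(scores.values())
        normalized[person] = {task: score / gm for task, score in scores.items()}

    # Summarize each task with a geometric mean across participants.
    task_names = sorted({task for scores in normalized.values() for task in scores})
    for task in task_names:
        group_score = geometric_mean(normalized[person][task] for person in normalized)
        print(f"{task}: {group_score:.2f}")

Whatever the exact math, the point the article makes stands: the value of MET here comes less from the number itself than from writing the experience definition the ratings are made against.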
Hopefully the idea of crafting your own custom experience definition, and a way to measure against it, gives you some ideas for adapting your user experience goals to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience.

The Bottom Line

Creating great experiences may have been popularized by Steve Jobs and Apple, but I’ll be honest, it’s a good feeling to be moving from “good enough” to “delight and excite,” despite the challenge that entails. In fact, it’s because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I’m excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article
