Search Results

Search found 7266 results on 291 pages for 'entity relationship'.


  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific: users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about particular items or things), and we are using Python NLTK to do named-entity extraction to find likely query terms of interest. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza." There's some machine learning in there too, to do entity resolution from terms like "soggy" to all manner of adjectives expressing nastiness. My problem is that I am at a loss for how to go about scoring these results. The text being searched is split up into a list of tokens, so my initial approach would be to produce a normalized float score between 0.0 and 1.0 based on how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency meaning a greater score, etc.). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to grade search relevance across similar metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
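    A minimal sketch of one way to blend these signals into a single 0.0-1.0 score, in Python since the team is already using NLTK; the weights and the recency half-life are illustrative placeholders, not tuned values:

        import math
        import time

        def score_result(positions, doc_length, post_ts, now=None,
                         w_pos=0.5, w_freq=0.3, w_recency=0.2, half_life_days=30.0):
            """Blend term position, frequency, and recency into a 0.0-1.0 score.

            positions: 0-based indexes where the term occurs in the token list.
            doc_length: total number of tokens in the document.
            post_ts: UNIX timestamp of the post.
            """
            if not positions or doc_length == 0:
                return 0.0
            now = now or time.time()
            # Earlier mentions are worth more: average of (1 - relative position).
            pos_score = sum(1.0 - p / doc_length for p in positions) / len(positions)
            # Diminishing returns on repeated mentions of the same term.
            freq_score = 1.0 - 1.0 / (1.0 + math.log1p(len(positions)))
            # Exponential decay on age, with a configurable half-life.
            age_days = max(0.0, (now - post_ts) / 86400.0)
            recency_score = 0.5 ** (age_days / half_life_days)
            return w_pos * pos_score + w_freq * freq_score + w_recency * recency_score

    Because the weights sum to 1 and each component is normalized to [0, 1], the result stays in [0, 1]; the weights are exactly the knobs the question asks about, and in practice they are usually tuned against a hand-labeled sample of queries rather than set from first principles.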

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how best to implement this in a component-based game architecture. Say I have a player character that can be stationary, running, and swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches: (1) Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components. (2) Break the behavior up into components, each identifying a specific state. I then get a StandComponent, a WalkComponent, and a SwingComponent. To enforce the transition rules, I have to couple the components: SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare: each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character. The ideal situation would be that a designer could build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: should I lump all AI logic into one component, or break each logic state into separate components to create entity variants more easily?
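    One middle ground between the two approaches, sketched below in Python (the component names mirror the question, but the data-driven transition table is an assumption of mine, not something from the original post): keep one component per state, and let the entity own a declarative transition table, so state components never reference each other and an enemy simply omits the components it doesn't have.

        class StateComponent:
            """A behavior that updates while its state is active."""
            name = "base"
            def update(self, entity, dt):
                pass

        class Stand(StateComponent): name = "stand"
        class Walk(StateComponent): name = "walk"

        class SwingSword(StateComponent):
            name = "swing"
            duration = 0.4  # seconds during which other states are locked out

        class Entity:
            # Declarative rules: state -> set of states it may transition to.
            TRANSITIONS = {"stand": {"walk", "swing"},
                           "walk": {"stand", "swing"},
                           "swing": {"stand", "walk"}}  # reachable only once the swing ends

            def __init__(self, components):
                self.components = {c.name: c for c in components}
                self.current = "stand"
                self.lock_timer = 0.0

            def request(self, state):
                """Switch state only if the component exists here and the move is legal."""
                if self.lock_timer > 0.0:
                    return False              # mid-swing: request refused
                if state not in self.components:
                    return False              # an enemy without Walk just ignores "walk"
                if state not in self.TRANSITIONS[self.current]:
                    return False
                self.current = state
                if state == "swing":
                    self.lock_timer = self.components["swing"].duration
                return True

            def update(self, dt):
                self.lock_timer = max(0.0, self.lock_timer - dt)
                self.components[self.current].update(self, dt)

        # A sword-swinging guard that cannot walk: just leave Walk out.
        guard = Entity([Stand(), SwingSword()])

    A designer could then assemble variants by choosing components and, at most, editing the transition table, rather than touching component code.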

    Read the article

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1 with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks on to 1, 1, 3, instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this? Addition: I am using 3D vectors to represent the current position and the heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

        10, 10, 10 -> 10, 10, 15
        10, 10, 15 -> 5, 10, 15
        5, 10, 15  -> 5, 10, 10
        5, 10, 10  -> 10, 10, 10

    My 1-Z-unit translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [0 0 1 0]
        [0 1 0 0]
        [-1 0 0 0]
        [0 0 0 1]
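    The missing step is to rotate the object-space forward vector (0, 0, 1) by the heading before translating, so the translation happens in world space. A minimal sketch, assuming heading is a yaw angle about the Y axis and using the sign convention implied by the example above (heading 0 walks +Z, heading 270 walks +X); flip the sign of the sine term if your convention differs:

        import math

        def step_along_heading(pos, heading_deg, distance):
            """Move `distance` units along the heading: rotate the object-space
            forward vector (0, 0, 1) by the yaw angle, then translate."""
            yaw = math.radians(heading_deg)
            forward = (-math.sin(yaw), 0.0, math.cos(yaw))  # convention: 270 deg -> +X
            return (pos[0] + forward[0] * distance,
                    pos[1] + forward[1] * distance,
                    pos[2] + forward[2] * distance)

        # The square walk from the question:
        pos = (10.0, 10.0, 10.0)
        for heading in (0, 90, 180, 270):
            pos = step_along_heading(pos, heading, 5.0)
            print(pos)  # (10,10,15) -> (5,10,15) -> (5,10,10) -> (10,10,10)

    In matrix terms, this means transforming the object-space step vector by the rotation matrix before adding it to the position, rather than applying the raw translation matrix along the world axes.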

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    Oracle Database stored procedures can be called from the EJB business layer to perform complex, DB-specific operations. This approach avoids the overhead of frequent network round trips, which could otherwise hurt the end-user experience. A stored procedure can be invoked from EJB Session Bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more code to handle the session and the procedure's arguments, increasing the maintenance effort. EJB 3.0 introduces the @NamedStoredProcedureQuery annotation to call database stored procedures as named queries. This blog will take you through the steps to call an Oracle Database stored procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema will be used in this sample. Create an Entity from the EMPLOYEES table, then add @NamedStoredProcedureQuery above @NamedQueries in Employees.java with the definition given below:

        @NamedStoredProcedureQuery(name="Employees.increaseEmpSal",
            procedureName = "EMP_SAL_INCREMENT",
            resultClass=void.class,
            resultSetMapping = "",
            returnsResultSet = false,
            parameters = {
                @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"),
                @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")})

    Observe how the stored procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter. Expose the Entity Bean by creating a Session Facade. A business method needs to be added to the Session Bean to access the stored procedure exposed as a named query:

        public void salaryRaise(Long empId, Long salIncrease) throws Exception {
            try {
                Query query = em.createNamedQuery("Employees.increaseEmpSal");
                query.setParameter("EMPID", empId);
                query.setParameter("SALINCR", salIncrease);
                query.executeUpdate();
            } catch (Exception ex) {
                throw ex;
            }
        }

    Expose the business method through the Session Bean remote interface:

        void salaryRaise(Long empId, Long salIncrease) throws Exception;

    A Session Bean client is required to invoke the method exposed through the remote interface. Call the exposed method in the client's main method:

        final Context context = getInitialContext();
        SessionEJB sessionEJB = (SessionEJB)context.lookup("Your-JNDI-lookup");
        sessionEJB.salaryRaise(new Long(200), new Long(1000));

    Deploy the Session Bean and run the client. The salary of the employee with ID 200 will be increased by 1000.

    Read the article

  • 2D Tile Collision free movement

    - by andrepcg
    I'm coding a 3D game for a project using OpenGL, and I'm trying to do tile collision on a surface. The surface plane is split into a grid of 64x64-pixel cells, and I can simply check whether the (x,y) tile is empty or not. Besides having a grid for collision, there's still free movement inside a tile. For each entity, at the end of the update function I simply increase the position by the velocity:

        pos.x += v.x;
        pos.y += v.y;

    I already have a collision grid created, but my collide function is not great; I'm not sure how to handle it. I can check whether a collision occurs, but the way I handle it is terrible:

        int leftTile = repelBox.x / grid->cellSize;
        int topTile = repelBox.y / grid->cellSize;
        int rightTile = (repelBox.x + repelBox.w) / grid->cellSize;
        int bottomTile = (repelBox.y + repelBox.h) / grid->cellSize;
        for (int y = topTile; y <= bottomTile; ++y) {
            for (int x = leftTile; x <= rightTile; ++x) {
                if (grid->getCell(x, y) == BLOCKED) {
                    Rect colBox = grid->getCellRectXY(x, y);
                    Rect xAxis = Rect(pos.x - 20 / 2.0f, pos.y - 20 / 4.0f, 20, 10);
                    Rect yAxis = Rect(pos.x - 20 / 4.0f, pos.y - 20 / 2.0f, 10, 20);
                    if (colBox.Intersects(xAxis)) v.x *= -1;
                    if (colBox.Intersects(yAxis)) v.y *= -1;
                }
            }
        }

    If instead of reversing the direction I set the velocity to zero, then when the entity tries to get away from the wall it is still intersecting the tile and gets stuck in that position. EDIT: I've worked with FlashPunk, and it has a great function for movement and collision called moveBy. Are there any simplified implementations out there that I could check out?
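    A common fix (a sketch of the usual technique, not FlashPunk's actual moveBy source) is to resolve one axis at a time: apply the X velocity, test the moved box against blocked cells, and on a hit nudge up to the wall and zero that velocity component, then repeat for Y. The entity can then slide along walls instead of bouncing or getting stuck. The grid layout and the one-pixel resolution step here are assumptions:

        def blocked(grid, cell_size, x, y, w, h):
            """True if the axis-aligned box (x, y, w, h) overlaps any blocked cell."""
            left, top = int(x // cell_size), int(y // cell_size)
            right, bottom = int((x + w) // cell_size), int((y + h) // cell_size)
            return any(grid[cy][cx]  # truthy cell == BLOCKED
                       for cy in range(top, bottom + 1)
                       for cx in range(left, right + 1))

        def move(entity, grid, cell_size):
            """Axis-separated movement: resolve X fully, then Y."""
            nx = entity.x + entity.vx
            if blocked(grid, cell_size, nx, entity.y, entity.w, entity.h):
                step = 1 if entity.vx > 0 else -1
                # Walk up to the wall one pixel at a time, then stop.
                while not blocked(grid, cell_size, entity.x + step, entity.y,
                                  entity.w, entity.h):
                    entity.x += step
                entity.vx = 0
            else:
                entity.x = nx
            ny = entity.y + entity.vy
            if blocked(grid, cell_size, entity.x, ny, entity.w, entity.h):
                step = 1 if entity.vy > 0 else -1
                while not blocked(grid, cell_size, entity.x, entity.y + step,
                                  entity.w, entity.h):
                    entity.y += step
                entity.vy = 0
            else:
                entity.y = ny

    Because the entity is never left overlapping a tile, the "stuck on the wall" case above disappears: the next frame's movement away from the wall starts from a legal position.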

    Read the article

  • Interface (contract), Generics (universality), and extension methods (ease of use). Is it the right design?

    - by Saeed Neamati
    I'm trying to design a simple conversion framework based on these requirements: all developers should follow a predefined set of rules to convert from the source entity to the target entity; some overall policies should be applicable in a central place, without interfering with developers' code; and both the creation and the usage of converter classes should be easy. To solve these problems in C#, a thought came to my mind. I'm writing it here, though it doesn't compile at all. But let's assume that C# compiles this code. I'll create a generic interface called IConverter:

        public interface IConverter<TSource, TTarget>
            where TSource : class, new()
            where TTarget : class, new()
        {
            TTarget Convert(TSource source);
            List<TTarget> Convert(List<TSource> sourceItems);
        }

    Developers would implement this interface to create converters. For example:

        public class PhoneToCommunicationChannelConverter : IConverter<Phone, CommunicationChannel>
        {
            public CommunicationChannel Convert(Phone phone)
            {
                // conversion logic
            }

            public List<CommunicationChannel> Convert(List<Phone> phones)
            {
                // conversion logic
            }
        }

    And to make the usage of this conversion class easier, imagine that we add the static and this keywords to the methods to turn them into extension methods, and use them this way:

        List<Phone> phones = GetPhones();
        List<CommunicationChannel> channels = phones.Convert();

    However, this doesn't even compile. With those requirements, I can think of some other designs, but they each lack an aspect: either the implementation becomes more difficult or chaotic and out of control, or the usage becomes truly hard. Is this design right at all? What alternatives might I have to achieve those requirements?
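    For comparison, here is the same contract/universality/ease-of-use split sketched in Python (the language switch is only to show the shape of one workable alternative; all names are illustrative): a registry keyed by (source, target) type plays the role of the interface, and a free convert function provides the one-liner usage the extension methods were meant to give, while also being the single central place to apply overall policies.

        from typing import Callable, Type, TypeVar

        S = TypeVar("S")
        T = TypeVar("T")

        _converters: dict[tuple[type, type], Callable] = {}

        def converter(src: Type[S], dst: Type[T]):
            """Register a conversion rule; developers only write the rule itself."""
            def register(fn: Callable[[S], T]) -> Callable[[S], T]:
                _converters[(src, dst)] = fn
                return fn
            return register

        def convert(obj, dst):
            fn = _converters[(type(obj), dst)]
            return fn(obj)  # central policy hooks (logging, validation) would wrap fn here

        def convert_all(items, dst):
            return [convert(i, dst) for i in items]

        class Phone:
            def __init__(self, number): self.number = number

        class CommunicationChannel:
            def __init__(self, kind, address): self.kind, self.address = kind, address

        @converter(Phone, CommunicationChannel)
        def phone_to_channel(p: Phone) -> CommunicationChannel:
            return CommunicationChannel("phone", p.number)

        channels = convert_all([Phone("555-0100")], CommunicationChannel)

    In C#, the analogous compiling design is usually an IConverter<TSource, TTarget> implementation resolved from a registry or IoC container by a generic extension method in a static class, called as phones.ConvertTo<CommunicationChannel>(), since the target type cannot be inferred from the call site.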

    Read the article

  • Following a set of points?

    - by user1010005
    Let's assume that I have a set of waypoints that an entity should follow:

        const int Paths = 2;
        Vector2D<float> Path[Paths] = { Vector2D(100,0), Vector2D(100,50) };

    Now I define my entity's position as a 2D vector as follows:

        Vector2D<float> FollowerPosition(0,0);

    And now I would like to move the "follower" to the path at index 1:

        int PathPosition = 0; //Start with path 1

    Currently I do this:

        Vector2D<float>& Target = Path[PathPosition];
        bool Changed = false;
        if (FollowerPosition.X < Target.X) FollowerPosition.X += Vel, Changed = true;
        if (FollowerPosition.X > Target.X) FollowerPosition.X -= Vel, Changed = true;
        if (FollowerPosition.Y < Target.Y) FollowerPosition.Y += Vel, Changed = true;
        if (FollowerPosition.Y > Target.Y) FollowerPosition.Y -= Vel, Changed = true;
        if (!Changed)
        {
            PathPosition = PathPosition + 1;
            if (PathPosition >= Paths) PathPosition = 0;
        }

    This works except for one little detail: the movement is not smooth! So I would like to ask whether anyone sees anything wrong with my code. Thanks, and sorry for my English.
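    The jitter comes from moving X and Y independently at full speed: near the target each axis overshoots and flips direction every frame. The usual fix is to step along the normalized direction to the waypoint and snap when closer than one step; a sketch of that idea (in Python for brevity):

        import math

        def follow(pos, path, index, vel):
            """Advance `pos` toward path[index]; returns (new_pos, new_index)."""
            tx, ty = path[index]
            dx, dy = tx - pos[0], ty - pos[1]
            dist = math.hypot(dx, dy)
            if dist <= vel:                      # close enough: snap and advance
                return (tx, ty), (index + 1) % len(path)
            # Move `vel` units along the unit direction: no overshoot, no jitter.
            return (pos[0] + vel * dx / dist, pos[1] + vel * dy / dist), index

        pos, idx = (0.0, 0.0), 0
        path = [(100.0, 0.0), (100.0, 50.0)]
        for _ in range(60):
            pos, idx = follow(pos, path, idx, vel=5.0)

    The modulo also handles the wrap-around, replacing the index check at the end of the original code.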

    Read the article

  • Which version-management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please tell me if it is too vague or hard to understand. My question is related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, having a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it is Linux itself, a ramdisk, or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk, and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram: So I am in a dilemma whether to use the package-management methodology of Debian distributions (like Ubuntu) or a Git-repository methodology in order to roll back to previous good versions on either one SN or all the dependent SNs. The method should also make it easy to upgrade SNs along with MNs. Some of the features I am trying to achieve (a sketch of the dependency check follows this list): 1) An upgrade of even a single software entity is achievable without hindering the others. 2) Dependency checks must be done before applying a rollback or upgrade on each SN. 3) A user prompt should be given if a dependency check fails; if the user still goes ahead with the rollback, all affected SNs should get a notification to roll back their own releases (if required). 4) The binaries should be distributed to the SNs in advance so that the recovery process is faster, rather than fetching them every time from the MN. 5) Release patches from developers for bug fixes and feature enhancements can be applied to a running system. 6) Each version can be easily tracked and distinguished. Thanks
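    For the dependency checks in points 2 and 3, one workable shape is to derive the full rollback set from the state version matrix on the MN before prompting the user. A small sketch; the data layout here is entirely hypothetical:

        # Hypothetical state version matrix kept on the Master Node:
        # node -> current good state id plus the nodes whose versions it depends on.
        MATRIX = {
            "SN1": {"state": 4, "depends_on": []},
            "SN2": {"state": 4, "depends_on": ["SN1"]},
            "SN3": {"state": 3, "depends_on": ["SN1", "SN2"]},
        }

        def rollback_set(node, matrix):
            """All nodes that must roll back together with `node`:
            everything that depends on it, transitively."""
            affected, stack = set(), [node]
            while stack:
                n = stack.pop()
                if n in affected:
                    continue
                affected.add(n)
                # Any node depending on n goes into the user prompt of point 3.
                stack.extend(m for m, info in matrix.items()
                             if n in info["depends_on"])
            return affected

        print(sorted(rollback_set("SN1", MATRIX)))  # ['SN1', 'SN2', 'SN3']

    Whether the versions themselves ship as Debian packages or Git tags, this check stays the same; the packaging choice only changes how a selected state is materialized on each SN.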

    Read the article

  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique of my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over HTTP binding, JSON, and REST for internal apps as well as third-party integration. I've got a rough idea below that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build this out quickly. Here's the current idea for our structure:

    Front-end WCF routing service (spread across multiple IIS servers via a hardware load balancer):
    - Load balancing of services behind routing is handled within the routing service, probably round-robin
    - One of the services will be a token service
    - Multiple bindings per service exposed to address JSON, REST, and whatever else comes up later
    - All in/out is handled via POCO DTOs
    - Use Unity to scan for what services are available and expose them

    The front-end services behind the routing service do nothing more than expose the API and do DTO<->Entity conversion:
    - Unity injects the service implementation to allow mocking; AutoMapper handles the DTO/Entity conversion
    - Invoke WF services where a response is required immediately
    - Queue to the ESB for async WF -- the ESB will invoke the WF later

    Business logic WF layer:
    - Exposes the same API as the front-end services
    - Implements business logic
    - Wraps a transaction context where needed
    - Calls out to composite/atomic services

    Composite/atomic services:
    - Exposed as WCF
    - One service per back-end system
    - Standard atomic CRUD operations plus composite operations
    - Supports transaction context

    The questions I have are: Is the separation of concerns outlined above beneficial? The current thought is that each layer below is its own project, except the back-end stuff, where each system gets one project. The project has a service host and all the services are under a services folder. Interfaces live in a separate project at each layer. DTOs and entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and to overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?

    Read the article

  • [LINQ] Master – Detail Same Record (II)

    - by JTorrecilla
    In my previous post I introduced my problem, but I didn't explain the problem with Entity Framework. When you try the solution indicated there, you will get the following error: "LINQ to Entities does not recognize the method 'System.String Join(System.String, System.Collections.Generic.IEnumerable`1[System.String])', and this method cannot be translated into a store expression." The query that produces that error was:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        let Nombres = string.Join(",", Detalle)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Nombres
                        }).ToList();
        grid.DataSource = consulta;

    Why is this error happening? It happens when the query cannot be translated into T-SQL. Solutions? To avoid that error, we need to execute the query in two steps:

        var consulta = (from TCabecera cab in contexto_local.TCabecera
                        let Detalle = (from TDetalle detalle in cab.TDetalle
                                       select detalle.Nombre)
                        select new
                        {
                            cab.Campo1,
                            cab.Campo2,
                            Detalle
                        }).ToList();
        var consulta2 = (from dato in consulta
                         let Nombres = string.Join(",", dato.Detalle)
                         select new
                         {
                             dato.Campo1,
                             dato.Campo2,
                             Nombres
                         });
        grid.DataSource = consulta2.ToList();

    Curiously, this problem happens with Entity Framework, but it cannot be reproduced in LINQ to SQL, which works fine in a single step. Hope it's helpful. Best regards

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness scores, etc. Some of this information is computationally intensive to produce and/or involves complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: 1. A scheduled task in app B that connects to an API of app A that provides some aggregated data; app B then stores it in Redis and uses that to render pages. Cons: it performs complex calculations within a web request and requires lots of work (e.g. API server and client, storing, etc.); pros: the business logic still lives in app A. 2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B. 3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what other pros/cons there are for the solutions I've suggested.
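    For the storage side of options 1 and 2, the Redis piece can stay very small. A sketch using redis-py, with hypothetical key names, storing one JSON blob per metric per month so the dashboard can fetch 12 months in a single round trip:

        import json
        import redis  # redis-py

        r = redis.Redis(host="localhost", port=6379, db=0)

        def store_month(metric, month, values):
            """Store one month's aggregate under e.g. 'stats:sales:2013-06'."""
            r.set("stats:%s:%s" % (metric, month), json.dumps(values))

        def last_months(metric, months):
            """Fetch all requested aggregates with one MGET."""
            raw = r.mget(["stats:%s:%s" % (metric, m) for m in months])
            return [json.loads(v) if v else {} for v in raw]

        # The scheduled task in app A would call store_month() every 10 minutes
        # for the current month; app B only ever reads.
        store_month("sales", "2013-06", {"total": 12345.0, "orders": 321})

    Note this sketch is only the storage contract; where the aggregation runs (app A or app B) is exactly the trade-off the three options above describe.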

    Read the article

  • Problems using HibernateTemplate: java.lang.NoSuchMethodError: org.hibernate.SessionFactory.openSession()Lorg/hibernate/classic/Session;

    - by user2104160
    I am quite new to the Spring world and I am going crazy trying to integrate Hibernate into a Spring application using the HibernateTemplate support class. I have the following class to persist to a database table:

        package org.andrea.myexample.HibernateOnSpring.entity;

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.Table;

        @Entity
        @Table(name="person")
        public class Person {

            @Id
            @GeneratedValue(strategy=GenerationType.AUTO)
            private int pid;
            private String firstname;
            private String lastname;

            public int getPid() { return pid; }
            public void setPid(int pid) { this.pid = pid; }
            public String getFirstname() { return firstname; }
            public void setFirstname(String firstname) { this.firstname = firstname; }
            public String getLastname() { return lastname; }
            public void setLastname(String lastname) { this.lastname = lastname; }
        }

    Next to it I created an interface named PersonDAO in which I only define my CRUD methods. I then implemented this interface with a class named PersonDAOImpl, which also extends the Spring support class HibernateDaoSupport:

        package org.andrea.myexample.HibernateOnSpring.dao;

        import java.util.List;
        import org.andrea.myexample.HibernateOnSpring.entity.Person;
        import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

        public class PersonDAOImpl extends HibernateDaoSupport implements PersonDAO {

            public void addPerson(Person p) {
                getHibernateTemplate().saveOrUpdate(p);
            }

            public Person getById(int id) {
                // TODO Auto-generated method stub
                return null;
            }

            public List<Person> getPersonsList() {
                // TODO Auto-generated method stub
                return null;
            }

            public void delete(int id) {
                // TODO Auto-generated method stub
            }

            public void update(Person person) {
                // TODO Auto-generated method stub
            }
        }

    (At the moment I am trying to implement only the addPerson() method.) Then I created a main class to test inserting a new object into the database table:

        package org.andrea.myexample.HibernateOnSpring;

        import org.andrea.myexample.HibernateOnSpring.dao.PersonDAO;
        import org.andrea.myexample.HibernateOnSpring.entity.Person;
        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class MainApp {

            public static void main(String[] args) {

                ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
                System.out.println("Context retrieved: " + context);

                Person persona1 = new Person();
                persona1.setFirstname("Pippo");
                persona1.setLastname("Blabla");
                System.out.println("Created persona1: " + persona1);

                PersonDAO dao = (PersonDAO) context.getBean("personDAOImpl");
                System.out.println("Created dao object: " + dao);

                dao.addPerson(persona1);
                System.out.println("persona1 saved in the database");
            }
        }

    As you can see, PersonDAOImpl extends HibernateDaoSupport and uses its HibernateTemplate, so I think it has to take care of setting up the sessionFactory...
    The problem is that when I try to run this MainApp class I obtain the following exception:

        Exception in thread "main" java.lang.NoSuchMethodError: org.hibernate.SessionFactory.openSession()Lorg/hibernate/classic/Session;
            at org.springframework.orm.hibernate3.SessionFactoryUtils.doGetSession(SessionFactoryUtils.java:323)
            at org.springframework.orm.hibernate3.SessionFactoryUtils.getSession(SessionFactoryUtils.java:235)
            at org.springframework.orm.hibernate3.HibernateTemplate.getSession(HibernateTemplate.java:457)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:392)
            at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
            at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:737)
            at org.andrea.myexample.HibernateOnSpring.dao.PersonDAOImpl.addPerson(PersonDAOImpl.java:12)
            at org.andrea.myexample.HibernateOnSpring.MainApp.main(MainApp.java:26)

    Why do I have this problem, and how can I solve it? To be complete, I also include my pom.xml containing my dependency list:

        <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>org.andrea.myexample</groupId>
            <artifactId>HibernateOnSpring</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <packaging>jar</packaging>
            <name>HibernateOnSpring</name>
            <url>http://maven.apache.org</url>
            <properties>
                <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            </properties>
            <dependencies>
                <dependency>
                    <groupId>junit</groupId>
                    <artifactId>junit</artifactId>
                    <version>3.8.1</version>
                    <scope>test</scope>
                </dependency>
                <!-- Spring Framework dependencies -->
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-core</artifactId>
                    <version>3.2.1.RELEASE</version>
                </dependency>
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-beans</artifactId>
                    <version>3.2.1.RELEASE</version>
                </dependency>
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-context</artifactId>
                    <version>3.2.1.RELEASE</version>
                </dependency>
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-context-support</artifactId>
                    <version>3.2.1.RELEASE</version>
                </dependency>
                <dependency>
                    <!-- Used by Hibernate 4 for LocalSessionFactoryBean -->
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-orm</artifactId>
                    <version>3.2.0.RELEASE</version>
                </dependency>
                <!-- AOP dependencies -->
                <dependency>
                    <groupId>cglib</groupId>
                    <artifactId>cglib</artifactId>
                    <version>2.2.2</version>
                </dependency>
                <!-- Persistence management dependencies -->
                <dependency>
                    <!-- Apache BasicDataSource -->
                    <groupId>commons-dbcp</groupId>
                    <artifactId>commons-dbcp</artifactId>
                    <version>1.4</version>
                </dependency>
                <dependency>
                    <!-- MySQL database driver -->
                    <groupId>mysql</groupId>
                    <artifactId>mysql-connector-java</artifactId>
                    <version>5.1.23</version>
                </dependency>
                <dependency>
                    <!-- Hibernate -->
                    <groupId>org.hibernate</groupId>
                    <artifactId>hibernate-core</artifactId>
                    <version>4.1.9.Final</version>
                </dependency>
            </dependencies>
        </project>

    Read the article

  • CRMIT Solutions' CRM++ Asterisk Telephony Connector Achieves Oracle Validated Integration with Oracle Sales Cloud

    - by Richard Lefebvre
    To achieve Oracle Validated Integration, Oracle partners are required to meet a stringent set of requirements that are based on the needs and priorities of customers. Based on a Telephony Application Programming Interface (TAPI) framework, the CRM++ Asterisk Telephony Connector integrates Asterisk telephony solutions with Oracle® Sales Cloud. "The CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud showcases CRMIT Solutions' focus and commitment to extend our Customer Experience (CX) expertise to our existing and potential customers," said Vinod Reddy, Founder & CEO, CRMIT Solutions. "Oracle® Validated Integration applies a rigorous technical review and test process," said Kevin O'Brien, senior director, ISV and SaaS Strategy, Oracle®. "Achieving Oracle® Validated Integration through Oracle® PartnerNetwork gives our customers confidence that the CRM++ Asterisk Telephony Connector for Oracle® Sales Cloud has been validated and that the products work together as designed. This helps reduce deployment risk and improves the user experience for our joint customers." CRM++ is a suite of native customer experience solutions for Oracle® CRM On Demand, Oracle® Sales Cloud and Oracle® RightNow Cloud Service. With more than 3,000 users, the CRM++ framework helps extend the Customer Experience (CX) and the power of customer relationship management features, including Email WorkBench, Self Service Portal, Mobile CRM, Social CRM and Computer Telephony Integration. About CRMIT Solutions: CRMIT Solutions is a pioneer in delivering SaaS-based customer experience (CX) consulting and solutions. With more than 200 certified customer relationship management (CRM) consultants and more than 175 successful CRM deployments globally, CRMIT Solutions offers a range of CRM++ applications for accelerated deployments, including various rapid implementation and migration utilities for Oracle® Sales Cloud, Oracle® CRM On Demand, Oracle® Eloqua, Oracle® Social Relationship Management and Oracle® RightNow Cloud Service. About Oracle Validated Integration: Oracle Validated Integration, available through the Oracle PartnerNetwork (OPN), gives customers confidence that the integration of complementary partner software products with Oracle Applications and specific Oracle Fusion Middleware solutions has been validated and that the products work together as designed. This can help customers reduce risk, improve system implementation cycles, and provide for smoother upgrades and simpler maintenance. Oracle Validated Integration applies a rigorous technical process to review partner integrations. Partners who have successfully completed the program are authorized to use the "Oracle Validated Integration" logo. For more information, please visit Oracle.com at http://www.oracle.com/us/partnerships/solutions/index.html.

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In that process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.

    INTRODUCTION

    A stoplist is used to exclude, i.e., blacklist, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine whether you missed any. Since a corpus of documents could contain thousands of words, this can be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as may the term "Data." The term "Mining" is more unique, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency (tf-idf) metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results of this technique are often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST

    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
    All text has been converted to upper case, and values in the TERM column are intended to be mapped to the token in the TOKEN column:

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);
        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
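    For reference, the tf-idf weight mentioned above is simple to compute once tokens are extracted. An illustration of the classic tf x log(N/df) form in Python (Oracle Text computes its own variant internally; this is only to make the metric concrete):

        import math
        from collections import Counter

        def tf_idf(docs):
            """docs: list of token lists. Returns one {token: weight} dict per doc."""
            n = len(docs)
            df = Counter(tok for doc in docs for tok in set(doc))  # document frequency
            weights = []
            for doc in docs:
                tf = Counter(doc)
                weights.append({t: (c / len(doc)) * math.log(n / df[t])
                                for t, c in tf.items()})
            return weights

        docs = [["ORACLEDATAMINING", "CRM", "CRM"],
                ["CRM", "JAVA"],
                ["ORACLEDATAMINING", "JAVA", "JAVA"]]
        print(tf_idf(docs)[0])  # CRM (frequent in this doc) outweighs ORACLEDATAMINING

    A term appearing in every document gets log(1) = 0, which is exactly why ubiquitous words like "Oracle" carry no weight under this metric.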

    Read the article

  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer's Guide" book is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh has gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF then, no matter how much of an ADF expert you are, this book will make you happy, as it provides the detailed information you always wished to have. If you are new to Oracle ADF, then this book alone doesn't get you flying but, if you have some Java background, it accelerates your learning big, big, big times. Chapter 1 is an introduction to Oracle ADF and not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh really is good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADF BC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change-history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about view objects. Comparable to entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. Lists of values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams for you to understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about application modules. Besides what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, then this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what the metadata files involved are. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g. managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and ADF Controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean it is not worth reading (I, for example, enjoyed the controller talk very much). Chapter 10 is an advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, which is a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman is a great book and addition for all Oracle ADF developers and those who want to become one. Frank

    Read the article

  • Selling Solutions, Not Products

    - by David Dorf
    When I think about next-generation retailers, the names that come to mind are Apple, Whole Foods, Lulu Lemon, and IKEA. They may not be the biggest retailers, but they are certainly growing fast. Success is never defined by just one dimension, and these retailers execute well across many dimensions, but the one that stands out for me is customer experience. These stores feel...approachable...part of the community...local. Customers are not intimidated to ask questions, and staff seem to go out of their way to help. What makes these retailers stand out in the industry? They aren't selling products -- they're selling solutions. Think about that. You think you're going to the Apple store to buy a phone, but you're actually buying a communications solution that handles much, much more. If you carry an iPhone, your life has changed. The way you do things is different. The impact goes far beyond a simple phone. Solutions start with a problem, which is why these retailers greet customers with "What brought you in today?" or "Can I answer any questions for you?" Good retailers establish a relationship, even if it lasts only a few minutes. You don't walk into Whole Foods looking for cans of soup. You are looking for meals: healthy snacks, interesting lunches, exotic dinners. It's a learning experience where you might discover solutions to problems you didn't know you had. Mention what foods you like, and you'll get a list of similar items you had not considered. I didn't know I needed a closet organizer until I visited an IKEA and learned about all the options. They were able to customize the solution to meet my needs, and now I'm much more organized. One of the differences between selling products and selling solutions is training. Visit any of these retailers' sites and you'll see a long list of in-store events for the benefit of customers. You can buy exercise clothing from Lulu Lemon and also learn new yoga techniques, meet like-minded people, and branch off to other fitness regimes via their ambassadors. You can visit the Genius Bar at Apple, eat lunch at IKEA, and learn to cook at Whole Foods. These retailers are making an investment in a relationship with their customers. They are showing loyalty to their customers before asking for it back. In the long run, this strategic approach will outlive any scan-and-bag mentality.

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled "independent" software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich's "Grand Unified Theory of Software Design" talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it. In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is: 1. The common birth of two or more at the same time. 2. That which is born or produced with another. 3. The act of growing together. The brief Wikipedia page about Connascent Software Components says that: two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. The term was introduced to the software world in Meilir Page-Jones' 1996 book "What Every Programmer Should Know About Object-Oriented Design". The middle third of that book is the author's proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as "Fundamentals of Object-Oriented Design in UML". Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, "is worth the price of the entire book". (The price of the entire book, by the way, is not much -- I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I'm looking forward to getting the book and learning about connascence from the original source.) Meanwhile, here's my summary of Weirich's summary of Page-Jones' writings about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name: when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don't need to go changing them later.

    Connascence of Type: when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it's an issue with evil JavaScript type coercion.

    Connascence of Meaning: when multiple components must agree on the meaning of particular values, e.g. that "1" means normal customer and "2" means preferred customer. The solution to this is to use constants or enums instead of "magic" strings or numbers, which reduces the coupling by changing the connascence form from "meaning" to "name".

    Connascence of Position: when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.:

        eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification");

    The more parameters there are, the stronger the connascence of position is between the component and its callers. In the example above, it's not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings is the subject vs. the body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class. This "introduce parameter object" refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two "rules" of connascence: the Rule of Degree (the acceptability of connascence is related to the degree of its occurrence) and the Rule of Locality (stronger forms of connascence are more acceptable if the elements involved are closely related). For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm: when multiple components must agree on a particular algorithm. Be DRY -- Don't Repeat Yourself. If you have "cloned" code in multiple locations, refactor it into a common function.

    Those are the "static" forms of connascence. There are also "dynamic" forms, including...

    Connascence of Execution: when the order of execution of multiple components is important. Consumers of your class shouldn't have to know that they have to call an .Initialize method before it's safe to call a .DoSomething method.

    Connascence of Timing: when the timing of the execution of multiple components is important. I'll have to read up on this one when I get the book, but I assume it's largely about threading.

    Connascence of Identity: when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the "Bob" Employee class: if you call the .RaiseSalary method on one and then the .Pay method on the other, does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I'll make corrections if necessary and share any other useful information I might learn. See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
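    To make the "introduce parameter object" refactoring above concrete, here is the email example sketched in Python (the post's snippet is C#-ish; the field names below are illustrative). The refactoring weakens connascence of position down to connascence of name:

        from dataclasses import dataclass

        # Before: connascence of position -- callers must memorize the argument order.
        def send(from_addr, to_addr, subject, body): ...

        @dataclass
        class Email:
            """After: a parameter object; callers agree only on field names."""
            from_addr: str
            to_addr: str
            subject: str
            body: str

        def send_email(msg: Email) -> None:
            print("To %s: %s" % (msg.to_addr, msg.subject))

        send_email(Email(from_addr="sender@example",
                         to_addr="receiver@example",
                         subject="Your order is complete",
                         body="Order completion notification"))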

    Read the article

  • How to propagate an HTTP response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases, and some of those use cases require propagating the HTTP response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example, which we will try to accomplish in this tutorial. We will demonstrate this feature using Oracle Service Bus (11.1.1.3.0). We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 to write a client for the demonstration. First we create a simple JSP which always sets the response code to 304. The JSP snippet will look like:

        <%@ page language="java"
            contentType="text/xml; charset=UTF-8"
            pageEncoding="UTF-8" %>
        <%
            System.out.println("Servlet setting Responsecode=304");
            response.setStatus(304);
            response.flushBuffer();
        %>

    We will now deploy this JSP on a WebLogic server with URI=http://localhost:7021/reponsecode/ For this JSP we will create a simple Any XML business service (BS). We will also create a proxy service as shown below. Once the proxy is created, we configure the pipeline for the proxy to use a route node, which invokes the business service (JSPCaller) created in the first place. We then create an error handler for the route node and add a stage. When the HTTP business service sends a request, the JSP sends the response back. If the response code is not 200, the HTTP business service will treat that as an error, and the error handler configured above is invoked. We will print $outbound to show the response code sent by the JSP; the next actions follow. To test this, I created a simple client:

        import org.apache.http.Header;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpHost;
        import org.apache.http.HttpResponse;
        import org.apache.http.HttpVersion;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.conn.ClientConnectionManager;
        import org.apache.http.conn.scheme.PlainSocketFactory;
        import org.apache.http.conn.scheme.Scheme;
        import org.apache.http.conn.scheme.SchemeRegistry;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
        import org.apache.http.params.BasicHttpParams;
        import org.apache.http.params.HttpParams;
        import org.apache.http.params.HttpProtocolParams;
        import org.apache.http.util.EntityUtils;

        /**
         * @author MNEELAPU
         */
        public class TestProxy304 {

            public static void main(String arg[]) throws Exception {
                HttpHost target = new HttpHost("localhost", 7021, "http");
                // general setup
                SchemeRegistry supportedSchemes = new SchemeRegistry();
                // Register the "http" protocol scheme, it is required
                // by the default operator to look up socket factories.
                supportedSchemes.register(new Scheme("http",
                        PlainSocketFactory.getSocketFactory(), 7021));
                // prepare parameters
                HttpParams params = new BasicHttpParams();
                HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
                HttpProtocolParams.setContentCharset(params, "UTF-8");
                HttpProtocolParams.setUseExpectContinue(params, true);
                ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,
                        supportedSchemes);
                DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);
                HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");
                System.out.println("executing request to " + target);
                HttpResponse rsp = httpclient.execute(target, req);
                HttpEntity entity = rsp.getEntity();
                System.out.println("----------------------------------------");
                System.out.println(rsp.getStatusLine());
                Header[] headers = rsp.getAllHeaders();
                for (int i = 0; i < headers.length; i++) {
                    System.out.println(headers[i]);
                }
                System.out.println("----------------------------------------");
                if (entity != null) {
                    System.out.println(EntityUtils.toString(entity));
                }
                // When the HttpClient instance is no longer needed,
                // shut down the connection manager to ensure
                // immediate deallocation of all system resources.
                httpclient.getConnectionManager().shutdown();
            }
        }

    On compiling and executing this, we see the output below on STDOUT, which clearly indicates that the response code was propagated from the business service to the proxy service:

        executing request to http://localhost:7021
        ----------------------------------------
        HTTP/1.1 304 Not Modified
        Date: Tue, 08 Jun 2010 16:13:42 GMT
        Content-Type: text/xml; charset=UTF-8
        X-Powered-By: Servlet/2.5 JSP/2.1
        ----------------------------------------
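    As a quicker smoke test than the HttpClient program above, the same check can be written in a few lines of Python with the requests library (the path matches the Java client; this assumes the proxy is deployed at that URI):

        import requests

        # Hit the OSB proxy and print the status code propagated from the JSP.
        resp = requests.get("http://localhost:7021/HttpResponseCode/ProxyExposed")
        print(resp.status_code)                 # expect 304
        print(resp.headers.get("Content-Type"))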

    Read the article

  • SQL Windowing screencast session for Cuppa Corner - rolling totals, data cleansing

    - by tonyrogerson
    In this 10-minute screencast I go through the basics of what I term windowing, which is basically the technique of filtering to a set of rows given a specific value; for instance, a sub-query that aggregates, or a join that returns more than just one row (for instance on a one-to-one relationship). http://sqlserverfaq.com/content/SQL-Basic-Windowing-using-Joins.aspx SQL below... USE tempdb go CREATE TABLE RollingTotals_Nesting ( client_id int not null, transaction_date date not null, transaction_amount...(read more)

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance: the ability of a vendor to grow, expand, develop, and bring its customers along with it. So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. Its annual investment has run at almost $3 billion in organic product development, and Oracle's well-publicized acquisitions and mergers have been supplemental to that R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at its tangible track record of enhancement. Consider the PeopleSoft enterprise business platform in the six years since Oracle acquired it; each item below pairs the product or technology enhancement with its customer or user impact:
    - Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 and 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.
    - Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360-degree view of the constituent is now delivered, and the concept of a single-stop Student Services Center is in CRM 9.1, with tight integration to PeopleSoft Campus Solutions.
    - Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing, Time & Labor, and Payroll serves the labor distribution requirements for Grants / Sponsored Research.
    - Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all-new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.
    - Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, business goals can flow down to become tangible objectives per employee.
    - Succession Planning / Workforce Development: New in HCM 9.0 and enhanced in 9.1 is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, in which employees identify their career needs, plans, preferences, and interests, which ultimately increases the integrity of the succession planning process.
    - Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance, delivering 9 dashboards and over 200 reports. It gives HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.
    - Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed Real-Time Budget Checking provides the customer with an updated snapshot of their budget and encumbrances at any given time.
    - Position Budgeting with Hyperion: Hyperion Planning products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.
    - Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing; drag-and-drop pagelets; a new menu structure; navigation pagelets; modal popup message windows; favorites and recently used links; type-ahead; drag-and-drop grid columns and pop-out grids.
    - Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.
    - PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.
    - Hosting for PeopleSoft applications: A solid new deployment option: the Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.
    - Business Process Outsourcing (BPO) for HCM / Payroll functions: A partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and third-party file transfer, payroll tax handling, etc.
    - Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.
    - Golden person data model across all campus applications: The Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, and CRM.
    Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs; PeopleSoft just delivered the HCM 2010 Feature Pack, and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases. For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011.
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.
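
    Returning to the enhancement list above, the commitment-control idea behind Real-Time Budget Checking can be made concrete with a small sketch. The following is a minimal, hypothetical illustration in plain Python (not PeopleSoft code) of the pass/fail logic of a budget check against a multi-year funding line; the data model, names, and figures are all assumptions for illustration.

```python
# Minimal sketch of a commitment-control ("budget checking") rule.
# Illustrative only: not PeopleSoft code; names and figures are assumptions.
from dataclasses import dataclass

@dataclass
class BudgetLine:
    budget: float      # total funded amount, e.g. a multi-year grant
    encumbered: float  # committed but not yet spent, e.g. future salary
    expended: float    # already spent

    @property
    def available(self) -> float:
        # The "updated snapshot" a real-time budget check reports:
        # funding less both commitments and actual spend.
        return self.budget - self.encumbered - self.expended

    def check(self, amount: float) -> bool:
        # A proposed new encumbrance passes only if funds remain.
        return amount <= self.available

grant = BudgetLine(budget=300_000.0, encumbered=120_000.0, expended=50_000.0)
print(grant.available)       # 130000.0
print(grant.check(100_000))  # True  -- transaction passes
print(grant.check(150_000))  # False -- would overspend the grant
```

    The point of the snapshot is that the available balance is derived from encumbrances as well as actual spend, which is what a multi-year grant encumbrance process requires.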

    Read the article

  • Avoid the “Social Silo” - Learn Why and How

    - by Brian Dayton
    I’m not going to spend any more real estate than needed on this—social media is big. Facebook hit the billion-user mark in October; that’s 1 out of every 7 humans on the planet. This past summer (in the Northern Hemisphere), Twitter passed the 400 million tweets/day mark. The list of social properties and data points goes on and on.
    With social, your customer, prospect, or constituent has pervasive access—through mobile—to a global audience, the ability to influence friends, friends of friends, and even people they will never meet. They also have the unique opportunity to forge a deeper relationship with your business—telling you what they like, what they don’t like, how you can help, and what they’d like to see more of. Are you listening?
    What’s the Bottom Line for Business?
    Businesses need to be where their customers are—on social properties. They need to be available and responsive in those channels—24x7x365. They need to engage and communicate in new ways—sometimes in less than 140 characters, and with empathy, not a one-way megaphone. Finally, businesses need to look at social as an extension of their existing business practices, not as a siloed communication channel limited to marketing.
    Social Can’t Be a Silo – Learn Why @ Oracle CloudWorld
    When a business is on social networks, it represents the whole business.
    That’s how a customer, constituent, partner, or potential candidate sees it. Organizations that have moved on the opportunity to build closer relationships through social marketing have already taken the first step. Social Selling, Service, eCommerce, and Recruiting are external-facing opportunities that leading organizations are moving on right now. This strategy, one of weaving social into and across your business processes—and leveraging social concepts and technologies for internal collaboration—is something you can learn about during an Oracle CloudWorld event in a city near you. You’ll hear and see social relationship management concepts, best practices, and recommendations woven into topics, discussions, and demonstrations throughout the event—from Marketing and Sales to Service and Human Resources.
    Stay Tuned and Avoid Potholes
    By all indications social is here to stay, but it’s moving fast and social business strategies are evolving rapidly. At Oracle CloudWorld you’ll also get the opportunity to learn how to avoid some of the potholes on the road to #socialbusiness. Stay tuned to this blog: in future posts I’ll cover some of those potholes, including the challenges of Social@Scale and Parallel Processes. Jump-start your social business strategy, or learn how to refine and expand what you’re doing already, at Oracle CloudWorld. Want to learn more about what Oracle is doing in social? Check out www.oracle.com/social or, if you’re looking for a quick read, my co-worker Pat Ma has a great post on this blog summarizing some popular Social Relationship Management use cases.

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than they appear. The two concepts intersect at various points.
    BigData and Return on Investment of Marketing Campaigns
    In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how “big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their brand”. Regression analysis applied to the big data volumes coming from social media will replace the failed attempts to justify marketing investments on social media in terms of followers and likes, he continues: “the measurement models applied by marketers on TV campaigns don’t work on social”. We need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement.
    Social CRM and BigData
    The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that let us identify new customer niches and communities and define appropriate strategies for contacting potential customers. Gartner says the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, analyst at Forrester Research, “social customer relationship management is hard” (The Social CRM Arms Race Heats Up). To succeed, brands need three things: investment in new social tools, investment in consultancy, and investment in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy or cheap win. But what are the customer benefits of such investments?
    Big Data and Brand Engagement
    Time is the most valuable asset of today’s consumers: tired of information overload, exhausted by terabytes of offerings, anxious because they do not get the same fast multichannel experience from their service marketers or preferred goods providers that they find on their social media. Yes, I know you have read this before; me too. But it is real. The motto of the Customer Experience philosophy, providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers’ preferences and context, for which BigData analysis is another imperative.
    In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.
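
    To make the regression idea above tangible, here is a toy sketch in Python on synthetic data; the signals, coefficients, and sample sizes are invented for illustration, not a real measurement model. The shape of the idea: fit an outcome such as conversions against social signals, so each signal gets an estimated effect instead of a raw follower or like count.

```python
# Toy regression on synthetic social-media signals. Illustrative only:
# the data, signal names, and effect sizes are invented assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 1_000  # pretend observations, e.g. campaign-days

# Synthetic predictors: mention volume, share rate, mean sentiment.
mentions = rng.poisson(200, n).astype(float)
share_rate = rng.beta(2, 20, n)
sentiment = rng.normal(0.1, 0.3, n)

# Synthetic outcome: conversions driven mostly by shares and sentiment.
conversions = (0.02 * mentions + 900 * share_rate
               + 80 * sentiment + rng.normal(0, 5, n))

X = np.column_stack([mentions, share_rate, sentiment])
model = LinearRegression().fit(X, conversions)

# Coefficients estimate how much each signal moves the outcome --
# the attribution that follower and like counts alone cannot give.
print(dict(zip(["mentions", "share_rate", "sentiment"], model.coef_.round(2))))
print("R^2:", round(model.score(X, conversions), 3))
```

    On real data, the same shape of model (with appropriate controls) is what lets a marketer say which social signals actually move brand outcomes, which is Dachis’s point.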

    Read the article

  • Hosted CRM Solutions from a Manager's Perspective

    CRM, or Customer Relationship Management, is a method or process used to understand customers' behavior and needs, and it helps in developing a stronger relationship with them. This is only a defi... [Author: James Wong - Computers and Internet - April 29, 2010]

    Read the article
