Search Results

Search found 298 results on 12 pages for 'billion'.


  • Image Drawing on UIView

    - by user1180261
    I'm trying to create an application where I can draw a lot of pictures, each at a specific point, on one view. For every image I have the origin point at which it must be drawn, plus its width and height. For example: I have 2 billion JPEG images, and for each image a specific origin point and size. Each second I need to draw 20-50 images onto the view at their specific points. I have already tried to solve this in the following way:

        UIGraphicsBeginImageContextWithOptions(self.previewScreen.bounds.size, YES, 0);
        [self.previewScreen.image drawAtPoint:CGPointMake(0, 0)];
        [image drawAtPoint:CGPointMake(nRect.left, nRect.top)];
        UIImage *imagew = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [self.previewScreen setImage:imagew];

    But with this solution I get very high latency when displaying the images, and heavy CPU usage. WBR Maxim Tartachnik


  • What happens when auto_increment on integer column reaches the max_value in databases?

    - by Sanoj
    I am implementing a database application and I will use both JavaDB and MySQL as databases. I have an ID column in my tables of type integer, and I use the databases' auto_increment function for its value. But what happens when I get more than 2 (or 4) billion posts and integer is no longer enough? Does the integer overflow and continue, or is an exception thrown that I can handle? Yes, I could change to long as the datatype, but how do I check when that is needed? And I think there is a problem with the LAST_INSERT_ID() functions if I use long as the datatype for the ID column.
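
    For reference, a minimal sketch of what MySQL (specifically) does at the limit; the table name here is made up, so try it against a throwaway schema. When an AUTO_INCREMENT column reaches its maximum, the next insert attempts to reuse that maximum value and fails with a duplicate-key error rather than wrapping around:

        CREATE TABLE ai_test (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
        );
        ALTER TABLE ai_test AUTO_INCREMENT = 4294967295;  -- max of INT UNSIGNED
        INSERT INTO ai_test () VALUES ();  -- succeeds: id = 4294967295
        INSERT INTO ai_test () VALUES ();  -- fails with a duplicate-key error; no wraparound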


  • How to keep an old VB6 application running in Windows Vista and Windows 7?

    - by MusiGenesis
    I have an old VB6 app which I'm still trying to support. A few users have reported weird crashes when running the app in Vista or Windows 7. The log files don't show anything after one of these crashes, but the customers report that the error message said "OLE something ...", if they saw anything at all. I've never been able to reproduce these crashes while running the program on my own Vista or Windows 7 boxes, so I have essentially no information on what the problem is. My suspicion is that it's a problem with their versions of one or more of the umpteen billion DLLs that a VB6 application is dependent on. The app also uses lame_enc.dll, which introduces a few more dependencies. I'm guessing this is a common problem with VB6 apps (although it's possible that I just sucked as a programmer 10 years ago). Is there some magical installer/updater out there that makes sure all the VB6 dependencies are what they need to be for a VB6 app to function properly?


  • What are the Worst Software Project Failures Ever?

    - by Warren P
    Is there a good list of "worst software project failures ever" in the history of software development? For example, in Canada a "gun registry" project spent around two billion dollars (http://en.wikipedia.org/wiki/Gun_registry). This is, of course, insane, even if the final product "sort of worked". I have heard of an FBI case file system for which there have been several attempts at a rewrite, all of them failures so far. There is a book on the subject (Software Runaways). There doesn't seem to be a software "boondoggle" list or "fiasco" list on Wikipedia that I can see. (Update: Therac-25 would be the 'winner' of this question, except that I was internally thinking more of software projects that had mainly software as their deliverable, as opposed to firmware projects like Therac-25, where the hardware and firmware together are capable of killing people. In terms of pure software monetary debacles, which was my intended question, there are several contenders.)


  • SQL query task: is there a better solution?

    - by Sirius Lampochkin
    There is a table of currency rates in MS SQL Server 2005:

        ID | CURR | RATE | DATE
        1  | USD  | 30   | 01.10.2010
        3  | GBP  | 45   | 07.10.2010
        5  | USD  | 31   | 08.10.2010
        7  | GBP  | 46   | 09.10.2010
        9  | USD  | 32   | 12.10.2010
        11 | GBP  | 48   | 03.10.2010

    Rates are updated in real time, and there are more than 1 billion rows in the table. I need to write a SQL query which will return the latest rate for each currency. My solution is:

        SELECT c.[id], c.[curr], c.[rate], c.[date]
        FROM [curr_rate] c,
             (SELECT curr, MAX(date) AS rate_date
              FROM [curr_rate]
              GROUP BY curr) t
        WHERE c.date = t.rate_date
          AND c.curr = t.curr
        ORDER BY c.[curr] ASC

    Is it possible to write a query without sub-queries and joins with derived tables?
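
    For comparison, a sketch of the same query written with ROW_NUMBER(), which SQL Server 2005 supports; it still uses a derived table, but it reads the base table only once (column names taken from the question):

        SELECT id, curr, rate, [date]
        FROM (SELECT id, curr, rate, [date],
                     ROW_NUMBER() OVER (PARTITION BY curr
                                        ORDER BY [date] DESC) AS rn
              FROM curr_rate) t
        WHERE rn = 1
        ORDER BY curr;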


  • How to force oracle to use index range scan?

    - by wsb3383
    Hi, all. I have a series of extremely similar queries that I run against a table of 1.4 billion records (with indexes). The only problem is that at least 10% of those queries take 100x longer to execute than the others. I ran an explain plan and noticed that for the fast queries (roughly 90%) Oracle is using an index range scan (on the index I created), while for the slow ones it is using a full index scan. Is there a way to force Oracle to do an index range scan? Thanks!
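
    One common approach is an optimizer hint that names the index; whether the range scan is actually cheaper for the slow queries is worth confirming with an explain plan first. A sketch follows, in which the table name, alias, index name, and predicate are all made up:

        SELECT /*+ INDEX(t created_idx) */ t.*
        FROM   big_table t
        WHERE  t.created BETWEEN :start_date AND :end_date;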


  • MySQL - how long to create an index?

    - by user293594
    Can anyone tell me how adding a key scales in MySQL? I have 500,000,000 rows in a table, trans, with columns i (INT UNSIGNED), j (INT UNSIGNED), nu (DOUBLE), A (DOUBLE). I try to index a column, e.g. ALTER TABLE trans ADD KEY idx_A (A); and I wait. For a table of 14,000,000 rows it took about 2 minutes to execute on my MacBook Pro, but for the whole half a billion rows it's been taking 15 hours and counting. Am I doing something wrong, or am I just being naive about how indexing a database scales with the number of rows?


  • How to find a programmer for my project?

    - by Al
    I'm building a web application that will generate monthly subscription fees, but I've quickly realised I'm going to need some help with the project to finish it this century. I don't have any money upfront for a freelancer, and every website I've found takes bids for project work. The tasks that need doing are flexible too, because I can do whatever the other coder doesn't want to. I'm also happy to guide the developer and offer tips for performance/security/etc. My question is: how do I go about finding someone to work with on a profit-share basis? I'm sure there are a billion people like me with the "next killer app", but I genuinely believe in it. Can anyone offer some advice? Thanks in advance! EDIT: I guess the trick is to find someone as passionate about the subject as I am. Where would I find someone? Are there websites that broker profit-share deals on programming work?


  • NoSQL and meteorological data

    - by christian studer
    So there's this new cool thing, these NoSQL databases. And so there's my data: rows upon rows of meteorological data - values representing certain measurements at a certain station (identified by a WMO number, not coordinates) at a certain time. Not every station measures every parameter, and not every parameter is measured all the time. I currently store this data (30 years' worth of hourly values, resulting in ~1 billion values) in MySQL. The continuous growth and the foreseeable addition of even more data give me a little headache. Reading about the document-based NoSQL systems, which seem to scale rather easily, I was wondering if NoSQL is a viable data storage concept for meteorological data too. Do you have any experience with this?
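
    For what it's worth, before switching engines, range partitioning within MySQL is one conventional way to keep a steadily growing time series manageable. A sketch under assumed table and column names (note MySQL requires the partitioning column to be part of every unique key on the table):

        ALTER TABLE observations
        PARTITION BY RANGE (YEAR(observed_at)) (
            PARTITION p1989 VALUES LESS THAN (1990),
            PARTITION p1999 VALUES LESS THAN (2000),
            PARTITION p2009 VALUES LESS THAN (2010),
            PARTITION pmax  VALUES LESS THAN MAXVALUE
        );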


  • MySQL primary/foreign key size?

    - by David
    I seem to see a lot of people arbitrarily assigning large sizes to primary/foreign key fields in their MySQL schemas, such as INT(11) and even BIGINT(20) as WordPress uses. Now correct me if I'm wrong, but even an INT(4) would support (unsigned) values up to over 4 billion. Change it to INT(5) and you allow for values up to a quadrillion, which is more than you would ever need, unless possibly you're storing geodata at NASA/Google, which I'm sure most of us aren't. Is there a reason people use such large sizes for their primary keys? Seems like a waste to me...
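
    One detail worth knowing when weighing this up: in MySQL the number in INT(11) is a display width (only meaningful with ZEROFILL), not a storage size. An INT column occupies 4 bytes and holds the same range no matter what width is declared, as this throwaway sketch shows:

        CREATE TABLE width_demo (
            a INT(4) UNSIGNED,   -- same storage and range...
            b INT(11) UNSIGNED   -- ...as this one: 4 bytes, 0 to 4294967295
        );
        INSERT INTO width_demo VALUES (4294967295, 4294967295);  -- both accept the full INT range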


  • SQL Server 2008 BULK INSERT causes more reads than writes. Why?

    - by sh1ng
    I have a huge table (a few billion rows) with a clustered index and two non-clustered indexes. A BULK INSERT operation produces 112,000 reads and only 383 writes (duration 19,948 ms). This is very confusing to me. Why do reads exceed writes? How can I reduce them?

    Update - the query:

        insert bulk DenormalizedPrice4 ([DP_ID] BigInt, [DP_CountryID] Int, [DP_OperatorID] SmallInt,
            [DP_OperatorPriceID] BigInt, [DP_SpoID] Int, [DP_TourTypeID] Int, [DP_CheckinDate] Date,
            [DP_CurrencyID] SmallInt, [DP_Cost] Decimal(9,2), [DP_FirstCityID] Int, [DP_FirstHotelID] Int,
            [DP_FirstBuildingID] Int, [DP_FirstHotelGlobalStarID] Int, [DP_FirstHotelGlobalMealID] Int,
            [DP_FirstHotelAccommodationTypeID] Int, [DP_FirstHotelRoomCategoryID] Int,
            [DP_FirstHotelRoomTypeID] Int, [DP_Days] TinyInt, [DP_Nights] TinyInt, [DP_ChildrenCount] TinyInt,
            [DP_AdultsCount] TinyInt, [DP_TariffID] Int, [DP_DepartureCityID] Int, [DP_DateCreated] SmallDateTime,
            [DP_DateDenormalized] SmallDateTime, [DP_IsHide] Bit, [DP_FirstHotelAccommodationID] Int)
            with (CHECK_CONSTRAINTS)

    There are no triggers or foreign keys. The clustered index is on DP_ID, plus two non-unique indexes (with FILLFACTOR = 90%). And one more thing: the DB is stored on RAID 50 with a 256K stripe size.
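
    The reads are most likely the two non-clustered indexes being maintained: every inserted row requires reading the index pages its keys land on. A common mitigation, sketched below with made-up index names, is to disable the non-clustered indexes (never the clustered one, which is the table itself) for the load and rebuild them afterwards; whether that wins depends on the batch size relative to the table:

        ALTER INDEX IX_DenormalizedPrice4_1 ON DenormalizedPrice4 DISABLE;
        ALTER INDEX IX_DenormalizedPrice4_2 ON DenormalizedPrice4 DISABLE;
        -- ... run the BULK INSERT ...
        ALTER INDEX ALL ON DenormalizedPrice4 REBUILD;  -- rebuilds the disabled indexes too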


  • How do I Put Several Select Statements into Different Columns

    - by Russ Bradberry
    I basically have 7 SELECT statements whose results I need in separate columns. Normally I would use a crosstab for this, but I need a fast, efficient way to go about it, as there are over 7 billion rows in the table. I am using the Vertica database system. Below is an example of my statements:

        SELECT COUNT(user_id) AS '1' FROM event_log_facts WHERE date_dim_id=20100101
        SELECT COUNT(user_id) AS '2' FROM event_log_facts WHERE date_dim_id=20100102
        SELECT COUNT(user_id) AS '3' FROM event_log_facts WHERE date_dim_id=20100103
        SELECT COUNT(user_id) AS '4' FROM event_log_facts WHERE date_dim_id=20100104
        SELECT COUNT(user_id) AS '5' FROM event_log_facts WHERE date_dim_id=20100105
        SELECT COUNT(user_id) AS '6' FROM event_log_facts WHERE date_dim_id=20100106
        SELECT COUNT(user_id) AS '7' FROM event_log_facts WHERE date_dim_id=20100107
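
    A sketch of the standard single-pass rewrite using conditional aggregation (COUNT ignores NULLs, so each CASE counts only its own day); the table and columns are taken from the question, and one scan replaces seven:

        SELECT COUNT(CASE WHEN date_dim_id = 20100101 THEN user_id END) AS "1",
               COUNT(CASE WHEN date_dim_id = 20100102 THEN user_id END) AS "2",
               COUNT(CASE WHEN date_dim_id = 20100103 THEN user_id END) AS "3",
               COUNT(CASE WHEN date_dim_id = 20100104 THEN user_id END) AS "4",
               COUNT(CASE WHEN date_dim_id = 20100105 THEN user_id END) AS "5",
               COUNT(CASE WHEN date_dim_id = 20100106 THEN user_id END) AS "6",
               COUNT(CASE WHEN date_dim_id = 20100107 THEN user_id END) AS "7"
        FROM event_log_facts
        WHERE date_dim_id BETWEEN 20100101 AND 20100107;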


  • Time order of messages

    - by Aiden Bell
    Read (skimmed enough to get coding) through Erlang Programming and Programming Erlang. One question, which is as simple as it sounds: if you have a process Pid1 on machine m1 and a billion million messages are sent to Pid1, are messages handled in parallel by that process (I get the impression no), and (answered below) is there any guarantee of order when processing messages, i.e. received in the order sent? If so, how is clock skew handled in high-traffic situations for ordering? Coming from the whole C/thread pools/shared state background, I want to get this concrete. I understand distributing an application, but want to ensure the 'raw bones' are what I expect before building processes and distributing workload. Also, am I right in thinking the whole world is currently flicking through Erlang texts? ;)


  • Best scaling methodologies for a high-traffic web application?

    - by tester2001
    We have a new project for a web app that will display banner ads on websites (as a network), and our estimate is that it will handle 20 to 40 billion impressions a month. Our current language is ASP, but we are moving to PHP. Does PHP 5 have limits when scaling a web application? Or should I have our team invest in picking up JSP? Or is it a matter of the app server and/or DB? We plan to use Oracle 10g as the database.


  • Parallel Haskell in order to find the divisors of a huge number

    - by Dragno
    I have written the following program using Parallel Haskell to find the divisors of 1 billion:

        import Control.Parallel

        parfindDivisors :: Integer -> [Integer]
        parfindDivisors n = f1 `par` (f2 `par` (f1 ++ f2))
          where f1 = filter g [1..(quot n 4)]
                f2 = filter g [(quot n 4)+1..(quot n 2)]
                g z = n `rem` z == 0

        main = print (parfindDivisors 1000000000)

    I compiled the program with ghc -rtsopts -threaded findDivisors.hs and I run it with findDivisors.exe +RTS -s -N2 -RTS. I found a 50% speedup compared to the simple version, which is this:

        findDivisors :: Integer -> [Integer]
        findDivisors n = filter g [1..(quot n 2)]
          where g z = n `rem` z == 0

    My processor is an Intel Core 2 Duo. I was wondering if there can be any improvement in the above code, because the statistics the program prints say:

        Parallel GC work balance: 1.01 (16940708 / 16772868, ideal 2)

    and

        SPARKS: 2 (1 converted, 0 overflowed, 0 dud, 0 GC'd, 1 fizzled)

    What are converted, overflowed, dud, GC'd and fizzled, and how can they help to improve the time?


  • javan-whenever not writing crontab with Capistrano deploy

    - by Gordon Isnor
    I’ve been trying to get whenever running on an EC2 instance that was created with EC2 on Rails. When I deploy with Capistrano, it indicates that the crontab was written, but when I log into the server and run crontab -l, it does not seem to have been changed. If I go into the release folder and manually run whenever --write-crontab and then run crontab -l, it gets updated properly. Any ideas what could be causing this? Capistrano is not indicating any errors, so I'm not sure how to debug; I have tried a billion permutations and combinations and nothing changes.


  • How can I select all records between two dates without using date fields?

    - by Hayden Bech
    Hi, I have a MySQL DB and I need to be able to store dates earlier than 1970 - in my case as early as 0 AD, and earlier too - so I need a custom way to store dates. I have thought of using this format:

        Year - int(6) | Month - int(2) | Day - int(2) | Time - time | AD - tinyint(1) | mya - int(11)

    But when it comes to actually using data in this format, it becomes difficult. For example, if I want to get all records between two dates it would be like (pseudocode, not SQL):

        get all where year between minYear and maxYear
        if year == minYear, month >= minMonth
        if year == maxYear, month <= maxMonth
        if month == minMonth, day >= minDay
        if month == maxMonth, day <= maxDay
        if day == minDay, time >= minTime
        if day == maxDay, time <= maxTime

    or something, which seems like a right pain. I could store seconds before/after 0 AD, but that would take up way too much data! The year 2010 (EDIT: 2011) is roughly 63 billion seconds since 0 AD. Does anybody have any ideas for this problem?
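
    For scale, a sketch of the single-column alternative (table and column names made up): a signed BIGINT is 8 bytes and ranges to roughly 9.2 quintillion, so seconds relative to 0 AD fit with enormous room to spare, and a between-two-dates query becomes a single indexed comparison:

        CREATE TABLE events (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            happened_at BIGINT NOT NULL,  -- seconds relative to 0 AD; negative means BC
            KEY idx_happened_at (happened_at)
        );

        SELECT * FROM events
        WHERE happened_at BETWEEN @min_seconds AND @max_seconds;  -- precomputed bounds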


  • Serious about Embedded: Java Embedded @ JavaOne 2012

    - by terrencebarr
    It bears repeating: More than ever, the Java platform is the best technology for many embedded use cases. Java’s platform independence, high level of functionality, security, and developer productivity address the key pain points in building embedded solutions. Transitioning from 16 to 32 bit or even 64 bit? Need to support multiple architectures and operating systems with a single code base? Want to scale on multi-core systems? Require a proven security model? Dynamically deploy and manage software on your devices? Cut time to market by leveraging code, expertise, and tools from a large developer ecosystem? Looking for back-end services, integration, and management? The Java platform has got you covered. Java already powers around 10 billion devices worldwide, with traditional desktops and servers being only a small portion of that. And the ‘Internet of Things‘ is just really starting to explode … it is estimated that within five years, intelligent and connected embedded devices will outnumber desktops and mobile phones combined, and will generate the majority of the traffic on the Internet. Is your platform and services strategy ready for the coming disruptions and opportunities? It should come as no surprise that Oracle is keenly focused on Java for Embedded. At JavaOne 2012 San Francisco the dedicated track for Java ME, Java Card, and Embedded keeps growing, with 52 sessions, tutorials, Hands-on-Labs, and BOFs scheduled for this track alone, plus keynotes, demos, booths, and a variety of other embedded content. To further prove Oracle’s commitment, in 2012 for the first time there will be a dedicated sub-conference focused on the business aspects of embedded Java: Java Embedded @ JavaOne. This conference will run for two days in parallel to JavaOne in San Francisco, will have its own business-oriented track and content, and targets C-level executives, architects, business leaders, and decision makers. Registration and Call For Papers for Java Embedded @ JavaOne are now live. We expect a lot of interest in this new event and space is limited, so be sure to submit your paper and register soon. Hope to see you there! Cheers, – Terrence Filed under: Mobile & Embedded Tagged: ARM, Call for Papers, Embedded Java, Java Embedded, Java Embedded @ JavaOne, Java ME, Java SE Embedded, Java SE for Embedded, JavaOne San Francisco, PowerPC


  • On Turning 30…

    - by MOSSLover
    I know I am not a wise old sage like some people in the community.  I just turned 30 however I feel like all my years looking back have changed me.  My collective experiences and thoughts have given me a different perspective on life recently.  Seven months ago my head was in a gutter and since then a lot of things have happened.  I was always the weird kid in the corner reading Star Trek books.  When I was in elementary school I thought that kids would throw me birthday parties out of pity because I was the poor kid who everyone hated.  I am no longer that person.  I realized that during the worst possible period between my 29th and 30th year when I hit rock bottom.  You all know the insane story as I’ve told it two billion times over.  Honestly it was the best thing that ever happened to me in my life time, because many things would not have happened.  My friends came through for me at every given moment people from all over were checking up on me all over the world.  I fell and I landed on a bunch of people it was awesome.  I landed on family and friends who I thought I was never close enough to talk about these things.  They helped me realize I had a ton of unfulfilled dreams.  I got to move to New York City one of the greatest cities in the universe.  I got to do whatever I wanted without judgment from anyone.  I got to meet some great people at a few meetup groups in the past few months.  I got to meet an awesome person that I have been dating for 3 months.  I am trying to run for the 8 billionth time and keep up with it.  I got to go to Europe and next week for the first time New Orleans.  I got renewed for MVP for 2012.  I am grateful for all the people and things in my life.  I understand that sometimes when things seem bad you can always seek friends and family.  They will always help me.  I have to learn to lean on people sometimes just how they occasionally lean on me.  That is the biggest thing I have learned from the decade of 20 to 30.  I hope that 30 to 40 will be the best decade.  I hope that I can continue to grow.  I will catch you all later. Technorati Tags: Turning 30,Wisdom


  • Skechers Leverages Oracle Applications, Business Intelligence and On Demand Offerings to Drive Long-Term Growth

    - by user801960
    This month Oracle Retail in the USA announced that Skechers - a world leading lifestyle footwear retailer - would be adopting several Oracle Retail products as part of their global growth strategy and to maximise business efficiency.  While based primarily in the USA, Skechers is a respected retailer across the world and has been an Oracle customer since 1997.  The key information about the announcement is below.  To find out more about Skechers visit their website: http://www.skechers.com/  Skechers U.S.A. Inc., an award-winning global leader in the lifestyle footwear industry, has upgraded and expanded its Oracle® Applications investment, implemented Oracle Database and moved to Oracle On Demand, Oracle’s premier cloud service to support rapid growth across its retail and wholesale channels. The new business information systems are part of a larger initiative for the billion-dollar-plus footwear company to fuel growth, reduce total cost of ownership and enable the business to respond faster to market opportunities. With more than 3,000 styles of shoes to design, develop and market, Skechers upgraded to Oracle’s PeopleSoft Enterprise Financial Management and PeopleSoft Supply Chain Management to increase operational efficiencies and improve controls by establishing an integrated, industry-specific platform. An Oracle customer since 1997, Skechers implemented PeopleSoft Enterprise Real Estate Management to meet the rapid growth of its retail stores worldwide. The company is the first customer to go live on the Real Estate Management module and worked closely with Oracle to provide development insight. Skechers also implemented Oracle Fusion Governance, Risk, and Compliance applications. This deployment enabled the company to leverage its existing corporate governance and compliance efforts throughout the global enterprise and more effectively manage the audit processes across multiple business units, processes and systems while reducing audit costs. Next, Skechers leveraged Oracle Financial Analytics, a pre-built Oracle Business Intelligence Application and PeopleSoft Enterprise Project Costing and PeopleSoft Enterprise Contracts to develop a custom Royalty Management dashboard, providing managers with better financial visibility to the company’s licensing contracts. The company switched to Oracle Database and moved database hosting and management to Oracle On Demand to reduce maintenance, implementation and system administration costs. As a result, Skechers is also achieving a better response time and is delivering a higher level of 24x7 support. OSI Consulting, a Platinum partner in Oracle PartnerNetwork (OPN), provided implementation and integration services to Skechers.   To view the full announcement please click here


  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears. Both concepts intersect at various points.

    BigData and Return on Investment of Marketing Campaigns. In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments in social media in terms of followers and likes. "The measurement models applied by marketers on TV campaigns don't work on social," he continues; we need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement.

    Social CRM and BigData. The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and to define appropriate strategies for contacting potential customers. Gartner says that the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, Analyst at Forrester Research, "social customer relationship management is hard" (The Social CRM Arms Race Heats Up). To succeed, brands need three things: investment in new social tools, investment in consultancy, and investment in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy or cheap win. But what are the customer benefits of such investments?

    BigData and Brand Engagement. Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes on offer, anxious because they don't get the same fast multichannel experience from their service providers or preferred goods suppliers as the one they find on social media. Yes, I know you have read this before - me too. But it's real. The motto of the Customer Experience philosophy - providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable - finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative.

    In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.


  • The First Annual Crappy Code Games

    - by Testas
    SQLBits announced some super-exciting news! A tie-up with our platinum sponsor, Fusion-io. Together we'll be running a series of events called "The Crappy Code Games" where SQL Server developers will compete to write the worst-performing code and win some very cool prizes, including:

        • Gold: A hands-on, high-performance flying day for two at Ultimate High, plus Fusion-io flight jackets
        • Silver: A one-day racing experience at Palmer Sports, where you will drive seven different high-performance cars
        • Bronze: A Pure Tech Racing 10-person package at PTR's F1 racing facility, including FI tees, food and drinks

    ...plus iPods, Windows Mobile phones, Xbox 360s, t-shirts and much more. There will be two qualifying events, in Manchester on March 17th and London on March 31st; the third qualifier as well as the grand finale will be held on the evening of Thursday April 7th at SQLBits. And if that isn't cool enough, Fusion-io's Chief Scientist Steve Wozniak (yes, that Steve Wozniak, tech industry legend and co-founder of Apple) will be on hand in Brighton to hand out the prizes! If you'd like to take part you'll need to register, and since places are limited we recommend you do so right away. For more details and to register, go to http://www.crappycodegames.com/

    The Games: In conjunction with SQLBits, dbA-thletes (that's you) will compete head-to-head in one of three separate qualifying events to be held in Manchester, London and Brighton. Four separate SQL rounds make up the evening's Games, and will challenge you to write code that pushes the boundaries of SQL performance. The four events are:

        • The High Jump: Generate the highest I/Os per second
        • The 100m Dash: Cumulative highest number of I/Os in 60 seconds
        • The SSIS-athon: Load a one-billion-row fact table in the shortest time
        • The Marathon: Generate the highest MB per second in 60 seconds


  • A Complete Customer Experience Solution (3 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    In my previous post, I talked about taking three concrete steps to improve your customers' overall experiences: 1) understand your customer, 2) empower your ecosystem, and 3) adapt your business. To do these effectively and efficiently, it's important to find the right technology that can bridge the gaps across your channels, interactions, departments, and repositories. Oracle has spent the past three years and more than six billion dollars acquiring and developing some of the world's best-of-breed applications. The result is the most comprehensive customer experience (CX) portfolio offering in the World - bar none:

        • ATG - Best in Class Selling Experiences
        • Fatwire - Best in Class Marketing Experiences
        • Inquira - Best in Class Support Experiences
        • Endecca - Best in Class Search Experiences
        • RightNow - Best in Class Service Experiences
        • Vitrue & Involver - Best in Class Social Marketing
        • Collective Intellect - Best in Class Social Listening

    We don't expect organizations to eat the CX elephant in one bite, nor should they try to. There are key strategic initiatives within each of the four main pillars of our customer experience offering for which we deliver solutions:

        1. Customer Experience for Marketing: Social Listening and Engagement; Social Marketing; Marketing Websites; Demand Generation and Lead Management; Marketing and Loyalty Management

        2. Customer Experience for Commerce: Search, Navigation & Content Delivery; Cross-Channel Commerce; Targeting & Product Recommendations; Social Commerce; Order Management & Fulfillment; Retail Store Operations

        3. Customer Experience for Sales: Sales Force Automation; Social Selling; Territory & Quota Management; Revenue Forecasting; Partner Relationship Management; Quote to Cash; Incentive Compensation

        4. Customer Experience for Service: Cross-Channel Customer Service; Knowledge Management; Social Customer Service; Eligibility Management; Contracts, Assets, and Entitlements; Industry-Specific Solutions; eBilling

    Oracle's customer experience portfolio is socially infused at each layer of our pillars rather than simply bolted on as a side process. This combines with the power of the Cloud to run the parts of the solution that need the access, efficiency, and agility from a managed infrastructure. You can get the compliance control from on-premise backbone infrastructure systems that run your business and don't change that often. Please take advantage of our teams of Oracle customer experience professionals and our key agency and technology partner ecosystem. They can help you develop strategic solution roadmaps that build and deliver customer experience and that are tailored to your business needs and objectives. No one has built a better customer service portfolio to manage the entire customer journey than Oracle. It is backed by CX thought leadership programs, a commitment from our executives, and a worldview that your technology decisions must be driven by your customer experiences to succeed. If you'd like to follow up on this conversation, please leave a comment or contact me at [email protected]. You can get more information on Oracle's complete customer experience solution here.

