Search Results

Search found 9067 results on 363 pages for 'big fizzy'.

Page 5/363

  • Tricky Big-O complexity

    - by timeNomad
    public void foo(int n, int m) {
        int i = m;
        while (i > 100)
            i = i / 3;
        for (int k = i; k >= 0; k--) {
            for (int j = 1; j < n; j *= 2)
                System.out.print(k + "\t" + j);
            System.out.println();
        }
    }
    I figured the complexity would be O(log n): that comes from the inner loop, and the outer loop will never be executed more than about 100 times, so it can be omitted. What I'm not sure about is the while clause -- should it be incorporated into the Big-O complexity? For very large values of m it could make an impact, or do arithmetic operations, no matter on what scale, count as basic operations and get omitted?
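
    One hedged way to check the reasoning is to count the basic operations directly (the counter, class name, and sample arguments below are mine, added only for illustration; the loops are the ones from the question). The while loop contributes about log3(m) steps and the nested loops about 101 * log2(n), so if m counts as an input the whole method is O(log m + log n):

    public class FooSteps {
        static long steps;

        static void foo(int n, int m) {
            int i = m;
            while (i > 100) {                     // about log base 3 of m iterations: O(log m)
                i = i / 3;
                steps++;
            }
            for (int k = i; k >= 0; k--) {        // i <= 100 here, so at most 101 iterations: O(1)
                for (int j = 1; j < n; j *= 2) {  // j doubles each pass: about log base 2 of n iterations
                    steps++;
                }
            }
        }

        public static void main(String[] args) {
            foo(1_000_000, 1_000_000_000);
            // Prints a small count (on the order of a couple of thousand) even for
            // huge n and m, consistent with O(log m + log n) rather than O(n) or O(m).
            System.out.println(steps);
        }
    }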

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that: querying on a primary key or unique index will give you an O(1) lookup time; querying on a non-unique index will also give an O(1) time, albeit maybe the '1' is slower than for the unique index (?); querying on a column without an index will give an O(N) lookup time (full table scan). Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.
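
    For context, a hedged cost model (my own addition, not part of the question): SQLite stores both tables and indexes as B-trees, so a point lookup on a primary key or other index is logarithmic rather than strictly O(1), though the branching factor is so large that it behaves almost like a constant in practice:

    % Illustrative B-tree point-lookup cost (the numbers are only an example):
    % with branching factor B and N rows, a lookup descends about log_B(N) levels.
    \text{lookup cost} \approx O(\log_B N),
    \qquad \text{e.g. } B = 1000,\ N = 10^{9} \;\Rightarrow\; \log_{1000} 10^{9} = 3 \text{ levels}

    A full table scan on an unindexed column remains O(N), as the question guesses.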

    Read the article

  • Can someone help with big O notation?

    - by Dann
    void printScientificNotation(double value, int powerOfTen) {
        if (value >= 1.0 && value < 10.0) {
            System.out.println(value + " x 10^" + powerOfTen);
        } else if (value < 1.0) {
            printScientificNotation(value * 10, powerOfTen - 1);
        } else { // value >= 10.0
            printScientificNotation(value / 10, powerOfTen + 1);
        }
    }
    I understand how the method works, but I cannot figure out a way to characterize its cost. For example, if value were 0.00000009 (9e-8), the method would call printScientificNotation(value * 10, powerOfTen - 1) eight times and System.out.println(value + " x 10^" + powerOfTen) once. So it is called recursively a number of times given by the exponent in the 'e' notation. But how do I represent this in big O notation? Thanks!
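
    A hedged way to see the cost (the wrapper class and counter below are mine, added only for illustration): every recursive call moves value by one factor of 10 toward the range [1, 10), so the number of calls is roughly the size of the decimal exponent, i.e. about |log10(value)| + 1:

    public class SciNotation {
        static int calls;

        static void printScientificNotation(double value, int powerOfTen) {
            calls++;                                          // count every call, recursive or not
            if (value >= 1.0 && value < 10.0) {
                System.out.println(value + " x 10^" + powerOfTen);
            } else if (value < 1.0) {
                printScientificNotation(value * 10, powerOfTen - 1);
            } else { // value >= 10.0
                printScientificNotation(value / 10, powerOfTen + 1);
            }
        }

        public static void main(String[] args) {
            printScientificNotation(9e-8, 0);
            // 9 calls here: 8 multiplications by 10 plus the final call that prints.
            // In general the call count is about |log10(value)| + 1, so the running
            // time is logarithmic in how far value is from 1, not linear in value.
            System.out.println(calls);
        }
    }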

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • Recurrence relation solution

    - by Travis
    I'm revising past midterms for a final exam this week and am trying to make sense of a solution my professor posted for one of the past exams. (You can see the original pdf here, question #6). I'm given the recurrence relation T(n) = 3T(n/2) + n and am told T(1) = 1. I'm pretty sure the solution I've been given is wrong in a few places. The solution is as follows:
    Let n = 2^m
    T(2^m) = 3T(2^(m-1)) + 2^m
    3T(2^(m-1)) = 3^2 * T(2^(m-2)) + 3 * 2^(m-1)
    ...
    3^(m-1) * T(2) = T(1) + 2 * 3^(m-1)
    I'm pretty sure this last line is incorrect: they forgot to multiply T(1) by 3^m. He then (tries to) sum the expressions:
    T(2^m) = 1 + (2^m + 3 * 2^(m-1) + ... + 2 * 3^(m-1))
           = 1 + 2^m * (1 + (3/2)^1 + (3/2)^2 + ... + (3/2)^(m-1))
           = 1 + 2^m * ((3/2)^m - 1) * (1/2)
           = 1 + 3^m - 2^(m-1)
           = 1 + n^(log 3) - n/2
    Thus the algorithm is big Theta of n^(log 3). I'm pretty sure that he also got the summation wrong here. By my calculations it should be as follows:
    T(2^m) = 2^m + 3 * 2^(m-1) + 3^2 * 2^(m-2) + ... + 3^m
             (the last term is 3^m because 3^m * T(1) = 3^m should be added, not 1)
           = 2^m * (1 + (3/2) + (3/2)^2 + ... + (3/2)^m)
           = 2^m * [sum of (3/2)^i from i = 0 to m]
           = 2^m * ((3/2)^(m+1) - 1) / (3/2 - 1)
           = 2^m * ((3/2)^(m+1) - 1) / (1/2)
           = 2^(m+1) * 3^(m+1) / 2^(m+1) - 2^(m+1)
           = 3^(m+1) - 2 * 2^m
    Replacing n = 2^m, and from that m = log n:
    T(n) = 3 * 3^(log n) - 2n
    Since n is O(3^(log n)), the runtime is big Theta of 3^(log n).
    Does this seem right? Thanks for your help!
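
    For what it's worth, a quick sanity check via the Master Theorem (my own addition, assuming the intended recurrence really is T(n) = 3T(n/2) + n with T(1) = 1) agrees with the corrected derivation:

    % Master Theorem check for T(n) = 3T(n/2) + n, T(1) = 1.
    % Here a = 3, b = 2, f(n) = n, and n^{\log_b a} = n^{\log_2 3} \approx n^{1.585}.
    % f(n) = n is polynomially smaller than n^{\log_2 3}, so case 1 applies:
    T(n) \;=\; \Theta\!\left(n^{\log_2 3}\right) \;=\; \Theta\!\left(3^{\log_2 n}\right)
    % which matches 3^{m+1} - 2 \cdot 2^m = 3\, n^{\log_2 3} - 2n from the corrected sum,
    % i.e. the same class the posted answer writes as n^{\log 3} (log base 2).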

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and highlighted a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle NoSQL Database. Since then, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.
    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, chains intelligent events into real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows sliding windows of time to be defined so that SQL statements can look for triggers on events such as a breach of a weighted moving average in a real-time data stream. Oracle Coherence is a distributed grid-caching solution that provides very low-latency access to cached data when the data is too big to fit into a single process; the data is spread around a grid architecture to provide memory-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads -- for example, when large sensor networks generate data that needs to be captured while analysts simultaneously extract it using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data in holistic MapReduce operations on your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service.
    The question then becomes: how do these pieces work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, when it comes to Big Data you may often need to use them together. You may need the memory latencies of the Coherence cache but simply have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference on how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream-processing side, the story is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Reporting a WCF application's status to F5's Big IP products

    - by ng5000
    In a Windows Server 2003 environment with a self-hosted .NET 3.5/WCF application, how can the application report its status to a BigIP Local Traffic Manager? Example: one of my services errors out. My custom WCF application hosting software (written because Windows Server 2008 is not yet available and I'm using WCF TCP bindings) detects this and wants to report itself as down until it can recover the errant service. It needs to report itself as down to the BigIP LTM so that it is no longer sent client-originated requests.

    Read the article

  • analysis Big Oh notation psuedocode

    - by tesshu
    I'm having trouble getting my head around algorithm analysis. I seem to be okay identifying linear or quadratic algorithms but am totally lost with n log n or log n algorithms; these seem to stem mainly from while loops? Here's an example I was looking at:
    Algorithm Calculate(A, n)
    Input: Array A of size n
    t <- 0
    for i <- 0 to n-1 do
        if A[i] is an odd number then
            Q.enqueue(A[i])
        else
            while Q is not empty do
                t <- t + Q.dequeue()
    while Q is not empty do
        t <- t + Q.dequeue()
    return t
    My best guess is that the for loop is executed n times, its nested while loop q times (making nq), and the final while loop also q times, resulting in O(nq + q), which is linear? I am probably totally wrong. Any help would be much appreciated. Thanks
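
    A hedged Java rendering of the pseudocode (the queue type and sample array are my own choices, not from the question) may make the amortized argument easier to see: each element is enqueued at most once and dequeued at most once over the entire run, so the nested while loops cost O(n) in total and the whole algorithm is O(n):

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class Calculate {
        static int calculate(int[] a) {
            Queue<Integer> q = new ArrayDeque<>();
            int t = 0;
            for (int i = 0; i < a.length; i++) {            // n iterations
                if (a[i] % 2 != 0) {
                    q.add(a[i]);                            // at most n enqueues over the whole run
                } else {
                    while (!q.isEmpty()) t += q.poll();     // each element dequeued at most once, ever
                }
            }
            while (!q.isEmpty()) t += q.poll();             // drains whatever is left: still bounded by n
            return t;
        }

        public static void main(String[] args) {
            System.out.println(calculate(new int[]{3, 1, 4, 1, 5, 9, 2, 6}));
        }
    }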

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something I find very hard to get into. His pseudocode:
    int k = 0;
    a[][] <- create mapModuleNearbyDotList array   // CPU O(n)
    for (j = 1 to n)                               // O(n log(m))
        for (i = 1 to n)
            for (k = 1 to n)
                if (dot is nearby)
                    adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);
    His ideas: transformations of lists to tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbour vertices. Questions about his ideas: why waste resources transforming constantly-modified lists into a table? Fast? This is the only point where I agree to some extent, but I cannot understand the same upper limit n for each for loop -- perhaps he assumed it circular. Why does the code take O(m log(n)) to proceed in time as a finite structure? The term "finite" may be wrong; explicit?
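
    The triple loop in the pseudocode has the shape of the classic Floyd-Warshall all-pairs shortest-path algorithm, which is where an O(n^3) bound would come from. A hedged sketch of that standard pattern follows (my names, not the colleague's actual code; note the k loop must be outermost, which differs from the ordering shown above):

    // Standard Floyd-Warshall pattern that the pseudocode resembles.
    // adj[i][j] holds the best currently-known distance from i to j; after the
    // triple loop it holds the shortest distance.  Three nested loops over n
    // vertices give Theta(n^3) no matter how sparse the data is, which is why a
    // list-based approach (e.g. a single-source Dijkstra run, roughly
    // O(m log n) with m edges) is often preferred for sparse GPS-style graphs.
    static void allPairsShortestPaths(double[][] adj) {
        int n = adj.length;
        for (int k = 0; k < n; k++)              // intermediate vertex (must be the outermost loop)
            for (int i = 0; i < n; i++)          // source
                for (int j = 0; j < n; j++)      // destination
                    if (adj[i][k] + adj[k][j] < adj[i][j])
                        adj[i][j] = adj[i][k] + adj[k][j];
    }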

    Read the article

  • Big-O for GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something I find very hard to get into. His pseudocode:
    int k = 0;
    a[][] <- create mapModuleNearbyDotList array   // CPU O(n)
    for (j = 1 to n)                               // O(n log(m))
        for (i = 1 to n)
            for (k = 1 to n)
                if (dot is nearby)
                    adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);
    His ideas: transformations of lists to tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is an arbitrary constant. Questions about his ideas: why waste resources transforming constantly-modified lists into a table? Fast? This is the only point where I agree to some extent, but I cannot understand the same upper limit n for each for loop -- perhaps he assumed it circular. Why does the code take O(m log(n)) to proceed in time as a finite structure? The term "finite" may be wrong; explicit?

    Read the article

  • Big-O for Eight Year Olds?

    - by Jason Baker
    I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the amount of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here? Like what exactly would an O(n^2) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
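
    To make the growth rates concrete, here is a hedged toy illustration (the class and methods are mine, purely for counting "work"; they are not tied to any real data structure):

    public class GrowthRates {
        // O(n^2): for every element, touch every element again (e.g. comparing
        // all pairs, naive bubble sort).  Doubling n quadruples the work.
        static long quadratic(int n) {
            long work = 0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    work++;
            return work;
        }

        // O(n log n): do a logarithmic amount of halving work for each of n
        // elements -- the shape of mergesort or heapsort.
        static long nLogN(int n) {
            long work = 0;
            for (int i = 0; i < n; i++)
                for (int span = n; span > 1; span /= 2)   // about log2(n) halvings
                    work++;
            return work;
        }

        // O(n!): try every ordering of n items (brute-force traveling salesman).
        // Already hopeless around n = 15 or so.
        static long factorialWork(int n) {
            long work = 1;
            for (int i = 2; i <= n; i++) work *= i;
            return work;
        }

        public static void main(String[] args) {
            System.out.println(quadratic(1000));      // 1,000,000
            System.out.println(nLogN(1000));          // roughly n * log2(n), about 9,000 here
            System.out.println(factorialWork(13));    // 6,227,020,800
        }
    }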

    Read the article

  • big background without scrolls

    - by mkoso
    I have a layout whose background picture is wider than the content area. I have made a 970px wrapper where the content is, and on the body I have a background image, but I need another background image on top of that body background, so I have made a class bgimg. So basically the markup is like this: But the bgimg is about 1050px wide and thus it produces scrollbars when the user's browser is 1024x768. Is there a way to get rid of the scrollbars? I mean, I do want scrollbars if the user's browser is narrower than the 970px wrapper, of course. So can I put something like overflow hidden on the bgimg class? Hopefully you understand what I mean.

    Read the article

  • Tackling Big Data Analytics with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer. The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information. Companies are facing emerging technologies, increasing data volumes, numerous data varieties, and the processing power needed to efficiently analyze data that changes with high velocity. Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS, and Hive. Along with ODI's metadata-driven approach to extracting, loading, and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision-support platforms. In this session, you'll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms. Tackling Big Data Analytics with Oracle Data Integrator, October 1, 2012, 12:15 PM at Moscone West - 3005. For other data integration sessions at OpenWorld, please check our Focus-On document. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources - social networks, telecommunications and web logs, RFID readers, etc. - have to be logically linked, integrated in real time, and processed. But how is the practical implementation going? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. What prevails are isolated BI solutions that are already operating at the limits of their capacity. In other words, data is so far being used only to a limited extent as a value-creating resource! The study's findings sound alarming, but companies can turn them to their advantage: whoever tackles Big Data now can secure a profitable head start over the competition. What will the analytics environment of the future look like? How and where can Big Data be used for business success? Answers come from the customer event series run by Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential step by step on the way to Big Data. Highlights from Munich: the feedback from Munich, the first stop of the event series on July 23, was thoroughly positive. It was not just the great location, "La Villa" in the Bamberger Haus, that won people over; the 31 participants also took away a great deal of substance - among other things a concrete proposal for their own roadmap towards Big Data. The opening question of the day was as simple as it was comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo for business intelligence in Europe based on the European biMA® Study 2012/13. Using application examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid Endeca demo showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision-makers can change the filters via drag & drop and so get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered an outlook. Conclusion: a successful mix of overview information and very concrete ideas for the customers' specific application areas. The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers. In Hamburg on August 14, 2013 - register here. In Frankfurt am Main on August 20, 2013 - register here.

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read many variants today and got some general knowledge, so here are the steps of my thinking, in pictures (horrible paint.net ones). We need to develop a grid system so we check only things that are near, performing a simple check to cut out the deep check, and only at the last stage a deep check such as per-pixel collision. Step 1 - Let p1, p2 be some sprites. Let's first just do a circle collision check; because of the large distance between p1 and p2 this fails, and so of course we don't need to test more deeply. But if we have not 2 but 20 objects, why do we even circle-test something so far outside of our view? Step 2 - Add a basic column system: now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do the circle test, which of course will fail. Step 3 - Improve the column system into a grid system with a cell size matching the p1, p2, p3 collision boxes, so we also cut out things far above or below p1. And this is all great until the BIG objects come along - platforms of some kind. They are much bigger than a grid cell. The circle test will succeed, but the deep check against the whole big object will fail. And that's the part I can't get. How do I store the grid position of a big object? Like four grid coordinates, one for each corner of the big object? And if one of them is close to p1, do the circle check against the centre of the big object, then a deep check if it succeeds? Am I doing it wrong? My possible solution:
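
    (The poster's own solution is cut off in the excerpt above.) One common approach, offered here only as a hedged sketch with made-up names, is to register an object in every grid cell its bounding box overlaps, so a big platform simply lives in many cells; the grid then only nominates candidate pairs for the circle and per-pixel checks:

    import java.util.ArrayList;
    import java.util.List;

    // Uniform grid that also copes with "big" objects: instead of storing one
    // cell per object, each object is inserted into every cell its axis-aligned
    // bounding box overlaps.  The grid only produces candidates; circle and
    // per-pixel tests still decide the actual collision.
    class SpatialGrid {
        final int cols, rows, cellSize;
        final List<List<Object>> cells;                 // one bucket per cell

        SpatialGrid(int worldW, int worldH, int cellSize) {
            this.cellSize = cellSize;
            this.cols = (worldW + cellSize - 1) / cellSize;
            this.rows = (worldH + cellSize - 1) / cellSize;
            this.cells = new ArrayList<>(cols * rows);
            for (int i = 0; i < cols * rows; i++) cells.add(new ArrayList<>());
        }

        // Insert an object by its bounding box; a platform spanning many cells
        // simply ends up in all of them.
        void insert(Object obj, int x, int y, int w, int h) {
            int c0 = Math.max(0, x / cellSize), c1 = Math.min(cols - 1, (x + w) / cellSize);
            int r0 = Math.max(0, y / cellSize), r1 = Math.min(rows - 1, (y + h) / cellSize);
            for (int r = r0; r <= r1; r++)
                for (int c = c0; c <= c1; c++)
                    cells.get(r * cols + c).add(obj);
        }

        // Candidates for a query box: everything sharing at least one cell with it.
        List<Object> candidates(int x, int y, int w, int h) {
            List<Object> out = new ArrayList<>();
            int c0 = Math.max(0, x / cellSize), c1 = Math.min(cols - 1, (x + w) / cellSize);
            int r0 = Math.max(0, y / cellSize), r1 = Math.min(rows - 1, (y + h) / cellSize);
            for (int r = r0; r <= r1; r++)
                for (int c = c0; c <= c1; c++)
                    for (Object o : cells.get(r * cols + c))
                        if (!out.contains(o)) out.add(o);   // dedupe objects that span several cells
            return out;
        }
    }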

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not listen) about big data, here are two very cool sessions: BOF9877 - a birds-of-a-feather session around all things big data. It is on Monday, Oct 1, 6:15 PM - 7:00 PM, Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel. He is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West, Level 3, Overlook 3. We will network, socialize, and discuss plans for the group. Which topics interest us for webinars? Which conferences do we want to meet at? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on the challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at Openworld!

    Read the article

  • “Big Data” Is A Small Concept Unless You Can Apply It To The Customer Experience

    - by Michael Hylton
    There’s been a lot of recent talk in the industry about “big data”.  Much can be said about the importance of big data and the results from it, but you always need to consider the customer experience when analyzing and applying customer data. Personalization and merchandising drive the user experience.  Big data should enable you to gain valuable insight into each of your customers and apply that insight at the moment they are on your Web site, talking to one of your call center agents, or at any other touchpoint.  While past customer experience is important, you need to combine that with what your customer is doing on your Web site now as well as what they are doing and saying on social networking sites.  It’s key to have a 360-degree view of your customer across all of your touchpoints in order to provide the relevant and consistent experience that they have come to expect when interacting with your brand. Big data can enable you to effectively market, merchandise, and recommend the right products to the right customers at the right time.  By applying customer data to product recommendations, you have an opportunity to gain a greater share of wallet through the cross-selling and up-selling of additional products and services.  You can also build sustaining loyalty programs to continue to engage with your customers throughout their long-term relationship with your brand.

    Read the article

  • Big Data Appliance

    - by David Dorf
    Today Oracle announced the next release of its Big Data Appliance, an engineered system composed of hardware and software targeting the efficient processing of big data.  The solution leverages 288 Intel cores running Cloudera's distribution of Apache Hadoop in 1.1 TB of main memory.  This monster helps companies acquire, organize, and analyze large volumes of structured and unstructured data. Additionally, new versions of the Oracle Big Data Connectors and Oracle NoSQL Database were released. Why is this important to retailers?  As the infographic below conveys, mobile and social have added even more data to the already huge collections of POS transactions and e-commerce weblogs.  Retailers know that mining that data will help them make better decisions that lead to increased sales, better customer service, and ultimately a successful retail business. (Infographic: Monetate)

    Read the article

  • "Adoption des Big Data : ce n'est que le commencement", selon Talend, qui analyse cette nouvelle tendance

    "Big data adoption: this is only the beginning," according to Talend, which states that companies are putting Big Data strategies in place. Data volumes are growing at an ever-increasing rate. More and more, companies are exploring their uses and finding ways to process, exploit, analyze, and mine the data they collect, in order to extract the knowledge that will serve as the basis for their future decisions. Yves de Montcheuil, VP Marketing, Talend, shares his analysis following a new survey on Big Data adoption conducted by the vendor among professionals involved in delivering data solutions, which confirms this mat...

    Read the article

  • R Statistical Analytics with Faster Performance for Enterprise Database Access and Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Further demonstrating its commitment to the open source community, Oracle has just released enhanced support of the R statistical programming language for Oracle Solaris and AIX in addition to Linux and Windows, connectivity to Oracle TimesTen In-Memory Database in addition to Oracle Database, and integration of hardware-specific math libraries for faster performance.  Oracle's open source distribution of R ships with the Oracle Big Data Appliance and is available for download now. Oracle also offers Oracle R Enterprise, a component of Oracle Advanced Analytics that enables R processing on Oracle Database servers.  All of this makes big data analytics more accessible in the enterprise and improves data scientist productivity with faster performance. Since its introduction in 1995, R has attracted more than two million users and is widely used today for developing statistical applications that analyze big data. Analyst Report: Oracle Advances its Advanced Analytics Strategy

    Read the article

  • Darth Vader Wins Big [Humorous Comic]

    - by Asian Angel
    Everyone’s favorite Star Wars villain receives a notice in the mail saying he won a contest, but did he really hit it big, or is karma dishing out some payback? Note: make sure to take a close look at the letter shown in the second panel for an additional laugh! Darth Vader Wins Big (Dorkly) [via Neatorama]

    Read the article

  • Desktop Fun: Big Game Cats Wallpaper Collection Series 2

    - by Asian Angel
    Two years ago we shared a wonderful collection of big game cat wallpapers with you, and today we are back with more cattitude goodness. Fill your desktop with these sleek and graceful friends from the animal kingdom with the second in our series of Big Game Cats Wallpaper collections.

    Read the article
