Search Results

Search found 32660 results on 1307 pages for 'big number'.


  • Analysis of Big-O notation pseudocode

    - by tesshu
    I'm having trouble getting my head around algorithm analysis. I seem to be okay identifying linear or squared algorithms, but am totally lost with n log n or log n algorithms; these seem to stem mainly from while loops? Here's an example I was looking at:

        Algorithm Calculate(A, n)
        Input: Array A of size n
        t ← 0
        for i ← 0 to n-1 do
            if A[i] is an odd number then
                Q.enqueue(A[i])
            else
                while Q is not empty do
                    t ← t + Q.dequeue()
        while Q is not empty do
            t ← t + Q.dequeue()
        return t

    My best guess is that the for loop is executed n times, its nested while loop Q times (making NQ), and the final while loop also Q times, resulting in O(NQ + Q) - which is linear? I am probably totally wrong. Any help would be much appreciated. Thanks
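    A quick way to sanity-check the bound is to count operations empirically. Below is a minimal Python sketch of the algorithm above (assuming, as the pseudocode implies but never states, that the queue Q starts empty). Each element is enqueued at most once and dequeued at most once, so the total work is O(n) despite the nested while loop.

        from collections import deque

        def calculate(a):
            """Sketch of Algorithm Calculate(A, n), instrumented with an op counter."""
            q = deque()   # assumption: Q starts empty
            t = 0
            ops = 0
            for x in a:
                ops += 1
                if x % 2 == 1:        # odd: enqueue
                    q.append(x)
                else:                 # even: drain the queue
                    while q:
                        t += q.popleft()
                        ops += 1
            while q:                  # final drain after the for loop
                t += q.popleft()
                ops += 1
            return t, ops

        # ops grows linearly with n, never faster:
        for n in (10, 100, 1000):
            print(n, calculate(list(range(n)))[1])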

    Read the article

  • Return a number between 0 and 4

    - by munchine
    How do I return a number between 0 and 4, depending on the input number? For example, if I pass it the number 23 it will return 3. The number set should look like:

        0 5 10 15 20
        1 6 11 16 21
        2 7 12 17 22
        3 8 13 18 23
        4 9 14 19 24

    What's the math for this?
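    Each row of that grid collects the numbers sharing a remainder after division by 5, so the math is simply the modulo operation. A minimal sketch in Python:

        def bucket(n):
            # rows of the grid are residue classes mod 5: 23 % 5 == 3
            return n % 5

        print(bucket(23))  # 3
        print(bucket(20))  # 0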

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • Getting Serial Number of the Hard Drive Provided by the manufacturer through PHP

    - by dta
    Getting the manufacturer-provided serial number of a hard drive through PHP: how can it be done? I want to store it in a file. OS: Windows 2000, XP, ME, Vista... Yes, I want the serial number of the hard drive of the server. Or can it be done through Adobe AIR? Or can it be done through a C program on Windows?

        C:\Documents and Settings\Administrator>dir
         Volume in drive C has no label.
         Volume Serial Number is BC16-5D5F

    Is this number, BC16-5D5F, unique for a hard drive? How is it different from the manufacturer-given serial number? The command wmic DISKDRIVE GET SerialNumber displays only the following text on my Vista machine:

        SerialNumber

    On my XP machine, the command is unrecognized.
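    For what it's worth, the volume serial shown by dir is assigned when the volume is formatted, so it is not the manufacturer's serial and changes on reformat. One way to get at the WMI value from a script is to shell out to wmic, as the question already tries; here is a minimal Python sketch of that approach (Python standing in for PHP's shell_exec, and assuming wmic is available, which it is not on every XP install). Empty output, as seen on the Vista machine above, usually means the disk driver does not expose the serial through WMI.

        import subprocess

        # Ask WMI for the manufacturer-reported serial of each disk drive
        out = subprocess.run(
            ["wmic", "diskdrive", "get", "SerialNumber"],
            capture_output=True, text=True, check=True,
        ).stdout
        serials = [line.strip() for line in out.splitlines()[1:] if line.strip()]
        print(serials)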

    Read the article

  • C++: compute a number's complement and its number of possible mismatches

    - by Eagle
    I got a bit stuck with my algorithm and I need some help to solve my problem. I think an example will explain my problem better. Assume: d = 4 (maximum number of allowed bits in a number, 2^4-1 = 15), m_max = 1 (maximum number of allowed bit mismatches), kappa = the number of combinations for a given d and m, where m <= m_max. The main idea is, for a given number x, to compute its complement (in binary) and all the possible combinations with up to m_max mismatches from x's complement. Now the program starts to scan from i = 0 to 15. For i = 0 and m = 0, kappa = \binom{d}{0} = 1 (this is called a perfect match): the only combination is 1111 (for 0: 0000). For i = 0 and m = 1, kappa = \binom{d}{1} = 4 (one mismatch): the combinations are obtained by flipping one bit of the complement, using the masks 1000, 0100, 0010 and 0001. My problem was to generalize it to general d and m. I wrote the following code:

        #include <iostream>
        #include <vector>
        #include <boost/math/special_functions/binomial.hpp>

        typedef std::vector<unsigned int> uint_1d_vec_t;

        int main()
        {
            const unsigned d = 4;                         // maximum number of bits
            const unsigned m_max = 1;                     // handles m_max = 1 only
            const unsigned bits_mask = (1u << d) - 1;     // 2^d - 1, i.e. 1111
            const unsigned num_elements = 1u << d;

            std::vector<uint_1d_vec_t> v(num_elements);
            uint_1d_vec_t kappa;

            for (unsigned i = 0; i < num_elements; i++) {
                for (unsigned m = 0; m <= m_max; m++) {
                    // maximum number of allowed combinations for this m
                    unsigned num_combination = static_cast<unsigned>(
                        boost::math::binomial_coefficient<double>(d, m));
                    if (i == 0)
                        kappa.push_back(num_combination);
                    for (unsigned j = 0; j < num_combination; j++) {
                        if (m == 0)
                            v[i].push_back(i ^ bits_mask);           // M_0, the complement
                        else {
                            unsigned bit_mask = 1u << (d - 1 - j);   // flip bit j
                            v[i].push_back(v[i][0] ^ bit_mask);
                        }
                    }
                }
            }
            return 0;
        }

    I got stuck at the line v[i].push_back(v[i][0] ^ bit_mask), since I was unable to generalize my algorithm to m_max > 1: m_max mismatches would need m_max nested loops, and in my original problem m is unknown until runtime.
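    The m_max > 1 case does not actually need m_max nested loops: choosing which m of the d bits to flip is exactly a combination of bit positions, so enumerating those combinations generalizes to any m known only at runtime. A hedged sketch of that idea in Python (the function name is made up for illustration):

        from itertools import combinations

        def mismatch_values(x, d, m_max):
            """All d-bit values within m_max bit flips of the complement of x."""
            comp = x ^ ((1 << d) - 1)            # complement of x
            out = []
            for m in range(m_max + 1):           # m = 0 is the perfect match
                for positions in combinations(range(d), m):
                    mask = 0
                    for p in positions:
                        mask |= 1 << p           # flip bit p
                    out.append(comp ^ mask)
            return out

        print([format(v, '04b') for v in mismatch_values(0, 4, 1)])
        # ['1111', '1110', '1101', '1011', '0111']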

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties

        Property         Data Type   Description
        Seed             Int32       The first row number or seed value.
        Increment        Int32       The value added to the previous row number to make the next row number.
        OutputVariable   String      The name of the variable into which the final row number is written post execution. (Optional)

    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of a property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry, How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

        Row Number Transformation for SQL Server 2005
        Row Number Transformation for SQL Server 2008
        Row Number Transformation for SQL Server 2012

    Version History

    SQL Server 2012
        Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
        Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005
        Version 1.2.0.34 - Updated installer. (25 Jun 2008)
        Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
        Version 1.0.0.0 - Public Release for SQL Server 2005 IDW 15 June CTP. (29 Aug 2005)

    Code Sample

    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

        Package package = new Package();
        package.Name = "Data Generator & Row Number";

        // Add the Data Flow Task
        Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = taskExecutable as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add the Data Generator Source
        IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "Data Generator";
        componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceSource = componentSource.Instantiate();
        instanceSource.ProvideComponentProperties();
        instanceSource.SetComponentProperty("RowCount", 10000);

        // Add the Row Number Transformation
        IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
        componentRowNumber.Name = "Row Number";
        componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
        instanceRowNumber.ProvideComponentProperties();
        instanceRowNumber.SetComponentProperty("Increment", 10);

        // Connect the two components together
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

        #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

        package.Execute();

        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
            Console.WriteLine("  SubComponent : {0}", error.SubComponent);
            Console.WriteLine("  Description : {0}", error.Description);
        }

        package.Dispose();

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try to use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.
    Once installation is complete you need to manually add the component to the toolbox before you will see it and are able to add it to packages - see How do I install a task or transform component? Please also make sure you have installed a minimum of SP1 for SQL Server 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get the error Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'. when trying to open the user interface, it implies that your development machine has not had SP1 applied.

    Very occasionally we see a problem with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, SSIS can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to Cannot convert System.Int32 to System.Decimal. Unfortunately this is beyond our control and there appears to be no pattern as to when this happens. If you do have more information we would be happy to hear it. To fix this issue you can manually edit the package file. In Visual Studio, right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name, or by the component name. You can then change the incorrect property data type, highlighted below, from Decimal to Int32.

        <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" ...>
            <properties>
                <property id="38" name="UserComponentTypeName" ...>
                <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
                <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
                ...

    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the component you are using and details of the SSIS tools installed.

    Read the article

  • Recurrence relation solution

    - by Travis
    I'm revising past midterms for a final exam this week and am trying to make sense of a solution my professor posted for one of the past exams. (You can see the original pdf here, question #6). I'm given the recurrence relation T(n) = 3T(n/2) + n and am told T(1) = 1. I'm pretty sure the solution I've been given is wrong in a few places. The solution is as follows:

        Let n = 2^m.
        T(2^m) = 3T(2^(m-1)) + 2^m
        3T(2^(m-1)) = 3^2*T(2^(m-2)) + 2^(m-1)*3
        ...
        3^(m-1)*T(2) = T(1) + 2*3^(m-1)

    I'm pretty sure this last line is incorrect and they forgot to multiply T(1) by 3^m. He then (tries to) sum the expressions:

        T(2^m) = 1 + (2^m + 2^(m-1)*3 + ... + 2*3^(m-1))
               = 1 + 2^m*(1 + (3/2)^1 + (3/2)^2 + ... + (3/2)^(m-1))
               = 1 + 2^m*((3/2)^m - 1)*(1/2)
               = 1 + 3^m - 2^(m-1)
               = 1 + n^(log 3) - n/2

    Thus the algorithm is big Theta of n^(log 3). I'm pretty sure that he also got the summation wrong here. By my calculations this should be as follows:

        T(2^m) = 2^m + 3*2^(m-1) + 3^2*2^(m-2) + ... + 3^m
                 (3^m because 3^m*T(1) = 3^m should be added, not 1)
               = 2^m * ((3/2)^0 + (3/2)^1 + ... + (3/2)^m)
               = 2^m * sum of (3/2)^i from i = 0 to m
               = 2^m * ((3/2)^(m+1) - 1)/(3/2 - 1)
               = 2^m * ((3/2)^(m+1) - 1)/(1/2)
               = 2^(m+1) * 3^(m+1)/2^(m+1) - 2^(m+1)
               = 3^(m+1) - 2*2^m

    Replacing n = 2^m, and from that m = log n:

        T(n) = 3*3^(log n) - 2n

    T(n) is O(3^(log n)), thus the runtime is big Theta of 3^(log n). Does this seem right? Thanks for your help!
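    For what it's worth, the closed form can be checked numerically. A small Python sketch that evaluates the recurrence directly and compares it against 3^(m+1) - 2^(m+1), the result of the second derivation above (equivalently 3*n^(log2 3) - 2n for n = 2^m):

        def T(n):
            # T(1) = 1, T(n) = 3*T(n/2) + n, for n a power of two
            return 1 if n == 1 else 3 * T(n // 2) + n

        for m in range(1, 11):
            n = 2 ** m
            closed = 3 ** (m + 1) - 2 ** (m + 1)
            assert T(n) == closed, (n, T(n), closed)
        print("T(n) = 3^(m+1) - 2^(m+1) for n = 2^m, i.e. Theta(n^log2(3))")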

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events for real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads - for example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture, and a NoSQL cluster can easily be deployed to provide essential services in industry-specific Fast Data solutions, such as a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Reporting a WCF application's status to F5's Big IP products

    - by ng5000
    In a Windows Server 2003 environment with a self-hosted .NET 3.5/WCF application, how can the application report its status to a BigIP Local Traffic Manager? Example: one of my services errors. My custom WCF application hosting software (written because Windows Server 2008 is not yet available and I'm using WCF TCP bindings) detects this and wants to report itself as down until it can recover the errant service. It needs to report itself as down to the BigIP LTM so that it is no longer sent client-originated requests.

    Read the article

  • Big-O for Eight Year Olds?

    - by Jason Baker
    I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the amount of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here? Like what exactly would an O(n^2) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
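    A concrete way to see the difference is to count steps for growing n. The following illustrative Python sketch is made up for this answer: pairs() really performs n^2 comparisons, orderings() really enumerates all n! permutations, while nlogn() just evaluates the step count that a comparison sort would roughly follow.

        import math
        from itertools import permutations

        def pairs(items):
            # O(n^2): compare every element against every element
            return sum(1 for a in items for b in items)

        def nlogn(items):
            # O(n log n): about n units of work at each of log2(n) levels
            n = len(items)
            return int(n * math.log2(n)) if n > 1 else 1

        def orderings(items):
            # O(n!): one step per possible ordering of the input
            return sum(1 for _ in permutations(items))

        for n in (2, 4, 8):
            items = list(range(n))
            print(n, pairs(items), nlogn(items), orderings(items))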

    Read the article

  • big background without scrolls

    - by mkoso
    I have a layout with a background picture that is wider than the content area. I have made a 970px wrapper where the content is, and on the body I have a background image, but I need another background image on top of that body background image, so I have made the class bgimg. So basically the markup is like this: But the bgimg is about 1050px wide, and thus it gives scrollbars when the user's browser is 1024x768. Is there a way of getting rid of the scrollbars? I mean, I do want scrollbars if the user's browser is narrower than the 970px wrapper, of course. So can I put something like overflow hidden on the bgimg class? Hopefully you understand what I mean.

    Read the article

  • Maximizing the number of true concurrent / parallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to do a lot of small http requests to the server. I believe that when the number of allowed concurrent requests is exceeded, the subsequent requests are put in a queue. I am also aware that SL 4 has both an http browser stack and an http client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT. Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure if this applies to the http client stack as well. That means that you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. And I even believe that subdomains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names, so I could easily set up subdomains for my services. Given those considerations, I'm trying to maximize the number of concurrent http requests to my server. Here are a few questions: Is it possible to use both stacks at the same time? Is the subdomain/domain story true for both stacks? Neither? If so, that would mean that I could potentially have a number of concurrent requests equal to (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.

    Read the article

  • Probability Random Number Generator

    - by Excl
    Let's say I'm writing a simple luck game - each player presses Enter and the game assigns him a random number between 1 and 6, just like a die. At the end of the game, the player with the highest number wins. Now, let's say I'm a cheater. I want to write the game so player #1 (which will be me) has a probability of 90% to get six, and 2% to get each of the other numbers (1, 2, 3, 4, 5). So, how can I generate a random number and set the probability for each outcome? Thanks.
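    A minimal sketch of the standard approach in Python: draw a uniform number in [0, 1) and walk the cumulative distribution until it exceeds the draw. (In Python 3.6+, random.choices([1, 2, 3, 4, 5, 6], weights=[2, 2, 2, 2, 2, 90]) does the same thing in one call.)

        import random

        def biased_die():
            # 2% each for faces 1-5, 90% for face 6
            weights = [(1, 0.02), (2, 0.02), (3, 0.02), (4, 0.02), (5, 0.02), (6, 0.90)]
            r = random.random()          # uniform in [0, 1)
            cumulative = 0.0
            for face, p in weights:
                cumulative += p
                if r < cumulative:
                    return face
            return 6                     # guard against floating-point rounding

        rolls = [biased_die() for _ in range(100000)]
        print(rolls.count(6) / len(rolls))  # ~0.9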

    Read the article

  • Need a regular expression for an Irish phone number

    - by Eoghan O'Brien
    I need to validate an Irish phone number, but I don't want to make it too unfriendly for users: many people are used to writing their phone number with brackets wrapping the area code, followed by 5 to 7 digits for the number, and some add spaces between the area code and the number. The format of Irish landline numbers is an area code of between 1 and 4 digits and a number of between 5 and 8 digits, e.g.:

        (021) 9876543
        (01)9876543
        01 9876543
        (0402)39385

    I'm looking for a regular expression for Javascript/PHP. Thanks.
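    As a starting point, the rules above (an optional opening bracket, 0 plus 1 to 3 more digits for the area code, an optional closing bracket, an optional space, then 5 to 8 digits) translate fairly directly into a pattern. A sketch in Python, which matches all four examples but makes no attempt to validate real Irish area codes, and tolerates an unmatched bracket that a stricter version would reject:

        import re

        # \(? optional '('   0\d{1,3} area code   \)? optional ')'   \s? space   \d{5,8} number
        IRISH_PHONE = re.compile(r'^\(?0\d{1,3}\)?\s?\d{5,8}$')

        for sample in ['(021) 9876543', '(01)9876543', '01 9876543', '(0402)39385']:
            print(sample, bool(IRISH_PHONE.match(sample)))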

    Read the article

  • Determine phone number based on time zone?

    - by Zachary Burt
    I have a (potentially international) phone number. It may or may not have a country code prefix. Does anyone know of a database that will help me map phone number → time zone? I would even just like to map phone number → country, since I can probably create a country → time zone mapping by scraping existing data on the web. This is a more complicated problem than it looks; for example, how do I know if it's a US-based number - e.g. is it a USA area code, or an international country calling code? Any language is fine; I can port it.
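    One option, though it postdates this question: the python-phonenumbers port of Google's libphonenumber ships exactly this mapping, including handling the country-code ambiguity via a default region for numbers without a prefix. A sketch, assuming the library is installed (pip install phonenumbers):

        import phonenumbers
        from phonenumbers import geocoder, timezone

        # the default region ("GB" here) disambiguates numbers with no country code
        num = phonenumbers.parse("020 7946 0018", "GB")
        print(geocoder.description_for_number(num, "en"))  # e.g. London
        print(timezone.time_zones_for_number(num))         # e.g. ('Europe/London',)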

    Read the article

  • Tackling Big Data Analytics with Oracle Data Integrator

    - by Irem Radzik
    By Mike Eisterer

    The term big data draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data, however, is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and documents that can be mined for useful information. Companies are facing emerging technologies, increasing data volumes, numerous data varieties and the processing power needed to efficiently analyze data which changes with high velocity. Oracle offers the broadest and most integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships. Oracle Data Integrator Enterprise Edition (ODI) is critical to any enterprise big data strategy. ODI and the Oracle Data Connectors provide native access to Hadoop, leveraging such technologies as MapReduce, HDFS and Hive. Alongside ODI's metadata-driven approach for extracting, loading and transforming data, companies may now integrate their existing data with big data technologies and deliver timely and trusted data to their analytic and decision support platforms. In this session, you'll learn about ODI and Oracle Big Data Connectors and how, coupled together, they provide the critical integration with multiple big data platforms.

    Tackling Big Data Analytics with Oracle Data Integrator, October 1, 2012, 12:15 PM at MOSCONE WEST - 3005.

    For other data integration sessions at OpenWorld, please check our Focus-On document. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources - social networks, telecommunications and web logs, RFID readers, etc. - have to be logically linked, integrated in real time, and processed. But how is this working out in practice? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. What prevails are isolated BI solutions that are already operating at the limits of their capacity. So far, data is being used only to a limited extent as a value-creating resource! The study's findings sound alarming, but companies can turn them to their advantage: whoever tackles Big Data now can secure a profitable head start over the competition. What will the analytics environment of the future look like? How and where can Big Data be used for business success? Answers are provided by the customer event series from Oracle and Oracle Platinum Partner Steria Mummert Consulting, where strategies are developed for how companies can use Information Discovery to expand their BI potential step by step on the way to Big Data.

    Highlights from Munich: we received consistently positive feedback from Munich, the first stop of the event series on 23 July. It wasn't just the great venue, "La Villa" in the Bamberger Haus, that won people over: the 31 participants also took away a great deal of substance, among other things a concrete proposal for their own roadmap towards Big Data. The opening question of the day was both simple and comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo for business intelligence in Europe along the lines of the European biMA® Study 2012/13. Using application examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid Endeca demo showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision-makers can simply rearrange the filters via drag & drop and so get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered an outlook. The bottom line: a successful mix of overview information and very concrete ideas for the customers' specific application areas. The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers. In Hamburg on 14 Aug 2013 - registration. In Frankfurt a.M. on 20 Aug 2013 - registration.

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read many variants today and got some general knowledge, so here are the steps of my thinking, in pictures (horrible paint.net ones). We need a grid system so that we check only things nearby: a simple check to cut out the deep check, and at last a deep check like per-pixel collision.

    Step 1 - Let p1 and p2 be some sprites. Let's first just do a circle collision check. Because of the large distance between p1 and p2 this fails, and of course we don't need to test more deeply. But if we have not 2 but 20 objects, why do we even circle-test something so far outside of our view?

    Step 2 - Add a basic column system: now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do the circle test, which of course will fail.

    Step 3 - Improve the column system into a grid system with a grid cell size just like the p1, p2, p3 collision boxes, so we cut out things far above or below p1. And this is all great until BIG objects come along, some kind of platforms, which are much bigger than a grid cell. The circle test for them will succeed, but the deep check of the whole big object will fail. And that's the part I can't get. How do I store the grid position of a big object? Like 4 grid coordinates for the big object's vertexes? And if one of them is close to p1, do a circle check for the centre of the big object, then a deep one if it succeeds? Am I doing it wrong? My possible solution:
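    One common answer, sketched below in Python under assumptions of my own (axis-aligned bounding boxes, a fixed cell size): register every object in all the grid cells its bounding box overlaps, so a big platform simply occupies many cells, and a query for p1 only inspects the cells p1's own box touches before any circle or per-pixel test.

        from collections import defaultdict

        CELL = 32  # grid cell size, sized like the small sprites' boxes

        def cells_for(box):
            """All (col, row) cells overlapped by an AABB given as (x, y, w, h)."""
            x, y, w, h = box
            for col in range(int(x // CELL), int((x + w) // CELL) + 1):
                for row in range(int(y // CELL), int((y + h) // CELL) + 1):
                    yield (col, row)

        grid = defaultdict(set)

        def insert(name, box):
            for cell in cells_for(box):
                grid[cell].add(name)     # a big object lands in many cells

        def candidates(name, box):
            near = set()
            for cell in cells_for(box):
                near |= grid[cell]
            near.discard(name)
            return near                  # only these need circle/deep checks

        insert("platform", (0, 64, 300, 16))   # big object spanning ~10 cells
        insert("p1", (40, 50, 32, 32))
        print(candidates("p1", (40, 50, 32, 32)))  # {'platform'}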

    Read the article
