Search Results

Search found 9138 results on 366 pages for 'big daddy'.

Page 23/366

  • Wednesday at Oracle OpenWorld 2012 - Must See Session: “Event-Driven Patterns and Best Practices: Even More Important with Big Data”

    - by Lionel Dubreuil
    Don’t miss this session: "CON8636 - Event-Driven Patterns and Best Practices: Even More Important with Big Data". Speakers: Faisal Nazir - Senior Solutions Architect, Motorola; Shinichiro Takahashi - Senior Manager, Service Platform Department, NTT DOCOMO, INC.; Robin Smith - Product Management/Strategy Director, Oracle Event Processing, Oracle. Date: Wednesday, Oct 3. Time: 10:15 AM - 11:15 AM. Location: Moscone South - 310. As the demand for big data analytics and integration grows across all industries, this session focuses on the role of the Oracle event-driven solution platform in delivering vital real-time integrated analysis intelligence to the data streams consumed and emitted from these large distributed data stores. Objectives for this session are to: increase awareness of Oracle Event Processing, showcasing tight alignment with big data solutions; highlight emerging usage patterns in relation to streaming event data and distributed data stores; and show a significant Oracle competitive advantage over IBM solutions advertised in this domain.

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n²), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n: f(int n) { int i; for (i = 0; i < n; ++i) printf("%d", i); } Similarly, if there was a nested loop, the time would be O(n²). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log₁₀ 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
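
    The usual intuition, as a minimal sketch of my own (plain C, not taken from the question or its answers): an algorithm is O(log n) when each step cuts the remaining work by a constant factor, so doubling the input adds only one more step.

    #include <stdio.h>

    /* Counts how many times n can be halved before reaching 1.
       The count is floor(log2(n)), so the loop body runs O(log n) times --
       which is also the height of a complete binary tree with n nodes. */
    int halving_steps(int n) {
        int steps = 0;
        while (n > 1) {
            n /= 2;          /* the problem size shrinks by a constant factor */
            steps++;
        }
        return steps;
    }

    int main(void) {
        /* doubling the input adds only one more step: the hallmark of O(log n) */
        for (int n = 1; n <= 1024; n *= 2)
            printf("n = %4d -> %d steps\n", n, halving_steps(n));
        return 0;
    }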

    Read the article

  • Runtime of optimized Primehunter

    - by Setton
    OK, so I need some serious runtime help here! This method should take in an int value, check its primality, and return true if the number is indeed prime. I understand why the loop only needs to run while i * i <= n, and I understand that the worst case is the one in which the number is prime (or has no small factor). But I don't understand how to quantify the actual runtime. I have traced the loop by hand to try to understand the correlation between the number n and how many iterations occur, but I literally feel like I keep falling into the same trap every time. I need a new way of thinking about this! I have a hint: "Think about the SIZE of the integer", which makes me want to relate the number of digits in n (floor(log n) + 1) to how many iterations the for loop does. BUT IT'S NOT WORKING?! I KNOW it isn't square root n, obviously. I'm asking for Big O notation. public class PrimeHunter { public static boolean isPrime(int n) { boolean answer = (n > 1) ? true : false; //runtime = linear runtime for (int i = 2; i * i <= n; i++) //runtime = ????? { if (n % i == 0) //doesn't occur if it is a prime { answer = false; break; } } return answer; //runtime = linear runtime } }
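
    Not an official answer, just a small experiment I'd run to see the bound (assuming the trial-division loop dominates the cost): the loop condition is i * i <= n, so for a prime n it does about sqrt(n) iterations, which is roughly 10^(d/2) if d is the number of decimal digits of n.

    #include <math.h>
    #include <stdio.h>

    /* Counts the iterations of the trial-division loop in isPrime for a given n.
       For prime n the loop never breaks early, so it runs until i*i > n,
       i.e. roughly sqrt(n) times: the worst case is O(sqrt(n)). */
    static int trial_division_steps(int n) {
        int steps = 0;
        for (int i = 2; i * i <= n; i++) {
            steps++;
            if (n % i == 0)
                break;              /* composite: the loop stops early */
        }
        return steps;
    }

    int main(void) {
        int primes[] = {101, 10007, 1000003, 1000000007};   /* all prime */
        for (int k = 0; k < 4; k++) {
            int n = primes[k];
            printf("n = %10d  steps = %6d  sqrt(n) = %9.0f\n",
                   n, trial_division_steps(n), sqrt((double)n));
        }
        return 0;
    }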

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: FOR AN EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE DJMXZ370. Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Big Data with CEP and SOA - September 25, 2012 - 14:15. Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent. The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, Distributed Caching, Event-Driven Network, SOA Composites, Application Development Framework, and Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, and distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture. Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of "The BPEL Cookbook" (rated best book on Service-Oriented Architecture in 2007) as well as "Fusion Middleware Patterns". Previously, he was a management consultant with Booz Allen & Hamilton's High Tech practice in Duesseldorf and San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Master's in Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France. KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS: Cloud Computing Architecture & Patterns; New SOA & Service-Orientation Practices & Models; Emerging Service Technology Innovation; Service Modeling & Analysis Techniques; Service Infrastructure & Virtualization; Cloud-based Enterprise Architecture; Business Planning for Cloud Computing Projects; Real World Case Studies; Semantic Web Technologies (with & without the Cloud); Governance Frameworks for SOA and/or Cloud Computing Projects; Service Engineering & Service Programming Techniques; Interactive Services & the Human Factor; New REST & Web Services Tools & Techniques. Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills by certifications and customer references. To find a local Specialized partner, please visit http://solutions.oracle.com. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Markus Zirn, SOA Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • How do you react to non-programmers with ideas of 'The Next Big Thing'?

    - by jiceo
    Recently, and quite often, people with no programming background come and say they have this great idea that could be the next big thing, and that the idea is worth a fortune by itself. Then, since they know I'm a programmer, they ask me if I'm willing to "code it up" for them, or find someone willing to do it for next to nothing. Judging from the enthusiasm, it's like they're drunk on their idea and believe that the idea by itself is the most important thing; they just need a programmer. My response to them, depending on my mood and their general attitude towards what we do, is something along the lines of: "Having the core of an idea is one thing. Developing it to the point that it becomes a platform that changes the world in which it lives is another, and you're going to have to pay in proportion to how big you think your idea is." Have you been approached by these business-type entrepreneurs (with no technical/developer knowledge) with such a proposal, and how do you react to them?

    Read the article

  • I prefer C/C++ over Unity and other tools: is it such a big downer for a game developer?

    - by jokoon
    We have a big game project using Unity at school. There are 12 of us working on it. My teacher seems to be convinced it's an important tool to teach students, since it makes students look from the high level down to the lower level. I can understand his view, and I'm wondering: Is Unity such an important engine in game development companies? Are there a lot of companies using it because they can't afford to use something else? He talks as if Unity were a big player in game making, but I only see it fitting small indie game companies who want to make a game as fast as possible. Do you think Unity is that important in the industry? Does it endanger the value of C++ skills? It's not that I don't like Unity; it's just that I don't learn anything with it, and I prefer to achieve little steps with Ogre or SFML instead. We also have C++ practice exercises, but those are just practice with theory, nothing more.

    Read the article

  • What are some concepts people should understand before programming "big" projects?

    - by Abafei
    A person new to programming may be able to write a good small program. However, when starting to work on anything bigger than a small program (think one C source file or one Python module), there are some general concepts that become much more important when working on "big" programs (think many Python modules or C files); one example is modularity, another is having a set aim. Some of these may be obvious to people who went to school to learn programming; however, people like me who did not take programming classes sometimes have to learn these things from experience, possibly creating failed projects in the meantime. Please explain what the concept is, and why it becomes more important for big programs than for small ones. Please give only one concept per answer.

    Read the article

  • IBM: "computers will be able to see, smell, touch, taste and hear" within 5 years, Big Blue delivers its "5 in 5" predictions

    IBM: "computer systems will be able to see, smell, touch, taste and hear" within 5 years; Big Blue delivers its "Five in Five" predictions. As is customary at the end of each year, IBM has just delivered its five predictions about how technology will evolve over the next five years. At its "Five in Five" event, Big Blue presented its vision of a future in which computing devices are endowed with the five senses: they will be able to see, smell, touch, taste and hear. Touch: a phone will be able to reproduce the sensation of touch. Today, the haptic and graphics technologies used in the field ...

    Read the article

  • Split a big folder (and its contents) into several small new folders?

    - by David
    Hello, do you happen to know of a program for Windows (Vista) which can easily split a big folder (and its contents) into several small new folders? Notes: the new small folders should be independent; the "file splitters" I have already tested unfortunately don't do the "folder splitter" job ;( Example: I have a main big folder C:\big\ (size: about 3 GB). The result I need: C:\big01\ (size: about 200 MB), C:\big02\ (size: about 200 MB), etc... Thanks in advance ;)

    Read the article

  • WCF Service behind F5 Bigip endpoint issue

    - by John
    We have WCF services hosted on IIS on 4 web servers; these web servers are behind the F5 BIG-IP load-balancing system with an SSL accelerator. When the client calls the service it'll be calling https://myserver.com/myservices.svc, but the actual .svc will be on different servers, like http://web1.myserver.com/myservices.svc, http://web2.myserver.com/myservices.svc, etc. How do we handle this issue?

    Read the article

  • WCF, Metadata and BIGIP - Can I force the correct url for the WSDL items?

    - by Yossi Dahan
    We have a WCF service hosted on ServerA, which is a server with no direct Internet access and a non-Internet-routable IP address. The service is fronted by BIG-IP, which handles SSL encryption and decryption and forwards the unencrypted request to ServerA on a specific port (at the moment it does NOT actually do any load balancing, but that is likely to be added in the future). What that means is that our clients would be calling the service through https://www.OurDomain.com/ServiceUrl and would get to our service on http://ServerA:85/ServiceUrl through the BIG-IP device. When we browse to the WSDL published on https://www.OurDomain.com/ServiceUrl, all the addresses contained in the WSDL are based on the http://ServerA:85/ServiceUrl base address. We figured out that we could use the host headers setting to set the domain, but our problem is that while this would sort out the domain, we would still be using the wrong scheme: it would use http://www.OurDomain.com/ServiceUrl while we need it to be https. Also, as we have other services (asmx based) hosted on that server, we had some issues setting the host headers, and so we thought we could get away with creating another site on the server (using, say, port 82) and setting the host header on that; now, on top of the http/https problem, we have an issue because the WSDL contains the port number in all the URLs, whereas BIG-IP works on port 443 (for SSL). Is there a more flexible solution than implementing host headers? Ideally we need to retain flexibility and ease of supportability. Thanks for any help…

    Read the article

  • Are there any worse sorting algorithms than Bogosort (a.k.a Monkey Sort)?

    - by womp
    My co-workers took me back in time to my university days with a discussion of sorting algorithms this morning. We reminisced about our favorites like StupidSort, and one of us was sure we had seen a sorting algorithm that was O(n!). That got me started looking around for the "worst" sorting algorithms I could find. We postulated that a completely random sort would be pretty bad (i.e. randomize the elements - is it in order? no? randomize again), and I looked around and found out that it's apparently called BogoSort, or Monkey Sort, or sometimes just Random Sort. Monkey Sort appears to have a worst-case performance of O(∞), a best-case performance of O(n), and an average performance of O(n * n!). Are there any named algorithms that have worse average performance than O(n * n!), or that are just sillier than Monkey Sort in general?
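
    For readers who haven't seen it, here is a minimal sketch of the algorithm being described (plain C, my own illustration, not taken from the question): shuffle until the array happens to be sorted.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Returns 1 if a[0..n-1] is in non-decreasing order. */
    static int is_sorted(const int *a, int n) {
        for (int i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return 0;
        return 1;
    }

    /* Fisher-Yates shuffle. */
    static void shuffle(int *a, int n) {
        for (int i = n - 1; i > 0; i--) {
            int j = rand() % (i + 1);
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    /* Bogosort: keep shuffling until the array happens to be sorted.
       With distinct keys the expected number of shuffles is about n!,
       and each shuffle plus check costs O(n): hence the O(n * n!) average. */
    static void bogosort(int *a, int n) {
        while (!is_sorted(a, n))
            shuffle(a, n);
    }

    int main(void) {
        int a[] = {5, 2, 9, 1, 7};   /* keep it tiny: 10 elements is already painful */
        srand((unsigned)time(NULL));
        bogosort(a, 5);
        for (int i = 0; i < 5; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }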

    Read the article

  • Linear complexity and quadratic complexity

    - by jasonline
    I'm just not sure... If you have code that can be executed with either of the following complexities: (1) a sequence of O(n) passes, for example two O(n) loops in sequence, or (2) O(n²). The preferred version would be the one that can be executed in linear time. But would there be a point at which the sequence of O(n) passes would be too much, so that O(n²) would be preferred? In other words, is the statement C x O(n) < O(n²) always true for any constant C? Why or why not? What are the factors that would affect the conditions such that it would be better to choose the O(n²) complexity?
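
    A small worked note on where the crossover sits (treating C as the literal number of linear passes and ignoring the constant factors hidden inside each O):

    C \cdot n < n^2 \iff n > C,
    \qquad\text{so } C \cdot n = o(n^2) \text{ as } n \to \infty,
    \text{ but for inputs with } n < C \text{ the quadratic version can be cheaper.}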

    Read the article

  • Best way to do powerOf(int x, int n)?

    - by Mike
    So, given x and a power n, compute x^n. There's the easy way that's O(n)... I can get it down to O(n/2) by doing numSquares = n/2; numOnes = n%2; return (numSquares * x * x + numOnes * x); Now there's a log(n) solution; does anyone know how to do it? It can be done recursively.
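
    A hedged sketch of the standard O(log n) technique, exponentiation by squaring (my own illustration; it assumes n >= 0 and ignores overflow):

    #include <stdio.h>

    /* Exponentiation by squaring: x^n in O(log n) multiplications.
       x^n = (x^(n/2))^2       if n is even
       x^n = x * (x^(n/2))^2   if n is odd                          */
    static long long power_of(long long x, int n) {
        if (n == 0)
            return 1;
        long long half = power_of(x, n / 2);   /* one recursive call, n halves each time */
        return (n % 2 == 0) ? half * half : x * half * half;
    }

    int main(void) {
        printf("%lld\n", power_of(2, 10));   /* 1024 */
        printf("%lld\n", power_of(3, 7));    /* 2187 */
        return 0;
    }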

    Read the article

  • How to analyze the efficiency of this algorithm Part 2

    - by Leonardo Lopez
    I found an error in the way I explained this question before, so here it goes again: FUNCTION SEEK(A,X) 1. FOUND = FALSE 2. K = 1 3. WHILE (NOT FOUND) AND (K < N) a. IF (A[K] = X) THEN 1. FOUND = TRUE b. ELSE 1. K = K + 1 4. RETURN Analyzing this algorithm (pseudocode), I can count the number of steps it takes to finish and analyze its efficiency in theta notation, T(n): a linear algorithm. OK. The following code depends on the formulas inside the loop in order to finish; the deal is that there is no variable N in the code, therefore the efficiency of this algorithm will always be the same, since we're assigning the value of 1 to both the A and B variables: 1. A = 1 2. B = 1 3. UNTIL (B > 100) a. B = 2A - 2 b. A = A + 3 Now I believe this algorithm performs in constant time, always. But how can I use algebra in order to find out how many steps it takes to finish?
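
    One way to do the algebra (a sketch; it assumes the UNTIL condition is tested after each pass through the body): write A and B as functions of the pass count k and solve for the first k that satisfies the exit condition.

    A_k = 1 + 3k, \qquad
    B_k = 2A_{k-1} - 2 = 2\bigl(1 + 3(k-1)\bigr) - 2 = 6(k-1)
    \\
    6(k-1) > 100 \;\Longleftrightarrow\; k > \tfrac{106}{6} \approx 17.7
    \;\Longrightarrow\; k = 18 \text{ passes, independent of any input, i.e. } \Theta(1).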

    Read the article

  • How is schoolbook long division an O(n^2) algorithm?

    - by eSKay
    Premise: This Wikipedia page suggests that the computational complexity of schoolbook long division is O(n^2). Deduction: Instead of taking "two n-digit numbers", if I take one n-digit number and one m-digit number, then the complexity would be O(n*m). Contradiction: Suppose you divide 100000000 (n digits) by 1000 (m digits); you get 100000, which takes six steps to arrive at. Now, if you divide 100000000 (n digits) by 10000 (m digits), you get 10000, and this takes only five steps. Conclusion: So it seems that the order of computation should be something like O(n/m). Question: Who is wrong, me or Wikipedia, and where?
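
    A worked way to reconcile the two views (a sketch; it counts single-digit operations, as the Wikipedia figure does): the five or six "steps" above are quotient digits, and producing each quotient digit costs a trial multiplication and a subtraction on roughly m-digit numbers.

    \text{quotient digits} \approx n - m + 1, \qquad \text{work per digit} = O(m)
    \\
    \Longrightarrow\; T(n, m) = O\bigl((n - m + 1)\, m\bigr) \subseteq O(n\,m) \subseteq O(n^2).

    So the step count alone does fall as m grows, matching the observation, while the O(n^2) quoted on Wikipedia is the worst-case bound expressed in terms of the input length alone.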

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows: merge :: Tree a -> Tree a -> Tree a --// O(log n) where n is size of input trees singleton :: a -> Tree a --// O(1) empty :: Tree a --// O(1) listToTree :: [a] -> Tree a --// Supposedly O(n) listToTree = listToTreeR . (map singleton) listToTreeR :: [Tree a] -> Tree a listToTreeR [] = empty listToTreeR (x:[]) = x listToTreeR xs = listToTreeR (mergePairs xs) mergePairs :: [Tree a] -> [Tree a] mergePairs [] = [] mergePairs (x:[]) = [x] mergePairs (x:y:xs) = merge x y : mergePairs xs This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list, and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step, and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)) I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
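
    One possible way to finish the bound (a sketch under the stated assumption that merging two trees of size s costs O(log s)): at recursion level k there are about n/2^k merges, each on trees of size 2^(k-1) and hence costing O(k), and the weighted sum collapses to O(n) because the series of k/2^k converges.

    T(n) \;\le\; \sum_{k=1}^{\lceil \log_2 n \rceil} \frac{n}{2^{k}} \cdot c\,k
    \;\le\; c\,n \sum_{k=1}^{\infty} \frac{k}{2^{k}}
    \;=\; 2\,c\,n \;=\; O(n).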

    Read the article

  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ...?

    - by HH
    It takes about a minute to reach term 3000 on my computer, but I need to know the millionth number in the series. The definition is recursive, so I cannot see any shortcuts except to calculate everything before the millionth number. How can you quickly calculate the millionth number in the series? Series definition: n_{i+1} = floor(3/2 * n_{i}) and n_{0} = 2. Interestingly, only one site lists the series according to Google: this one. Too-slow Bash code: #!/bin/bash function serie { n=$( echo "3/2*$n" | bc -l | tr '\n' ' ' | sed -e 's@\\@@g' -e 's@ @@g' ); # bc gives \ at very large numbers, sed-tr for it n=$( echo $n/1 | bc ) #DUMMY FLOOR func } n=2 nth=1 while [ true ]; #$nth -lt 500 ]; do serie $n # n gets new value in the function through the global variable echo $nth $n nth=$( echo $nth + 1 | bc ) #n++ done
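
    A minimal sketch of the direct computation in C with the GMP bignum library (an assumption on my part that the exact term is wanted: the millionth term has roughly 10^6 * log10(1.5) ≈ 176,000 decimal digits, so arbitrary precision is unavoidable, but each update is linear in the current length and the whole run should finish in minutes at most):

    #include <stdio.h>
    #include <gmp.h>

    /* n_{i+1} = floor(3 * n_i / 2), n_0 = 2.  Compile with: gcc serie.c -lgmp */
    int main(void) {
        mpz_t n;
        mpz_init_set_ui(n, 2);                  /* n_0 = 2 */
        for (long i = 0; i < 1000000; i++) {
            mpz_mul_ui(n, n, 3);                /* n = 3 * n        */
            mpz_fdiv_q_ui(n, n, 2);             /* n = floor(n / 2) */
        }
        printf("decimal digits: %zu\n", mpz_sizeinbase(n, 10));  /* may over-count by 1 */
        gmp_printf("%Zd\n", n);                 /* the millionth term itself */
        mpz_clear(n);
        return 0;
    }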

    Read the article
