Search Results

Search found 20573 results on 823 pages for 'pretty big scheme'.

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut.

    Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud, and more specifically Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs, using Oracle SOA Suite. It's an important element of our strategy at Oracle: whether your requirements are for private or public cloud, Oracle has a solution for all of your applications, and we give you more deployment choice than any other vendor.

    If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one, Fast Data. Today Rizvi made an important step in the definition of this term, explaining that he believes it's a convergence of four essential technology elements:

    - event processing for event filtering and business rules, with Oracle Event Processing
    - data transformation and loading, with Oracle Data Integrator
    - real-time replication and integration, with Oracle GoldenGate
    - analytics and data discovery, with Oracle Business Intelligence

    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured or semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity, with greater results. Fast data processing (and especially real-time processing) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster: the recent 11g R2 release of Oracle GoldenGate brings even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo; check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: on-demand versions are available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

  • Developing an analytics system processing large amounts of data - where to start

    - by Ryan
    Imagine you're writing some sort of web analytics system: you're recording raw page hits along with some extra things like tagging cookies etc., and then producing stats such as:

    - which pages got the most traffic over a time period
    - which referrers sent the most traffic
    - goals completed (a goal being a view of a particular page)

    and more advanced things, like which referrers sent the largest number of visitors who later hit a goal. The naive way of approaching this would be to throw it all in a relational database and run queries over it, but that won't scale. You could pre-calculate everything (keep a queue of incoming hits and use it to update report tables), but what if you later change a goal - how could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods and examples, architecture, technologies etc.
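
    A minimal sketch of the pre-calculation idea mentioned above, in Java (the names here are invented for illustration; a real system would persist the counters and keep the raw hits as the source of truth):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class HitAggregator {
            // (page, hour bucket) -> hit count; a real system would keep this in a store
            private final Map<String, Long> pageHourCounts = new ConcurrentHashMap<>();

            void record(String page, long epochMillis) {
                long hourBucket = epochMillis / 3_600_000L;   // truncate to the hour
                pageHourCounts.merge(page + "|" + hourBucket, 1L, Long::sum);
            }

            long trafficFor(String page, long hourBucket) {
                return pageHourCounts.getOrDefault(page + "|" + hourBucket, 0L);
            }
        }

    Because the raw hits are retained, redefining a goal only means re-running the affected aggregation over the stored hits, not recomputing every report table.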

  • single for-loop runtime explanation problem

    - by owwyess
    I am analyzing the running times of different for-loops, and as I gain more knowledge I'm curious to understand this problem, which I have still yet to figure out. I have this exercise called "How many stars are printed":

        for (int i = N; i > 1; i = i/2)
            System.out.println("*");

    The answers to pick from are:

    A: ~log N
    B: ~N
    C: ~N log N
    D: ~0.5N^2

    So the answer should be A, and I agree with that, but on the other side... let's say N = 500. What would log N then be? It would be 2.7. So what if we say that N = 500 in our exercise above? That would most definitely print more than 2.7 stars. How is that related? Because it makes sense to say that if the for-loop looked like this:

        for (int i = 0; i < N; i++)

    it would print N stars. I hope to find an explanation for this here; maybe I'm interpreting all these things wrong and thinking about it in a bad way. Thanks in advance.
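
    For what it's worth, the discrepancy above is just the base of the logarithm: the loop halves i on every pass, so it runs about log2(N) times, and log2(500) is roughly 9; the 2.7 comes from taking log base 10. A throwaway check in Java (not part of the original exercise):

        public class StarCount {
            public static void main(String[] args) {
                int n = 500, stars = 0;
                for (int i = n; i > 1; i = i / 2)
                    stars++;                                   // one star per halving
                System.out.println(stars);                     // prints 8
                System.out.println(Math.log(n) / Math.log(2)); // ~8.97
            }
        }

    Inside big-O (or tilde) notation the base only changes a constant factor, which is why the answer is written simply as ~log N.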

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example), where each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also I would like to know if hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms we have thought of so far: hashing (using a ConcurrentHashMap), merge-sorting the files, or a DB (MySQL, Hadoop, or Redis). The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8, for a total of 14 columns. We would like the solution to be the fastest in regard to data processing.
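
    For sizes like these, the simplest viable approach is a single hash map keyed on the full key string, which sidesteps the collision question entirely because nothing is truncated to a 64-bit hash. A rough sketch in Java, assuming well-formed input where the first keyLen columns form the key (names invented for illustration):

        import java.io.*;
        import java.util.*;

        class CsvAggregator {
            public static void main(String[] args) throws IOException {
                final int keyLen = 2;                      // e.g. country,city
                Map<String, long[]> totals = new HashMap<>();
                String header = null;
                for (String path : args) {                 // one pass per input file
                    try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                        header = in.readLine();            // remember the header row
                        String line;
                        while ((line = in.readLine()) != null) {
                            String[] cols = line.split(",");
                            String key = String.join(",", Arrays.copyOfRange(cols, 0, keyLen));
                            long[] sums = totals.computeIfAbsent(key,
                                    k -> new long[cols.length - keyLen]);
                            for (int i = keyLen; i < cols.length; i++)
                                sums[i - keyLen] += Long.parseLong(cols[i]);
                        }
                    }
                }
                System.out.println(header);
                for (Map.Entry<String, long[]> e : totals.entrySet()) {
                    StringBuilder row = new StringBuilder(e.getKey());
                    for (long v : e.getValue()) row.append(',').append(v);
                    System.out.println(row);
                }
            }
        }

    If the number of distinct keys outgrows memory, external merge-sort on the key (the second option the question lists) is the natural fallback, since sorted files can be merged and summed in one streaming pass.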

  • Vector with Constant-Time Remove - still a Vector?

    - by Darrel Hoffman
    One of the drawbacks of most common implementations of the Vector class (or ArrayList, etc.; basically any array-backed expandable list class) is that their remove() operation generally runs in linear time: if you remove an element, you must shift all the elements after it one space back to keep the data contiguous. But what if you're using a Vector just as a list-of-things, where the order of the things is irrelevant? In this case removal can be accomplished in a few simple steps:

    1. Swap the element to be removed with the last element in the array.
    2. Reduce the size field by 1. (No need to re-allocate the array, as the deleted item is now beyond the size field and thus not "in" the list any more. The next add() will just overwrite it.)
    3. (Optional) Delete the last element to free up its memory. (Not needed in garbage-collected languages.)

    This is clearly now a constant-time operation, since it only performs a single swap regardless of size. The downside, of course, is that it changes the order of the data, but if you don't care about the order, that's not a problem. Could this still be called a Vector? Or is it something different? It has some things in common with "Set" classes in that the order is irrelevant, but unlike a Set it can store duplicate values. (Also, most Set classes I know of are backed by a tree or hash map rather than an array.) It also bears some similarity to Heap classes, although without the log(N) percolate steps, since we don't care about the order.
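
    The steps above are commonly called swap-remove (or swap-and-pop). A minimal sketch in Java over a plain array, assuming a garbage-collected setting:

        class Bag<T> {
            private Object[] items = new Object[8];
            private int size = 0;

            void add(T item) {
                if (size == items.length)
                    items = java.util.Arrays.copyOf(items, size * 2);
                items[size++] = item;
            }

            @SuppressWarnings("unchecked")
            T removeAt(int index) {              // O(1), does not preserve order
                T removed = (T) items[index];
                items[index] = items[--size];    // move the last element into the hole
                items[size] = null;              // let the GC reclaim the slot
                return removed;
            }
        }

    Collection libraries usually file this structure under "bag" or "multiset" rather than "vector", since iteration order stops being meaningful after the first removal.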

  • how to disable all system sounds - permanently

    - by studiohack
    I prefer the sound scheme of Windows 7 Home Premium to be set to "No Sounds" ALL the time. The problem is that when I switch themes, the theme switches the sound scheme as well. I know I can go in and change it back to "No Sounds" by hand every time, but is there a way to turn the sounds off and keep them off, regardless of the theme? Thanks!

  • OSS App Hackathon @ National Information Society Agency

    - by Edward J. Yoon
    Yesterday there was an OSS App Hackathon arranged by the NIA (National Information Society Agency) in Seoul. I attended as one of the judges, together with Prof. Lee of NEXT, the NHN university. A lot of people were there. You can read more details (Korean news) here: http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=105&oid=138&aid=0001997038

  • questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime algorithm, I stumbled upon this:

        public static boolean isP(long n) {
            if (n == 2 || n == 3) return true;
            if ((n & 0x1) == 0 || n % 3 == 0 || n < 2) return false;
            long root = (long) Math.sqrt(n) + 1L;
            // we check just numbers of the form 6*k+1 and 6*k-1
            for (long k = 6; k <= root; k += 6) {
                if (n % (k - 1) == 0) return false;
                if (n % (k + 1) == 0) return false;
            }
            return true;
        }

    My questions are:

    1. Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX_VALUE, thus making the method more flexible?
    2. In the second 'if', is n & 0x1 the same as n % 2? If so, why didn't the author just use n % 2? To me it's more readable.
    3. On the line that sets the 'root' variable, why add the 1L?
    4. What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?

  • Read how a customer uses Oracle NoSQL Database

    - by Jean-Pierre Dijcks
    For those who had the pleasure of being in SF for Oracle OpenWorld, you might have seen or heard about this story already. If you did not, here is a great story on how to use Oracle NoSQL Database. Apart from all the cool technology, I'm just excited that this is a company founded by a football international and dealing with sports data, games and other cool things: an all-things-cool combo in one place.

  • Can you work for the big (Google, Microsoft, Facebook, etc.) without getting too involved?

    - by Developer Art
    Having seen people talking about interviewing and working for the big companies, I keep wondering how much you are expected to actually get involved there.

    1) I keep seeing folks from Google and Microsoft and others writing in forums, blogging, tweeting, speaking at conferences, and seemingly doing this on a 24/7/365 basis from their office, apartment, hotel and even plane. Are you really expected to commit that much if you come to work for them? Do they want you to think about your work while you're eating, sleeping, taking a shower, making love and so on? Can you in fact "switch off" at five and go home, forgetting everything? Perhaps you have a hobby, family life, kids, friends, personal projects? Is it so that if you work for the big then you're expected not to have any life outside of the company? You can't develop your own projects, have your own clients, and just have another life?

    2) The other thing is the work contracts the big use. I've heard, for instance, that when you join Microsoft you need to provide a list of projects you're currently working on, and after that anything new you come up with during your employment automatically belongs to the company. Are all of the big doing this? Can you refuse to sign a contract until such a clause is removed, or is it "take it or leave it" with the big, because the legal department won't accept any change? Can you make them write the contract so that they step away from anything you've developed in your private time?

    Of all the big I have only been at SAP, during my internship. Lately, while browsing through old papers, I found my old contract, which stipulated that they owned everything I developed or invented during my employment; I would never sign that these days. On a side note, I don't think I would return to SAP, since I remember most people there were clueless and gave the impression they were simply sitting out their years waiting for retirement. But anyway, what do the other big put in their contracts? How far do you get involved when you go working for the big? Or are you fully committed, body and soul?

    P.S. I'm not planning to join any of them, I'm just curious.

  • Using replacement to get possible outcomes to then search through a HUGE amount of data

    - by Samuel Cambridge
    I have a database table holding 40 million records (table A). Each record has a string a user can search for. I also have a table with a list of character replacements (table B), i.e. i = Y, I = 1, etc. I need to be able to take the string a user is searching for, iterate through each letter, and create an array of every possible outcome (the user's string, then each outcome with the alternative letters used). I need to check for alternatives on both lower- and uppercase letters in the word. A search string can be no longer than 10 characters. I'm using PHP and a MySQL database. Does anyone have any thoughts / articles / guidance on doing this in an efficient way?
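
    The expansion itself is a short recursion: at each position, branch on the original character plus whatever alternatives table B offers for it. A sketch of the idea (in Java rather than the asker's PHP, with an invented in-memory map standing in for table B):

        import java.util.*;

        class VariantExpander {
            // hypothetical stand-in for table B: character -> alternative characters
            static final Map<Character, String> ALT = Map.of('i', "Y", 'I', "1");

            static List<String> expand(String s) {
                List<String> out = new ArrayList<>();
                expand(s.toCharArray(), 0, out);
                return out;
            }

            private static void expand(char[] buf, int pos, List<String> out) {
                if (pos == buf.length) {           // every position decided
                    out.add(new String(buf));
                    return;
                }
                char original = buf[pos];
                expand(buf, pos + 1, out);         // keep the original character
                for (char alt : ALT.getOrDefault(original, "").toCharArray()) {
                    buf[pos] = alt;                // try each replacement
                    expand(buf, pos + 1, out);
                }
                buf[pos] = original;               // restore before backtracking
            }
        }

    Note the blow-up: with r alternatives per character, a 10-character string can expand to (r+1)^10 variants, so it is often cheaper to store a normalized (replacement-applied) copy of the 40 million strings in an indexed column and normalize the user's input once, rather than generating every variant per search.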

  • Oracle Subscribes To The Big Data Journal: So Can You!

    - by Roxana Babiciu
    Oracle Product Development has funded access to the Big Data Journal for all Oracle employees. Big Data is a highly innovative, open-access, peer-reviewed journal of world-class research exploring the challenges and opportunities in collecting, analyzing, and disseminating vast amounts of data; this includes data science, big data infrastructure and analytics, and pervasive computing. Register here to receive Big Data articles online, or sign up for the table-of-contents alert or the RSS feed.

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of the files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast-access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes just to access a single file. Anyway, any trick to solve this problem is welcome (but single archive files are a requirement, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing the archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing the archive contents takes time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory takes (filesystem permitting) time linear in the target size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB archive, I'd really like it to take less than a few minutes, if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance would be achievable if the archive contained an index of all the files with their respective offsets, and the index were well organized (e.g. a tree structure).
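
    One workaround that keeps plain tar as the container is building the external index yourself: tar headers are 512-byte blocks with the member name at offset 0 and the size as octal text at offset 124, so one sequential pass can record every member's offset for later seeking. A rough sketch in Java, assuming an uncompressed ustar archive without GNU long-name extensions:

        import java.io.*;

        class TarIndexer {
            public static void main(String[] args) throws IOException {
                try (RandomAccessFile tar = new RandomAccessFile(args[0], "r")) {
                    byte[] header = new byte[512];
                    long offset = 0;
                    while (tar.read(header) == 512 && header[0] != 0) {
                        String name = field(header, 0, 100);
                        long size = Long.parseLong(field(header, 124, 12).trim(), 8);
                        System.out.println(offset + "\t" + size + "\t" + name);
                        long dataBlocks = (size + 511) / 512;  // data is padded to 512-byte blocks
                        offset += 512 + dataBlocks * 512;
                        tar.seek(offset);                      // jump to the next header
                    }
                }
            }

            private static String field(byte[] buf, int off, int len) {
                int end = off;
                while (end < off + len && buf[end] != 0) end++;   // fields are NUL-terminated
                return new String(buf, off, end - off);
            }
        }

    With that index saved next to the archive, extracting one member becomes a seek plus a bounded read instead of a full scan.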

  • Java: very slow Tomcat and too big WAR file

    - by NaN
    I created a sort of RESTful API backend for a mobile app. It's written completely in Java, using Jersey as the framework. At the moment no database is used; it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from Digital Ocean and installed Tomcat 7. All in all Tomcat works, but I have three major problems:

    1) It takes a long time for Tomcat to deploy the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excluding WAR upload time).

    2) The WAR files are quite big (16 MB): I don't know why they are so big. There are no database dependencies and most of the logic is written in plain Java. Okay, we are using Jersey, but 16 MB is a lot for the logic of a small web service.

    3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar. If the app runs for a few days, the response time gets quite high and the server seems frozen. It works again if I restart Tomcat over SSH.

    You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?

  • How to implement continuations?

    - by Kyle Cronin
    I'm working on a Scheme interpreter written in C. Currently it uses the C runtime stack as its own stack, which is presenting a minor problem with implementing continuations. My current solution is manually copying the C stack to the heap, then copying it back when needed. Aside from not being standard C, this solution is hardly ideal. What is the simplest way to implement continuations for Scheme in C?

  • Python Macros: Use Cases?

    - by Rick Copeland
    If Python had a macro facility similar to Lisp/Scheme (something like MetaPython), how would you use it? If you are a Lisp/Scheme programmer, what sorts of things do you use macros for (other than things that have a clear syntactic parallel in Python such as a while loop)?

  • Moving the Users folder on Windows 7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here).

    I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and ProgramData on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562

    I was planning a similar setup for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and ProgramData on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and ProgramData to a different partition than the primary partition holding the OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer about moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, but not anymore with Windows 7: "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is. (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.)

    Long story short, I share the view Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2 TB USB 2.0/eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as the backup solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) for Windows 7. I am willing to put in the effort if placing Users and ProgramData on a different partition would allow me to restore a fresher OS + programs image when the system gets bogged down.

    Questions: Is it such a bad idea? What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.) I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this?

    All this reading to decide how to organize the partition scheme for my custom system is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different from a discussion thread it is. Thank you in advance for all your suggestions and comments. Donat

  • Unexpected results from custom partitioning scheme - Ubuntu preseed unattended install

    - by Mark Renouf
    Hi, I need some help with a partman-auto custom recipe in preseed; it's doing unexpected things and the docs aren't so clear. This is in my preseed file:

        d-i partman-auto/expert_recipe string \
            boot-root-var :: \
                1024 100 1024 ext4 \
                    $primary{ } $bootable{ } \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ /boot } \
                . \
                2048 50 4096 ext4 \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ / } \
                . \
                4096 25 100000 ext4 \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ /var } \
                . \
                256 75 100% linux-swap \
                    method{ swap } format{ } \
                .

    Given an 80 GB drive, I ended up with these partitions:

        /      2 GB
        /boot  1 GB
        /var   4 GB
        swap   72 GB

    What went wrong? What I want is:

        /      2 GB
        /boot  1 GB
        /var   72 GB
        swap   4 GB
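
    For what it's worth, each recipe line is <minimum size> <priority> <maximum size> <fstype>, where priority is a size in MB that partman-auto uses to apportion leftover space and is normally chosen between the minimum and the maximum; the priorities above (100, 50, 25, 75) all sit below their minimums, which skews the allocation, and percentages in a swap line are measured relative to RAM, not the disk. A hedged rewrite (untested, following the style of the Debian example recipes) that pins /boot, / and swap with minimum = maximum and lets /var absorb the remaining space:

        d-i partman-auto/expert_recipe string \
            boot-root-var :: \
                1024 1024 1024 ext4 \
                    $primary{ } $bootable{ } \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ /boot } \
                . \
                2048 2048 2048 ext4 \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ / } \
                . \
                4096 10000 1000000000 ext4 \
                    method{ format } format{ } \
                    use_filesystem{ } filesystem{ ext4 } \
                    mountpoint{ /var } \
                . \
                4096 4096 4096 linux-swap \
                    method{ swap } format{ } \
                .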

  • Power scheme faulty [closed]

    - by user40287
    Hi, I use a laptop. When I take the power cable out to run on batteries, the screen darkens, as it is no longer so bright, to save on battery power. When I put the power cable back in, the screen brightens back to its original brightness. However, I have since replaced the power cable, and now the screen remains darkened. How do I regain the original brightness? Thanks, Steve

  • Cache coherence literature for big (>=16CPU) systems

    - by osgx
    Hello, what books and articles can you recommend for learning the basics of cache coherence problems in big SMP systems (which are really NUMA and ccNUMA) with >= 16 CPU sockets? Something like an analysis of the SGI Altix architecture may be interesting. What protocols (MOESI, or something else) can scale up well?

  • zero downtime during database schema upgrade on SQL Server 2008

    - by eject
    I have a web application on IIS7 with SQL Server 2008 as the RDBMS. I need zero downtime during future upgrades of the ASP.NET code and the DB schema as well, and I need the right scenario for this. I have 2 web servers, 2 SQL servers, and one HTTP load balancer which allows switching the web backend server for web requests. The main goal is to keep the 1st web server and DB server up and running, update the code and DB schema on the 2nd server, and then switch all the requests to the 2nd server. And then the main problem: how to copy the data from the 1st database to the 2nd (the data that changed during the upgrade)?

  • How big are most production databases?

    - by TheLQ
    Seeing some posts that say 10 million rows in a table is nothing made me wonder: just how big are most production databases? I'm not talking about the physical disk size of the database (saying 60 GB tells me nothing); I'm wondering how many rows. Personally, the largest DB I've ever worked with was a test DB of a production system, with 10 million rows. But I've seen people brag about DBs in the billions of rows.

  • Ideal permissions scheme for multiple Apache/PHP sites...

    - by Omega
    I'm hosting multiple sites from one server, where each site has its own user and www directory in its home dir. Currently our web server runs as user nobody(99). We're noticing that several popular scripts and engines require write access to their own files. As the home directory is owned by the user, not nobody(99), what is the best policy or change in hosting configuration that would:

    - make it so that all the various engines and platforms work?
    - still allow us to work with files and edit them without having to fiddle with permissions as root?

    Thanks for the advice!
