Search Results

Search found 11362 results on 455 pages for 'big o analysis'.

Page 23 of 455

  • Add title to meta analysis forest plot

    - by Timothy Alston
    I am meta-analysing some studies and drawing a forest plot for my results. However, I can't seem to get the forest plot to display the title. An example of my code is:

      require(meta)
      parameter1 <- metaprop(sm = "PLOGIT",
                             event = c(4, 16, 3, 2, 10, 1, 0, 2),
                             n = c(90, 402, 89, 29, 153, 86, 21, 48),
                             level = 0.95,
                             studlab = c("study 1", "study 2", "study 3", "study 4",
                                         "study 5", "study 6", "study 7", "study 8"),
                             title = "meta analysis 1")
      forest(parameter1)

    When it produces the forest plot, the title "meta analysis 1" is missing. How can I add this in? Thanks in advance, Timothy

    Read the article

  • MS Analysis Services OLAP API for Python

    - by Kaloyan Todorov
    I am looking for a way to connect to an MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone with a similar question was pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.

    Read the article

  • Drag big picture in small layer?

    - by Tronic
    Hi, I need a plugin for jQuery or another JS framework where I can define a small div in which I can drag around a big picture, so I only get a clipping of the picture. Any ideas? Edit: let me try to explain. I have a small div, say 600px x 450px. This div behaves like a clipping window for a big picture of around 3000px x 2000px, so I only see a specific cutout of the big picture, and I need to drag that big picture around inside this small clipping window.

    Read the article

  • Static code analysis tools

    - by Anil Namde
    Whether it's JavaScript, C# or C++, the main problem I face while reading code is working out which function is called by which function. This problem gets big when dealing with BIG code. Is there any static code analysis tool/technique/plugin that can generate a graphical representation of the code (something like below) so that reading/analyzing the code becomes easier?

      ....
      --outerFunction()
      ---innerFunction()
      ----innerFunction2()
      --outerFunction2()
      ....

    Please provide your inputs/opinions on this. Thanks all,

    Read the article

  • Data structure for pattern matching

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values>. And you want to execute a pattern-matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). That pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern-matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns; in other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1]. (See link) What would be the most natural choice in terms of a data structure? What about an interface that adapts a Ticker object containing one row of data to a collection of Ticker, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular list that has the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded [] that can be used with negative parameters. But that data structure seems very clumsy: since so much pushing and popping is going on, I have to make a deep copy of the data structure before running an analysis on it. I don't know if I made my question very clear, but the main points are: What kind of data structures should be considered when analyzing sequential data points to match a pattern that does NOT require random access? And what kind of data structures should be considered when classifying extrema of a set of data points?
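
    For the sequential, no-random-access case described above, a bounded sliding window over the incoming rows is one natural fit: each new row evicts only the oldest one, and no deep copies are needed. Below is a minimal Java sketch of that idea; the Ticker fields, the window size and the toy pattern predicate are illustrative assumptions, not taken from the question.

      import java.util.ArrayDeque;
      import java.util.Deque;
      import java.util.List;
      import java.util.function.Predicate;

      // Fixed-size sliding window over sequential Ticker rows.
      class Ticker {
          final String date;
          final double open, high, low, close;
          Ticker(String date, double open, double high, double low, double close) {
              this.date = date; this.open = open; this.high = high;
              this.low = low; this.close = close;
          }
      }

      class SlidingWindowMatcher {
          private final Deque<Ticker> window = new ArrayDeque<>();
          private final int windowSize;
          private final Predicate<List<Ticker>> pattern;

          SlidingWindowMatcher(int windowSize, Predicate<List<Ticker>> pattern) {
              this.windowSize = windowSize;
              this.pattern = pattern;
          }

          /** Feed rows one by one; returns true when the current window matches. */
          boolean accept(Ticker row) {
              window.addLast(row);
              if (window.size() > windowSize) {
                  window.removeFirst();            // no deep copy, no random access needed
              }
              return window.size() == windowSize
                  && pattern.test(List.copyOf(window));
          }

          public static void main(String[] args) {
              // Toy pattern: three consecutive rising closes (purely illustrative).
              SlidingWindowMatcher m = new SlidingWindowMatcher(3,
                  w -> w.get(0).close < w.get(1).close && w.get(1).close < w.get(2).close);
              double[] closes = {10, 11, 9, 10, 11, 12};
              for (double c : closes) {
                  System.out.println(m.accept(new Ticker("d", c, c, c, c)));
              }
          }
      }

    For the extrema-classification part, which does need indexed access over a closed interval, a plain array-backed list (or the array underneath it) is usually the more natural companion structure.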

    Read the article

  • Need an explanation of amortization in algorithm analysis

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool, called amortization, for understanding the running time of an algorithm with widely varying performance. The author quotes: "An array with an upper bound of n elements, with a fixed bound N on its size. Operation clear takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If 'n' operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
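
    One way to make the quoted argument concrete is to count the work explicitly. The sketch below is mine (the quoted text shows no code): a minimal array-backed container whose clear() touches every stored element, plus the two ways of bounding a series of n operations.

      import java.util.Arrays;

      // Minimal array-backed list where clear() dereferences every stored element.
      class ClearableArray {
          private final Object[] data;
          private int size = 0;

          ClearableArray(int capacity) { data = new Object[capacity]; }

          void add(Object x) { data[size++] = x; }     // O(1)

          void clear() {                               // O(size): null out each slot
              Arrays.fill(data, 0, size, null);
              size = 0;
          }

          int size() { return size; }

          // Worst-case reasoning for a series of n operations:
          //   each operation costs at most O(n), so n operations cost at most
          //   n * O(n) = O(n^2). That is the pessimistic bound the quote describes.
          // Amortized reasoning is tighter: clear() can only dereference elements
          //   that earlier add() calls put there, so n operations do at most
          //   n adds + n dereferences, i.e. O(n) total, or O(1) amortized each.
          public static void main(String[] args) {
              int n = 8;
              ClearableArray a = new ClearableArray(n);
              long work = 0;                                   // total elements touched
              for (int op = 0; op < n; op++) {
                  if (op % 4 == 3) { work += a.size(); a.clear(); }   // clear every 4th op
                  else             { work += 1;        a.add(op); }
              }
              System.out.println("total work for " + n + " operations: " + work);  // 12 <= 2n
          }
      }

    The O(n^2) in the quote is simply the pessimistic product of "up to n operations" times "up to O(n) per operation"; amortization is the tool that shows the real total over the whole series is much smaller.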

    Read the article

  • Common request: export #Tabular model and data to #PowerPivot

    - by Marco Russo (SQLBI)
    I received this request in many courses, messages and also forum discussions: having an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:

      - Create an empty PowerPivot workbook with the same data model as a Tabular model
      - Change the connections of the tables in the PowerPivot workbook, extracting data from the Tabular data model
        - Every table should have an EVALUATE 'TableName' query in DAX
      - Apply a filter to data extracted from every table
        - For example, you might want to extract all data for a single country or year or customer group
        - Using the same technique of applying filters used for role-based security would be nice
      - Expose an API to automate the process of creating a PowerPivot workbook
        - Use case: prepare one workbook for every employee containing only his data, that he can use offline
        - Common request for salespeople who want a mini-BI tool to use in front of the customer/lead/supplier, regardless of whether a connection is available

    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, Business Intelligence licenses instead of Standard), and would probably raise the sales of Office 2013 / Office 365 driven by ISVs, who are the companies that request this feature the most. If Microsoft did this, it would be acceptable if it only worked on Office 2013. But if a third party does it, it will make sense (for their revenues) to cover both Excel 2010 and Excel 2013. Another important reason for this feature is that the "Offline cube" feature that you have in Excel is not available when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional. If you think this is an important feature, you can vote for this Connect item.

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even if you don't get a syntax error compiling the code, you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Binding nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • Cache coherence literature for big (>=16CPU) systems

    - by osgx
    Hello. What books and articles can you recommend for learning the basics of cache coherence problems in big SMP systems (which are really NUMA and ccNUMA) with >=16 CPU sockets? Something like an SGI Altix architecture analysis may be interesting. What protocols (MOESI, something else) can scale up well?

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure, the goal of which is to index the significance of specific bits for any character encoding so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. This record's leading value will be a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:

      - offset_mask - defines the occurrence of non-printable characters within the min,max of print_mask
      - range_mask - defines the occurrence of the most popular 50% of printable characters
      - print_mask - defines the occurrence value of printable characters

    The structure of profiles has changed from the OP of this question. Most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality for these main reasons:

      - It has to fit into a particular event architecture we are using.
      - Better understanding of character encoding. I'm about to need it.
      - Integrating into a non-linear design is excluding many libraries without special hooks.

    I'm unsure if there is a standard, cross-encoding mechanism for communicating such data already. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would easily be enough (for my current project) if all encodings were Unicode. Of course ideally one would like to support all encodings, but I'm not asking about implementation - only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world - and not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle, supporting the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor.

    If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it's a convergence of four essential technology elements:

      - Event processing for event filtering and business rules - with Oracle Event Processing
      - Data transformation and loading - with Oracle Data Integrator
      - Real-time replication and integration - with Oracle GoldenGate
      - Analytics and data discovery - with Oracle Business Intelligence

    Each of these four elements can be considered (and architect-ed) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • Developing an analytics system for processing large amounts of data - where to start

    - by Ryan
    Imagine you're writing some sort of web analytics system - you're recording raw page hits along with some extra things like tagging cookies etc., and then producing stats such as:

      - Which pages got the most traffic over a time period
      - Which referrers sent the most traffic
      - Goals completed (a goal being a view of a particular page)

    And more advanced things like which referrers sent the most visitors who later hit a goal. The naive way of approaching this would be to throw it all in a relational database and run queries over it - but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables) - but what if you later change a goal? How could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods & examples, architecture, technologies etc.
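
    As a tiny illustration of the "queue of incoming hits updating report tables" idea mentioned above, here is a hedged Java sketch; the Hit fields and counter choice are illustrative assumptions, not a full design.

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.atomic.LongAdder;

      // Pre-aggregation sketch: each raw hit pulled off the incoming queue bumps
      // a couple of in-memory counters that back the report tables.
      public class HitAggregator {
          record Hit(String page, String referrer) {}

          private final Map<String, LongAdder> pageViews = new ConcurrentHashMap<>();
          private final Map<String, LongAdder> referrerHits = new ConcurrentHashMap<>();

          /** Called once per raw hit. */
          void record(Hit hit) {
              pageViews.computeIfAbsent(hit.page(), k -> new LongAdder()).increment();
              referrerHits.computeIfAbsent(hit.referrer(), k -> new LongAdder()).increment();
          }

          long viewsFor(String page) {
              LongAdder a = pageViews.get(page);
              return a == null ? 0 : a.sum();
          }

          public static void main(String[] args) {
              HitAggregator agg = new HitAggregator();
              agg.record(new Hit("/home", "google"));
              agg.record(new Hit("/home", "bing"));
              agg.record(new Hit("/pricing", "google"));
              System.out.println(agg.viewsFor("/home"));   // 2
          }
      }

    Note that re-computing after a goal definition changes, or answering "which referrers sent visitors who later hit a goal", still needs the raw hits, so pre-aggregation usually sits alongside the raw hit log rather than replacing it.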

    Read the article

  • Single for-loop runtime explanation problem

    - by owwyess
    I am analyzing the running times of different for-loops, and as I gain more knowledge, I'm curious to understand this problem which I still haven't figured out. I have this exercise called "How many stars are printed":

      for (int i = N; i > 1; i = i/2)
          System.out.println("*");

    The answers to pick from are A: ~log N, B: ~N, C: ~N log N, D: ~0.5N^2. So the answer should be A and I agree with that, but on the other hand... Let's say N = 500; what would log N then be? It would be 2.7. So what if we say that N = 500 in our exercise above? That would most definitely print more than 2.7 stars? How is that related? Because it makes sense to say that if the for-loop looked like this: for (int i = 0; i < N; i++) it would print N stars. I hope to find an explanation for this here; maybe I'm interpreting all these things wrong and thinking about it in a bad way. Thanks in advance.
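
    A quick way to see what ~log N means here is to count the iterations directly (standalone Java snippet, not from the question). The loop halves i each time, so it runs about log2(N) times; the 2.7 in the question is log10(500), a different base, which only changes the constant factor hidden by the ~ notation.

      public class StarCount {
          public static void main(String[] args) {
              int N = 500;
              int stars = 0;
              for (int i = N; i > 1; i = i / 2) {
                  stars++;
              }
              // Prints 8 stars for N = 500; log2(500) ~= 8.97, log10(500) ~= 2.7.
              System.out.println("stars = " + stars
                  + ", log2(N) = " + (Math.log(N) / Math.log(2)));
          }
      }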

    Read the article

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with aggregated data. I'm looking for the most efficient solution to this problem, speed mainly. I don't think, by the way, that we will have memory issues. Also I would like to know if hashing is really a good solution, because we would have to use a 64-bit hashing solution to reduce the chance of a collision to less than 1% (we are having around 30000000 rows per aggregation). For example:

      file 1:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,50,60,70,80
      a3,b2,c4,60,60,80,90

      file 2:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,30,50,90,40
      a3,b2,c4,30,70,50,90

      result:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,80,110,160,120
      a3,b2,c4,90,130,130,180

    The algorithms we have thought of until now:

      - hashing (using a concurrent hash table)
      - merge sorting the files
      - DB: using MySQL or Hadoop or Redis

    The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

      file 1:
      country,city,peopleNum
      england,london,1000000
      england,coventry,500000

      file 2:
      country,city,peopleNum
      england,london,500000
      england,coventry,500000
      england,manchester,500000

      merged file:
      country,city,peopleNum
      england,london,1500000
      england,coventry,1000000
      england,manchester,500000

    The key is: country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - a total of 14 columns. We would like the solution to be the fastest in regard to data processing.
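
    Since the title mentions Java, here is a minimal single-machine sketch of the merge step using exact string keys in a hash map, which sidesteps the 64-bit-hash collision concern entirely. The two "files" are embedded as strings (taken from the second example above) so the sketch is self-contained; memory grows only with the number of distinct keys, not the number of rows.

      import java.util.Arrays;
      import java.util.LinkedHashMap;
      import java.util.List;
      import java.util.Map;

      // Group rows by the key columns and sum the value columns.
      public class CsvAggregator {
          public static void main(String[] args) {
              List<String> file1 = List.of(
                  "country,city,peopleNum",
                  "england,london,1000000",
                  "england,coventry,500000");
              List<String> file2 = List.of(
                  "country,city,peopleNum",
                  "england,london,500000",
                  "england,coventry,500000",
                  "england,manchester,500000");

              int keyCols = 2;                                   // country,city form the key
              Map<String, long[]> totals = new LinkedHashMap<>();

              for (List<String> file : List.of(file1, file2)) {
                  for (String line : file.subList(1, file.size())) {      // skip the header
                      String[] parts = line.split(",");
                      String key = String.join(",", Arrays.copyOfRange(parts, 0, keyCols));
                      long[] sums = totals.computeIfAbsent(key,
                          k -> new long[parts.length - keyCols]);
                      for (int i = keyCols; i < parts.length; i++) {
                          sums[i - keyCols] += Long.parseLong(parts[i]);
                      }
                  }
              }

              System.out.println("country,city,peopleNum");
              totals.forEach((key, sums) -> {
                  StringBuilder row = new StringBuilder(key);
                  for (long s : sums) row.append(',').append(s);
                  System.out.println(row);
              });
          }
      }

    With real files you would stream them line by line instead of embedding strings; the aggregation logic stays the same, and each file is read exactly once.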

    Read the article

  • Vector with Constant-Time Remove - still a Vector?

    - by Darrel Hoffman
    One of the drawbacks of most common implementations of the Vector class (or ArrayList, etc. - basically any array-backed expandable list class) is that their remove() operation generally operates in linear time: if you remove an element, you must shift all elements after it one space back to keep the data contiguous. But what if you're using a Vector just to be a list-of-things, where the order of the things is irrelevant? In this case removal can be accomplished in a few simple steps:

      1. Swap the element to be removed with the last element in the array.
      2. Reduce the size field by 1. (No need to re-allocate the array, as the deleted item is now beyond the size field and thus not "in" the list any more. The next add() will just overwrite it.)
      3. (optional) Delete the last element to free up its memory. (Not needed in garbage-collected languages.)

    This is clearly now a constant-time operation, since it only performs a single swap regardless of size. The downside, of course, is that it changes the order of the data, but if you don't care about the order, that's not a problem. Could this still be called a Vector? Or is it something different? It has some things in common with "Set" classes in that the order is irrelevant, but unlike a Set it can store duplicate values. (Also, most Set classes I know of are backed by a tree or hash map rather than an array.) It also bears some similarity to Heap classes, although without the log(N) percolate steps, since we don't care about the order.
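
    A minimal Java sketch of the structure described above - the class and method names are mine; this swap-remove container is sometimes just called a bag or an unordered array list.

      import java.util.Arrays;

      class UnorderedList<T> {
          private Object[] data = new Object[8];
          private int size = 0;

          void add(T x) {
              if (size == data.length) data = Arrays.copyOf(data, size * 2);
              data[size++] = x;
          }

          @SuppressWarnings("unchecked")
          T get(int i) { return (T) data[i]; }

          /** O(1) remove: overwrite slot i with the last element and shrink. */
          void removeAt(int i) {
              data[i] = data[size - 1];
              data[size - 1] = null;   // let the GC reclaim the vacated slot
              size--;
          }

          int size() { return size; }

          public static void main(String[] args) {
              UnorderedList<String> list = new UnorderedList<>();
              list.add("a"); list.add("b"); list.add("c");
              list.removeAt(0);                                   // "c" moves into slot 0
              System.out.println(list.get(0) + " " + list.size()); // c 2
          }
      }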

    Read the article

  • OSS App Hackathon @ National Information Society Agency

    - by Edward J. Yoon
    Yesterday, there was an OSS App Hackathon arranged by the NIA (National Information Society Agency) in Seoul. I attended on the panel of judges w/ Prof. Lee of the Next, NHN University. A lot of people were there. You can read more details (Korean news) here: http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=105&oid=138&aid=0001997038

    Read the article

  • Questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime algorithm, I stumbled upon this:

      public static boolean isP(long n) {
          if (n==2 || n==3) return true;
          if ((n&0x1)==0 || n%3==0 || n<2) return false;
          long root = (long) Math.sqrt(n) + 1L;
          // we check just numbers of the form 6*k+1 and 6*k-1
          for (long k=6; k<=root; k+=6) {
              if (n%(k-1)==0) return false;
              if (n%(k+1)==0) return false;
          }
          return true;
      }

    My questions are:

      - Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX, thus making the method more flexible?
      - In the second 'if', is n&0x1 the same as n%2? If so, why didn't the author just use n%2? To me it's more readable.
      - On the line that sets the 'root' variable, why add the 1L?
      - What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?
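
    A standalone snippet (not from the post) that checks two of the points asked about: the evenness test and the iteration count.

      public class PrimeLoopNotes {
          public static void main(String[] args) {
              long n = 1_000_003L;

              // (n & 0x1) == 0 and n % 2 == 0 agree for non-negative n; the bit test
              // is an old micro-optimisation, the modulo version reads better.
              System.out.println(((n & 0x1) == 0) == (n % 2 == 0));   // true

              // The loop runs for k = 6, 12, ..., up to sqrt(n)+1, i.e. roughly
              // sqrt(n)/6 iterations. Constant factors are dropped in big-O
              // notation, so that is O(sqrt(n)).
              long root = (long) Math.sqrt(n) + 1L;
              long iterations = 0;
              for (long k = 6; k <= root; k += 6) iterations++;
              System.out.println(iterations + " ~= " + (Math.sqrt(n) / 6));
          }
      }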

    Read the article

  • Read how a customer uses Oracle NoSQL Database

    - by Jean-Pierre Dijcks
    For those who have had the pleasure to be in SF for Oracle Openworld, you might have seen or heard about this story already. If you did not, here is a great story on how to use Oracle NoSQL Database. Apart from all the cool technology, I'm just excited that this is a company founded by a football international and dealing with sports data, games and other cool things. Like an all things cool combo in one place.

    Read the article

  • Can you work for the big (Google, Microsoft, Facebook etc.) without getting too involved?

    - by Developer Art
    Having seen people talking about interviewing and working for the big companies, I keep wondering how much you are expected to actually get involved there.

    1) That's because I keep seeing folks from Google and Microsoft and others writing in forums, blogging, tweeting, speaking at conferences and seemingly doing this on a 24/7/365 basis from their office, apartment, hotel and even plane. Are you really expected to commit that much if you come to work for them? Do they want you to think about your work while you're eating, sleeping, taking a shower, making love and so on? Can you in fact "switch off" at five and go home forgetting everything? Perhaps you have a hobby, family life, kids, friends, personal projects, anyone? Is it so that if you work for the big then you're expected not to have any life outside of the company? You can't develop your own projects, have your own clients and just have another life?

    2) One other thing is the work contracts the big use. I've heard, for instance, that when you join Microsoft you need to provide a list of projects you're currently working on, and after that anything new you come up with during your employment automatically belongs to the company. Are all of the big doing this? Can you refuse to sign a contract until such a clause is removed, or is it "take it or leave it" with the big because the legal department won't accept any change? Can you make them write the contract in such a manner that they step away from anything you've developed in your private time?

    Of all the big, I have only been at SAP, during my internship. Lately, while browsing through some old papers, I found my old contract, which stipulated they owned everything I developed or invented during my employment - something I would never sign these days. On a side note, I don't think I would return to SAP, since I remember most people there were clueless and gave the impression they were simply sitting out their years waiting for retirement. But anyway, what do the other big put in their contracts? How far do you get involved when you go to work for the big? Or are you perhaps fully committed with your body and soul? P.S. I'm not planning to join any of them, I'm just curious.

    Read the article

  • Using replacements to generate possible outcomes and then search through a HUGE amount of data

    - by Samuel Cambridge
    I have a database table holding 40 million records (table A). Each record has a string a user can search for. I also have a table with a list of character replacements (table B), i.e. i = Y, I = 1, etc. I need to be able to take the string a user is searching for, iterate through each letter and create an array of every possible outcome (the user's string, then each outcome with alternative letters used). I need to check for alternatives on both lowercase and uppercase letters in the word. A search string can be no longer than 10 characters long. I'm using PHP and a MySQL database. Does anyone have any thoughts / articles / guidance on doing this in an efficient way?
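
    The expansion itself is a cartesian product over the per-character alternatives. Here is a minimal sketch - written in Java to match the other examples on this page, though the same approach ports directly to PHP; the replacement map shown is illustrative, not the real table B.

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;

      // For each character, substitute every alternative from the replacement table
      // and collect all combinations.
      public class VariantExpander {
          static List<String> expand(String input, Map<Character, List<Character>> replacements) {
              List<StringBuilder> variants = new ArrayList<>(List.of(new StringBuilder()));
              for (char c : input.toCharArray()) {
                  List<Character> options = new ArrayList<>(List.of(c));
                  options.addAll(replacements.getOrDefault(c, List.of()));
                  List<StringBuilder> next = new ArrayList<>(variants.size() * options.size());
                  for (StringBuilder prefix : variants) {
                      for (char option : options) {
                          next.add(new StringBuilder(prefix).append(option));
                      }
                  }
                  variants = next;   // grows multiplicatively with each replaceable character
              }
              List<String> result = new ArrayList<>(variants.size());
              for (StringBuilder sb : variants) result.add(sb.toString());
              return result;
          }

          public static void main(String[] args) {
              Map<Character, List<Character>> table = Map.of('i', List.of('Y'), 'I', List.of('1'));
              System.out.println(expand("Hi", table));   // [Hi, HY]
          }
      }

    Because the variant count grows multiplicatively with each replaceable character, the 10-character cap in the question is what keeps the generated array manageable.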

    Read the article

  • Oracle Subscribes To The Big Data Journal: So Can You!

    - by Roxana Babiciu
    Oracle Product Development has funded access to the Big Data Journal for all Oracle employees. Big Data is a highly innovative, open-access, peer-reviewed journal of world-class research, exploring the challenges and opportunities in collecting, analyzing, and disseminating vast amounts of data. This includes data science, big data infrastructure and analytics, and pervasive computing. Register here to receive Big Data articles online, or sign up for the table-of-contents alert or the RSS feed.

    Read the article

  • Java: very slow Tomcat and a too-big WAR file

    - by NaN
    I created some sort of RESTful API backend for a mobile app. It's written completely in Java, using Jersey as the framework. At the moment no database is used; it's all in memory, but this is no problem so far (it's only for prototyping purposes). I ordered the smallest package from DigitalOcean and installed Tomcat 7. All in all Tomcat works, but I have three major problems:

      1) It takes a long time until Tomcat deploys the app: I deploy it via the Tomcat manager and it takes about 2 minutes until the site works (excluding WAR upload time).
      2) The WAR files are quite big (16MB): I don't know why they are so big. There are no database dependencies and most logic is written in plain Java. Okay, we are using Jersey, but 16MB is a lot for the logic of a small web service.
      3) I have to restart Tomcat every 3 days or so. It looks like a memory leak or something similar. If the app runs for a few days, the response time gets quite high and the server seems to be frozen. It works again if I restart Tomcat via SSH.

    You can find my mvn pom file right here. Do you have some tips? Are there good Tomcat alternatives?

    Read the article
