Search Results

Search found 922 results on 37 pages for 'linear'.

Page 30/37

  • Python fit polynomial, power law and exponential from data

    - by Nadir
    I have some data (x and y coordinates) coming from a study and I have to plot them and find the best curve that fits the data. My curves are: polynomial up to 6th degree; power law; and exponential. I am able to find the best fit for the polynomials with while(i < 6): coefs, val = poly.polyfit(x, y, i, full=True) and I take the degree that minimizes val. When I have to fit a power law (the most probable in my study), I do not know how to do it correctly. This is what I have done. I have applied the log function to all x and y and I have tried to fit the result with a linear polynomial. If the error (val) is lower than that of the other polynomials tried before, I choose the power law. Am I correct? Now how can I reconstruct my power law, starting from the line y = mx + q, in order to draw it with the original points? I also need to display the function found. I have tried with: def power_law(x, m, q): return q * (x**m) using x_new = np.linspace(x[0], x[-1], num=len(x)*10) y1 = power_law(x_new, coefs[0], coefs[1]) popt, pcov = curve_fit(power_law, x_new, y1) but it seems not to work well.
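    A minimal sketch of the reconstruction step (my code, assuming x and y are strictly positive NumPy arrays): on a log-log scale the power law y = q*x^m becomes the line log y = m*log x + log q, so the fitted slope is m and the exponential of the intercept is q.

        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(x, q, m):
            return q * x**m

        # Fit the line log(y) = m*log(x) + log(q): slope -> m, intercept -> log(q).
        m, log_q = np.polyfit(np.log(x), np.log(y), 1)
        q = np.exp(log_q)

        # Reconstruct the curve on the original scale for plotting.
        x_new = np.linspace(x[0], x[-1], num=len(x) * 10)
        y_new = power_law(x_new, q, m)

        # Optionally refine by fitting the measured data directly; curve_fit
        # must be given the original (x, y), not points generated by the model.
        popt, pcov = curve_fit(power_law, x, y, p0=[q, m])

    Note that running curve_fit on y1 (values produced by the model itself) can only recover the parameters it was given; the measured data has to go in.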

    Read the article

  • What is Adobe Flex? Is it just Flash II?

    - by Adam Davis
    Question Alright, I'm confused by all the buzzwords and press release bingo going on. What is the relationship between Flash and Flex: Does Flex replace Flash (not really compatible)? Enhance Flash? Is it the next version of Flash, but still basically compatible? A separate technology altogether? Something else? If I'm starting out in Flash now, should I just skip to Flex? Follow up Ok, so what I'm hearing is that there are three different parts to the puzzle: Flash The graphical editor used to make "Flash Movies", i.e. an IDE that focuses on the visual aspect of "Flash" (officially Flash CS3?) The official name for the display plugins (i.e., "Download Flash Now!") A general reference to the entire technology stack In terms of the editor, it's a linear timeline based editor, best used for animations with complex interactivity. Actionscript The "Flash" programming language Flex An Adobe Flash IDE that focuses on the coding/programming aspect of "Flash" (Flex Builder?) A Flash library that enhances Flash and makes it easier to program for (Flex SDK?) Is not bound to a timeline (as the Flash IDE is) and so "standard" applications are more easily accomplished. Is this correct?

    Read the article

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider the below two possible implementations of a function that finds if a number is prime using a cached array of primes. //very slow for large $prime_array $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... ); $result_array = array(); foreach( $array_of_numbers as $number ) { $result_array[$number] = in_array( $number, $prime_array ); } //still decent performance for large $prime_array $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL, 11 => NULL, 13 => NULL, .... 104729 => NULL, ... ); foreach( $array_of_numbers as $number ) { $result_array[$number] = array_key_exists( $number, $prime_array ); } This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... I was wondering if there was a list of the theoretical (or practical) big-O times for all* the PHP built-in functions. *or at least the interesting ones For example, I find it very hard to predict the big-O of the functions listed here because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
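    The same contrast is easy to reproduce outside PHP; a small Python sketch (sizes and names are mine) that mirrors in_array versus array_key_exists:

        import timeit

        setup = ('primes_list = list(range(2, 1000000)); '
                 'primes_set = set(primes_list)')
        # Linear scan, like in_array: a miss walks the whole list, so cost grows with n.
        print(timeit.timeit('1000001 in primes_list', setup=setup, number=100))
        # Hash lookup, like array_key_exists: effectively constant time.
        print(timeit.timeit('1000001 in primes_set', setup=setup, number=100))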

    Read the article

  • How to analyze the efficiency of this algorithm Part 2

    - by Leonardo Lopez
    I found an error in the way I explained this question before, so here it goes again: FUNCTION SEEK(A,X) 1. FOUND = FALSE 2. K = 1 3. WHILE (NOT FOUND) AND (K < N) a. IF (A[K] = X) THEN 1. FOUND = TRUE b. ELSE 1. K = K + 1 4. RETURN Analyzing this algorithm (pseudocode), I can count the number of steps it takes to finish and analyze its efficiency in theta notation, T(n): a linear algorithm. OK. The following code depends on the formulas inside the loop in order to finish; the deal is that there is no variable N in the code, therefore the efficiency of this algorithm will always be the same, since we're assigning the value of 1 to both the A and B variables: 1. A = 1 2. B = 1 3. UNTIL (B > 100) a. B = 2A - 2 b. A = A + 3 Now I believe this algorithm performs in constant time, always. But how can I use algebra in order to find out how many steps it takes to finish?
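    A sketch of the algebra (my derivation, not part of the original question): before iteration k the body has run k - 1 times, so A = 1 + 3(k - 1), and iteration k sets B = 2A - 2 = 6(k - 1). The loop exits once 6(k - 1) > 100, i.e. k - 1 >= 17, so it always runs exactly 18 iterations: constant time. A quick check in Python:

        A, B, steps = 1, 1, 0
        while B <= 100:        # UNTIL (B > 100)
            B = 2 * A - 2
            A = A + 3
            steps += 1
        print(steps)           # prints 18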

    Read the article

  • Read line and change the line that not consist of certain words and not end with dot

    - by igo
    I want to read some text files in a folder line by line. For example, one txt: Fast and Effective Text Mining Using Linear-time Document Clustering Bjornar Larsen WORD2 Chinatsu Aone SRA International AK, Inc. 4300 Fair Lakes Cow-l Fairfax, VA 22033 {bjornar-larsen, WORD1 I want to remove every line that does not contain one of the words word1, word2, word3 and does not end with a dot. So, from the example, the result will be: Bjornar Larsen WORD2 Chinatsu Aone SRA International, Inc. {bjornar-larsen, WORD1 I am confused: how do I remove the line? Is that possible? Or can we replace it with a space? Here's the code (note that stripos returns a position or false, so the test has to compare against false, and the last character has to come from $buffer): $url = glob($savePath.'*.txt'); foreach ($url as $file => $files) { $handle = fopen($files, "r") or die ('can not open file'); $ori_content = file_get_contents($files); foreach (preg_split("/((\r?\n)|(\r\n?))/", $ori_content) as $buffer) { $pos1 = stripos($buffer, $word1); $pos2 = stripos($buffer, $word2); $pos3 = stripos($buffer, $word3); $last = $buffer[strlen($buffer)-1]; // read the last character if (false === $pos1 && false === $pos2 && false === $pos3 && $last != '.') { // how to remove this line? } } } Please help me, thank you so much :)

    Read the article

  • Stock management of assemblies and its sub parts (relations)

    - by The Disintegrator
    I have to track the stock of individual parts and kits (assemblies) and can't find a satisfactory way of doing this. Sample bogus and hyper-simplified database: Table prod: prodID 1 prodName Flux capacitor prodCost 900 prodPrice 1350 (900*1.5) prodStock 3 - prodID 2 prodName Mr Fusion prodCost 300 prodPrice 600 (300*2) prodStock 2 - prodID 3 prodName Time travel kit prodCost 1650 (1350+300) prodPrice 2145 (1650*1.3) prodStock 2 Table rels: relID 1 relSrc 1 (Flux capacitor) relType 4 (is a subpart of) relDst 3 (Time travel kit) - relID 2 relSrc 2 (Mr Fusion) relType 4 (is a subpart of) relDst 3 (Time travel kit) prodPrice: it's calculated based on the cost, but not in a linear way. In this example, for costs of 500 or less the markup is 200%. For costs of 500-1000 the markup is 150%. For costs of 1000+ the markup is 130%. That's why the time travel kit is relatively cheaper than the individual parts. prodStock: here is my problem. I can sell kits or the individual parts, so the stock of the kits is virtual. The problem when I buy: some providers sell me the Time travel kit as a whole (with one barcode) and some sell me the individual parts (with a different barcode), so when I load the stock I don't know how to impute it. The problem when I sell: if I only sold kits, calculating the stock would be easy: "I have 3 Flux capacitors and 2 Mr Fusions, so I have 2 Time travel kits and a Flux capacitor". But I can sell kits or individual parts, so I have to track the stock of the individual parts and the possible kits at the same time (and I have to compensate for the sell price). Probably this is really simple, but I can't see a simple solution. Summing up: I have to find a way of tracking the stock, and the database/program is the one that has to do it (I can't ask the clerk to correct the stock). I'm using PHP+MySQL, but this is more a logic problem than a programming one.
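    One common approach to the virtual stock (a sketch with my own names, not the poster's schema): store physical stock only for the individual parts, expand a kit barcode into its parts when buying, and derive kit availability from the parts on hand.

        # Hypothetical data: physical stock per part, bill of materials per kit.
        part_stock = {'flux_capacitor': 3, 'mr_fusion': 2}
        kit_bom = {'time_travel_kit': {'flux_capacitor': 1, 'mr_fusion': 1}}

        def receive_kit(kit, qty=1):
            """Buying a kit barcode just adds its component parts to stock."""
            for part, per_kit in kit_bom[kit].items():
                part_stock[part] += per_kit * qty

        def sell_part(part, qty=1):
            part_stock[part] -= qty

        def kits_available(kit):
            """A kit's virtual stock is limited by its scarcest component."""
            return min(part_stock[part] // per_kit
                       for part, per_kit in kit_bom[kit].items())

        print(kits_available('time_travel_kit'))  # 2 (with one spare capacitor)

    Selling a kit then means selling each of its parts, so the two stock views can never drift apart.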

    Read the article

  • Updating a Minimum spanning tree when a new edge is inserted

    - by Lynette
    Hello, I've been presented the following problem in university: Let G = (V, E) be an (undirected) graph with costs c_e ≥ 0 on the edges e ∈ E. Assume you are given a minimum-cost spanning tree T in G. Now assume that a new edge is added to G, connecting two nodes v, w ∈ V with cost c. a) Give an efficient algorithm to test if T remains the minimum-cost spanning tree with the new edge added to G (but not to the tree T). Make your algorithm run in time O(|E|). Can you do it in O(|V|) time? Please note any assumptions you make about what data structure is used to represent the tree T and the graph G. b) Suppose T is no longer the minimum-cost spanning tree. Give a linear-time algorithm (time O(|E|)) to update the tree T to the new minimum-cost spanning tree. This is the solution I found: Let e1 = (a, b) be the new edge added. Find in T the path from a to b (BFS). If e1 is the most expensive edge in the cycle, then T remains the MST; else T is not the MST. It seems to work, but I can easily make this run in O(|V|) time, while the problem asks for O(|E|) time. Am I missing something? By the way, we are authorized to ask for help from anyone, so I'm not cheating :D Thanks in advance
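    A sketch of that test in Python (tree_adj is an assumed adjacency-list view of T, mapping node -> list of (neighbor, weight) pairs). Since T has |V| - 1 edges, the BFS is O(|V|), which is why the O(|E|) bound in the exercise looks loose:

        from collections import deque

        def max_edge_on_tree_path(tree_adj, a, b):
            """BFS from a to b in the spanning tree; return the largest
            edge weight on the unique a-b path."""
            parent = {a: (None, 0)}
            queue = deque([a])
            while queue:
                u = queue.popleft()
                if u == b:
                    break
                for v, w in tree_adj[u]:
                    if v not in parent:
                        parent[v] = (u, w)
                        queue.append(v)
            # Walk back from b to a, tracking the heaviest edge seen.
            heaviest, node = 0, b
            while parent[node][0] is not None:
                prev, w = parent[node]
                heaviest = max(heaviest, w)
                node = prev
            return heaviest

        # By the cycle property, T stays optimal iff the new edge is not
        # strictly cheaper than the heaviest edge on the cycle it closes:
        # c >= max_edge_on_tree_path(tree_adj, a, b)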

    Read the article

  • LaTeX printing only first two pages of a document

    - by Peter Flom
    I am working in LaTeX, and when I create a pdf file (using LaTeX button or pdfLaTeX button or using yap) the pdf has only the first two pages. No errors. It just stops. If I make the first page longer by adding text, it still stops at end of 2nd page. Any ideas? OK, responding to first comment, here is the code \documentclass{article} \title{Outline of Book} \author{Peter L. Flom} \begin{document} \maketitle \section*{Preface} \subsection*{Audience} \subsection*{What makes this book different?} \subsection*{Necessary background} \subsection*{How to read this book} \section{Introduction} \subsection{The purpose of logistic regression} \subsection{The need for logistic regression} \subsection{Types of logistic regression} \section{General issues in logistic regression} \subsection{Transforming independent and dependent variables} \subsection{Interactions} \subsection{Model selection} \subsection{Parameter estimates, confidence intervals, p values} \subsection{Summary and further reading} \section{Dichotomous logistic regression} \subsection{Introduction, theory, examples} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Ordinal logistic regression} \subsection{Introduction, theory, examples} \subsubsection{Introduction - what are ordinal variables?} \subsubsection{Theory of the model} \subsubsection{Examples for this chapter} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Multinomial logistic regression} \subsection{Introduction, theory, examples} \subsection{Exploratory plots and analysis} \subsection{Basic model fitting} \subsection{Advanced and special issues in model fitting} \subsection{Diagnostic and descriptive plots and analysis} \subsection{Traps and gotchas} \subsection{Power analysis} \subsection{Summary and further reading} \subsection{Exercises} \section{Choosing a model} \subsection{NOIR and its problems} \subsection{Linear vs. ordinal} \subsection{Ordinal vs. multinomial} \subsection{Summary and further reading} \subsection{Exercises} \section{Extensions and related models} \subsection{Other logistic models} \subsection{Multilevel models - PROC NLMIXED and GLIMMIX} \subsection{Loglinear models - PROC CATMOD} \section{Summary} \end{document} thanks Peter

    Read the article

  • stack overflow problem in program

    - by Jay
    So I am currently getting a strange stack overflow exception when I try to run this program, which reads numbers from a list in a data/text file and inserts them into a binary search tree. The weird thing is that the program works when I have a list of 4095 numbers in random order. However, when I have a list of 4095 numbers in increasing order (so it makes a linear search tree), it throws a stack overflow message. The problem is not the static count variable, because even when I removed it and put t = new BinaryNode(x,1), it still gave a stack overflow exception. I tried debugging it, and it broke at if (t == NULL){ t = new BinaryNode(x,count); Here is the insert function. BinaryNode *BinarySearchTree::insert(int x, BinaryNode *t) { static long count=0; count++; if (t == NULL){ t = new BinaryNode(x,count); count=0; } else if (x < t->key){ t->left = insert(x, t->left); } else if (x > t->key){ t->right = insert(x, t->right); } else throw DuplicateItem(); return t; }
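    The sorted input is almost certainly the culprit: inserting 4095 increasing keys degenerates the tree into a 4095-deep linked list, so each later insert recurses once per level and the call stack runs out. An iterative insert keeps the stack depth constant; a language-neutral sketch in Python of the same routine:

        class Node:
            def __init__(self, key):
                self.key, self.left, self.right = key, None, None

        def insert_iterative(root, key):
            """Walk down the tree in a loop instead of recursing, so even a
            fully degenerate (sorted-input) tree uses O(1) stack."""
            if root is None:
                return Node(key)
            node = root
            while True:
                if key < node.key:
                    if node.left is None:
                        node.left = Node(key)
                        return root
                    node = node.left
                elif key > node.key:
                    if node.right is None:
                        node.right = Node(key)
                        return root
                    node = node.right
                else:
                    raise ValueError('duplicate item')

    A self-balancing tree (AVL, red-black, or std::map in C++) avoids the degenerate shape altogether.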

    Read the article

  • Using ARIMA to model and forecast stock prices using user-friendly stats program

    - by Brian
    Hi people, can anyone please offer some insight into this for me? I'm coming from a functional magnetic resonance imaging research background where I analyzed a lot of time series data, and I'd like to analyze the time series of stock prices (or returns) by: 1) modeling a successful stock in a particular market sector and then cross-correlating the time series of this historically successful stock with that of other, newer stocks to look for significant relationships; 2) modeling a stock's price time series and using forecasting (e.g., exponential smoothing) to predict future values of it. I'd like to use time-series modeling methods (ARIMA and ARCH) to do this. Several questions: How often do ARIMA and ARCH modeling methods (given that the individual who implements them does so accurately) actually fit the stock time series data they target, and what is the optimal fit I can expect? Is the extent to which this model fits the data commensurate with the extent to which it predicts this stock time series' future values? Rather than randomly selecting stocks to compare or model, if profit is my goal, what is an efficient approach, if any, to selecting the stocks I'm going to analyze? Which stats program is the most user-friendly for this? Any thoughts on this would be great and would go a long way for me. Thanks, Brian

    Read the article

  • Time complexity with bit cost

    - by Keyser
    I think I might have completely misunderstood bit-cost analysis. I'm trying to wrap my head around the concept of studying an algorithm's time complexity with respect to bit cost (instead of unit cost) and it seems to be impossible to find anything on the subject. Is this considered to be so trivial that no one ever needs to have it explained to them? Well, I do. (Also, there doesn't even seem to be anything on Wikipedia, which is very unusual.) Here's what I have so far: The bit cost of multiplication and division of two numbers with n bits is O(n^2) (in general?) So, for example: int number = 2; for(int i = 0; i < n; i++ ){ number = i*i; } has a time complexity with respect to bit cost of O(n^3), because it does n multiplications (right?) But in a regular scenario we want the time complexity with respect to the input. So, how does that scenario work? The number of bits in i could be considered a constant, which would make the time complexity the same as with unit cost, except with a bigger constant (and both would be linear). Also, I'm guessing addition and subtraction can be done in constant time, O(1). I couldn't find any info on it, but it seems reasonable since it's one assembler operation.

    Read the article

  • How Can I: Generate 40/64 Bit WEP Key In Python?

    - by Aktariel
    So, I've been beating my head against the wall of this issue for several months now, partly because it's a side interest and partly because I suck at programming. I've searched and researched all across the web, but have not had any luck (except one small bit of success; see below), so I thought I might try asking the experts. What I am trying to do is, as the title suggests, generate a 40/64-bit WEP key from a passphrase, according to the "de facto" standard. (A site such as [http://www.powerdog.com/wepkey.cgi] produces the expected outputs.) I have already written portions of the script that take inputs and write them to a file; one of the inputs would be the passphrase, sanitized to lower case. For the longest time I had no idea what the de facto standard was, much less how to go about implementing it. I finally stumbled across a paper (http://www.lava.net/~newsham/wlan/WEP_password_cracker.pdf) that sheds as much light as I've had yet on the issue (page 18 has the relevant bits). Apparently, the passphrase is "mapped to a 32-bit value with XOR," and the result is then used as the seed for a "linear congruential PRNG" (which of the several PRNGs Python has would fit this description, I don't know); several bits of each result are then taken. I have no idea how to go about implementing this, since the description is rather vague. What I need is help in writing the generator in Python, and also in understanding how exactly the key is generated. I'm not much of a programmer, so explanations are appreciated as well. (Yes, I know that WEP isn't secure.)
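    As far as I know, the commonly described "de facto" generator works like the sketch below; the LCG constants and the bits-16-23 byte selection are my recollection of the usual description, so verify the output against a reference site such as the one above before trusting it.

        def wep40_keys(passphrase):
            """Generate four 40-bit (10 hex digit) WEP keys from a passphrase."""
            # Fold the passphrase into a 32-bit seed, XORing each byte in at a
            # position that rotates over the four seed bytes.
            seed = 0
            for i, ch in enumerate(passphrase.lower()):
                seed ^= ord(ch) << ((i % 4) * 8)
            keys = []
            for _ in range(4):              # the scheme yields four keys
                key_bytes = []
                for _ in range(5):          # 5 bytes = 40 bits per key
                    # Linear congruential step, truncated to 32 bits;
                    # one key byte is taken from bits 16-23 of the state.
                    seed = (seed * 0x343FD + 0x269EC3) & 0xFFFFFFFF
                    key_bytes.append((seed >> 16) & 0xFF)
                keys.append(''.join('%02x' % b for b in key_bytes))
            return keys

        print(wep40_keys('password'))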

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n: f(int n) { int i; for (i = 0; i < n; ++i) printf("%d", i); } Similarly, if there were a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log_10(100) = 2, but I cannot understand how to identify a function with a logarithmic time.
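    The standard illustration (my example): an algorithm is O(log n) when each step cuts the remaining work by a constant factor, so doubling the input adds only one more step. Binary search is the canonical case:

        def binary_search(sorted_list, target):
            """Each iteration halves the candidate range, so for n elements
            the loop runs at most about log2(n) + 1 times: O(log n)."""
            lo, hi = 0, len(sorted_list) - 1
            while lo <= hi:
                mid = (lo + hi) // 2
                if sorted_list[mid] == target:
                    return mid
                elif sorted_list[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1

        # A million elements need at most ~20 probes, since 2**20 > 1,000,000.

    The complete binary tree statement is the same fact in reverse: with n nodes, halving n repeatedly until one node is left takes about log2(n) steps, and that is the height.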

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSM, aka position-specific weight matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of a matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher-order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix for mononucleotides and separately for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
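    One vectorized alternative to matrix multiplication (a NumPy sketch with my own names, assuming pssm is an L x 4 array of per-position scores): encode the sequence as integers once, build the index of every length-L window, and let fancy indexing gather and sum the terms. Dinucleotide terms work the same way against an (L-1) x 16 matrix indexed by 4*idx[m+j] + idx[m+j+1].

        import numpy as np

        BASE_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

        def score_all_positions(seq, pssm):
            """Return the PSSM score at each of the M-L+1 placements of an
            L x 4 matrix `pssm` along the DNA string `seq`."""
            idx = np.fromiter((BASE_INDEX[b] for b in seq), dtype=np.intp)
            L = pssm.shape[0]
            offsets = np.arange(L)
            # windows[m, j] is the encoded letter at position m + j.
            windows = idx[np.arange(len(idx) - L + 1)[:, None] + offsets]
            # Gather pssm[j, letter] for every window column, sum per row.
            return pssm[offsets, windows].sum(axis=1)

    In C++, the analogous fix is usually replacing the nested std::vector with one flat contiguous array and integer-encoding the sequence up front, which removes most of the element-access overhead the profiler flagged.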

    Read the article

  • JQuery - Sticky sidebar not working in Firefox

    - by user1473358
    I'm working on a site with a Sticky sidebar (fixed position under certain conditions, etc) that is working fine in Chrome & Safari, but breaks in Firefox. I'm not sure what the issue is since I didn't write the script: $(document).ready(function() { /* var defaults = { containerID: 'toTop', // fading element id containerHoverID: 'toTopHover', // fading element hover id scrollSpeed: 1200, easingType: 'linear' }; */ $().UItoTop({ easingType: 'easeOutQuart' }); if (!!$('.sticky').offset()) { // make sure ".sticky" element exists var stickyTop = $('.sticky').offset().top; // returns number var newsTop = $('#news_single').offset().top; // returns number $(window).scroll(function(){ // scroll event var windowTop = $(window).scrollTop(); // returns number var width = $(window).width(); var height = $(window).height(); if (stickyTop < windowTop && width > 960 && height > 450){ $('.sticky').css({ position: 'fixed', top: 40 }); $('#news_single').css({ left: 230 }); } else { $('.sticky').css('position','static'); $('#news_single').css({ left: 0 }); } }); } }); Here's the site (the sidebar in question is the one with the red header, to the left): http://www.parisgaa.org/parisgaels/this-is-a-heading-too-a-longer-one I'd appreciate any help with this.

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem; if anyone has any references that I can read to help me solve my problem even better, please share them. I have a classification problem of four discrete and very well separated classes. However, my input is continuous and has a high frequency (50Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 equals the (neutral/resting, do-nothing) class. This class is the rejected class. However, the problem is that when I move from one class to the other I activate a lot of false positives in the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to 1, I first see a lot of 3's before getting to the 1 class. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is Class 5: it has a higher decision threshold than the other classes, to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab, using optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle abstain/rejected classes? Should I use fuzzy logic, a loss function, or should I include the resting cluster in the training?

    Read the article

  • How to make UISlider output nice rounded numbers exponentially?

    - by RickiG
    Hi, I am implementing a UISlider a user can manipulate to set a distance. I have never used the CocoaTouch UISlider, but have used other frameworks' sliders; usually there is a variable for setting the "step" and other "helper" properties. The documentation for the UISlider deals only with a max and min value, and the output is always a 6-decimal float with a linear relation to the position of the slider knob. I guess I will have to implement the desired functionality step by step. To the user, the min/max values range from 10 m to 999 km, and I am trying to implement this in an exponential way that will feel natural, i.e. the user experiences a feeling of control over the values, big or small. Also, the "output" should have reasonable values: values like 10 m, 200 m, 2.5 km, 150 km etc., instead of 1.2342356 m or 108.93837756 km. I would like the step size to increase by 10 m for the first 200 m, then maybe by 50 m up to 500 m; then, when passing the 1000 m value, it starts to deal with kilometers, so step size = 1 km up until 50 km, then maybe 25 km steps, etc. Any way I go about this, I end up doing a lot of rounding and a lot of calculations wrapped in a forest of if statements and NSString/Number conversions, each time the user moves the slider just a little. I was hoping someone could lend me a bit of inspiration/math help or make me aware of a leaner approach to solving this problem. My last idea is to populate an array with 100 string values, then have the slider int value correspond to a string; this is not very flexible, but doable. Thank you in advance for any help given :)
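    The usual trick (a language-neutral sketch in Python; the names and the two-significant-digit rounding rule are my own): leave the slider linear over 0..1, map that value exponentially onto the distance range, then snap to a round number, instead of enumerating step sizes in if statements.

        import math

        def slider_to_distance(t, d_min=10.0, d_max=999000.0):
            """Map the slider's linear 0..1 value onto meters exponentially:
            equal slider movements multiply the distance by equal factors."""
            return d_min * (d_max / d_min) ** t

        def nice_round(x):
            """Snap to two significant digits so values read 10 m, 250 m, 2.5 km."""
            magnitude = 10 ** (math.floor(math.log10(x)) - 1)
            return round(x / magnitude) * magnitude

        def label(meters):
            return '%g m' % meters if meters < 1000 else '%g km' % (meters / 1000)

        for t in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(label(nice_round(slider_to_distance(t))))
        # -> 10 m, 180 m, 3.2 km, 56 km, 1000 km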

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message: pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject Does anyone know of a way around this, or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml, saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename = os.path.join(prefix, labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

    Read the article

  • Do you like Twisted?

    - by Luca
    I use Python Twisted for web development, and I don't like it. I know async programming is a great idea, I know there are many async web servers now, I know it's the only way to solve some problems you'd have with threads, but I don't like it. The problem is that you're forced to program in a twisted way, so the architecture you have in mind very often has to be modified to fit the way Twisted works. The architecture has to follow the technology; I don't think this is good. When we use callbacks in JavaScript, we don't have too many difficulties: things are usually simpler, and we use a callback in response to an Ajax call. But in a server web app things are, very often, a bit more complex. Writing chains of callbacks doesn't seem to me a wonderful way of programming. The code is not simple, and so it is difficult to understand and to maintain. Writing Twisted code, we very often lose the linear, intuitive idea of the algorithm we wanted to implement, especially when things grow in complexity. What's your point of view?

    Read the article

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on StackOverflow and Googled with not much luck. I'm also very new to Java (I'm coming from a C# and .NET background), so please bear with me. There is so much available in the Java world that it's pretty overwhelming. I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that leverages native acceleration such as MKL and is proven (so commercial options are definitely a possibility here). In the .NET space there are highly optimized, MKL-accelerated commercial mathematical libraries such as CenterSpace NMath and Extreme Optimization. Is there anything comparable in Java? Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math). I have considered trying to leverage MKL directly from Java myself (e.g. JNI), but being new to Java (let alone to interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly and efficiently, and is proven. Again, I apologize if I am mistaken or misguided (even regarding any libraries I've mentioned) in my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken about where to look and about the Java libraries I've mentioned. I would greatly appreciate any help or advice.

    Read the article

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writing steps, the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers. The benchmark is run twice: the first time, the default number of document revisions (=1000) is used (_revs_limit); the second time, the number of document revisions is set to 1. The first run produces the following plot. The second run produces this plot. For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size value should be constant, as the older revisions are discarded. After the compaction, the size should fall significantly. In the second run, the first revision should result in a certain database size that is then kept during the following writing steps, as every new revision leads to the deletion of the previous one. I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct my assumptions that led to the wrong expectations?

    Read the article

  • CMYK + CMYK = ? CMYK / 2 = ?

    - by Pete
    Suppose there are two colors defined in CMYK: color1 = 30, 40, 50, 60 color2 = 50, 60, 70, 80 If they were to be printed, what values would the resulting color have? color_new = min(cyan1 + cyan2, 100), min(magenta1 + magenta2, 100), min(yellow1 + yellow2, 100), min(black1 + black2, 100)? Suppose there is a color defined in CMYK: color = 40, 30, 30, 100 It is possible to print a color at partial intensity, i.e. as a tint. What values would a 50% tint of that color have? color_new = cyan / 2, magenta / 2, yellow / 2, black / 2? I'm asking this to better understand the "tintTransform" function in PDF Reference 1.7, 4.5.5 Special Color Spaces, DeviceN Color Spaces. Update: To clarify: I'm not entirely concerned with human perception or how the CMYK dyes react to the paper. If someone specifies a 90% tint which, when printed, looks like a full-intensity colorant, that's ok. In other words, if I'm asking how to compute 50% of cmyk(40, 30, 30, 100), I'm asking how to compute the new values, regardless of whether the result looks half-dark or not. Update 2: I'm confused now. I checked this in InDesign and Acrobat. For example, Pantone 3005 has CMYK 100, 34, 0, 2, and its 25% tint has CMYK 25, 8.5, 0, 0.5. Does it mean I can "monkey around in a linear way"?
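    The Update 2 check suggests the answer to the tint question is yes: applications compute tints linearly per channel. A small sketch of both guesses (my function names):

        def overprint(c1, c2):
            """Naive add-and-clamp combination of two CMYK colors, matching the
            min(x1 + x2, 100) guess; real ink-on-paper behavior is not this simple."""
            return tuple(min(a + b, 100) for a, b in zip(c1, c2))

        def tint(color, fraction):
            """Linear tint, as the Pantone 3005 example in Update 2 shows."""
            return tuple(channel * fraction for channel in color)

        print(overprint((30, 40, 50, 60), (50, 60, 70, 80)))  # (80, 100, 100, 100)
        print(tint((100, 34, 0, 2), 0.25))                    # (25.0, 8.5, 0.0, 0.5)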

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries to return exactly what a certain page needs and no more. I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return people and a query to return job details for a person. To return a list of people with their job details, I could query once for people and then once per person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to reuse existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?

    Read the article

  • pylab.savefig() and pylab.show() image difference

    - by Jack1990
    I'm making a script to automatically create plots from .xvg files, but there's a problem when I'm trying to use pylab's savefig() method. Using pylab.show() and saving from there, everything's fine. Using pylab.show() Using pylab.savefig() def producePlot(timestep, energy_values,type_line = 'r', jump = 1,finish = 100): fc = sp.interp1d(timestep[::jump], energy_values[::jump],kind='cubic') xnew = numpy.linspace(0, finish, finish*2) pylab.plot(xnew, fc(xnew),type_line) pylab.xlabel('Time in ps ') pylab.ylabel('kJ/mol') pylab.xlim(xmin=0, xmax=finish) def produceSimplePlot(timestep, energy_values,type_line = 'r', jump = 1,finish = 100): pylab.plot(timestep, energy_values,type_line) pylab.xlabel('Time in ps ') pylab.ylabel('kJ/mol') pylab.xlim(xmin=0, xmax=finish) def linearRegression(timestep, energy_values, type_line = 'g'): #, jump = 1,finish = 100): from scipy import stats import numpy timestep = numpy.asarray(timestep) slope, intercept, r_value, p_value, std_err = stats.linregress(timestep,energy_values) line = slope*timestep+intercept pylab.plot(timestep, line, type_line) def plottingTime(Title,file_name, timestep, energy_values ,loc, jump , finish): pylab.title(Title) producePlot(timestep,energy_values, 'b',jump, finish) linearRegression(timestep,energy_values) import numpy Average = numpy.average(energy_values) #print Average pylab.legend(("Average = %.2f" %(Average),'Linear Reg'),loc) #pylab.show() pylab.savefig('%s.jpg' %file_name[:-4], bbox_inches= None, pad_inches=0) #if __name__ == '__main__': #plottingTime(Title,timestep1, energy_values, jump =10, finish = 4800) def specialCase(Title,file_name, timestep, energy_values,loc, jump, finish): pylab.title(Title) producePlot(timestep,energy_values, 'b',jump, finish) import numpy from pylab import * Average = numpy.average(energy_values) #print Average pylab.legend(("Average = %.2g" %(Average), Title),loc) locs,labels = yticks() yticks(locs, map(lambda x: "%.3g" % x, locs)) #pylab.show() pylab.savefig('%s.jpg' %file_name[:-4] , bbox_inches= None, pad_inches=0) Thanks in advance, John

    Read the article

  • JMF microphone volume controller

    - by TacB0sS
    How do I obtain the microphone volume controller in JMF? This is what I have: I tried this implementation concept of yours, but I keep getting a null from the first volume processor when I try to get the stream. Here is how I do it: // the device is the media device, specifically audio Processor processorForVolume = Manager.createProcessor(device.getLocator()); // wait until configured ProcessorStates newState = new ProcessorStateListener(Processor.Configured).waitForProcessorState(processorForVolume); System.out.println("volumeProcessorState: "+newState); // setting the content descriptor to null - read in another thread; this allows us to get the gain control processorForVolume.setContentDescriptor(null); // set the track control format to one supported by the device and the track control. // I didn't match it to an RTP-allowed format, but I don't think this has anything to do with it... TrackControl[] trackControls = processorForVolume.getTrackControls(); if (trackControls.length == 0) throw new MC_Exception("No track controls were found for this device:", new Object[]{device}); for (TrackControl control : trackControls) trackManipulator.manipulateTrackControls(control); // wait until the processor is realized newState = new ProcessorStateListener(Controller.Realized).waitForProcessorState(processorForVolume); System.out.println("volumeProcessorState: "+newState); // receives the gain control micVolumeController = processorForVolume.getGainControl(); // cannot get the output stream to process further... any suggestions? processor = Manager.createProcessor(processorForVolume.getDataOutput()); new ProcessorStateListener(Processor.Configured).waitForProcessorState(processor); processor.setContentDescriptor(DeviceCapturingManager.RAW_RTP); new ProcessorStateListener(Controller.Realized).waitForProcessorState(processor); This is the output it generates: volumeProcessorState: Configured format set to track control - com.sun.media.ProcessEngine$ProcTControl@1627c16: LINEAR, 48000.0 Hz, 16-bit, Stereo, LittleEndian, Signed volumeProcessorState: Realized and the data output from the processor is null. I should make clear that when the content descriptor != null I do get an output stream but not the volume controller, and when it is null I get the controller but no stream. I am trying to connect to an audio microphone device. Adam

    Read the article
