Search Results

Search found 236 results on 10 pages for 'np'.


  • Match multiline regex in file object

    - by williamx
    How can I extract the groups from this regex from a file object (data.txt)?

        import numpy as np
        import re
        import os

        ifile = open("data.txt", 'r')

        # Regex pattern
        pattern = re.compile(r"""
            ^Time:(\d{2}:\d{2}:\d{2})   # Time: 12:34:56 at beginning of line
            \r{2}                       # Two carriage returns
            \D+                         # 1 or more non-digits
            storeU=(\d+\.\d+) \s
            uIx=(\d+) \s
            storeI=(-?\d+.\d+) \s
            iIx=(\d+) \s
            avgCI=(-?\d+.\d+)
            """, re.VERBOSE | re.MULTILINE)

        time = []
        for line in ifile:
            match = re.search(pattern, line)
            if match:
                time.append(match.group(1))

    The problem in the last part of the code is that I iterate line by line, which obviously doesn't work with a multiline regex. I have tried to use pattern.finditer(ifile) like this:

        for match in pattern.finditer(ifile):
            print match

    ... just to see if it works, but the finditer method requires a string or buffer. I have also tried this method, but can't get it to work:

        matches = [m.groups() for m in pattern.finditer(ifile)]

    Any idea?
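    A minimal sketch of one way around this (an assumption, not from the original post): read the whole file into a single string so the multiline pattern can span line breaks, then let finditer walk that string. The pattern here is simplified for illustration.

        import re

        # finditer needs a string (or buffer), not a file object, so read the whole file first
        with open("data.txt", "r") as ifile:
            text = ifile.read()

        pattern = re.compile(r"^Time:(\d{2}:\d{2}:\d{2})", re.MULTILINE)  # simplified pattern for illustration

        times = [m.group(1) for m in pattern.finditer(text)]
        print(times)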


  • Polygon packing 2D

    - by Ilnur
    Hi! I have a problem packing 2 arbitrary polygons. That is, given 2 arbitrary polygons, we are to find a placement of these polygons (rotations and translations are allowed) such that the rectangle circumscribing them has minimal area. I know that this is an NP-complete problem. I want to choose an efficient algorithm for solving it. I'm looking at the No-Fit-Polygon approach, but I couldn't find a simple and clear algorithm anywhere for computing the NFP of two arbitrary polygons.


  • MPI signal handling

    - by Seth Johnson
    When using mpirun, is it possible to catch signals (for example, the SIGINT generated by ^C) in the code being run? For example, I'm running parallelized Python code. I can use except KeyboardInterrupt to catch those errors when running python blah.py by itself, but I can't when doing mpirun -np 1 python blah.py. Does anyone have a suggestion? Even finding out how to catch signals in a C or C++ compiled program would be a helpful start. If I send a signal to the spawned Python processes, they can handle the signals properly; however, signals sent to the parent orterun process (i.e. from exceeding wall time on a cluster, or pressing control-C in a terminal) will kill everything immediately.
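    A minimal sketch (an assumption, not from the original post) of registering a handler inside each Python rank so SIGINT and SIGTERM can be intercepted when they do reach the process; whether mpirun/orterun forwards the signal to its children is a separate question.

        import signal
        import sys

        def handle_signal(signum, frame):
            # clean up (flush files, checkpoint, ...) before exiting
            print("rank received signal", signum)
            sys.exit(1)

        # register the handler for Ctrl-C and termination requests
        signal.signal(signal.SIGINT, handle_signal)
        signal.signal(signal.SIGTERM, handle_signal)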


  • An approximate algorithm for finding a Steiner forest

    - by Tadeusz A. Kadlubowski
    Hello. Consider a weighted graph G=(V,E,w). We are given a family of subsets of vertices V_i; those sets are not necessarily disjoint. A Steiner forest is a forest that, for each subset of vertices V_i, connects all of the vertices in this subset with a tree. Example: with only one subset V_1 = V, a Steiner forest is a spanning tree of the whole graph. Enough theory. Finding such a forest with minimal weight is difficult (NP-complete). Do you know of a fast approximation algorithm that finds such a forest with non-optimal weight?
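    One hedged sketch of a classic heuristic (an assumption, not part of the original question): build an approximate Steiner tree for each terminal set V_i via the metric-closure / shortest-path heuristic, here using networkx's approximation helper, and take the union of the trees. The result connects every V_i but carries no tight guarantee on the total weight of the forest.

        import networkx as nx
        from networkx.algorithms.approximation import steiner_tree

        def approx_steiner_forest(G, terminal_sets, weight="weight"):
            # one approximate Steiner tree per terminal set, then union the edge sets
            forest = nx.Graph()
            for terminals in terminal_sets:
                T = steiner_tree(G, terminals, weight=weight)
                forest.add_weighted_edges_from(
                    (u, v, G[u][v][weight]) for u, v in T.edges()
                )
            return forest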


  • Python: x-y-plot with matplotlib

    - by kame
    I want to plot some data. The first column contains the x-data, but matplotlib doesn't plot this. Where is my mistake?

        #fresnel formula
        import numpy as np
        from numpy import cos
        from scipy import *
        from pylab import plot, show, ylim, yticks
        from matplotlib import *
        from pprint import pprint

        n1 = 1.0
        n2 = 1.5

        #alpha, beta, intensity
        data = [
            [10,  22, 4.3],
            [20,  42, 4.2],
            [30,  62, 3.6],
            [40,  83, 1.3],
            [45, 102, 2.8],
            [50, 123, 3.0],
            [60, 143, 3.2],
            [70, 163, 3.8],
        ]

        for i in range(len(data)):
            rhotang1 = (n1 * cos(data[i][0]) - n2 * cos(data[i][1]))
            rhotang2 = (n1 * cos(data[i][0]) + n2 * cos(data[i][1]))
            rhotang = rhotang1 / rhotang2
            data[i].append(rhotang)  #append 4th value

        pprint(data)

        x = data[:][0]
        y1 = data[:][2]
        y3 = data[:][3]

        plot(x, y1, x, y3)
        show()

    EDIT: http://paste.pocoo.org/show/205534/ But it doesn't work.
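    A minimal sketch of the likely fix (an assumption, not from the original post): data[:][0] just copies the list and takes its first row, so the intended columns are never selected. Converting the list to a numpy array and slicing columns gives the x and y series; note also that numpy's cos expects radians, so degree values would need np.radians first.

        import numpy as np
        from matplotlib import pyplot as plt

        data = np.array([
            [10,  22, 4.3],
            [20,  42, 4.2],
            [30,  62, 3.6],
            [40,  83, 1.3],
        ])  # shortened toy data for illustration

        x = data[:, 0]   # first column: alpha
        y1 = data[:, 2]  # third column: intensity

        plt.plot(x, y1, marker="o")
        plt.show()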


  • PInvoke: calling a function with a pointer-to-pointer-to-pointer parameter

    - by jambodev
    I'm a complete newbie with PInvoke. I have a function in C with this signature:

        int addPos(int init_array_size, int *cnt, int *array_size,
                   PosT ***posArray, PosT ***hPtr,
                   char *id, char *record_id, int num, char *code, char *type,
                   char *name, char *method, char *cont1, char *cont2,
                   char *cont_type, char *date1, char *date_day, char *date2,
                   char *dsp, char *curr, char *contra_acc, char *np, char *ten,
                   char *dsp2, char *covered, char *cont_subtype, char *Xcode,
                   double strike, int version, double t_price, double long,
                   double short, double scale, double exrcised_price,
                   char *infoMsg);

    and here is what PosT looks like:

        typedef union pu {
            struct dpos d;
            struct epo  e;
            struct bpos b;
            struct spos c;
        } PosT;

    My questions are:

    1 - Do I need to define a class in C# representing PosT?
    2 - How do I pass the PosT ***posArray parameter across from C# to C?
    3 - How do I specify marshaling for it all?

    I do appreciate your help.


  • How do I set a matplotlib colorbar's extents?

    - by Adam Fraser
    I'd like to display a colorbar representing an image's raw values alongside a matplotlib imshow subplot which displays that image, normalized. I've been able to draw the image and a colorbar successfully like this, but the colorbar min and max values represent the normalized (0,1) image instead of the raw (0,99) image.

        f = plt.figure()

        # create toy image
        im = np.ones((100,100))
        for x in range(100):
            im[x] = x

        # create imshow subplot
        ax = f.add_subplot(111)
        result = ax.imshow(im / im.max())

        # Create the colorbar
        axc, kw = matplotlib.colorbar.make_axes(ax)
        cb = matplotlib.colorbar.Colorbar(axc, result)

        # Set the colorbar
        result.colorbar = cb

    If someone has a better mastery of the colorbar API, I'd love to hear from you. Thanks! Adam
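    A hedged sketch of one way to get raw-valued ticks (an assumption, not the poster's final solution): imshow already rescales whatever it is given to the colormap via vmin/vmax, so the rendered image looks the same whether or not it is divided by its max. Passing the raw image and letting Figure.colorbar build the bar from that mappable makes the tick labels span the raw 0..99 range.

        import numpy as np
        import matplotlib.pyplot as plt

        im = np.ones((100, 100))
        for x in range(100):
            im[x] = x                   # rows run 0..99

        fig, ax = plt.subplots()
        result = ax.imshow(im)          # raw values; display is normalized internally
        fig.colorbar(result, ax=ax)     # ticks now reflect the raw 0..99 range
        plt.show()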


  • Unable to run OpenMPI across more than two machines

    - by rcollyer
    When attempting to run the first example in the boost::mpi tutorial, I was unable to run across more than two machines. Specifically, this seemed to run fine:

        mpirun -hostfile hostnames -np 4 boost1

    with each hostname in hostnames as <node_name> slots=2 max_slots=2. But when I increase the number of processes to 5, it just hangs. I have decreased the number of slots/max_slots to 1 with the same result when I exceed 2 machines. On the nodes, this shows up in the job list:

        <user> Ss orted --daemonize -mca ess env -mca orte_ess_jobid 388497408 \
            -mca orte_ess_vpid 2 -mca orte_ess_num_procs 3 -hnp-uri \
            388497408.0;tcp://<node_ip>:48823

    Additionally, when I kill it, I get this message:

        node2 - daemon did not report back when launched
        node3 - daemon did not report back when launched

    The cluster is set up with the mpi and boost libs accessible on an NFS-mounted drive. Am I running into a deadlock with NFS? Or is something else going on?


  • Honor Whitespace padding to display columns in fixed width <select>

    - by Laramie
    I am trying to create the effect of columns in a dropdown by padding text with whitespace, as in this example:

        <select style="font-family: courier;">
            <option value="1">[Aux1+1] [*] [Aux1+1] [@Tn=PP] </option>
            <option value="2">[Main] [*] [Main Apples Oranges] [@Fu=$p] </option>
            <option value="3">[Main] [*] [Next NP] [@Fu=n] </option>
            <option value="4">[Main] [Dr] [Main] [@Ty=$p] </option>
        </select>

    According to this blog, it's possible. The problem is that the whitespace is collapsed, so the columns don't line up. Same results in FF, IE6 and Chrome. What am I missing?


  • vectorized approach to binning with numpy/scipy in Python

    - by user248237
    I am binning a 2d array (x by y) in Python into the bins of its x value (given in "bins"), using np.digitize:

        elements_to_bins = digitize(vals, bins)

    where "vals" is a 2d array, i.e.:

        vals = array([[1, v1], [2, v2], ...])

    elements_to_bins just says what bin each element falls into. What I then want to do is get a list whose length is the number of bins in "bins", and each element returns the y-dimension of "vals" that falls into that bin. I do it this way right now:

        points_by_bins = []
        for curr_bin in range(min(elements_to_bins), max(elements_to_bins) + 1):
            curr_indx = where(elements_to_bins == curr_bin)[0]
            curr_bin_vals = vals[:, curr_indx]
            points_by_bins.append(curr_bin_vals)

    Is there a more elegant/simpler way to do this? All I need is a list of lists of the y-values that fall into each bin. Thanks.
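    A minimal sketch of one tidier approach (an assumption about the intended data layout, with column 0 holding x and column 1 holding y): digitize only the x column, then collect the y values per bin with a boolean mask.

        import numpy as np

        vals = np.array([[0.5, 10.0], [1.5, 20.0], [2.5, 30.0], [1.7, 40.0]])  # toy data
        bins = np.array([0.0, 1.0, 2.0, 3.0])

        bin_ids = np.digitize(vals[:, 0], bins)   # bin index of each x value

        # one array of y-values per bin (bins are numbered 1..len(bins)-1 here)
        points_by_bins = [vals[bin_ids == b, 1] for b in range(1, len(bins))]
        print(points_by_bins)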


  • draw csv file data as a heatmap using numpy and matplotlib

    - by Schrodinger's Cat
    Hello all, I was able to load my csv file into a numpy array:

        data = np.genfromtxt('csv_file', dtype=None, delimiter=',')

    Now I would like to generate a heatmap. I have 19 categories from 11 samples, along these lines:

        cat,1,2,3...
        a,0.0,0.2,0.3
        b,1.0,0.4,0.2
        ...

    I wanted to use matplotlib's colormesh, but I'm at a loss. All the examples I could find used random number arrays. Any help and insights would be greatly appreciated. Many thanks
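    A hedged sketch (the file name and layout are assumptions based on the description): load only the numeric part, skipping the header row and the category column, and hand the 2-D array straight to pcolormesh; category and sample labels can then be added with set_yticklabels / set_xticklabels.

        import numpy as np
        import matplotlib.pyplot as plt

        # numeric part only: skip the header row and the first (category) column
        values = np.genfromtxt('csv_file', delimiter=',', skip_header=1, usecols=range(1, 12))

        fig, ax = plt.subplots()
        mesh = ax.pcolormesh(values)
        fig.colorbar(mesh, ax=ax)
        plt.show()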


  • Batch file to check if a file exists and raise an error if it doesn't

    - by cfdev9
    This batch file releases a build from TEST to LIVE. I want to add a check to this file that ensures there is an accompanying release document in a specific folder.

        "C:\Program Files\Windows Resource Kits\Tools\robocopy.exe" "\\testserver\testapp$" "\\liveserver\liveapp$" *.* /E /XA:H /PURGE /XO /XD ".svn" /NDL /NC /NS /NP
        del "\\liveserver\liveapp$\web.config"
        ren "\\liveserver\liveapp$\web.live.config" web.config

    So I have a couple of questions about how to achieve this... There is a version.txt file in the \\testserver\testapp$ folder and the only contents of this file is the build number (e.g. 45 for build 45).

    1. How do I read the contents of the version.txt file into a variable in the batch file?
    2. How do I check if a file \\fileserver\myapp\releasedocs\{build}.doc exists, using the variable from part 1 in place of {build}?


  • implementing feature structures: what data type to use?

    - by Dervin Thunk
    Hello. In simplistic terms, a feature structure is an unordered list of attribute-value pairs:

        [number:sg, person:3 | _ ]

    which can be embedded:

        [cat:np, agr:[number:sg, person:3 | _ ] | _ ]

    and can subindex parts and share values:

        [number:[1], person:3 | _ ]

    where [1] is another feature structure (that is, it allows reentrancy). My question is: what data structure do people think this should be implemented with, for later access to values, to perform unification between 2 feature structures, to "type" them, etc.? There is a full book on this, but it's in Lisp, which simplifies list handling. So, my choices are: a hash of lists, a list of lists, or a trie. What do people think about this?


  • How come different text files become different sizes after compression?

    - by Arsheep
    I have a file of some random text, size = 27 GB, and after compression it becomes 40 MB or so. A 3.5 GB SQL file becomes 45 MB after compression. But a 109 MB text file becomes 72 MB after compression, so what could be wrong with it? Why is it compressed so little? It should be 10 MB or so, or am I missing something? As far as I can see, all the files are English text only plus some punctuation symbols (/ , . - = + etc). Can you tell me why? If not, can you tell me how I can super-compress a text file? I can code in PHP, no problem.
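    A small hedged sketch (in Python rather than PHP, purely for illustration; the file name is an assumption) comparing standard general-purpose compressors on the same file. Highly repetitive text compresses dramatically better than varied prose, which usually explains differences like the ones above.

        import bz2, gzip, lzma

        with open("sample.txt", "rb") as f:   # hypothetical input file
            data = f.read()

        for name, compress in [("gzip", gzip.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
            print(name, len(compress(data)), "bytes from", len(data))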


  • Algorithm design: can you provide a solution to the multiple knapsack problem?

    - by MalcomTucker
    I am looking for a pseudo-code solution to what is effectively the Multiple Knapsack Problem (the optimisation statement is halfway down the page). I think this problem is NP-complete, so the solution doesn't need to be optimal; rather, if it is fairly efficient and easily implemented that would be good. The problem is this: I have many work items, with each taking a different (but fixed and known) amount of time to complete. I need to divide these work items into groups so as to have the smallest number of groups (ideally), with each group of work items taking no longer than a given total threshold - say 1 hour. I am flexible about the threshold - it doesn't need to be rigidly applied, though it should be close. My idea was to allocate work items into bins where each bin represents 90% of the threshold, 80%, 70% and so on. I could then match items that take 90% to those that take 10%, and so on. Any better ideas?
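    A minimal sketch of the classic first-fit decreasing heuristic (an assumption, not the poster's design): sort items by duration, then drop each into the first group that still has room, opening a new group when none fits. It is not optimal, but for bin-packing-style problems it is simple and usually close.

        def first_fit_decreasing(durations, threshold):
            # returns a list of groups, each a list of durations summing to <= threshold
            groups, remaining = [], []
            for d in sorted(durations, reverse=True):
                for i, room in enumerate(remaining):
                    if d <= room:
                        groups[i].append(d)
                        remaining[i] -= d
                        break
                else:
                    groups.append([d])
                    remaining.append(threshold - d)
            return groups

        print(first_fit_decreasing([50, 40, 30, 20, 10, 5], threshold=60))
        # -> [[50, 10], [40, 20], [30, 5]]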


  • Enumerating all Hamiltonian paths from a start to an end vertex in a grid graph

    - by Eric
    Hello, I'm trying to count the number of Hamiltonian paths from a specified start vertex that end at another specified vertex in a grid graph. Right now I have a solution that uses backtracking recursion but is incredibly slow in practice (e.g. O(n!) / 3 hours for 7x7). I've tried a couple of speedup techniques such as maintaining a list of reachable nodes, making sure the end node is still reachable, and checking for isolated nodes, but all of these slowed my solution down. I know that the problem is NP-complete, but it seems like some reasonable speedups should be achievable in the grid structure. Since I'm trying to count all the paths, I'm sure that the search must be exhaustive, but I'm having trouble figuring out how to prune out paths that aren't promising. Does anyone have some suggestions for speeding the search up? Or an alternate search algorithm?
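    For comparison on small instances, a hedged sketch (an assumption, not a fix for 7x7) of the standard bitmask dynamic program: count paths by the set of visited vertices and the current endpoint. It is exponential in the number of vertices, so it is only practical up to roughly 20 cells, but it avoids re-walking shared prefixes the way plain backtracking does.

        def count_hamiltonian_paths(adj, start, end):
            # adj: list of neighbour lists; counts Hamiltonian paths from start to end
            n = len(adj)
            full = (1 << n) - 1
            dp = [[0] * n for _ in range(1 << n)]
            dp[1 << start][start] = 1
            for mask in range(1 << n):
                for v in range(n):
                    ways = dp[mask][v]
                    if ways == 0:
                        continue
                    for u in adj[v]:
                        if not mask & (1 << u):
                            dp[mask | (1 << u)][u] += ways
            return dp[full][end]

        # 2x2 grid, vertices 0-1 / 2-3: exactly one Hamiltonian path from 0 to 2
        adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
        print(count_hamiltonian_paths(adj, 0, 2))   # -> 1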


  • Who owes whom money optimisation problem

    - by Francis
    Say you have n people, each of whom may owe the others money. In general it should be possible to reduce the number of transactions that need to take place. I.e. if X owes Y £4 and Y owes X £8, then Y only needs to pay X £4 (1 transaction instead of 2). This becomes harder when X owes Y, but Y owes Z, who owes X as well. I can see that you can easily calculate one particular cycle. It helps me to think of it as a fully connected graph, with the nodes being the amount each person owes. The problem seems to be NP-complete, but what kind of optimisation algorithm could I use, nevertheless, to reduce the total number of transactions? It doesn't have to be that efficient, as N is quite small for me.
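    A hedged sketch of the usual shortcut (an assumption, not from the post): because only each person's net balance matters, compute net positions first and then repeatedly match the largest debtor with the largest creditor. This yields at most n-1 transactions; squeezing out even fewer is where the NP-hard part lives.

        def settle(debts):
            # debts: list of (debtor, creditor, amount); returns a reduced transaction list
            net = {}
            for frm, to, amt in debts:
                net[frm] = net.get(frm, 0) - amt
                net[to] = net.get(to, 0) + amt
            creditors = sorted((p for p in net if net[p] > 0), key=lambda p: -net[p])
            debtors = sorted((p for p in net if net[p] < 0), key=lambda p: net[p])
            payments = []
            i = j = 0
            while i < len(debtors) and j < len(creditors):
                pay = min(-net[debtors[i]], net[creditors[j]])
                payments.append((debtors[i], creditors[j], pay))
                net[debtors[i]] += pay
                net[creditors[j]] -= pay
                if net[debtors[i]] == 0:
                    i += 1
                if net[creditors[j]] == 0:
                    j += 1
            return payments

        print(settle([("X", "Y", 4), ("Y", "X", 8)]))   # -> [('Y', 'X', 4)]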


  • MemoryError when running Numpy Meshgrid

    - by joaoc
    I have 8823 data points with x,y coordinates. I'm trying to follow the answer on how to get a scatter dataset to be represented as a heatmap, but when I go through the X, Y = np.meshgrid(x, y) instruction with my data arrays I get a MemoryError. I am new to numpy and matplotlib and am essentially trying to run this by adapting the examples I can find. Here's how I built my arrays from a file that has them stored:

        XY_File = open('XY_Output.txt', 'r')
        XY = XY_File.readlines()
        XY_File.close()

        Xf = []
        Yf = []
        for line in XY:
            Xf.append(float(line.split('\t')[0]))
            Yf.append(float(line.split('\t')[1]))

        x = array(Xf)
        y = array(Yf)

    Is there a problem with my arrays? This same code worked when put into this example, but I'm not too sure. Why am I getting this MemoryError and how can I fix this?
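    A hedged sketch of a lighter route (an assumption, not the linked answer): for a scatter-to-heatmap conversion there is no need to meshgrid the raw points at all; np.histogram2d bins them directly, and the resulting small grid can be shown with imshow.

        import numpy as np
        import matplotlib.pyplot as plt

        x = np.random.randn(8823)   # stand-ins for the loaded coordinates
        y = np.random.randn(8823)

        heatmap, xedges, yedges = np.histogram2d(x, y, bins=50)
        extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]

        plt.imshow(heatmap.T, extent=extent, origin='lower')
        plt.colorbar()
        plt.show()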


  • In SQL, find the combinations of rows whose sums add up to a specific amount (or an amount in another table)

    - by SamH
    Table schemas:

        Table_1
            D_ID        integer
            Deposit_amt integer

        Table_2
            Total_ID
            Total_amt   integer

    Is it possible to write a select statement to find all the rows in Table_1 whose Deposit_amt values sum to the Total_amt in Table_2? There are multiple rows in both tables. Say the first row in Table_2 has a Total_amt = 100. I would want to know that in Table_1 the rows with D_ID 2, 6, 12 summed = 100, the rows with D_ID 2, 3, 42 summed = 100, etc. Help appreciated. Let me know if I need to clarify. I am asking this question because someone, as part of their job, has a list of transactions and a list of totals, and she needs to find the possible lists of transactions that could have created each total. I agree this sounds dangerous, as finding a combination of transactions that sums to a total does not guarantee that they created the total. I wasn't aware it is an NP-complete problem.
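    Outside of SQL, a hedged brute-force sketch (the deposit values are hypothetical, and this is subset-sum, so it is only workable for small row counts): pull the deposits into the application and enumerate combinations.

        from itertools import combinations

        deposits = {2: 40, 3: 25, 6: 35, 12: 25, 42: 35}   # hypothetical D_ID -> Deposit_amt
        total = 100

        matches = [
            ids
            for r in range(1, len(deposits) + 1)
            for ids in combinations(deposits, r)
            if sum(deposits[i] for i in ids) == total
        ]
        print(matches)   # e.g. [(2, 6, 12), (2, 3, 42), ...]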


  • How to filter an id for a subcategory in SQL

    - by Naim Nsco
    I have created one table where column A is 1, 2, 3 and column B is 3, 5, 5, 9, 10, 10, ... For example, I want to change from this table:

        NP  A  B   C
        1   1  3   why not
        1   1  5   weigh
        1   1  5   hello
        1   1  9   no way
        1   1  10  now or never
        1   1  10  float the boat, capt ain
        1   1  12  no nelp
        2   2  4   why
        2   2  6   way too much
        2   2  11  help
        3   3  1   not now
        3   3  2   not
        3   3  7   milky way
        3   3  7   this is stupid
        3   3  8   one way

    to be like this:

        A   B   C
        --- --- --------
        1   3   why not
            5   weigh
            5   hello
            9   no way
            10  now or never
            10  float the boat, capt ain
            12  no nelp
        2   4   why
            6   way too much
            11  help
        3   1   not now
            2   not
            7   milky way
            7   this is stupid
            8   one way

    How can I do that? Does anyone know how to do this?


  • Writing to CSV issue in Spyder

    - by 0003
    I am doing the Kaggle Titanic beginner contest. I generally work in the Spyder IDE, but I came across a weird issue. The expected output is supposed to be 418 rows. When I run the script from the terminal, the output I get is 418 rows (as expected). When I run it in the Spyder IDE, the output is 408 rows, not 418. When I re-run it in the current Python process, it outputs the expected 418 rows. I posted a redacted portion of the code that has all of the relevant bits. Any ideas?

        import csv
        import numpy as np

        csvFile = open("/train.csv", "ra")
        csvFile = csv.reader(csvFile)
        header = csvFile.next()

        testFile = open("/test.csv", "ra")
        testFile = csv.reader(testFile)
        testHeader = testFile.next()

        writeFile = open("/gendermodelDebug.csv", "wb")
        writeFile = csv.writer(writeFile)

        count = 0
        for row in testFile:
            if row[3] == 'male':
                # do something to row
                writeFile.writerow(row)
                count += 1
            elif row[3] == 'female':
                # do something to row
                writeFile.writerow(row)
                count += 1
            else:
                raise ValueError("Did not find a male or female in %s" % row)
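    A hedged guess at the cause (an assumption, not a confirmed diagnosis): the output file is never closed, so the final buffered rows are only flushed when the interpreter exits; a terminal run exits immediately, while Spyder keeps the same process alive, so the file looks short until something later flushes it. Using context managers closes (and flushes) the files deterministically. A minimal sketch with hypothetical paths:

        import csv

        with open("test.csv", "r") as test_in, open("gendermodelDebug.csv", "w", newline="") as out:
            reader = csv.reader(test_in)
            writer = csv.writer(out)
            next(reader)                  # skip the header row
            for row in reader:
                writer.writerow(row)
        # both files are closed (and fully flushed) here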


  • Efficiently Reshaping/Reordering Numpy Array to Properly Ordered Tiles (Image)

    - by Phelix
    I would like to be able to somehow reorder a numpy array for efficient processing of tiles.

    What I got:

        >>> A = np.array([[1,2],[3,4]]).repeat(2,0).repeat(2,1)
        >>> A  # image-like array
        array([[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]])
        >>> A.reshape(2,2,4)
        array([[[1, 1, 2, 2],
                [1, 1, 2, 2]],
               [[3, 3, 4, 4],
                [3, 3, 4, 4]]])

    What I want: X

        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [4, 4, 4, 4]]])

    to be able to do something like:

        >>> X[X.sum(2) > 12] -= 1
        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [3, 3, 3, 3]]])

    Is this possible without a slow python loop?

    Bonus: Conversion back from X to A

    Edit: How can I get X from A?
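    A minimal sketch of one way to do the blocking without Python loops (an assumption, not the poster's eventual answer): split each axis into (block, within-block) parts with reshape, swap the middle axes so the two within-block axes sit together, then merge them. The same moves in reverse rebuild A from X.

        import numpy as np

        A = np.array([[1, 2], [3, 4]]).repeat(2, 0).repeat(2, 1)   # 4x4 image-like array

        # forward: 4x4 -> 2x2 grid of flattened 2x2 tiles
        X = A.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(2, 2, 4)

        # backward: undo the same steps
        A_back = X.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(4, 4)

        print(X)
        print(np.array_equal(A, A_back))   # True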


  • Choosing randomly all the elements in the list just once

    - by Dalek
    How is it possible to randomly choose a number from a list with n elements, n times, without picking the same element of the list twice? I wrote some code to choose the sequence numbers (indices) of the elements in the list, but it is slow:

        >>> redshift = np.array([0.92, 0.17, 0.51, 1.33, ...., 0.41, 0.82])
        >>> redshift.shape
        (1225,)

        exclude = []
        k = 0
        ng = 1225
        while (k < ng):
            flag1 = 0
            sq = random.randint(0, ng)
            while (flag1 < 1):
                if sq in exclude:
                    flag1 = 1
                    sq = random.randint(0, ng)
                else:
                    print sq
                    exclude.append(sq)
                    flag1 = 0
                    z = redshift[sq]
                    k += 1

    It doesn't choose all the sequence numbers of the elements in the list.
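    A hedged one-line alternative (an assumption, not the poster's fix): visiting every index exactly once in random order is just a random permutation, which numpy can generate directly.

        import numpy as np

        redshift = np.array([0.92, 0.17, 0.51, 1.33, 0.41, 0.82])   # toy data

        for sq in np.random.permutation(len(redshift)):
            z = redshift[sq]        # each index is visited exactly once, in random order
            print(sq, z)

        # or simply shuffle the values themselves:
        shuffled = np.random.permutation(redshift)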


  • How to interview a natural scientist for a dev position?

    - by Silas
    I already did some interviews for my company, mostly computer scientists for dev positions but also some testers and project managers. Now I have to fill a vacancy in our research group within the R&D department (side note: “research” means that we try to solve problems in our professional domain/market niche using software in research projects together with universities, other companies, research centres and end user organisations. It’s not computer science research; we’re not going to solve the P=NP problem).

    Now we invited a guy holding an MSc in chemistry (with a lot of physics in his CV, too), who never had any computer science lessons. I already talked with him for about half an hour at a local university’s career days and there’s no doubt the guy is smart. Also his marks are excellent and he graduated with distinction. For his BSc he needed to teach himself programming in Mathematica and told me believably that he liked programming a lot. Also he solved some physical chemistry problem that I probably don’t understand using his own software, implemented in Mathematica, for his MSc thesis. It includes a GUI and a notable size of 8,000 LoC. He seems to be very attracted by what we’re doing in our research group and, to be honest, it’s quite difficult for an SME like us to get good people. I am also very interested in hiring him since he could assist me in writing project proposals, reports, doing presentations and so on. He would probably fit into our team, too.

    The only question left is: How can I check whether he will pick up the programming skills he needs to do software implementation in our projects, since this will be a significant part of the job? Of course I will ask him what it is that fascinates him about programming. I’ll also ask how he proceeded to write his natural science software and how he structured it. I’ll ask how he managed to obtain the skills and information about software development he needed. But is there something more I could ask? Something more concrete, perhaps? Should I ask him to explain his Mathematica solution?

    To be clear: I’m not looking for knowledge in a particular language or technology stack. We’re a .NET shop in product development, but I want to have a free choice for our research projects. So I’m interested in the meta-competence of being able to learn whatever is actually needed. I hope this question is answerable and not open-ended, since I really would like to know if there is a default way to check for the ability to pick up further programming skills on the job. If something is not clear to you, please give me some comments and let me improve my question.


  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive.

    We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard.

    Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.

