Search Results

Search found 1872 results on 75 pages for 'matrix mole'.


  • Big problem with Dijkstra's algorithm in a linked-list graph implementation

    - by Nazgulled
    Hi, I have my graph implemented with linked lists, for both vertices and edges, and that is becoming an issue for Dijkstra's algorithm. As I said in a previous question, I'm converting code that uses an adjacency matrix to work with my graph implementation.

    The problem is that when I find the minimum value, I get an array index. This index would have matched the vertex index if the graph vertices were stored in an array instead, and access to the vertex would be constant. I don't have time to change my graph implementation, but I do have a hash table, indexed by a unique number (one that does not start at 0; it's something like 100090000), which is the problem I'm having. Whenever I need to, I use the modulo operator to get a number between 0 and the total number of vertices. This works fine when I need an array index from the number, but when I need the number from the array index (to access the calculated minimum-distance vertex in constant time), not so much. I tried to search for how to invert the modulo operation, as in:

        100090000 mod 18000 = 10000
        10000 invmod 18000 = 100090000

    but couldn't find a way to do it. My next alternative is to build some sort of reference array where, in the example above, arr[10000] = 100090000. That would fix the problem, but would require looping over the whole graph one more time. Do I have any better/easier solution with my current graph implementation?
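    A minimal sketch of the reference-array idea in Python (note that the modulo map is many-to-one, so a true inverse does not exist in general; keeping both directions of the mapping is the usual fix). The vertex ids below are hypothetical:

        # One pass over the vertices builds both mappings;
        # afterwards each lookup is O(1) in either direction.
        vertex_ids = [100090000, 100108000, 100126000]  # hypothetical unique ids

        index_of = {}   # unique id -> dense array index
        id_of = []      # dense array index -> unique id
        for vid in vertex_ids:
            index_of[vid] = len(id_of)
            id_of.append(vid)

        assert id_of[index_of[100090000]] == 100090000

    The extra pass costs O(V) once, which is dwarfed by Dijkstra's own running time, so the reference-array alternative mentioned above is a reasonable choice.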


  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudocode (part II)

    - by Pitelk
    Hi all, I cannot understand a certain part of the paper published by Donald Johnson about finding cycles (circuits) in a graph. More specifically, I cannot understand what the matrix Ak is, which is mentioned in the following line of the pseudocode:

        Ak := adjacency structure of strong component K with least vertex
              in subgraph of G induced by {s, s+1, ..., n};

    To make things worse, some lines later it mentions "for i in Vk do" without declaring what Vk is. As far as I understand, we have the following:

    1) In general, a strong component is a subgraph of a graph in which for every node of this subgraph there is a path to any node of the subgraph (in other words, you can reach any node of the subgraph from any other node of the subgraph).

    2) A subgraph induced by a list of nodes is a graph containing all these nodes plus all the edges connecting these nodes. In the paper the mathematical definition is: F is a subgraph of G induced by W if W is a subset of V and F = (W, {(u,y) | u,y in W and (u,y) in E}), where u, y are nodes, E is the set of all the edges in the graph, and W is a set of nodes.

    3) In the code implementation the nodes are named by integer numbers 1 ... n.

    4) I suspect that Vk is the set of nodes of the strong component K.

    Now to the question. Let's say we have a graph G = (V, E) with V = {1,2,3,4,5,6,7,8,9} which can be divided into 3 strong components: SC1 = {1,4,7,8}, SC2 = {2,3,9}, SC3 = {5,6} (and their edges). Can anybody give me an example, for s = 1, s = 2, s = 5, of what Vk and Ak are going to be according to the code?

    The pseudocode is in my previous question at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code and the paper can be found at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code. Thank you in advance.
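    For what it's worth, here is a small Python sketch of reading point (4) literally, using networkx. The question gives only the component partition, so the edge list below is made up to match it; Ak is taken to be the adjacency structure (restricted to Vk) of the strong component containing the least vertex of the induced subgraph:

        import networkx as nx

        # Hypothetical edges consistent with SC1={1,4,7,8}, SC2={2,3,9}, SC3={5,6}
        G = nx.DiGraph([(1,4),(4,7),(7,8),(8,1),
                        (2,3),(3,9),(9,2),
                        (5,6),(6,5),
                        (1,2),(3,5)])   # plus two made-up cross-component edges
        n = 9

        def vk_ak(s):
            H = G.subgraph(range(s, n + 1))   # subgraph induced by {s,...,n}
            comps = list(nx.strongly_connected_components(H))
            least = min(H.nodes)              # least vertex in the subgraph
            Vk = next(c for c in comps if least in c)
            Ak = {u: sorted(v for v in H.successors(u) if v in Vk) for u in Vk}
            return Vk, Ak

        print(vk_ak(1))  # with these made-up edges: Vk = {1, 4, 7, 8}
        print(vk_ak(2))  # Vk = {2, 3, 9}
        print(vk_ak(5))  # Vk = {5, 6}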


  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop):

        # dummy data
        set.seed(1985)
        lst <- list(a=1:10, b=11:15, c=16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
          # check whether dt is a data.frame
          stopifnot(is.data.frame(dt))
          # check if vectors in lst are "whole" / integer
          # vector elements should be column indexes
          is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
          # fail if any non-integers in list
          idx <- rapply(lst, is.wholenumber)
          stopifnot(idx)
          # check for list length
          stopifnot(ncol(dt) == length(idx))
          # subset the data
          subs <- list()
          for (i in 1:length(lst)) {
            # apply function on each part, by row
            subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
          }
          # preserve names
          names(subs) <- names(lst)
          # convert to data.frame
          subs <- as.data.frame(subs)
          # guess what =)
          return(subs)
        }

    And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of code from a function that accompanies data manipulation in psychological research, you can consider m as the results of a personality questionnaire (10 subjects, 20 vars). Vectors in the list hold column indexes that define questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!


  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than that of another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in the form:

    There are algorithms A and B that have O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.

    Example highlights from the answers:

    - Simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems.
    - A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm.
    - Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines.

    All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst-case and the average-case scenario?

    Related: The Rise of ``Worse is Better''. (For the purpose of this question, the "Worse is Better" phrase is used in a narrower sense -- namely, algorithmic time complexity -- than in the article.)

    Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case).

    The Coppersmith-Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."


  • Fastest image iteration in Python

    - by Greg
    I am creating a simple green screen app with Python 2.7.4 but am getting quite slow results. I am currently using PIL 1.1.7 to load and iterate the images, and saw huge speed-ups changing from the old getpixel() to the newer load() and pixel-access object indexing. However, the following loop still takes around 2.5 seconds to run for an image of around 720p resolution:

        def colorclose(Cb_p, Cr_p, Cb_key, Cr_key, tola, tolb):
            temp = math.sqrt((Cb_key-Cb_p)**2+(Cr_key-Cr_p)**2)
            if temp < tola:
                return 0.0
            else:
                if temp < tolb:
                    return (temp-tola)/(tolb-tola)
                else:
                    return 1.0

        ....

        for x in range(width):
            for y in range(height):
                Y, cb, cr = fg_cbcr_list[x, y]
                mask = colorclose(cb, cr, cb_key, cr_key, tola, tolb)
                mask = 1 - mask
                bgr, bgg, bgb = bg_list[x, y]
                fgr, fgg, fgb = fg_list[x, y]
                pixels[x, y] = (
                    (int)(fgr - mask*key_color[0] + mask*bgr),
                    (int)(fgg - mask*key_color[1] + mask*bgg),
                    (int)(fgb - mask*key_color[2] + mask*bgb))

    Am I doing anything hugely inefficient here which makes it run so slow? I have seen similar, simpler examples where the loop is replaced by a boolean matrix, for instance, but for this case I can't see a way to replace the loop. The pixels[x,y] assignment seems to take the most time, but not knowing Python very well I am unsure of a more efficient way to do this. Any help would be appreciated.
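    For the record, a numpy sketch of the "boolean matrix" idea mentioned above, assuming the channels are available as float arrays (the array names here are hypothetical); the per-pixel loop becomes a handful of whole-image operations:

        import numpy as np

        def chroma_mask(cb, cr, cb_key, cr_key, tola, tolb):
            # cb, cr: HxW float arrays holding the chroma channels
            dist = np.sqrt((cb_key - cb)**2 + (cr_key - cr)**2)
            # same piecewise ramp as colorclose: 0 below tola,
            # linear in between, 1 above tolb
            return np.clip((dist - tola) / (tolb - tola), 0.0, 1.0)

        def composite(fg, bg, mask, key_color):
            # fg, bg: HxWx3 float RGB arrays; mask: HxW from chroma_mask
            inv = (1.0 - mask)[..., None]   # the loop's mask = 1 - colorclose(...)
            out = fg - inv * np.asarray(key_color, dtype=float) + inv * bg
            return np.clip(out, 0, 255).astype(np.uint8)

    The looping then happens inside numpy's C code, which is typically orders of magnitude faster than a Python-level double loop over a 720p frame.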


  • XNA Reach profile with VMWare - Vertex Buffers not working?

    - by Nektarios
    Running an XNA app, using the Reach profile, in VMware Fusion; the host OS is Mac OS X, the VM is Windows XP SP3 (my dual-boot OS). Running on a MacBook Pro with an NVidia 320M graphics card.

    - When I am booted into XP natively, my code works. The code is drawing cubes that are set up using vertex buffers.
    - When another friend runs this same code on Windows 7, it also works for him just fine.
    - When I am running my code in the VM, it doesn't work. I have billboarding sprites running in a shader program and this part displays fine. I get no crashing or errors; the geometry just doesn't appear. I tried Debug and Release.

    This is very basic operation, so I'm thinking VMware isn't the problem, but my code is....

    My init code:

        var vertexArray = verts.ToArray();
        var indexArray = indices.ToArray();

        indexBuffer = new IndexBuffer(GraphicsDevice, typeof(Int16),
                                      indexArray.Length, BufferUsage.WriteOnly);
        indexBuffer.SetData(indexArray);

        vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor),
                                        vertexArray.Length, BufferUsage.WriteOnly);
        vertexBuffer.SetData(vertexArray);

    My Draw code:

        // problem isn't here, tried no cull
        GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        GraphicsDevice.BlendState = BlendState.AlphaBlend;
        GraphicsDevice.DepthStencilState = new DepthStencilState() { DepthBufferEnable = true };

        // Update View and Projection
        TileEffect.View = ((Game1)Game).Camera.View;
        TileEffect.Projection = ((Game1)Game).Camera.Projection;
        TileEffect.CurrentTechnique.Passes[0].Apply();

        GraphicsDevice.SetVertexBuffer(vertexBuffer);
        GraphicsDevice.Indices = indexBuffer;
        GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                             indices.Count, 0, indices.Count / 3);

    For LoadContent:

        TileEffect = new BasicEffect(GraphicsDevice)
        {
            World = Matrix.Identity,
            View = ((Game1)Game).Camera.View,
            Projection = ((Game1)Game).Camera.Projection,
            VertexColorEnabled = true
        };


  • OpenGL ES, UIView and Status Bar mess

    - by sfider
    I have an iPhone (iPhone OS 3.x) OpenGL ES app that:

    - can be in landscape/portrait orientation
    - can be with/without the status bar shown

    I do this by changing the status bar orientation and hidden state, then updating the OpenGL view frame so it won't overlap the status bar, and setting the projection matrix appropriately. The OpenGL view is in portrait orientation at all times. The view controller's shouldAutorotateToInterfaceOrientation: method always returns false, so the status bar won't start autorotating when the app is in landscape mode.

    The problem I have is that I want to use some other UIViews, like UIWebView, MFMailComposeView, MPMediaPicker. I could show them as modals, but this has some drawbacks:

    - views will always show in portrait orientation, even if they support landscape orientation
    - views will not autorotate, even if they support it

    What I do instead is take the OpenGL view off the window with removeFromSuperview, set a transform on the other view so it will be in portrait/landscape orientation when it shows up, and place the other view on the window with addSubview:. This works fine without the status bar, but with it there are some problems I cannot work out:

    - MPMediaPicker is sized to fit under the status bar, but it slides under it anyway
    - MFMailComposeView does not show its navigation bar until it autorotates on a device orientation change

    Does anyone have an idea how I can get this to work?


  • Generics with constraints hierarchy

    - by devoured elysium
    I am currently facing a very disturbing problem:

        interface IStateSpace<Position, Value>
            where Position : IPosition  // <-- Problem starts here
            where Value : IValue        // <-- and here, as I don't know how to get
                                        //     away from this circular dependency!
                                        //     Notice how I should be defining
                                        //     generic parameters here, but I can't!
        {
            Value GetStateAt(Position position);
            void SetStateAt(Position position, State state);
        }

    As you'll see down here, IPosition, IValue and IState all depend on each other. How am I supposed to get away with this? I can't think of any other design that will circumvent this circular dependency and still describe exactly what I want to do!

        interface IState<StateSpace, Value>
            where StateSpace : IStateSpace
            where Value : IValue
        {
            StateSpace StateSpace { get; }
            Value Value { get; set; }
        }

        interface IPosition
        {
        }

        interface IValue<State>
            where State : IState
        {
            State State { get; }
        }

    Basically I have a state space IStateSpace that has states (IState) inside. Their position in the state space is given by an IPosition. Each state then has one (or more) values (IValue). I am simplifying the hierarchy, as it's a bit more complex than described. The idea of having this hierarchy defined with generics is to allow for different implementations of the same concepts (an IStateSpace will be implemented both as a matrix and as a graph, etc.).

    How can I get away with this? How do you generally solve this kind of problem? Which kinds of designs are used in these cases? Thanks


  • Whatever happened to APL?

    - by lkessler
    When I was at university 30 years ago, I used a programming language called APL. I believe the acronym stood for "A Programming Language". This language was interpretive and was especially useful for array and matrix operations, with powerful operators and library functions to help with that.

    Did you use APL? Is this language still in use anywhere? Is it still available, either commercially or open source?

    I remember the combinatorics assignment we had. It was complex. It took a week of work for people to program it in PL/1, and those programs ranged from 500 to 1000 lines long. I wrote it in APL in under an hour. I left it at 10 lines for readability, although I should have been a purist and worked another hour to get it into 1 line. The PL/1 programs took 1 or 2 minutes to run on the IBM mainframe and solve the problem. The computer charge was $20. My APL program took 2 hours to run and the charge was $1,500, which was paid for by our Computer Science Department's budget. That's when I realized that a week of my time is worth way more than saving some $'s in someone else's budget. I got an A+ in the course.

    p.s. Don't miss this presentation entitled: "APL one of the greatest programming languages ever"


  • LaTeX: why are references only partially showing up?

    - by HH
    The bib style part may be the problem. If I do not cite a reference, does it still show up? I have listed all the errors below; the file compiles, so I don't know whether they are related to the partially-showing-up references. For example, a work with many authors gets only one author listed. I want to see the references fully, not partially.

    Headers:

        $ grep bib header.tex
        \usepackage{natbib}
        \bibliographystyle{abbrvnat}

    Errors:

        $ grep -n -A 7 -B 7 Error *.log
        combined.log-505-! Illegal unit of measure (pt inserted).
        combined.log-506-<to be read again>
        combined.log-507- \futurelet
        combined.log-508-l.353 \hline
        combined.log-509-
        combined.log-510-?
        combined.log-511-
        combined.log:512:! Package caption Error: cite undefined.
        combined.log-513-
        combined.log-514-See the caption package documentation for explanation.
        combined.log-515-Type H <return> for immediate help.
        combined.log-516- ...
        combined.log-517-
        combined.log-518-l.374 ...n={CPU O(mlog(n))}, cite={topcoder:node}]
        combined.log-519-
        --
        combined.log-559- []
        combined.log-560-
        combined.log-561-) [10]
        combined.log-562-\openout2 = `references.aux'.
        combined.log-563-
        combined.log-564- (./references.tex
        combined.log-565-
        combined.log:566:! LaTeX Error: \include cannot be nested.
        combined.log-567-
        combined.log-568-See the LaTeX manual or LaTeX Companion for explanation.
        combined.log-569-Type H <return> for immediate help.
        combined.log-570- ...
        combined.log-571-
        combined.log-572-l.1 \include{timeUse.tex}

    Bibs.bib:

        @misc{Gundersen,
            author = "G. Gundersen",
            title = "Data Structures in Java for Matrix Computations",
            year = "2002"
        }

        @book{Lennart,
            author = "R. Lennart",
            title = "Mathematics Handbook for Science and Engineering BETA",
            year = "2004"
        }


  • R + Bioconductor : combining probesets in an ExpressionSet

    - by Mike Dewar
    Hi, first off, this may be the wrong forum for this question, as it's pretty darn R+Bioconductor specific. Here's what I have:

        library('GEOquery')
        GDS = getGEO('GDS785')
        cd4T = GDS2eSet(GDS)
        cd4T <- cd4T[!fData(cd4T)$symbol == "",]

    Now cd4T is an ExpressionSet object which wraps a big matrix with 19794 rows (probesets) and 15 columns (samples). The final line gets rid of all probesets that do not have corresponding gene symbols. Now the trouble is that most genes in this set are assigned to more than one probeset. You can see this by doing:

        gene_symbols = factor(fData(cd4T)$Gene.symbol)
        length(gene_symbols) - length(levels(gene_symbols))
        [1] 6897

    So only 6897 of my 19794 probesets have unique probeset-gene mappings. I'd like to somehow combine the expression levels of each probeset associated with each gene. I don't care much about the actual probe id for each probe. I'd like very much to end up with an ExpressionSet containing the merged information, as all of my downstream analysis is designed to work with this class.

    I think I can write some code that will do this by hand and make a new expression set from scratch. However, I'm assuming this can't be a new problem and that code exists to do it, using a statistically sound method to combine the gene expression levels. I'm guessing there's a proper name for this too, but my Google searches aren't showing up much of use. Can anyone help?


  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id=numeric(0))

        json_data <- fromJSON(json_file)
        n_people <- length(json_data)
        for (lender in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all=TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file=output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]
        person_id name gender hair_color

        [[2]]
        person_id name location gender height

        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data.

    The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan


  • Function within function in R

    - by frespider
    Can you please explain to me why the code complains, saying that Samdat is not found? I am trying to switch between the models, as you can see, so I declared functions that contain these specific models, and I just need to call these functions as one of the arguments in the get.f function, where the resampling will change the structure for each design matrix in the model. The code complains that Samdat is not found when it is found. Also, is there a way I can make the condition statement like if(Model == M1()), instead of having to create another argument M to set if(M == 1)? Can you explain please?

        dat <- cbind(Y=rnorm(20),rnorm(20),runif(20),rexp(20),rnorm(20),runif(20),
                     rexp(20),rnorm(20),runif(20),rexp(20))
        nam <- paste("v",1:9,sep="")
        colnames(dat) <- c("Y",nam)

        M1 <- function(){
            a1 = cbind(Samdat[,c(2:5,7,9)])
            b1 = cbind(Samdat[,c(2:4,6,8,7)])
            c1 = b1+a1
            list(a1=a1,b1=b1,c1=c1)}

        M2 <- function(){
            a1 = cbind(Samdat[,c(2:5,7,9)])+2
            b1 = cbind(Samdat[,c(2:4,6,8,7)])+2
            c1 = a1+b1
            list(a1=a1,b1=b1,c1=c1)}

        M3 <- function(){
            a1 = cbind(Samdat[,c(2:5,7,9)])+8
            b1 = cbind(Samdat[,c(2:4,6,8,7)])+8
            c1 = a1+b1
            list(a1=a1,b1=b1,c1=c1)}

        #################################################################
        get.f <- function(asim,Model,M){
            sse <- c()
            for(i in 1:asim){
                set.seed(i)
                Samdat <- dat[sample(1:nrow(dat),nrow(dat),replace=T),]
                Y <- Samdat[,1]
                if(M==1){
                    a2 <- Model$a1
                    b2 <- Model$b1
                    c2 <- Model$c1
                    s <- a2+b2+c2
                    fit <- lm(Y~s)
                    cof <- sum(summary(fit)$coef[,1])
                    coff <- Model$cof
                    sse <- c(sse,coff)
                } else if(M==2){
                    a2 <- Model$a1
                    b2 <- Model$b1
                    c2 <- Model$c1
                    s <- c2+12
                    fit <- lm(Y~s)
                    cof <- sum(summary(fit)$coef[,1])
                    coff <- Model$cof
                    sse <- c(sse,coff)
                } else {
                    a2 <- Model$a1
                    b2 <- Model$b1
                    c2 <- Model$c1
                    s <- c2+a2
                    fit <- lm(Y~s)
                    cof <- sum(summary(fit)$coef[,1])
                    coff <- Model$cof
                    sse <- c(sse,coff)
                }
            }
            return(sse)
        }

        get.f(10,Model=M1(),M=1)
        get.f(10,Model=M2(),M=2)
        get.f(10,Model=M3(),M=3)


  • VTK 3D glyphs: independently setting color and rotation

    - by user3684219
    I am trying to display, using vtk (the Python wrapper), several glyphs in a scene, each with its own colour and rotation. Unfortunately, only the rotation (using vtkTensorGlyph) is taken into consideration by vtk. Conversely, only the colour is taken into consideration when I use vtkGlyph3D. Here is a ready-to-use piece of code with a vtkTensorGlyph. Each cube should have a random colour, but they will all be the same colour. I have read and re-read the vtk docs but found no solution. Thanks in advance for any idea.

        #!/usr/bin/env python
        # -*- coding: utf-8 -*-
        import vtk
        import scipy.linalg as sc
        import random as ra
        import numpy as np
        import itertools

        points = vtk.vtkPoints()             # where to locate each glyph in the scene
        tensors = vtk.vtkDoubleArray()       # rotation for each glyph
        tensors.SetNumberOfComponents(9)
        colors = vtk.vtkUnsignedCharArray()  # should be the color for each glyph
        colors.SetNumberOfComponents(3)

        # let's make 10 cubes in the scene
        for i in range(0, 50, 5):
            points.InsertNextPoint(i, i, i)  # position of a glyph
            colors.InsertNextTuple3(ra.randint(0, 255),
                                    ra.randint(0, 255),
                                    ra.randint(0, 255))  # pick random color
            # random rotation matrix (row major)
            rot = list(itertools.chain(
                *np.reshape(sc.orth(np.random.rand(3, 3)).transpose(), (1, 9)).tolist()))
            tensors.InsertNextTuple9(*rot)

        polydata = vtk.vtkPolyData()  # create the polydata
        polydata.SetPoints(points)
        polydata.GetPointData().SetTensors(tensors)
        polydata.GetPointData().SetScalars(colors)

        cubeSource = vtk.vtkCubeSource()
        cubeSource.Update()

        glyphTensor = vtk.vtkTensorGlyph()
        glyphTensor.SetColorModeToScalars()  # does it really work?
        try:
            glyphTensor.SetInput(polydata)
        except AttributeError:
            glyphTensor.SetInputData(polydata)
        glyphTensor.SetSourceConnection(cubeSource.GetOutputPort())
        glyphTensor.ColorGlyphsOn()          # shouldn't this color each cube independently?
        glyphTensor.ThreeGlyphsOff()
        glyphTensor.ExtractEigenvaluesOff()
        glyphTensor.Update()

        # next is the usual vtk code
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(glyphTensor.GetOutputPort())

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        ren = vtk.vtkRenderer()
        ren.SetBackground(0.2, 0.5, 0.3)
        ren.AddActor(actor)

        renwin = vtk.vtkRenderWindow()
        renwin.AddRenderer(ren)

        iren = vtk.vtkRenderWindowInteractor()
        iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera())
        iren.SetRenderWindow(renwin)

        renwin.Render()
        iren.Initialize()
        renwin.Render()
        iren.Start()
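    One workaround that might be worth trying (an assumption on my part, not something confirmed against the vtk docs): vtkTensorGlyph appears to drop the input scalars from its output, so re-attach a colour array sized to the glyphed output, repeating each input colour once per point of the cube source. This assumes the output keeps each glyph's points contiguous and in input order:

        # After glyphTensor.Update(); names refer to the script above.
        # ASSUMPTION: output points are ordered glyph by glyph.
        out = glyphTensor.GetOutput()
        pts_per_glyph = cubeSource.GetOutput().GetNumberOfPoints()  # 24 for a cube

        out_colors = vtk.vtkUnsignedCharArray()
        out_colors.SetNumberOfComponents(3)
        for i in range(points.GetNumberOfPoints()):
            r, g, b = (int(c) for c in colors.GetTuple3(i))
            for _ in range(pts_per_glyph):   # one copy per cube vertex
                out_colors.InsertNextTuple3(r, g, b)
        out.GetPointData().SetScalars(out_colors)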


  • Converting Python collaborative filtering code to use MapReduce

    - by Neil Kodner
    Using Python, I'm computing cosine similarity across items. Given event data that represents a purchase (user, item), I have a list of all items 'bought' by my users.

    Given this input data:

        (user, item)
        X, 1
        X, 2
        Y, 1
        Y, 2
        Z, 2
        Z, 3

    I build a Python dictionary:

        {1: ['X','Y'], 2: ['X','Y','Z'], 3: ['Z']}

    From that dictionary, I generate a bought/not-bought matrix, also another dictionary (bnb):

        {1: [1,1,0], 2: [1,1,1], 3: [0,0,1]}

    From there, I compute the similarity between (1,2) by calculating the cosine between (1,1,0) and (1,1,1), yielding 0.816496. I'm doing this by:

        items = [1, 2, 3]
        for item in items:
            for sub in items:
                if sub >= item:  # as to not calculate similarity on the inverse
                    sim = coSim(bnb[item], bnb[sub])

    I think the brute-force approach is killing me, and it only runs slower as the data gets larger. Using my trusty laptop, this calculation runs for hours when dealing with 8500 users and 3500 items. I'm trying to compute similarity for all items in my dict and it's taking longer than I'd like it to. I think this is a good candidate for MapReduce, but I'm having trouble 'thinking' in terms of key/value pairs. Alternatively, is the issue with my approach, and is it not necessarily a candidate for MapReduce?
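    Before reaching for MapReduce, it may help that all pairwise cosines fall out of a single matrix product once each item vector is scaled to unit length. A numpy sketch of that idea, using the small bnb example above (at 3500 items x 8500 users this is still one in-memory product):

        import numpy as np

        # rows = items, columns = users -- the bnb dictionary above as a matrix
        bnb = np.array([[1, 1, 0],
                        [1, 1, 1],
                        [0, 0, 1]], dtype=float)

        norms = np.linalg.norm(bnb, axis=1, keepdims=True)
        unit = bnb / norms           # each item vector now has length 1
        sims = unit @ unit.T         # sims[i, j] = cosine(item i, item j)

        print(sims[0, 1])            # 0.81649658..., the value computed above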


  • Makefile: build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed of multiple files; each of them will be built into an .so dynamic library that the interpreter will load upon user request (with an import directive). Each source file is located in a subdirectory representing its "namespace".

    The build process has to create a "build" directory; then, when each file is compiled, it has to create that file's namespace directory inside the "build" one. For instance, when compiling std/io/network/tcp.cc it runs:

        mkdir -p build/std/io/network

    The Makefile snippet is:

        STDSRC=stdlib/std/hashing/md5.cc \
               stdlib/std/hashing/crc32.cc \
               stdlib/std/hashing/sha1.cc \
               stdlib/std/hashing/sha2.cc \
               stdlib/std/io/network/http.cc \
               stdlib/std/io/network/tcp.cc \
               stdlib/std/io/network/smtp.cc \
               stdlib/std/io/file.cc \
               stdlib/std/io/console.cc \
               stdlib/std/io/xml.cc \
               stdlib/std/type/reflection.cc \
               stdlib/std/type/string.cc \
               stdlib/std/type/matrix.cc \
               stdlib/std/type/array.cc \
               stdlib/std/type/map.cc \
               stdlib/std/type/type.cc \
               stdlib/std/type/binary.cc \
               stdlib/std/encoding.cc \
               stdlib/std/os/dll.cc \
               stdlib/std/os/time.cc \
               stdlib/std/os/threads.cc \
               stdlib/std/os/process.cc \
               stdlib/std/pcre.cc \
               stdlib/std/math.cc

        STDOBJ=$(STDSRC:.cc=.so)

        all: stdlib

        stdlib: $(STDOBJ)

        .cc.so:
            mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'`
            $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS)

    I have two questions:

    1 - The problem is that the make command (I really don't know why) doesn't check if a file was modified and launches the build process on ALL the files no matter what, so if I need to build only one file, I have to build them all or use the command:

        make path/to/single/file.so

    Is there any way to solve this?

    2 - Is there any way to do this in a "cleaner" way, without having to distribute all the build directories with the sources?

    Thanks


  • Create a unique ID by fuzzy matching of names (via agrep using R)

    - by tbrambor
    Using R, I am trying to match on people's names in a dataset structured by year and city. Due to some spelling mistakes, exact matching is not possible, so I am trying to use agrep() to fuzzy-match names.

    A sample chunk of the dataset is structured as follows:

        df <- data.frame(matrix(
            c("1200013","1200013","1200013","1200013","1200013","1200013","1200013","1200013",
              "1996","1996","1996","1996","2000","2000","2004","2004",
              "AGUSTINHO FORTUNATO FILHO","ANTONIO PEREIRA NETO","FERNANDO JOSE DA COSTA",
              "PAULO CEZAR FERREIRA DE ARAUJO","PAULO CESAR FERREIRA DE ARAUJO",
              "SEBASTIAO BOCALOM RODRIGUES","JOAO DE ALMEIDA","PAULO CESAR FERREIRA DE ARAUJO"),
            ncol=3, dimnames=list(seq(1:8), c("citycode","year","candidate"))
        ))

    The neat version:

          citycode year                      candidate
        1  1200013 1996      AGUSTINHO FORTUNATO FILHO
        2  1200013 1996           ANTONIO PEREIRA NETO
        3  1200013 1996         FERNANDO JOSE DA COSTA
        4  1200013 1996 PAULO CEZAR FERREIRA DE ARAUJO
        5  1200013 2000 PAULO CESAR FERREIRA DE ARAUJO
        6  1200013 2000    SEBASTIAO BOCALOM RODRIGUES
        7  1200013 2004                JOAO DE ALMEIDA
        8  1200013 2004 PAULO CESAR FERREIRA DE ARAUJO

    I'd like to check, in each city separately, whether there are candidates appearing in several years. E.g. in the example, PAULO CEZAR FERREIRA DE ARAUJO / PAULO CESAR FERREIRA DE ARAUJO appears twice (with a spelling mistake). Each candidate across the entire dataset should be assigned a unique numeric candidate ID. The dataset is fairly large (5500 cities, approx. 100K entries), so somewhat efficient coding would be helpful. Any suggestions as to how to implement this?
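    The question is about R's agrep(), but the grouping logic reads clearly in a short Python sketch (difflib's similarity ratio stands in for agrep's edit distance here, and the 0.9 threshold is a guess that would need tuning):

        from difflib import SequenceMatcher

        def same_person(a, b, threshold=0.9):
            # ratio() is 1.0 for identical strings; a one-letter
            # typo in a long name still scores well above 0.9
            return SequenceMatcher(None, a, b).ratio() >= threshold

        def assign_ids(names):
            canon = []   # one canonical spelling per person found so far
            ids = []     # the numeric id assigned to each input name
            for name in names:
                for pid, known in enumerate(canon):
                    if same_person(name, known):
                        ids.append(pid)
                        break
                else:
                    ids.append(len(canon))
                    canon.append(name)
            return ids

        names = ["PAULO CEZAR FERREIRA DE ARAUJO",
                 "PAULO CESAR FERREIRA DE ARAUJO",
                 "JOAO DE ALMEIDA"]
        print(assign_ids(names))  # [0, 0, 1]

    Note this is O(n^2) comparisons in the worst case; blocking by city, as described above, keeps each comparison pool small.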


  • Rewrite arrays using collections

    - by owca
    I have a task which I was able to do with the use of the simplest methods -- arrays. Now I'd like to go further and redo it using some more complicated Java features, like collections, but I've never used anything more complicated than a 2D matrix. What should I look at, and how do I start with it? Should Tower become a Collection?

    And here's the task: we have two classes -- Tower and Block. Towers are built from Blocks. And here's sample code for testing:

        Block k1 = new Block("yellow", 1, 5, 4);
        Block k2 = new Block("blue", 2, 2, 6);
        Block k3 = new Block("green", 3, 4, 2);
        Block k4 = new Block("yellow", 1, 5, 4);

        Tower tower = new Tower();

        tower.add(k1, k2, k3);
        // "Added 3 blocks."

        System.out.println(tower);
        // "block: green, base: 4cm x 3cm, thickness: 2 cm
        //  block: blue, base: 6cm x 2cm, thickness: 2 cm
        //  block: yellow, base: 5cm x 4cm, thickness: 1 cm"

        tower.add(k2);
        // "Tower already contains this block."

        tower.add(k4);
        // "Added 1 block."

        System.out.println(tower);
        // "block: green, base: 4cm x 3cm, thickness: 2 cm
        //  block: blue, base: 6cm x 2cm, thickness: 2 cm
        //  block: yellow, base: 5cm x 4cm, thickness: 1 cm
        //  block: yellow, base: 5cm x 4cm, thickness: 1 cm"

        tower.delete(k1);
        // "Deleted 1 block"

        tower.delete(k1);
        // "Block not in tower"

        System.out.println(tower);
        // "block: blue, base: 6cm x 2cm, thickness: 2 cm
        //  block: yellow, base: 5cm x 4cm, thickness: 1 cm
        //  block: yellow, base: 5cm x 4cm, thickness: 1 cm"

    Let's say I will treat Tower as a collection of blocks. How do I perform a search for a specific block in the whole collection? Or should I use another interface?


  • Game of life in F# with accelerator

    - by jpalmer
    I'm trying to write Life in F# using Accelerator v2, but for some odd reason my output isn't square despite all my arrays being square -- it appears that everything but a rectangular area in the top left of the matrix is being set to false. I've got no idea how this could be happening, as all my operations should treat the entire array equally. Any ideas?

        open Microsoft.ParallelArrays
        open System.Windows.Forms
        open System.Drawing

        type IPA = IntParallelArray
        type BPA = BoolParallelArray
        type PAops = ParallelArrays

        let RNG = new System.Random()
        let size = 1024
        let arrinit i = Array2D.init size size (fun x y -> i)
        let target = new DX9Target()
        let threearr = new IPA(arrinit 3)
        let twoarr = new IPA(arrinit 2)
        let onearr = new IPA(arrinit 1)
        let zeroarr = new IPA(arrinit 0)
        let shifts = [|-1;-1|]::[|-1;0|]::[|-1;1|]::[|0;-1|]::[|0;1|]::[|1;-1|]::[|1;0|]::[|1;1|]::[]

        let progress (arr:BPA) =
            // adds up whether a neighbor is on or not
            let sums =
                shifts
                |> List.fold (fun (state:IPA) t -> PAops.Add(PAops.Cond(PAops.Rotate(arr,t),onearr,zeroarr),state)) zeroarr
            // rule for life
            PAops.Or(PAops.CompareEqual(sums,threearr), PAops.And(PAops.CompareEqual(sums,twoarr),arr))

        let initrandom () =
            Array2D.init size size (fun x y -> if RNG.NextDouble() > 0.5 then true else false)

        type meform () as self =
            inherit Form()
            let mutable array = new BoolParallelArray(initrandom())
            let timer = new System.Timers.Timer(1.0) // redrawing timer
            do base.DoubleBuffered <- true
            do base.Size <- Size(size,size)
            do timer.Elapsed.Add(fun _ -> self.Invalidate())
            do timer.Start()
            let draw (t:Graphics) =
                array <- array |> progress
                let bmap = new System.Drawing.Bitmap(size,size)
                target.ToArray2D array |> Array2D.iteri (fun x y t -> if not t then bmap.SetPixel(x,y,Color.Black))
                t.DrawImageUnscaled(bmap,0,0)
            do self.Paint.Add(fun t -> draw t.Graphics)

        do Application.Run(new meform())


  • Setting the DataGridColumn's dataField based on XML node with the same name

    - by Rie Mino
    I am stuck. Given this XML:

        <matrix>
            <row>
                <column>0.51</column>
                <column>0.52</column>
                <column>0.53</column>
                <column>0.54</column>
            </row>
            <row>
                <column>0.61</column>
                <column>0.62</column>
                <column>0.63</column>
                <column>0.64</column>
            </row>
        </matrix>

    I am trying to define a DataGrid such that the row nodes will represent new rows in the DataGrid and the column nodes will be used to auto-populate the DataGrid's columns. I am having a problem setting the dataField for each of the DataGridColumn objects created. The DataGrid is created, but the cell values for row 1 are all 0.51 and for row 2 are all 0.61. What am I doing wrong here?


  • Azure SDK causes Node.js service bus call to run slow

    - by PazoozaTest Pazman
    I am using this piece of code to call the Service Bus queue from my node.js server, running locally using WebMatrix. I have also uploaded it to Windows Azure Web Sites and it still performs slowly.

        var sb1 = azure.createServiceBusService(config.serviceBusNamespace,
                                                config.serviceBusAccessKey);
        sbMessage = {
            "Entity": {
                "SerialNumbersToCreate": '0',
                "SerialNumberSize": config.usageRates[3],
                "BlobName": 'snvideos' + channel.ChannelTableName,
                "TableName": 'snvideos' + channel.ChannelTableName
            }
        };

        sb1.getQueue('serialnumbers', function(error, queue) {
            if (error === null) {
                sb1.sendQueueMessage('serialnumbers', JSON.stringify(sbMessage), function(error) {
                    if (!error)
                        res.send(req.query.callback + '({data: ' +
                            JSON.stringify({ success: true, video: newVideo }) + '});');
                    else
                        res.send(req.query.callback + '({data: ' +
                            JSON.stringify({ success: false }) + '});');
                });
            } else
                res.send(req.query.callback + '({data: ' +
                    JSON.stringify({ success: false }) + '});');
        });

    It can be up to 5 seconds before the server responds back to the client with the returned result. When I comment out the sb1.getQueue call and just have it return without sending a queue message, it performs in less than 1 second. Why is that? Is my approach to using the Azure SDK Service Bus correct? Any help would be appreciated.


  • How to set up an OpenGL camera for a racing game

    - by vian
    I need the view to show the road polygon (a rectangle 3.f * 100.f) with the road's vanishing point at 3/4 of the viewport height and the nearest road edge at the viewport's bottom side. See the Crazy Taxi game for an example of what I wish to do.

    I'm using the iPhone SDK 3.1.2 default OpenGL ES project template. I set up the projection matrix as follows:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustumf(-2.25f, 2.25f, -1.5f, 1.5f, 0.1f, 1000.0f);

    Then I use glRotatef to adjust for landscape mode and set up the camera:

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(-90, 0.0f, 0.0f, 1.0f);
        const float cameraAngle = 45.0f * M_PI / 180.0f;
        gluLookAt(0.0f, 2.0f, 0.0f,
                  0.0f, 0.0f, 100.0f,
                  0.0f, cos(cameraAngle), sin(cameraAngle));

    My road polygon triangle strip is like this:

        static const GLfloat roadVertices[] = {
            -1.5f, 0.0f,   0.0f,
             1.5f, 0.0f,   0.0f,
            -1.5f, 0.0f, 100.0f,
             1.5f, 0.0f, 100.0f,
        };

    And I can't seem to find the right parameters for gluLookAt. My vanishing point is always at the center of the screen.


  • Alternative to as3isolib?

    - by tedw4rd
    Hi everyone, I've been working on a Flash game that involves an isometric space. I've been using as3isolib for a while now, and I'm less than impressed with how easy it is to use. Whether I'm approaching it the wrong way or it's just not that great to use is a question for another post.

    Anyway, I've been thinking of a different way to approach the problem of isometric positions, and I think I've got an idea that might work. Essentially, each object that is to be rendered to the iso-space maintains a 3-coordinate position. Those items are then registered with a camera that projects that 3-coordinate position to a 2-coordinate point on the screen, according to the math in this Wikipedia article. Then, the MovieClip is added to the stage (or to the camera's MovieClip, perhaps) at that point, and at a child index of the point's y-value. That way, I figure objects that are closer to the camera will be "above" the objects further away, and will get rendered over them.

    So my question, then, is two-fold:

    1. Do you think this idea will work the way I think it will?
    2. Are there any existing 3D matrix/vector packages that I should look at? I know there's a Matrix3 class in Flex 3, but we're not using Flex for this game.

    Thanks!
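    A sketch of the projection idea in Python, using a common 2:1 game variant of the isometric math rather than the exact matrix from the Wikipedia article (the axis convention and tile size here are assumptions):

        def project(x, y, z, tile=32):
            # x, y run along the ground plane, z is height
            sx = (x - y) * tile
            sy = (x + y) * tile / 2 - z * tile
            depth = x + y + z        # larger = nearer to the viewer
            return sx, sy, depth

        # painter's algorithm: draw far-to-near so nearer clips overdraw the rest,
        # which is what sorting child indexes by depth achieves
        sprites = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (1, 1, 1)]
        for x, y, z in sorted(sprites, key=lambda p: p[0] + p[1] + p[2]):
            print("draw at", project(x, y, z)[:2])

    One caveat on the child-index idea: a tall object's projected y moves up the screen while its ground depth stays the same, so a depth key built from the world coordinates (as above) is safer than the projected y-value alone.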


  • Matlab - Propagate points orthogonally onto the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1px-wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary, with no holes within itself, made of around 2500 points.

    I want to cast a ray from each point of the set in an orthogonal direction and detect at which point it intersects the shape boundary. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking the orthogonal unit vector, multiplying it by a scalar, and testing after each multiplication whether the end point of the vector is outside the shape boundary; and when the end point is outside the shape, just finding the point of intersection?

    Thank you very much in advance for any help!
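    The scalar-multiplication idea from the question works as a simple marching loop; here it is sketched in Python/numpy for concreteness (the same loop translates directly to Matlab). A sub-pixel step keeps the ray from skipping over the 1px-wide edge:

        import numpy as np

        def cast_ray(edge, start, normal, step=0.5, max_dist=1000.0):
            # edge: binary image, True on the 1px-wide boundary
            # start: (x, y) point; normal: direction to march along
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            p = np.asarray(start, dtype=float)
            h, w = edge.shape
            for t in np.arange(0.0, max_dist, step):
                x, y = p + t * n
                r, c = int(round(y)), int(round(x))
                if not (0 <= r < h and 0 <= c < w):
                    return None      # ray left the image without a hit
                if edge[r, c]:
                    return x, y      # first intersection with the boundary
            return None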


  • How to map coordinates in AxesImage to coordinates in saved image file?

    - by Vebjorn Ljosa
    I use matplotlib to display a matrix of numbers as an image, attach labels along the axes, and save the plot to a PNG file. For the purpose of creating an HTML image map, I need to know the pixel coordinates in the PNG file for a region of the image being displayed by imshow.

    I have found an example of how to do this with a regular plot, but when I try to do the same with imshow, the mapping is not correct. Here is my code, which saves an image and attempts to print the pixel coordinates of the center of each square on the diagonal:

        import numpy as np
        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
        axim = ax.imshow(np.random.random((27,27)), interpolation='nearest')
        for x, y in axim.get_transform().transform(zip(range(28), range(28))):
            print int(x), int(fig.get_figheight() * fig.get_dpi() - y)
        plt.savefig('foo.png', dpi=fig.get_dpi())

    Here is the resulting foo.png, shown as a screenshot in order to include the rulers.

    The output of the script starts and ends as follows:

        73 55
        92 69
        111 83
        130 97
        149 112
        …
        509 382
        528 396
        547 410
        566 424
        585 439

    As you see, the y-coordinates are correct, but the x-coordinates are stretched: they range from 73 to 585 instead of the expected 135 to 506, and they are spaced 19 pixels on center instead of the expected 14. What am I doing wrong?
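    One thing that might be worth trying (a sketch, not a verified fix): axim.get_transform() is the image's own transform, which is not the full data-to-display mapping; ax.transData is. Swapping it in, and forcing a draw so the transforms are current, would look like this:

        import numpy as np
        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
        ax.imshow(np.random.random((27, 27)), interpolation='nearest')
        fig.canvas.draw()  # make sure the transforms are up to date

        # transData maps data coordinates to display pixels (origin bottom-left)
        pts = ax.transData.transform([(i, i) for i in range(27)])
        height = fig.get_figheight() * fig.get_dpi()
        for x, y in pts:
            print("%d %d" % (x, height - y))  # flip y for image coordinates

        plt.savefig('foo.png', dpi=fig.get_dpi())

    This assumes the figure is not resized between the transform call and savefig, so that the display pixels match the PNG pixels.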

