Search Results

Search found 1909 results on 77 pages for 'adjacency matrix'.


  • A simple augmented reality application in C#

    - by Ian
    Hi All, I would like to start a personal project that will give me a lot of new concepts to learn and understand. After some thought, I figured that an Augmented Reality related project would be the most beneficial for me, because: I haven't tried interfacing a program with a live video/camera feed; I haven't done any image-processing project; and I haven't done any graphics rendering, ever. With that said, you can assume that I will totally suck at AR, so I'm here to ask for some advice on the best way to go about this. The 1st milestone project I am thinking of is to take a cheap webcam and read some Data Matrix codes using C#. The 2nd milestone project will render a text overlay on the feed when presented with a certain Data Matrix. The 3rd milestone project will actually render some 3D shapes. I've searched all around and found some nice yet advanced materials for AR: http://sites.google.com/site/augmentedrealitytestingsite/ http://soldeveloper.com/ http://www.mperfect.net/wpfAugReal/ So I came here to ask the following: Do you think my milestones are reasonable given my "deficiencies"? In order to start Milestone 1, can you give me some materials that I can study? I prefer online materials. Thanks!
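
    For Milestone 1, a minimal sketch of the capture-and-decode loop, written in Python/OpenCV rather than the C# the question asks about, just to show the shape of it. The pylibdmtx decoder used here is an assumption; any Data Matrix library (in C#, for example a libdmtx binding) slots into the same place.

        # Sketch: webcam capture loop with a Data Matrix decode step (assumed library: pylibdmtx).
        import cv2
        from pylibdmtx.pylibdmtx import decode   # assumption: pylibdmtx is installed

        cap = cv2.VideoCapture(0)                # open the default webcam
        while True:
            ok, frame = cap.read()               # grab one BGR frame
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for result in decode(gray):          # each result carries .data and .rect
                print(result.data)               # payload of the detected Data Matrix
            cv2.imshow("feed", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()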

    Read the article

  • Transform OpenGL coordinates to lower UIView coordinates

    - by John Qualis
    Hi, I am new to OpenGL on the iPhone. I am developing an iPhone app similar to a barcode reader but with an extra OpenGL layer. The bottommost layer is a UIImagePickerController, then I use a UIView on top and draw a rectangle at certain coordinates on the iPhone screen. So far everything is OK. Then I try to draw an OpenGL 3-D model in that rectangle. I am able to load a 3-D model on the iPhone based on the code here: http://iphonedevelopment.blogspot.com/2008/12/start-of-wavefront-obj-file-loader.html I am not able to transform the coordinates of the rectangle into OpenGL coordinates. Do I need to use a matrix to translate the currentPosition of the 3-D model so it is drawn within myRect? The code is given below. Appreciate any help/pointers in this regard. John

        - (void)drawView:(GLView*)view {
            static GLfloat rotation = 0.0;
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glLoadIdentity();
            glColor4f(0.0, 0.5, 1.0, 1.0);
            // The coordinates of the rectangle are myRect.x,
            // myRect.y, myRect.width, myRect.height
            // Do I need a transform matrix here?
            //glOrthof(-160.0f, 160.0f, -240.0f, 240.0f, -1.0f, 1.0f);
            [plane drawSelf];
            ....
        }

        - (void)setupView:(GLView*)view {
            const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
            GLfloat size;
            glEnable(GL_DEPTH_TEST);
            glMatrixMode(GL_PROJECTION);
            //glMatrixMode(GL_MODELVIEW);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size, -size / (rect.size.width / rect.size.height),
                       size / (rect.size.width / rect.size.height), zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        }
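
    For the commented-out glOrthof(-160, 160, -240, 240, ...) case, the mapping from UIView coordinates (origin top-left, y down, nominally 320x480) into that ortho box is just a scale plus a y flip. A rough sketch of the arithmetic in Python, not iPhone code; the view size and ortho bounds are the assumed values from the question:

        def view_to_ortho(x_view, y_view, view_w=320.0, view_h=480.0,
                          left=-160.0, right=160.0, bottom=-240.0, top=240.0):
            """Map a UIView point (origin top-left, y down) into the glOrthof box."""
            x_gl = left + (x_view / view_w) * (right - left)
            y_gl = top - (y_view / view_h) * (top - bottom)   # flip y
            return x_gl, y_gl

        # e.g. the centre of myRect:
        # cx, cy = view_to_ortho(myRect.x + myRect.width / 2, myRect.y + myRect.height / 2)
        # then a glTranslatef(cx, cy, 0) before drawing would place the model there.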

    Read the article

  • Pixel Perfect Collision Detection in HTML5 Canvas

    - by Armin Ronacher
    Hi, I want to check a collision between two sprites in HTML5 canvas. For the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Both of these sprites can have a rotation around the object's center but no other transformation, in case that makes this any easier. The obvious solution I came up with is this: calculate the transformation matrix for both; figure out a rough estimate of the area the code should test (like the offset of both plus calculated extra space for the rotation); then, for all the pixels in the intersecting rectangle, transform the coordinate, test the image at the calculated position (rounded to the nearest neighbor) for the alpha channel, and abort on the first hit. The problems I see with that are that a) there are no matrix classes built into JavaScript, which means I have to do the matrix math in JavaScript myself and that could be quite slow, and b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore I have to replicate something I already have to do when drawing (or that canvas does for me: setting up the matrices). I wonder if I'm missing anything here and if there is an easier solution for collision detection.
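
    A sketch of exactly the scheme described above, in Python/NumPy rather than JavaScript, just to make the per-pixel test concrete. alpha_a/alpha_b are assumed 2-D alpha arrays and T_a/T_b the assumed 3x3 affine matrices taking each sprite's local pixel coordinates into canvas space.

        import numpy as np

        def world_aabb(alpha, T):
            """Axis-aligned bounding box of a sprite after its transform."""
            h, w = alpha.shape
            corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]], dtype=float)
            pts = corners @ T.T                      # transform the four corners
            return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

        def pixel_collision(alpha_a, T_a, alpha_b, T_b):
            ax0, ay0, ax1, ay1 = world_aabb(alpha_a, T_a)
            bx0, by0, bx1, by1 = world_aabb(alpha_b, T_b)
            x0, y0 = max(ax0, bx0), max(ay0, by0)    # intersection of the two boxes
            x1, y1 = min(ax1, bx1), min(ay1, by1)
            if x0 >= x1 or y0 >= y1:
                return False                         # bounding boxes don't even overlap
            inv_a, inv_b = np.linalg.inv(T_a), np.linalg.inv(T_b)
            for y in range(int(y0), int(y1)):
                for x in range(int(x0), int(x1)):
                    p = np.array([x + 0.5, y + 0.5, 1.0])
                    la, lb = inv_a @ p, inv_b @ p    # back into each sprite's local space
                    ia, ja = int(la[1]), int(la[0])  # nearest-neighbour sample
                    ib, jb = int(lb[1]), int(lb[0])
                    if (0 <= ia < alpha_a.shape[0] and 0 <= ja < alpha_a.shape[1]
                            and 0 <= ib < alpha_b.shape[0] and 0 <= jb < alpha_b.shape[1]
                            and alpha_a[ia, ja] > 0 and alpha_b[ib, jb] > 0):
                        return True                  # first opaque-on-opaque pixel wins
            return False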

    Read the article

  • Arrrg! MovieClip object refuses to move in any rational way in AS3?

    - by Aaron H.
    I have a MovieClip object, which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper left corner, about half off stage (i.e. only the lower right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not the upper left corner. If you call getBounds() on the movieclip you can get the bounds of the clip (presumably from the "point" that it's aligned on), which looks something like (left: -303, top: -100, right: 303, bottom: 100). You can subtract the left and top values from the clip x and y:

        clip.x -= bounds.left;
        clip.y -= bounds.top;

    This seems to properly align the clip fully on stage, with the top left of the clip squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage:

        clip.x = (stage.stageWidth / 2);
        etc...

    This creates the crazy parallel universe where the clip is now down in the lower right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of the stage width) and a ty value of 426 (half of the stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the movieclip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • Accurate least-squares fit algorithm needed

    - by ggkmath
    I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here. The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code re-arranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p=polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data Matlab's routine had the best fit (the other 2 routines had similar results to each other). Matlab's p=polyfit(x,y,1) function uses a Vandermonde matrix V (an n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:

        V = [x1,1; x2,1; x3,1; ... xn,1]   % this line is pseudo-code
        [Q,R] = qr(V,0);
        p = R\(Q'*y);                      % performs same as p = V\y

    I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results. For reasons I can't get into, I cannot use Matlab's routine in my work. So I'm wondering if anyone can recommend a more accurate equation-based approach that improves on the above two approaches in terms of rounding errors/machine accuracy/etc. Any comments appreciated! Thanks in advance.
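
    For what it's worth, the same QR-based solve is easy to reproduce outside Matlab; a sketch with NumPy, assuming x and y are 1-D arrays:

        import numpy as np

        def linfit_qr(x, y):
            """Fit y ~ p[0]*x + p[1] via QR factorization instead of the normal equations."""
            V = np.column_stack([x, np.ones_like(x, dtype=float)])  # Vandermonde matrix, degree 1
            Q, R = np.linalg.qr(V)                                  # reduced QR, like Matlab's qr(V,0)
            return np.linalg.solve(R, Q.T @ y)                      # solve R*p = Q'*y

        # slope, intercept = linfit_qr(x, y)
        # np.polyfit(x, y, 1) should give essentially the same numbers.

    The reason QR tends to behave better is that forming V'V for the normal equations squares the condition number of the problem, whereas QR works on V directly.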

    Read the article

  • Help with Donald B. Johnson's algorithm: I cannot understand the pseudo-code

    - by Pitelk
    Hi, does anyone know Donald B. Johnson's algorithm, which enumerates all the elementary circuits (cycles) in a directed graph? link text I have the paper he published in 1975, but I cannot understand the pseudo-code. My goal is to implement this algorithm in Java. One question I have, for example, is what the matrix Ak it refers to is. The pseudo-code mentions that

        Ak := adjacency structure of strong component K with least vertex
              in subgraph of G induced by {s, s+1, ..., n};

    Does that mean I have to implement another algorithm that finds the Ak matrix? Another question is what the following means:

        begin logical f;

    Also, does the line "logical procedure CIRCUIT (integer value v);" mean that the CIRCUIT procedure returns a logical variable? The pseudo-code also has the line "CIRCUIT := f;" - what does that mean? It would be great if someone could translate this 1970s pseudo-code into a more modern style so I can understand it. In case you are interested in helping but cannot find the paper, please email me at [email protected] and I will send you the paper. Thanks in advance
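
    On the Ak question specifically: that line does presuppose a strongly-connected-components routine (Johnson built on Tarjan's). A sketch of one reading of that step in Python with networkx - the DiGraph g and the start vertex s are assumptions, and "strong component with least vertex" is read here as the component containing the smallest vertex of the induced subgraph:

        import networkx as nx

        def adjacency_structure_Ak(g, s):
            """A_k: adjacency lists of the strong component containing the least vertex
            of the subgraph of g induced by the vertices {s, s+1, ..., n}."""
            sub = g.subgraph(v for v in g if v >= s)              # induced subgraph on {s,...,n}
            comps = nx.strongly_connected_components(sub)         # Tarjan-style SCC routine
            least, comp = min(((min(c), c) for c in comps), key=lambda t: t[0])
            scc = g.subgraph(comp)
            return least, {v: sorted(scc.successors(v)) for v in scc}

        # g = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4)])
        # adjacency_structure_Ak(g, 1)  ->  (1, {1: [2], 2: [3], 3: [1]})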

    Read the article

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact='scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError, \
                        'covariance factor has to be scotts, silverman or a number'

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
            self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
            if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.
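
    One common workaround, sketched here and not scipy's sanctioned answer, is to catch the degenerate case before handing the data to the KDE: if the sample has (near-)zero variance, return a point-mass stand-in (or add a tiny jitter) instead of fitting. The names below are illustrative.

        import numpy as np

        def safe_density(diff, kde_factory, eps=1e-12):
            """Return a callable density; an (almost) constant sample becomes a point mass."""
            diff = np.asarray(diff, dtype=float)
            if np.var(diff) < eps:                        # e.g. diff is all zeros
                center = diff.mean()
                def spike(x):                             # crude stand-in for a delta at `center`
                    x = np.asarray(x, dtype=float)
                    return np.where(np.isclose(x, center), np.inf, 0.0)
                return spike
                # alternative: return kde_factory(diff + np.random.normal(0, 1e-9, diff.shape))
            return kde_factory(diff)                      # e.g. gaussian_kde_covfact(diff, 'scotts')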

    Read the article

  • Suggest a good method with least lookup time complexity

    - by Amrish
    I have a structure which has 3 identifier fields and one value field, and I have a list of these objects. To give an analogy, the identifier fields are like the primary keys of the object: these 3 fields uniquely identify an object.

        Class {
            int a1;
            int a2;
            int a3;
            int value;
        };

    I would have a list of, say, 1000 objects of this datatype. I need to check for specific values of these identity keys by passing values of a1, a2 and a3 to a lookup function, which checks whether any object with those specific values of a1, a2 and a3 is present and returns its value. What is the most effective way to implement this to achieve the best lookup time? One solution I could think of is to have a 3-dimensional matrix of length say 1000 and populate the value in it. This has a lookup time of O(1), but the disadvantages are: 1) I need to know the length of the array; 2) for more identity fields (say 20), I would need a 20-dimensional matrix, which would be an overkill on the memory. For my actual implementation, I have 23 identity fields. Can you suggest a good way to store this data which would give me the best lookup time?
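
    The usual answer to this shape of problem is a hash map keyed on the whole identifier tuple, which stays average O(1) no matter how many key fields there are. A sketch in Python (the C++ analogue would be std::unordered_map with a composite key and a suitable hash); the sample values are made up:

        table = {}

        def insert(keys, value):
            table[tuple(keys)] = value          # the tuple of identifiers is the key

        def lookup(keys):
            return table.get(tuple(keys))       # None if no object has these identifiers

        insert((1, 2, 3), 42)
        assert lookup((1, 2, 3)) == 42
        assert lookup((9, 9, 9)) is None        # works the same way with 23 key fields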

    Read the article

  • This is more a matlab/math brain teaser than a question

    - by gd047
    Here is the setup. No assumptions for the values I am using.

        n = 2;   % dimension of vectors x and (square) matrix P
        r = 2;   % number of x vectors and P matrices
        x1 = [3;5]
        x2 = [9;6]
        x = cat(2,x1,x2)
        P1 = [6,11;15,-1]
        P2 = [2,21;-2,3]
        P(:,1) = P1(:)
        P(:,2) = P2(:)
        modePr = [-.4;16]
        TransPr = [5.9,0.1;20.2,-4.8]
        pred_modePr = TransPr'*modePr
        MixPr = TransPr.*(modePr*(pred_modePr.^(-1))')
        x0 = x*MixPr

    Then it was time to apply the following formula to get myP, where µij is MixPr. I used this code to get it:

        myP = zeros(n*n,r);
        Ptables(:,:,1) = P1;
        Ptables(:,:,2) = P2;
        for j = 1:r
            for i = 1:r
                temp = MixPr(i,j)*(Ptables(:,:,i) + ...
                    (x(:,i)-x0(:,j))*(x(:,i)-x0(:,j))');
                myP(:,j) = myP(:,j) + temp(:);
            end
        end

    Some brilliant guy proposed this formula as another way to produce myP:

        for j = 1:r
            xk1 = x(:,j);  PP = xk1*xk1';  PP0(:,j) = PP(:);
            xk1 = x0(:,j); PP = xk1*xk1';  PP1(:,j) = PP(:);
        end
        myP = (P+PP0)*MixPr - PP1

    I tried to formulate the equality between the two methods and it seems to be this one. To make things easier, I ignored in both methods the summation of matrix P. The first part denotes the formula that I used, while the second comes from his code snippet. Do you think this is an obvious equality? If yes, ignore all the above and just try to explain why. I could only start from the LHS, and after some algebra I think I proved it equals the RHS. However I can't see how he (or she) thought of it in the first place.
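
    For what it's worth, the equality falls out of expanding the outer product, provided each column of MixPr sums to 1 over i - which it does by construction here, since pred_modePr(j) is exactly the column sum of TransPr(i,j)*modePr(i). A sketch of the algebra in LaTeX, writing \mu_{ij} for MixPr(i,j) and using x_{0j} = \sum_i \mu_{ij} x_i:

        \begin{aligned}
        \sum_i \mu_{ij}\,(x_i - x_{0j})(x_i - x_{0j})^{\top}
          &= \sum_i \mu_{ij}\,x_i x_i^{\top}
             - \Big(\sum_i \mu_{ij}\,x_i\Big) x_{0j}^{\top}
             - x_{0j} \Big(\sum_i \mu_{ij}\,x_i\Big)^{\top}
             + \Big(\sum_i \mu_{ij}\Big) x_{0j} x_{0j}^{\top} \\
          &= \sum_i \mu_{ij}\,x_i x_i^{\top} - x_{0j} x_{0j}^{\top}.
        \end{aligned}

    Column-wise, the first term on the right is what PP0*MixPr contributes and the second is exactly PP1, so adding back the ignored P part gives myP = (P+PP0)*MixPr - PP1, i.e. the second snippet.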

    Read the article

  • Comparing developer productivity tools

    - by Jacob
    I am getting ready to test developer productivity tools for our team (CodeRush, ReSharper, JustCode, etc.). We're planning to roll the tool out at the same time as we deploy Visual Studio 2010 and TFS. I've seen several posts discussing the merits of one tool versus another. However, I haven't been able to find any discussion of a methodology for doing this kind of evaluation yourself. One approach I've seen is to use each tool for a month or so and decide which one you like best. This seems completely reasonable if I were evaluating for myself, but since I'm evaluating for others as well, I'd like to do something more rigorous. I'm hoping to collect some ideas on how to do that. As a starting point, my thinking was to come up with a list of 10-20 crucial features to compare, then prepare a matrix where I can compare the relative strengths of each product. Once the matrix is complete, we can make a well-informed decision about which product aligns with our needs. Since I'm not a user of any product now, it's been difficult to figure out which features are critical and which are less so. Thank you!

    Read the article

  • Textured Primitives in XNA with a first person camera

    - by 131nary
    So I have an XNA application set up. The camera is in first-person mode, and the user can move around using the keyboard and reposition the camera target with the mouse. I have been able to load 3D models fine, and they appear on screen no problem. Whenever I try to draw any primitive (textured or not), it does not show up anywhere on the screen, no matter how I position the camera. In Initialize(), I have:

        quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
        quadVertexDecl = new VertexDeclaration(this.GraphicsDevice,
            VertexPositionNormalTexture.VertexElements);

    In LoadContent(), I have:

        quadTexture = Content.Load<Texture2D>(@"Textures\brickWall");
        quadEffect = new BasicEffect(this.GraphicsDevice, null);
        quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
        quadEffect.LightingEnabled = true;
        quadEffect.World = Matrix.Identity;
        quadEffect.View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
        quadEffect.Projection = this.Projection;
        quadEffect.TextureEnabled = true;
        quadEffect.Texture = quadTexture;

    And in Draw() I have:

        this.GraphicsDevice.VertexDeclaration = quadVertexDecl;
        quadEffect.Begin();
        foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
        {
            pass.Begin();
            GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList,
                quad.Vertices, 0, 4,
                quad.Indexes, 0, 2);
            pass.End();
        }
        quadEffect.End();

    I think I'm doing something wrong in the quadEffect properties, but I'm not quite sure what.

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. And the trick in the implementation seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai). It takes two bodies and an acceleration vector. The body vector has four components: its position and the weight. When reading the paper, the calculation is understood easily. But what if we have a more complex object, with a dynamic data structure? For now just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of attached objects is different in each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
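
    One widely used way to feed such variable-length attachment lists to a GPU kernel is to flatten them into two flat arrays before upload - a CSR-style layout of per-object offsets plus a packed item array - so every thread walks a contiguous slice instead of chasing pointers. A sketch of the host-side packing in Python; the names are illustrative, not from the paper:

        def pack_csr(lists):
            """Flatten a list of variable-length lists into (offsets, items).

            Thread i would then process items[offsets[i]:offsets[i+1]] on the device."""
            offsets, items = [0], []
            for lst in lists:
                items.extend(lst)
                offsets.append(len(items))
            return offsets, items

        # Example: object 0 has 2 attachments, object 1 has none, object 2 has 3.
        offsets, items = pack_csr([[5, 7], [], [1, 2, 3]])
        # offsets == [0, 2, 2, 5], items == [5, 7, 1, 2, 3]

    Divergence can still occur when the slice lengths differ, which is usually mitigated by sorting or binning objects by length, but that is a separate trick.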

    Read the article

  • Convert bitmap image information into CGImage in iPhone OS 3

    - by giftederic
    I want to create a CGImage with the color information I already have. Here is the code for converting a CGImage to CML; CML_color is a matrix structure.

        - (void)CGImageReftoCML:(CGImageRef *)image destination:(CML_color &)dest {
            CML_RGBA p;
            NSUInteger width = CGImageGetWidth(image);
            NSUInteger height = CGImageGetHeight(image);
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            unsigned char *rawData = new unsigned char[height*width*4];
            NSUInteger bytesPerPixel = 4;
            NSUInteger bytesPerRow = bytesPerPixel*width;
            NSUInteger bitsPerComponent = 8;
            CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                bitsPerComponent, bytesPerRow, colorSpace,
                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
            CGColorSpaceRelease(colorSpace);
            CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
            CGContextRelease(context);
            int index = 0;
            for (int i = 0; i < height; i++) {
                for (int j = 0; j < width; j++) {
                    p.red   = rawData[index++];
                    p.green = rawData[index++];
                    p.blue  = rawData[index++];
                    p.alpha = rawData[index++];
                    dest(i,j) = p;
                }
            }
            delete[] rawData;
        }

    Now I want the reverse function, which converts CML into CGImage. I know all the color and alpha information needed to create the image, which is stored in the CML matrix, but how can I do that?

    Read the article

  • How to determine the data type of a CvMat

    - by Chris
    When using the CvMat type, the type of data is crucial to keeping your program running. For example, depending on whether your data is type float or unsigned char, you would choose one of these two commands:

        cvmGet(mat, row, col);
        cvGetReal2D(mat, row, col);

    Is there a universal approach to this? If the wrong data type matrix is passed to these calls, they crash at runtime. This is becoming an issue, since a function I have defined is getting passed several different types of matrices. How do you determine the data type of a matrix so you can always access its data? I tried using the "type()" function as such:

        CvMat* tmp_ptr = cvCreateMat(t_height, t_width, CV_8U);
        std::cout << "type = " << tmp_ptr->type() << std::endl;

    This does not compile, saying "term does not evaluate to a function taking 0 arguments". If I remove the brackets after the word type, I get a type of 1111638032.

    EDIT: minimal application that reproduces this...

        int main( int argc, char** argv )
        {
            CvMat *tmp2 = cvCreateMat(10, 10, CV_32FC1);
            std::cout << "tmp2 type = " << tmp2->type
                      << " and CV_32FC1 = " << CV_32FC1
                      << " and " << (tmp2->type == CV_32FC1) << std::endl;
        }

    Output:

        tmp2 type = 1111638021 and CV_32FC1 = 5 and 0

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations: 1) logistic sigmoid for hidden layer activation, softmax for output activation; 2) softmax for both hidden layer and output activation. I am using gradient descent to find local maxima in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of researching, I couldn't find the answer and decided to compute the derivative myself, and obtained softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returned a square matrix of size nxn, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret that output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions: I am fairly certain that my gradient descent is incorrectly done, but I have no idea how to fix it; perhaps I am not using enough hidden nodes; perhaps I should increase the number of layers. Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
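
    On the nxn matrix: the full derivative of softmax is the Jacobian J = diag(s) - s*s', whose diagonal is exactly the elementwise s - s^2 derived above; the off-diagonal entries are the cross terms dsoftmax_i/dx_j. A quick NumPy check of that reading (illustrative only, not the MATLAB toolkit's code):

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())            # shift for numerical stability
            return e / e.sum()

        x = np.array([0.2, -1.0, 0.5, 0.0, 1.3])
        s = softmax(x)
        J = np.diag(s) - np.outer(s, s)        # full Jacobian d softmax_i / d x_j
        assert np.allclose(np.diag(J), s - s**2)   # diagonal matches the hand derivation
        # Back-propagation uses the whole Jacobian, dL/dx = J @ dL/ds (or the usual
        # softmax + cross-entropy shortcut, dL/dx = s - target).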

    Read the article

  • Examples of monoids/semigroups in programming

    - by jkff
    It is well-known that monoids are stunningly ubiquitous in programming. They are so ubiquitous and so useful that I, as a 'hobby project', am working on a system that is completely based on their properties (distributed data aggregation). To make the system useful I need useful monoids :) I already know of these:
    - Numeric or matrix sum
    - Numeric or matrix product
    - Minimum or maximum under a total order with a top or bottom element (more generally, join or meet in a bounded lattice, or even more generally, product or coproduct in a category)
    - Set union
    - Map union where conflicting values are joined using a monoid
    - Intersection of subsets of a finite set (or just set intersection if we speak about semigroups)
    - Intersection of maps with a bounded key domain (same here)
    - Merge of sorted sequences, perhaps with joining key-equal values in a different monoid/semigroup
    - Bounded merge of sorted lists (same as above, but we take the top N of the result)
    - Cartesian product of two monoids or semigroups
    - List concatenation
    - Endomorphism composition
    Now, let us define a quasi-property of an operation as a property that holds up to an equivalence relation. For example, list concatenation is quasi-commutative if we consider lists of equal length, or with identical contents up to permutation, to be equivalent. Here are some quasi-monoids and quasi-commutative monoids and semigroups:
    - Any (a+b = a or b, if we consider all elements of the carrier set to be equivalent)
    - Any satisfying predicate (a+b = whichever of a and b is non-null and satisfies some predicate P, null if neither does; if we consider all elements satisfying P to be equivalent)
    - Bounded mixture of random samples (xs+ys = a random sample of size N from the concatenation of xs and ys; if we consider any two samples with the same distribution as the whole dataset to be equivalent)
    - Bounded mixture of weighted random samples
    Which others exist? (See the sketch below for one of the entries made concrete.)
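
    To make the "map union where conflicting values are joined using a monoid" entry concrete, a small sketch in Python; representing a monoid as an (identity, operation) pair is just one convenient encoding:

        # A monoid is an identity element plus an associative binary operation.
        sum_monoid = (0, lambda a, b: a + b)
        max_monoid = (float("-inf"), max)

        def merge_maps(monoid, *maps):
            """Map union; values under the same key are combined with the monoid."""
            identity, op = monoid
            out = {}
            for m in maps:
                for k, v in m.items():
                    out[k] = op(out.get(k, identity), v)
            return out

        merge_maps(sum_monoid, {"a": 1, "b": 2}, {"b": 3, "c": 4})   # {'a': 1, 'b': 5, 'c': 4}
        # Because the operation is associative, partial merges can be computed on different
        # nodes and combined in any grouping - which is the point for distributed aggregation.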

    Read the article

  • C#.NET DataGridView - update database

    - by Geo Ego
    I know this is a basic function of the DataGridView, but for some reason I just can't get it to work. I just want the DataGridView on my Windows form to submit any changes made to it to the database when the user clicks the "Save" button. On form load, I populate the DataGridView and that's working fine. In my save function, I basically have the following (I've trimmed out some of the code around it to get to the point):

        using (SqlDataAdapter ruleTableDA = new SqlDataAdapter("SELECT rule.fldFluteType AS [Flute], rule.fldKnife AS [Knife], rule.fldScore AS [Score], rule.fldLowKnife AS [Low Knife], rule.fldMatrixScore AS [Matrix Score], rule.fldMatrix AS [Matrix] FROM dbo.tblRuleTypes rule WHERE rule.fldMachine_ID = '1003'", con))
        {
            SqlCommandBuilder commandBuilder = new SqlCommandBuilder(ruleTableDA);
            DataTable dt = new DataTable();
            dt = RuleTable.DataSource as DataTable;
            ruleTableDA.Fill(dt);
            ruleTableDA.Update(dt);
        }

    Okay, so I edited the code to do the following: build the commands, create a DataTable based on the DataGridView (RuleTable), fill the DataAdapter with the DataTable, and update the database. Now ruleTableDA.Update(dt) is throwing the exception "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records." Thanks very much!

    Read the article

  • How to prevent external translation of a movieclip object on stage in AS3?

    - by Aaron H.
    I have a MovieClip object, which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper left corner, about half off stage (i.e. only the lower right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not the upper left corner. If you call getBounds() on the movieclip you can get the bounds of the clip (presumably from the "point" that it's aligned on), which looks something like (left: -303, top: -100, right: 303, bottom: 100). You can subtract the left and top values from the clip x and y:

        clip.x -= bounds.left;
        clip.y -= bounds.top;

    This seems to properly align the clip fully on stage, with the top left of the clip squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage:

        clip.x = (stage.stageWidth / 2);
        etc...

    This creates the crazy parallel universe where the clip is now down in the lower right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of the stage width) and a ty value of 426 (half of the stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the movieclip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • equality on the sender of an event

    - by Berryl
    I have an interface for a UI widget, two of which are attributes of a presenter.

        public IMatrixWidget NonProjectActivityMatrix {
            set {
                // validate the incoming value and set the field
                _nonProjectActivityMatrix = value;
                ....
                // configure & load non-project activities
            }
        }

        public IMatrixWidget ProjectActivityMatrix {
            set {
                // validate the incoming value and set the field
                _projectActivityMatrix = value;
                ....
                // configure & load project activities
            }
        }

    The widget has an event that both presenter objects subscribe to, and so there is an event handler in the presenter like so:

        public void OnActivityEntry(object sender, EntryChangedEventArgs e)
        {
            // calculate newTotal here
            ....
            if (ReferenceEquals(sender, _nonProjectActivityMatrix)) {
                _nonProjectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal);
            }
            else if (ReferenceEquals(sender, _projectActivityMatrix)) {
                _projectActivityMatrix.UpdateTotalHours(feedback.ActivityTotal);
            }
            else {
                // ERROR - we should never be here
            }
        }

    The problem is that the ReferenceEquals on the sender fails, even though the sender is the implemented widget - the same implemented widget that was set to the presenter attribute! Can anyone spot what the problem / fix is? Cheers, Berryl

    I didn't know you could edit nicely. Cool. Here is the event-raising code:

        void OnGridViewNumericUpDownEditingControl_ValueChanged(object sender, EventArgs e)
        {
            // omitted to save space
            if (EntryChanged == null) return;
            var args = new EntryChangedEventArgs(activityID, dayID, Convert.ToDouble(amount));
            EntryChanged(this, args);
        }

    Here is the debugger dump of the presenter attribute, sans namespace info:

        ?_nonProjectActivityMatrix
        {WinPresentation.Widgets.MatrixWidgetDgv}
            [WinPresentation.Widgets.MatrixWidgetDgv]: {WinPresentation.Widgets.MatrixWidgetDgv}

    Here is the debugger dump of the sender:

        ?sender
        {WinPresentation.Widgets.MatrixWidgetDgv}
            base {Core.GUI.Widgets.Lookup.MatrixWidgetBase<Core.GUI.Widgets.Lookup.DynamicDisplayDto>}: {WinPresentation.Widgets.MatrixWidgetDgv}
            _configuration: {Domain.Presentation.Timesheet.Matrix.WeeklyMatrixConfiguration}
            _wrappedWidget: {Win.Widgets.DataGridViewDynamicLookupWidget}
            AllowUserToAddRows: true
            ColumnCount: 11
            Count: 4
            EntryChanged: {Method = {Void OnActivityEntry(System.Object, Smack.ConstructionAdmin.Domain.Presentation.Timesheet.Matrix.EntryChangedEventArgs)}}
            SelectedCell: {DataGridViewNumericUpDownCell { ColumnIndex=3, RowIndex=3 }}
            SelectedCellValue: "0.00"
            SelectedColumn: {DataGridViewNumericUpDownColumn { Name=MONDAY, Index=3 }}
            SelectedItem: {'AdministrativeActivity: 130-04', , AdministrativeTime, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00}

    Berryl

    Read the article

  • Plot vectors with labels in matlab

    - by mad
    I have an Nx62 matrix with N 62-D vectors and an Nx1 vector with the labels for the vectors. I am trying to plot these vectors with their labels because I want to see the behavior of these classes when plotted in a 62-dimensional space. The vectors belong to three classes according to the labels of the Nx1 vector cited before. How to do that in MATLAB? When I do plot(vector, classes) the result is very weird to analyse; how do I put labels in the graph? The code I am using to get the labels and vectors is the following:

        % labels is a vector with labels, vectors is a matrix where each line is a vector
        [labels, vectors] = libsvmread('features-im1.txt');

    When I plot a three-dimensional vector it is simple:

        a = [1,2,3]
        plot(a)

    and then I get the result. But now I have a set of vectors and a set of labels, and I want to see the distribution of them: I want to plot each of these vectors but also identify their classes. How to do that in MATLAB?

    EDIT: This code is almost working. The problem is that for each vector and class the plot will assign a color. I just want three colors and three labels, one per class.

        [class, vector] = libsvmread('features-im1.txt');
        % the plot doesn't allow negative and 0 values in the label
        class = class + 2;
        labels = {'class -1','class 0','class 1'};
        h = plot(vector);
        legend(h, labels{class})
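
    If it helps to see the idea outside MATLAB: the usual trick is one plot call (hence one colour and one legend entry) per class rather than per vector. A sketch with NumPy/matplotlib, using randomly generated stand-in data and plotting only the first two of the 62 components so there is something 2-D to draw:

        import numpy as np
        import matplotlib.pyplot as plt

        # Stand-in data: vectors is (N, 62), labels is (N,) with values in {-1, 0, 1}.
        rng = np.random.default_rng(0)
        vectors = rng.normal(size=(90, 62))
        labels = np.repeat([-1, 0, 1], 30)

        for cls in np.unique(labels):          # one scatter call per class -> one legend entry
            pts = vectors[labels == cls]
            plt.scatter(pts[:, 0], pts[:, 1], label="class %d" % cls)
        plt.legend()
        plt.show()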

    Read the article

  • Matlab - Trace contour line between two different points

    - by Graham
    Hi, I have a set of points represented as a 2-row by n-column matrix. These points make up a connected boundary or edge. I require a function that traces this contour from a start point P1 and stops at an end point P2. It also needs to be able to trace the contour in a clockwise or anti-clockwise direction. I was wondering if this can be achieved by using some of MATLAB's functions. I have tried to write my own function, but it was riddled with bugs. I have also tried using bwtraceboundary and indexing, but this has problematic results because the points within the matrix are not in the order that creates the contour. Thank you in advance for any help. By the way, I have included a link to a plot of the set of points. It is half the outline of a hand. The function would ideally trace the contour from either the red star to the green triangle, returning the points in order of traversal.

    Read the article

  • Circular dependency with generics

    - by devoured elysium
    I have defined the following interface:

        public interface IStateSpace<State, Action>
            where State : IState
            where Action : IAction<State, Action>   // <-- this is the line that bothers me
        {
            void SetValueAt(State state, Action action);
            Action GetValueAt(State state);
        }

    Basically, an IStateSpace interface should be something like a chess board, and in each position of the chess board you have a set of possible movements to make. Those movements here are called IActions. I have defined the interface this way so I can accommodate different implementations: I can then define concrete classes that implement a 2D matrix, a 3D matrix, graphs, etc.

        public interface IAction<State, Action>
        {
            IStateSpace<State, Action> StateSpace { get; }
        }

    An IAction would be to move up (that is, if at (2, 2) move to (2, 1)), move down, etc. Now, I want each action to have access to a StateSpace so it can do some checking logic. Is this implementation correct? Or is this a bad case of a circular dependency? If yes, how do I accomplish "the same" in a different way? Thanks

    Read the article

  • Replace some column values depending on a condition, MATLAB

    - by darkcminor
    I have a matrix like

        A =
             4.0000  120.0000   92.0000        0         0   37.6000   0.1910  30.0000
            10.0000  168.0000   74.0000        0         0   38.0000   0.5370  34.0000
            10.0000  139.0000   80.0000        0         0   27.1000   1.4410  57.0000
             1.0000  139.0000   60.0000  23.0000  846.0000   30.1000   0.3980  59.0000
             5.0000  136.0000   72.0000  19.0000  175.0000   25.8000   0.5870  51.0000
             7.0000  121.0000         0        0         0   30.0000   0.4840  32.0000

    I want to replace the values of the first column that are greater than 5 with 0, and also, in the second column, if the values are in the range 121-130 replace them with 0, if they are in the range 131-140 replace them with 1, if 141-150 replace with 2, if 151-160 replace with 3... so the result matrix would be

        A =
             4.0000    0.0000   92.0000        0         0   37.6000   0.1910  30.0000
             0.0000    4.0000   74.0000        0         0   38.0000   0.5370  34.0000
             0.0000    1.0000   80.0000        0         0   27.1000   1.4410  57.0000
             1.0000    1.0000   60.0000  23.0000  846.0000   30.1000   0.3980  59.0000
             5.0000    1.0000   72.0000  19.0000  175.0000   25.8000   0.5870  51.0000
             0.0000    0.0000         0        0         0   30.0000   0.4840  32.0000

    How do I accomplish this? I was trying something like

        counter = 1;
        for i = 1:rows
            if A(i,1) > 5
                A(i,1) = 0;
            end
            if A(i,2) > 120 && A(i,2) < 130
                A(i,2) = 0;
            end
            counter = counter + 1;
        end

    Could using a case do the trick?

    Read the article

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all different group id values, performing statistical analysis on each group in turn, and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups
        results <- matrix(rep("NA",300), ncol=3, nrow=100)
        for(i in 1:100) {
            tempmat <- subset(data.mat, ID==groupids[i])
            # Run various stats on tempmat (correlations, regressions, etc), checking to
            # make sure this specific group doesn't have NAs in the variables I'm using,
            # and assign results to x, y, and z, for example.
            results[i,1] <- x
            results[i,2] <- y
            results[i,3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, this can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loops into something else (something like an apply with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?
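
    The pattern being described is split-apply-combine. Purely for comparison, here is the same shape in Python/pandas (in R itself the analogues would be split()/lapply(), by(), or data.table-style grouping); the frame and column names are hypothetical:

        import pandas as pd

        # Hypothetical frame: one row per observation, an ID column plus measurement columns.
        df = pd.DataFrame({"ID": [1, 1, 2, 2, 2],
                           "a": [0.1, 0.4, 0.2, 0.9, 0.5],
                           "b": [1.0, 1.2, 0.7, 0.8, 0.6]})

        def per_group_stats(g):
            # stand-in for the correlations/regressions run on each group's subset
            return pd.Series({"n": len(g), "corr_ab": g["a"].corr(g["b"])})

        results = df.groupby("ID").apply(per_group_stats)   # one row of results per group id
        print(results)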

    Read the article

  • Populating a Datagrid with variable number of columns and rows in Flex

    - by Rie Mino
    I am trying to figure out how to manage a Datagrid based on an XML object like this:

        <matrix rows="5" columns="5">
            <column>
                <row>0.5</row>
                <row>0.21</row>
            </column>
            <column>
                <row>0.6</row>
                <row>0.9</row>
            </column>
            <column>
                <row>0.5</row>
                <row>0.5</row>
            </column>
            <column>
                <row>0.8</row>
                <row>0.4</row>
            </column>
        </matrix>

    I will need to populate the Datagrid column names based on a different XML object and use the above XML to populate each of the column's rows. I currently am able to create the Datagrid and populate its column headers, but I am unsure how to add the rows for each column. The above XML will be updated, with new row and column elements added and deleted. This, of course, will be bound to the Datagrid to show the updates. Thanks in advance for your help.

    Read the article
