Search Results

Search found 614 results on 25 pages for 'vectors'.

Page 15/25

  • Circle collision detection and Vector math: HELP?

    - by Griffin
    Hey, so I'm currently going through the wildbunny blog to learn about collision detection, but I'm a bit confused about how the vectors he's talking about come into play. QUOTED BLOG:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance. OK, so I understand that p is the distance by which the circles penetrate each other, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.
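
    For illustration, here is a minimal C++ sketch of the quoted math (a sketch only; the Vec2 type and its operators are assumed helpers, not something from the blog post):

        #include <cmath>

        struct Vec2 {
            float x, y;
            Vec2 operator-(const Vec2& o) const { return {x - o.x, y - o.y}; }
            Vec2 operator*(float s) const { return {x * s, y * s}; }
            float length() const { return std::sqrt(x * x + y * y); }
        };

        // Penetration vector P per the quoted formulas; (0,0) means no overlap.
        Vec2 penetrationVector(Vec2 A, Vec2 B, float r1, float r2) {
            float dist = (A - B).length();        // ||A-B||
            float p = dist - (r1 + r2);           // penetration depth (negative when overlapping)
            if (p >= 0.0f || dist == 0.0f) return {0.0f, 0.0f};
            Vec2 N = (A - B) * (1.0f / dist);     // N = (A-B) / ||A-B||, unit direction between centres
            return N * p;                         // P = N*p, the minimal correction along the centre line
        }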

    Read the article

  • Finding most Important Node(s) in a Directed Graph

    - by Srikar Appal
    I have a large (~20 million nodes) directed graph with in-edges and out-edges. I want to figure out which parts of the graph deserve the most attention. Often most of the graph is boring, or at least it is already well understood. The way I am defining "attention" is by the concept of "connectedness", i.e. how can I find the most connected node(s) in the graph? In what follows, one can assume that nodes by themselves have no score, the edges have no weight, and they are either connected or not. This website suggests some pretty complicated procedures like n-dimensional space, eigenvectors, graph centrality concepts, PageRank, etc. Is this problem that complex? Can I not do a simple breadth-first traversal of the entire graph, where at each node I figure out a way to find the number of in-edges? The node with the most in-edges is the most important node in the graph. Am I missing something here?
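
    For reference, the simple in-degree count the question proposes does not even need a BFS; a single pass over the edge list suffices. A sketch (the edge-list representation and 0-based node numbering are assumptions):

        #include <algorithm>
        #include <cstdint>
        #include <iterator>
        #include <utility>
        #include <vector>

        // Returns the node with the highest in-degree, counting each incoming edge once.
        uint32_t mostConnected(const std::vector<std::pair<uint32_t, uint32_t>>& edges,
                               uint32_t nodeCount) {
            std::vector<uint32_t> inDegree(nodeCount, 0);
            for (const auto& e : edges)
                ++inDegree[e.second];   // e = (from, to); the 'to' end gains an in-edge
            return static_cast<uint32_t>(std::distance(
                inDegree.begin(), std::max_element(inDegree.begin(), inDegree.end())));
        }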

    Read the article

  • What kind of language will replace C++ as C++ replaced C? [closed]

    - by jokoon
    I think I'm not totally wrong in thinking that C++0x (or C++1x) is still C++, just better, with functionality coming from Boost. I can't stop thinking that computer science, even with all that has been done so far, has to evolve again. I don't really like D, since it just tries to be some sort of "what C++ should have been", and Go seems too sophisticated when I dig a little into it, especially after watching some presentation videos like this one: http://www.youtube.com/watch?v=rKnDgT73v8s The first thing that comes to my mind is a new kind of syntax to directly handle specific datatypes and containers such as maps, vectors, queues... What kind of things are researchers thinking about? What are the real features that could make C++ better, or that a new C-like language could invent? Does Go feature such things? Would there be a new kind of syntax that would "unbloat" C++ while keeping its advantages? Could C++ have some of the interesting stuff of languages such as C# and ObjC? EDIT: Please consider that I'm talking about a systems language, not a VM/CLI/bytecode thing.

    Read the article

  • Adding 2D vector movement with rotation applied

    - by Michael Zehnich
    I am trying to apply a slight sine-wave movement to objects that float around the screen, to make them a little more interesting. I would like the objects to oscillate from side to side, not front to back (so the oscillation does not affect their forward velocity). After reading various threads and tutorials, I have come to the conclusion that I need to create and add vectors, but I simply cannot come up with a solution that works. This is where I'm at right now, in the object's update method (updated based on comments):

        Vector2 oldPosition = new Vector2(spritePos.X, spritePos.Y);
        // note: newPosition is initially set in the constructor to spritePos.x/y
        Vector2 direction = newPosition - oldPosition;
        Vector2 perpendicular = new Vector2(direction.Y, -direction.X);
        perpendicular.Normalize();
        sinePosAng += 0.1f;
        perpendicular.X += 2.5f * (float)Math.Sin(sinePosAng);
        spritePos.X += velocity * (float)Math.Cos(radians);
        spritePos.Y += velocity * (float)Math.Sin(radians);
        spritePos += perpendicular;
        newPosition = spritePos;
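
    One common way to get the intended behaviour (a sketch under assumptions, not the asker's code: heading is the movement angle in radians, and the sine offset is recomputed each frame instead of being accumulated into the position):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Advance the base position along the heading, then apply a fresh
        // perpendicular sine offset; the offset never feeds back into basePos,
        // so the forward velocity is unaffected.
        Vec2 update(Vec2& basePos, float heading, float velocity,
                    float& phase, float amplitude, float dt) {
            basePos.x += velocity * std::cos(heading) * dt;
            basePos.y += velocity * std::sin(heading) * dt;
            phase += 3.0f * dt;                                     // oscillation speed
            Vec2 perp = { std::sin(heading), -std::cos(heading) };  // unit vector perpendicular to the heading
            float offset = amplitude * std::sin(phase);
            return { basePos.x + perp.x * offset, basePos.y + perp.y * offset };
        }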

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time I spent in each function, but I'd like to have more information about the parameters passed to the function, in order to know which parameters impact the performance the most. Let's say that I have a function like this:

        int myFunction(int myParam1, int myParam2)
        {
            // Do and return something based on the value of myParam1 and myParam2.
            // The code is likely to use if, for, while, switch, etc.
        }

    I would like a tool that would tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result for myFunction looking like this:

        value    | value    | Number of | Average
        myParam1 | myParam2 | calls     | time
        ---------|----------|-----------|---------
        1        | 5        | 500       | 301 ms
        2        | 5        | 250       | 1253 ms
        3        | 7        | 1268      | 538 ms
        ...

    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code such that the blocks of code that are the most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for structure of the function (and the whole program) to optimize it. I'd like to find such a tool for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors, and the like).
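
    In the absence of such a tool, a minimal hand-rolled version of this measurement can be built by wrapping the function and bucketing timings per argument pair (a sketch; all names are illustrative):

        #include <chrono>
        #include <cstdio>
        #include <map>
        #include <utility>

        struct Stats { long long calls = 0; double totalMs = 0.0; };
        static std::map<std::pair<int, int>, Stats> g_stats;

        int myFunction(int myParam1, int myParam2) {
            return myParam1 + myParam2;  // stand-in for the real work
        }

        int timedMyFunction(int myParam1, int myParam2) {
            auto start = std::chrono::steady_clock::now();
            int result = myFunction(myParam1, myParam2);
            std::chrono::duration<double, std::milli> elapsed =
                std::chrono::steady_clock::now() - start;
            Stats& s = g_stats[{myParam1, myParam2}];
            ++s.calls;
            s.totalMs += elapsed.count();
            return result;
        }

        // Prints one row per (myParam1, myParam2) pair: call count and average time.
        void dumpStats() {
            for (const auto& entry : g_stats)
                std::printf("%d | %d | %lld | %.1f ms\n", entry.first.first,
                            entry.first.second, entry.second.calls,
                            entry.second.totalMs / entry.second.calls);
        }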

    Read the article

  • x86 exceptions and flags

    - by b-gen-jack-o-neill
    Hi. I know that when you, for example, divide by zero, the appropriate flag is set in the CPU flag register. But today I read that there are special interrupt vectors (I think the first 16 entries in the IVT) that are used for such conditions as dividing by zero. So what I want to ask is: does any situation that causes a flag to change also trigger the appropriate interrupt? Because in school we used conditional jumps that check whether the carry flag has been set or not, and I don't remember any interrupt being triggered by that. So I am pretty confused now.

    Read the article

  • Scaling along an arbitrary axis (Dealing with non-uniform scale)

    - by Jon
    I'm trying to build my own little engine to get more familiar with the concepts of 3D programming. I have a transform class that, on each frame, creates a scaling matrix (S) and a rotation matrix from a quaternion (R) and concatenates them together (S*R). Once I have SR, I insert the translation values into the bottom row. So I end up with a transformation matrix that looks like:

        [SR SR SR 0]
        [SR SR SR 0]
        [SR SR SR 0]
        [tx ty tz 1]

    This works perfectly in all cases except when rotating an object that has a non-uniform scale. For example, a unit cube with ScaleX = 4, ScaleY = 2, ScaleZ = 1 will give me a rectangular box that is 4 times as wide as the depth and twice as high as the depth. If I then translate this around, the box stays the same and looks normal. The problem happens whenever I try to rotate this scaled box. The shape itself becomes distorted, and it appears as though the scale factors are affecting the object along the world X, Y, Z axes rather than the local X, Y, Z axes of the object. I've done some pretty extensive research through a variety of textbooks (Eberly, Moller/Hoffman, Pharr, etc.) and there isn't a ton there to go off of. Online, most of the answers say to avoid non-uniform scaling; I understand the desire to avoid it, but I'd still like to figure out how to support it. The only thing I can think of is that when constructing a scale matrix:

        [sx  0  0  0]
        [ 0 sy  0  0]
        [ 0  0 sz  0]
        [ 0  0  0  1]

    this is scaling along the world axes instead of the object's local Direction, Up and Right vectors, or its local Z, Y, X axes. Does anyone have any tips or ideas on how to handle constructing a transformation matrix that allows for non-uniform scaling and rotation? Thanks!

    Read the article

  • Saving each layer of a Photoshop image to separate files

    - by BadKnees
    I have a PSD file with all icons in separate layers as vectors. I would like to save them in different sizes to use on iPhone, iPhone 4 and iPad. I tried File > Scripts > Export Layers to Files. That took about 15 minutes to save each layer, while the computer was overheating from the work. I tried with two different computers, one with CS4 and the other with CS5. Same result. And that doesn't allow me to set sizes. It seems like most icon packs, like Pictos, Glyphish and IconSweets, are distributed this way, in one PSD file. Is there some easy way to get them out of the PSD and into PNG files?

    Read the article

  • Calc direction vector based on destination vector and distance from enemy in AS3

    - by Phil
    I'm working on a zombie game in AS3 where I want a character to be able to move away from a zombie depending on how close the zombie is. The character also has a destination that it's trying to get to on the screen. OK, so I have two vectors: one pointing to my destination, and one pointing to the zombie, which I then invert to get my "away" vector. I then turn the distance between my character and the zombie into a value between 0 and 1. And then I'm stuck on how to get a resultant vector for my character. How would I use my 0-1 value to calculate how much of the away vector is used and how much of the original destination vector is still left, if that makes sense, to end up with one direction vector to move my character? So if the zombie is right where my character is, then my direction vector = away vector, and if I'm far away from the zombie then my direction vector = destination vector; but how do I calculate the in-between? Ideally I need the answer in AS3.
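
    A standard way to blend the two is a linear interpolation driven by the 0-1 closeness value (a C++ sketch for illustration; the same arithmetic ports directly to AS3):

        #include <cmath>

        struct Vec2 { float x, y; };

        // t = 0 -> pure destination vector; t = 1 -> pure away vector.
        Vec2 blendDirection(Vec2 toDestination, Vec2 away, float t) {
            Vec2 v = { toDestination.x * (1.0f - t) + away.x * t,
                       toDestination.y * (1.0f - t) + away.y * t };
            float len = std::sqrt(v.x * v.x + v.y * v.y);
            if (len > 0.0f) { v.x /= len; v.y /= len; }  // renormalise the blended direction
            return v;
        }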

    Read the article

  • My GLSL shader isn't compiling even though it should. What should I investigate?

    - by reapz
    I'm porting an iOS game to Android. One of the shaders I'm using wouldn't compile until I reduced the number of uniform variables. Here are the uniform definitions:

        uniform highp   mat4 ViewProjMatrix;
        uniform mediump vec3 LightDirWorld;
        uniform mediump int  BoneCount;
        uniform highp   mat4 BoneMatrixArray[8];
        uniform highp   mat3 BoneMatrixArrayIT[8];
        uniform mediump int  LightCount;
        uniform mediump vec3 LightPos[4];    // this used to be 12, but is now 4; same for the next two lines
        uniform lowp    vec3 LightColour[4];
        uniform mediump vec3 LightInnerOuterFalloff[4];

    My issue is that the GLSL shader wouldn't compile until I reduced the size of the above arrays from 12 to 4. My understanding is that if those 3 lines were arrays of 12, I would be using 56 vertex uniform vectors. I query the system at startup (GL_MAX_VERTEX_UNIFORM_VECTORS) and it says that 128 are available. Why wouldn't it compile with 56? I'm having issues on the Kindle Fire.
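
    For reference, a worked count under the common packing rule (each mat4 takes 4 uniform vectors, each mat3 takes 3, and each int or vec3 takes 1; this rule is an assumption, and actual drivers may pad more):

        ViewProjMatrix                 4
        LightDirWorld                  1
        BoneCount                      1
        BoneMatrixArray[8]            32
        BoneMatrixArrayIT[8]          24
        LightCount                     1
        LightPos[12]                  12
        LightColour[12]               12
        LightInnerOuterFalloff[12]    12
        --------------------------------
        total                         99

    So the original shader needs about 99 vectors by this rule, not 56. That is still under the reported 128, which suggests the device's driver may pack arrays less tightly (padding each mat3 to 4 vectors, for example) or reserve some vectors for internal use.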

    Read the article

  • Understanding dot notation

    - by Starkers
    Here's my interpretation of dot notation:

        a = [2,6]
        b = [1,4]
        c = [0,8]

        a . b . c = (2*6)+(1*4)+(0*8) = 12 + 4 + 0 = 16

    What is the significance of 16? Apparently it's a scalar. Am I right in thinking that a scalar is the number we times a unit vector by to get a vector that has a scaled-up magnitude but the same direction as the unit vector? So again, what is the relevance of 16? When is it used? It's not the magnitude of all the vectors added up. The magnitude of all of them is calculated as follows:

        sqrt( ax*ax + ay*ay ) + sqrt( bx*bx + by*by ) + sqrt( cx*cx + cy*cy )
        sqrt( 2*2 + 6*6 ) + sqrt( 1*1 + 4*4 ) + sqrt( 0*0 + 8*8 )
        sqrt( 4 + 36 ) + sqrt( 1 + 16 ) + sqrt( 0 + 64 )
        sqrt( 40 ) + sqrt( 17 ) + sqrt( 64 )
        6.3 + 4.1 + 8
        10.4 + 8
        18.4

    So I don't really get the diagram that goes with this. Attempting with sensible numbers:

        a = [1,0]
        b = [4,3]

        a . b = (1*0) + (4*3) = 0 + 12 = 12

    So what exactly is a . b describing here? The magnitude of that vector? Because that isn't right: the 'a.b' vector = [4,0]

        sqrt( x*x + y*y )
        sqrt( 4*4 + 0*0 )
        sqrt( 16 + 0 )
        4

    So what is 12 describing?
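
    For reference, the dot product pairs matching components across the two vectors, a . b = ax*bx + ay*by, so for a = [1,0] and b = [4,3] it is 1*4 + 0*3 = 4: a single scalar measuring how much one vector extends along the other, not a vector with a magnitude. A minimal sketch:

        struct Vec2 { float x, y; };

        // a . b = ax*bx + ay*by = |a||b|cos(theta); the result is a scalar, not a vector.
        float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }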

    Read the article

  • C++ and SDL Trouble Creating an STL Vector of a Game Object

    - by Jackson Blades
    I am trying to create a Space Invaders clone using C++ and SDL. The problem I am having is in trying to create waves of enemies. I am trying to model this by making my Wave a vector of 8 Enemy objects. My Enemy constructor takes two arguments, an x and y offset. My Wave constructor also takes two arguments, an x and y offset. What I am trying to do is have my Wave constructor initialize a vector of enemies, and have each enemy given a different x offset so that they are spaced out appropriately.

        Enemy::Enemy(int x, int y)
        {
            box.x = x;
            box.y = y;
            box.w = ENEMY_WIDTH;
            box.h = ENEMY_HEIGHT;
            xVel = ENEMY_WIDTH / 2;
        }

        Wave::Wave(int x, int y)
        {
            box.x = x;
            box.y = y;
            box.w = WAVE_WIDTH;
            box.y = WAVE_HEIGHT;
            xVel = (-1)*ENEMY_WIDTH;
            yVel = 0;
            std::vector<Enemy> enemyWave;
            for (int i = 0; i < enemyWave.size(); i++)
            {
                Enemy temp(box.x + ((ENEMY_WIDTH + 16) * i), box.y);
                enemyWave.push_back(temp);
            }
        }

    I guess what I am asking is if there is a cleaner, more elegant way to do this sort of initialization with vectors, or if this is right at all. Any help is greatly appreciated.
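
    One possible cleanup (a sketch; it assumes enemyWave is a member of Wave rather than a local that is thrown away at the end of the constructor): loop to the intended count instead of the size of the still-empty vector, and reserve up front.

        static const int ENEMIES_PER_WAVE = 8;

        Wave::Wave(int x, int y) {
            box.x = x;
            box.y = y;
            box.w = WAVE_WIDTH;
            box.h = WAVE_HEIGHT;   // height belongs in box.h, not box.y
            xVel = -ENEMY_WIDTH;
            yVel = 0;
            enemyWave.reserve(ENEMIES_PER_WAVE);
            for (int i = 0; i < ENEMIES_PER_WAVE; ++i)
                enemyWave.push_back(Enemy(box.x + (ENEMY_WIDTH + 16) * i, box.y));
        }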

    Read the article

  • Bending of track in a racing game

    - by caius
    I am trying to create a small racing game in which the track is modeled using a B-spline curve for the path's center line and directional vectors to define the 'bending' of the track at each point. My problem is that I don't know how to calculate the correct bending/slope of the curve, in such a way that it would be optimal, or at least visually nice, for a car to 'bend in the corner'. My idea was to use the direction of the 2nd derivative of the curve; however, while this approach looks fine for most of the track, there are points at which the 2nd derivative makes sharp twists / very quick 180-degree flips. I also read about 'knots' of B-splines, but I don't know whether such a twist in the 2nd derivative is a knot, or whether knots are something else. Can you tell me, using a B-spline: 1. How could I calculate a visually nice bending of a track for a racing game? 2. Is it possible to do this using some simple calculation of centripetal force / gravity? 3. Is it possible to do this using the 1st, 2nd and 3rd derivatives of the B-spline curve? I am not looking for the 'physically correct' bending angle for the track; I would just like to create something which is visually pleasing in a simple game. I am using a framework which has a built-in class for B-splines, including support for the 1st, 2nd and 3rd derivatives of the curve.
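
    For question 2, the standard physics is short enough to sketch: the curvature of the path comes from the first and second derivatives, and a plausible bank angle falls out of balancing centripetal and gravitational acceleration (a sketch; the derivatives are assumed to be 2D here):

        #include <cmath>

        // 2D curvature at a curve parameter, from the first (dx, dy) and second
        // (ddx, ddy) derivatives: kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
        float curvature(float dx, float dy, float ddx, float ddy) {
            float denom = std::pow(dx * dx + dy * dy, 1.5f);
            return denom > 0.0f ? std::fabs(dx * ddy - dy * ddx) / denom : 0.0f;
        }

        // Bank angle that balances centripetal force at speed v: tan(theta) = v^2 * kappa / g.
        float bankAngle(float kappa, float v) {
            const float g = 9.81f;
            return std::atan(v * v * kappa / g);
        }

    Unlike the raw second-derivative vector, the curvature magnitude cannot flip direction, which helps avoid the sharp twists mentioned above.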

    Read the article

  • How does this circle collision detection math work?

    - by Griffin
    I'm going through the wildbunny blog to learn about collision detection. I'm confused about how the vectors he's talking about come into play. Here's the part that confuses me:

        p = ||A-B|| - (r1+r2)

    The two spheres are penetrating by distance p. We would also like the penetration vector so that we can correct the penetration once we discover it. This is the vector that moves both circles to the point where they just touch, correcting the penetration. Importantly it is not only just a vector that does this, it is the only vector which corrects the penetration by moving the minimum amount. This is important because we only want to correct the error, not introduce more by moving too much when we correct, or too little.

        N = (A-B) / ||A-B||
        P = N*p

    Here we have calculated the normalised vector N between the two centres and the penetration vector P by multiplying our unit direction by the penetration distance. I understand that p is the distance by which the circles penetrate, but I don't get what exactly N and P are. It seems to me N is just the coordinates of the third point of the right triangle formed by points A and B (A-B), divided by the hypotenuse of that triangle, i.e. the distance between A and B (||A-B||). What's the significance of this? Also, what is the penetration vector used for? It seems to me like a movement that one of the circles would perform to get un-penetrated.

    Read the article

  • Points around a circumference C#

    - by Lautaro
    I'm trying to get a list of vectors that go around a circle, but I keep getting the circle to go around several times. I want one circle, with the dots placed along its circumference. I want the first dot to start at 0 and the last dot to end just before 360. Also, I need to be able to calculate the spacing from the amount of points.

        List<Vector2> pointsInPath = new List<Vector2>();
        private int ammountOfPoints = 5;
        private int blobbSize = 200;
        private Vector2 topLeft = new Vector2(100, 100);
        private Vector2 blobbCenter;
        private int endAngle = 50;
        private int angleIncrementation;

        public Blobb()
        {
            blobbCenter = new Vector2(blobbSize / 2, blobbSize / 2) + topLeft;
            angleIncrementation = endAngle / ammountOfPoints;
            for (int i = 0; i < ammountOfPoints; i++)
            {
                pointsInPath.Add(getPointByAngle(i * angleIncrementation, 100, blobbCenter));
                // pointsInPath.Add(getPointByAngle(i * angleIncrementation, blobbSize / 2, blobbCenter));
            }
        }

        private Vector2 getPointByAngle(float angle, float distance, Vector2 centre)
        {
            return new Vector2((float)(distance * Math.Cos(angle)), (float)(distance * Math.Sin(angle))) + centre;
        }
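
    The likely key detail is that Math.Cos and Math.Sin take radians, not degrees, so for n evenly spaced points the step is 2*pi/n, with point i at angle i * step. A sketch of the arithmetic (in C++ for illustration; it ports directly to the C# above):

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        // n points evenly spaced around a circle, starting at angle 0 and
        // ending just before a full revolution.
        std::vector<Vec2> circlePoints(Vec2 centre, float radius, int n) {
            std::vector<Vec2> pts;
            pts.reserve(n);
            const float step = 2.0f * 3.14159265f / n;   // radians between consecutive points
            for (int i = 0; i < n; ++i) {
                float a = i * step;
                pts.push_back({ centre.x + radius * std::cos(a),
                                centre.y + radius * std::sin(a) });
            }
            return pts;
        }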

    Read the article

  • Make Gameobject Stand On Surface Facing Certain Direction

    - by Julian
    I want to make a biped character stand on any surface I click on. Surfaces have up vectors of any of positive or negative X, Y, Z. So imagine a cube with each face being a GameObject whose up vector points directly away from the cube. If my character is facing "forward" and I click on a surface which is to the left or right of me (left or right walls), I want my character to now be standing on that surface but still be facing in the direction he initially was. If I click on a wall which is in the forward path of my character, I want him to now be standing on that surface, with his forward now being what was once "up" relative to my character. Here is the code I am working with now:

        void Update()
        {
            if (Input.GetMouseButtonUp(0))
            {
                RaycastHit hit;
                var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                if (Physics.Raycast(ray, out hit))
                {
                    Vector3 upVectBefore = transform.up;
                    Vector3 forwardVectBefore = transform.forward;
                    Quaternion rotationVectBefore = transform.rotation;
                    Vector3 hitPosition = hit.transform.position;
                    transform.position = hitPosition;

                    float lookDifference = Vector3.Distance(hit.transform.up, forwardVectBefore);
                    if (Vector3.Distance(hit.transform.up, upVectBefore) < .23) // same normal
                    {
                        transform.rotation = rotationVectBefore;
                    }
                    else if (lookDifference > 1.412 && lookDifference <= 1.70607) // side wall
                    {
                        transform.up = hit.transform.up;
                        transform.forward = forwardVectBefore;
                    }
                    else // head-on wall
                    {
                        transform.up = hit.transform.up;
                        transform.forward = upVectBefore;
                    }
                }
            }
        }

    The first case ("same normal") works fine; however, the other two do not work as I would like them to. Sometimes my character is lying down on the surface, or on the wrong side of the surface. Does anyone know a nice way of solving this problem?

    Read the article

  • Matrices: Arrays or separate member variables?

    - by bjz
    I'm teaching myself 3D maths and in the process building my own rudimentary engine (of sorts). I was wondering what would be the best way to structure my matrix class. There are a few options. Separate member variables:

        struct Mat4 {
            float m11, m12, m13, m14,
                  m21, m22, m23, m24,
                  m31, m32, m33, m34,
                  m41, m42, m43, m44;
            // methods
        };

    A multi-dimensional array:

        struct Mat4 {
            float m[4][4];
            // methods
        };

    An array of vectors:

        struct Mat4 {
            Vec4 m[4];
            // methods
        };

    I'm guessing there would be positives and negatives to each. From 3D Math Primer for Graphics and Game Development, 2nd Edition, p. 155:

    Matrices use 1-based indices, so the first row and column are numbered 1. For example, a12 (read "a one two," not "a twelve") is the element in the first row, second column. Notice that this is different from programming languages such as C++ and Java, which use 0-based array indices. A matrix does not have a column 0 or row 0. This difference in indexing can cause some confusion if matrices are stored using an actual array data type. For this reason, it's common for classes that store small, fixed-size matrices of the type used for geometric purposes to give each element its own named member variable, such as float a11, instead of using the language's native array support with something like float elem[3][3].

    So that's one vote for method one. Is this really the accepted way to do things? It seems rather unwieldy if the only benefit is sticking with the conventional math notation.
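
    A possible middle ground (a sketch): store a plain array but expose a 1-based, math-style accessor, keeping the book's a11..a44 notation without sixteen named members.

        struct Mat4 {
            float m[4][4];
            // 1-based accessor matching the book's notation: a(1,2) is row 1, column 2.
            float& a(int row, int col) { return m[row - 1][col - 1]; }
        };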

    Read the article

  • XNA 3D coordinates seem off

    - by Peteyslatts
    I'm going through a book, and the example it gave me seems like it should work, but when I try to implement it, it falls short. My Camera class takes three vectors in to generate the View and Projection matrices. I'm giving it a position vector of (0,0,5), a target vector of Vector.Zero and a top vector (which way is up) of Vector.Up. My three vertices are placed at (0,1,0), (-1,-1,0), (1,-1,0). It seems like it should work, because the vertices are centered around the origin and that's where I'm telling the camera to look; but when I run the game, the only way to get the camera to see the vertices is to set its position to (0,0,-5), and even then the triangle is skewed. Not sure what's wrong here. Any suggestions would be helpful. Just to make sure I've given you guys everything (I don't think these are important, as the problem seems to be related to the coordinates, not the ability of the game to draw them): I'm using a VertexBuffer and a BasicEffect. My render code is as follows:

        effect.World = Matrix.Identity;
        effect.View = camera.view;
        effect.Projection = camera.projection;
        effect.VertexColorEnabled = true;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, verts, 0, 1);
        }

    Read the article

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem. Vertex shader:

        struct Vertex
        {
            float3 position : POSITION;
        };

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        cbuffer Matrices
        {
            matrix projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), projection);
            output.light_position = output.position; // We simply pass the same vector in screenspace through different semantics.
            return output;
        }

    And a simple pixel shader to go along with it:

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        float4 RenderPixelShader(Pixel input) : SV_Target
        {
            // At this point, (input.position.z / input.position.w) is a normal depth value.
            // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
            // If the primitive is touching the near plane, it very quickly goes to 0.
            return (0.0f).rrrr;
        }

    How is it possible to make the hardware treat light_position in the same way that position is treated between the vertex and pixel shaders? EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.

    Read the article

  • Writing a program in C++ and I need help [migrated]

    - by compscinoob
    So I am new to this. I am trying to write a program with a function double_product(vector<double> a, vector<double> b) that computes the scalar product of two vectors. The scalar product is a0*b0 + a1*b1 + ... + a(n-1)*b(n-1). Here is what I have. It is a mess, but I am trying!

        #include <iostream>
        #include <vector>
        using namespace std;

        class Scalar_product
        {
        public:
            Scalar_product(vector<double> a, vector<bouble> b);
        };

        double scalar_product(vector<double> a, vector<double> b)
        {
            double product = 0;
            for (int i = 0; i <= a.size() - 1; i++)
                for (int i = 0; i <= b.size() - 1; i++)
                    product = product + (a[i]) * (b[i]);
            return product;
        }

        int main()
        {
            cout << product << endl;
            return 0;
        }
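
    A corrected sketch of the free function (one loop with a shared index, no class needed; it assumes the two vectors have equal length):

        #include <cstddef>
        #include <iostream>
        #include <vector>

        double scalar_product(const std::vector<double>& a, const std::vector<double>& b) {
            double product = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i)   // one index walks both vectors in step
                product += a[i] * b[i];
            return product;
        }

        int main() {
            std::vector<double> a = {1.0, 2.0, 3.0};
            std::vector<double> b = {4.0, 5.0, 6.0};
            std::cout << scalar_product(a, b) << std::endl;  // 1*4 + 2*5 + 3*6 = 32
            return 0;
        }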

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on creating a 2D map prototype, and I've come to the rendering part of it. I have a tilesheet where each tile is 30x30 pixels, with a 1 px border to delimit them. In SFML the usual method of drawing a part of a tilesheet is declaring an IntRect with the rectangle coordinates and then calling the setTextureRect() method on a sprite. In a small game that would work, but I have well over 45 tiles, and I'm adding more every day; I can't declare 45 IntRects, one for every material. The map is not optimized yet; it would get even worse if I had to call the setTextureRect() method on top of declaring 45 IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically, I have a Tile class. I create multiple instances of tiles (vectors), and each tile has a position and a material. I parse a map file, and as I parse it I set the materials of the map according to the parsed map file, and all I need to do is render. Basically I need to do something like this:

        switch (tile.getMaterial())
        {
            case GRASS:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            case WATER:
                material_sprite.setTextureRect(something);
                window.draw(material_sprite);
                break;
            // handle more cases
        }
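
    A common trick is to derive the rectangle from the material's index instead of switching on it (a sketch; it assumes the materials are numbered 0..N-1 row-major in the order they appear on the sheet, with the first tile starting at pixel (0,0)). With 30x30 tiles and a 1 px border, each cell starts at a multiple of 31:

        #include <SFML/Graphics.hpp>

        // 'columns' is the number of tiles per row on the sheet.
        sf::IntRect materialRect(int material, int columns) {
            const int cell = 30 + 1;   // tile size plus the 1 px border
            int col = material % columns;
            int row = material / columns;
            return sf::IntRect(col * cell, row * cell, 30, 30);
        }

        // usage: material_sprite.setTextureRect(materialRect(tile.getMaterial(), columns));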

    Read the article

  • Problems with 3D transformation - (SharpDX)

    - by Morphex
    First of all, I have been trying to get this right for a couple of days already; I have read so much info and still fail miserably to understand it. So I am going to tell you that, even though I have done a fair amount of research myself, I have failed to implement it. I am trying to create a generic camera class for a game engine of sorts (for research purposes only), and the thing is I have no idea how to go about it. I have read about quaternions and matrices, but when it comes to the actual implementation I suck at it. SharpDX already has matrices and quaternions implemented, so no big deal on the math behind them. How in the world would I go about creating a camera? I have seen so many camera examples and still can't make one that works as expected. I would like to implement different types too (orbital, 6DoF, FPS). So what is needed for a camera? Up, Forward and Right vectors, I read, are needed; also a quaternion for rotations, and View and Projection matrices. I understand that an FPS camera, for instance, only rotates around the world Y axis and the Right axis of the camera; a 6DoF camera always rotates around its own axes; and the orbital one just translates by a set distance and always looks at a fixed target point. The concepts are there; now implementing this is not trivial for me. Can anyone point out what I am missing, what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts. Thank you for reading, from a frustrated coder.
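
    As a starting point, the standard look-at construction shows what the Up, Forward and Right vectors are for (a generic C++ sketch, independent of SharpDX; row-vector convention with the translation in the bottom row):

        #include <cmath>

        struct Vec3 {
            float x, y, z;
            Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        };

        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        static Vec3 cross(Vec3 a, Vec3 b) {
            return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
        }

        static Vec3 normalize(Vec3 v) {
            float l = std::sqrt(dot(v, v));
            return {v.x / l, v.y / l, v.z / l};
        }

        // Right-handed view matrix from eye/target/up; the camera's basis vectors
        // fill the columns, the negated eye projections fill the bottom row.
        void lookAt(Vec3 eye, Vec3 target, Vec3 worldUp, float out[4][4]) {
            Vec3 f = normalize(target - eye);        // camera Forward
            Vec3 r = normalize(cross(f, worldUp));   // camera Right
            Vec3 u = cross(r, f);                    // camera Up (unit by construction)
            float m[4][4] = {
                { r.x, u.x, -f.x, 0.0f },
                { r.y, u.y, -f.y, 0.0f },
                { r.z, u.z, -f.z, 0.0f },
                { -dot(r, eye), -dot(u, eye), dot(f, eye), 1.0f },
            };
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j) out[i][j] = m[i][j];
        }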

    Read the article

  • How to handle notifications to several partial views of the same model?

    - by Seki
    I am working on refactoring an old simulation of a Turing machine. The application uses a class that contains the state and the logic of program execution, and several panels to display the tape representation and show the state, messages, and the GUI controls (start, stop, program listing, ...). I would like to refactor it using the MVC architecture, which was not used originally: the Frame is the only way to get access to the different panels, and there is also strong coupling between the "engine" class and the GUI, with updates in the style of frame.displayPanel.state.setText("halted"); or frame.outputPanel.messages.append("some thing");. It looks to me that I should put the state-related code into an observable model class and make the different panels observers. My problem is that the Java Observable class only provides a global notification to the observers, while I would prefer not to refresh every observer every time, but only when the part it specifically observes has changed. I am thinking of implementing several vectors of listeners myself (one for the state/position, one for the output messages, ...) but I feel like I'm reinventing the wheel. I also thought about adding some flags that the observers could check, like isNewMessageAvailable(), hasTapeMoved(), etc., but that also sounds like approximate design. By the way, is it OK to keep the fetch/execute loop in the model, or should I move it somewhere else? We can think about it in a theoretical, ideal way, as I am completely revamping this small application.
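
    One way to get per-topic notifications without reinventing much (a sketch in C++ for illustration; the same shape works as Java listener interfaces): keep one listener list per event kind, so each panel subscribes only to the changes it displays.

        #include <functional>
        #include <string>
        #include <vector>

        class MachineModel {
        public:
            void onStateChanged(std::function<void(const std::string&)> f) { stateListeners.push_back(f); }
            void onMessage(std::function<void(const std::string&)> f) { messageListeners.push_back(f); }

            void setState(const std::string& s) {
                state = s;
                for (auto& f : stateListeners) f(state);   // only state observers are refreshed
            }
            void postMessage(const std::string& m) {
                for (auto& f : messageListeners) f(m);     // only message observers are refreshed
            }

        private:
            std::string state;
            std::vector<std::function<void(const std::string&)>> stateListeners;
            std::vector<std::function<void(const std::string&)>> messageListeners;
        };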

    Read the article

  • How do I get confidence intervals without inverting a singular Hessian matrix?

    - by AmalieNot
    Hello. I recently posted this to reddit and it was suggested I come here, so here I am. I'm a student working on an epidemiology model in R, using maximum likelihood methods. I created my negative log-likelihood function. It's sort of gross looking, but here it is:

        NLLdiff = function(v1, CV1, v2, CV2, st1 = (czI01 - czV01), st2 = (czI02 - czV02),
                           st01 = czI01, st02 = czI02, tt1 = czT01, tt2 = czT02) {
            prob1 = (1 + v1 * CV1 * tt1)^(-1/CV1)
            prob2 = (1 + v2 * CV2 * tt2)^(-1/CV2)
            -(sum(dbinom(st1, st01, prob1, log = T)) + sum(dbinom(st2, st02, prob2, log = T)))
        }

    The reason the first line looks so awful is that most of the data it takes is inputted there. czI01, for example, is already declared. I did this simply so that my later calls to the function don't all have to have awful vectors in them. I then optimized for CV1, CV2, v1 and v2 using mle2 (library bbmle). That's also a bit gross looking:

        ml.cz.diff = mle2(NLLdiff, start = list(v1 = vguess, CV1 = cguess, v2 = vguess, CV2 = cguess),
                          method = "L-BFGS-B", lower = 0.0001)

    Now, everything works fine up until here. ml.cz.diff gives me values that I can turn into a plot that reasonably fits my data. I also have several different models, and can get AICc values to compare them. However, when I try to get confidence intervals around v1, CV1, v2 and CV2, I have problems. Basically, I get a negative bound on CV1, which is impossible as it actually represents a square number in the biological model, as well as some warnings. The warnings are here: http://i.imgur.com/B3H2l.png. Is there a better way to get confidence intervals? Or, really, a way to get confidence intervals that make sense here? What I see happening is that, by coincidence, my Hessian matrix is singular for some values in the optimization space. But, since I'm optimizing over 4 variables and don't have overly extensive programming knowledge, I can't come up with a good method of optimization that doesn't rely on the Hessian. I have googled the problem; it suggested that my model is bad, but I'm reconstructing some work done before, which suggests that my model really isn't awful (the plots I make using ml.cz.diff look like the plots from the original work). I have also read the relevant parts of the manual, as well as Bolker's book Ecological Models and Data in R. I have also tried different optimization methods, which resulted in a longer run time but the same errors. The "SANN" method didn't finish running within an hour, so I didn't wait around to see the result. tl;dr: my confidence intervals are bad; is there a relatively straightforward way to fix them in R? My vectors are:

        czT01 = c(5, 5, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 50, 50)
        czT02 = c(5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 25, 25, 25, 25, 25, 50, 50, 50, 50, 50, 75, 75, 75, 75, 75)
        czI01 = c(25, 24, 22, 22, 26, 23, 25, 25, 25, 23, 25, 18, 21, 24, 22, 23, 25, 23, 25, 25, 25)
        czI02 = c(13, 16, 5, 18, 16, 13, 17, 22, 13, 15, 15, 22, 12, 12, 13, 13, 11, 19, 21, 13, 21, 18, 16, 15, 11)
        czV01 = c(1, 4, 5, 5, 2, 3, 4, 11, 8, 1, 11, 12, 10, 16, 5, 15, 18, 12, 23, 13, 22)
        czV02 = c(0, 3, 1, 5, 1, 6, 3, 4, 7, 12, 2, 8, 8, 5, 3, 6, 4, 6, 11, 5, 11, 1, 13, 9, 7)

    and I get my guesses by:

        v = -log((c(czI01, czI02) - c(czV01, czV02)) / c(czI01, czI02)) / c(czT01, czT02)
        vguess = mean(v)
        cguess = var(v) / vguess^2

    It's also possible that I'm doing something else completely wrong, but my results seem reasonable, so I haven't caught it.

    Read the article

  • Detecting Acceleration in a car (iPhone Accelerometer)

    - by TheGazzardian
    Hello, I am working on an iPhone app where we are trying to calculate the acceleration of a moving car. Similar apps have accomplished this (Dynolicious), but the difference is that this app is designed to be used during general city driving, not on a drag strip. This leads us to one big concern that Dynolicious was luckily able to avoid: hills. Yes, hills. There are two important stages to this: calibration, and actual driving. Our initial attempt was simple and suffered the consequences. During the calibration stage, I took the average force on the phone, and during driving I just subtracted the average force from the current force to get the current acceleration this frame. The problem with this is that the typical car receives much more force than just the forward force; everything from turning to potholes was causing the values to go out of sync with what was really happening. The next attempt was to add the condition that the iPhone must be oriented in such a way that the screen faces toward the back of the car. Using this method, I attempted to follow only the force on the z-axis, but this obviously led to problems unless the iPhone was oriented directly upright, because of gravity. Some trigonometry later, and I had managed to work gravity out of the equation, so that the car was actually being read very, very well by the iPhone. Until I hit a slope. As soon as the angle of the car changed, suddenly I was receiving accelerations and decelerations that didn't make sense, and we were once again going out of sync. Talking with someone a lot smarter than me at math led to a solution that I have been trying to implement for longer than I would like to admit. Its steps are as follows:

    1) During calibration, measure gravity as a vector instead of a size. Store that vector.
    2) When the car initially moves forward, take the vector of motion and subtract gravity. Use this as the forward momentum. (Ignore, for now, the user cases where this will be difficult, and let's concentrate on the math.)
    3) From the forward vector and the gravity vector, construct a plane.
    4) Whenever a force is received, project it onto said plane to get rid of sideways force, etc. (a sketch of this projection follows at the end of this question).
    5) Then, use that force, the known magnitude of gravity, and the known direction of forward motion to essentially solve a triangle to get the forward vector.

    The problem that is causing the most difficulty in this new system is not step 5, where I have gotten to the point where all the numbers look as they should. The difficult part is actually the detection of the forward vector. I am selecting vectors whose magnitude exceeds gravity, and from there averaging them and subtracting gravity. (I am doing some error checking to make sure that I am not using a force just because the iPhone accelerometer was off by a bit, which happens more frequently than I would like.) But if I plot the vectors that I am using, they actually vary by an angle of about 20-30 degrees, which can lead to some strong inaccuracies. The end result is that the app is even more inaccurate now than before. So basically, all you math and iPhone brains out there: any glaring errors? Any potentially better solutions? Any experience that could be useful at all? Award: offering a bounty of $250 to the first answer that leads to a solution.
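
    Step 4, projecting a measured force onto the gravity-forward plane, is standard vector arithmetic (a sketch; n is the plane's unit normal, e.g. normalize(cross(gravity, forward))):

        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        // Remove the component of v along the unit normal n, leaving the
        // projection of v onto the plane with normal n.
        Vec3 projectOntoPlane(Vec3 v, Vec3 n) {
            float d = dot(v, n);
            return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
        }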

    Read the article
