Search Results

Search found 215 results on 9 pages for 'coords'.

Page 7/9 | < Previous Page | 3 4 5 6 7 8 9  | Next Page >

  • how to load an image to a grid using pygame, instead of just using a fill color?

    - by yao jiang
    I am trying to create a "map of a city" using pygame. I want to be able to put images of buildings at specific grid coords rather than just filling them in with a color. This is how I am creating this map grid:

        def clear():
            for r in range(rows):
                for c in range(rows):
                    if r % 3 == 1 and c % 3 == 1:
                        color = brown
                        grid[r][c] = 1
                    else:
                        color = white
                        grid[r][c] = 0
                    pygame.draw.rect(screen, color,
                                     [(margin + width) * c + margin,
                                      (margin + height) * r + margin,
                                      width, height])
            pygame.display.flip()

    Now how do I put images of buildings in those brown colored grids at those specific locations? I've tried some of the samples online but can't seem to get them to work. Any help is appreciated. If anyone has a good source for free sprites that I can use for pygame, please let me know. Thanks!
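
    A minimal sketch of one way to do this, assuming the grid, margin, width, height, rows and screen names from the code above; "building.png" is a placeholder filename. Load the image once, scale it to the cell size, and blit it at the same coords the rects are drawn at:

        import pygame

        # assumes grid/margin/width/height/rows/screen from the question's code
        building = pygame.image.load("building.png").convert_alpha()
        building = pygame.transform.scale(building, (width, height))

        for r in range(rows):
            for c in range(rows):
                if grid[r][c] == 1:  # a brown "building" cell
                    screen.blit(building, ((margin + width) * c + margin,
                                           (margin + height) * r + margin))
        pygame.display.flip()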

    Read the article

  • Handling button presses in Gtk2::Image objects

    - by willert
    I've been trying to get a Gtk2::Image object in this perl Gtk2 application to react to button presses, but to no avail. The image shows as expected, but the button events don't get handled. What am I missing?

        my $img = Gtk2::Image->new_from_file( $file );
        $img->set_property( sensitive => 1 );
        $img->can_focus( 1 );
        $img->set_events([qw/ button-press-mask button-release-mask /]);
        $img->signal_connect( 'button-press-event' => sub {
            my ( $self, $event ) = @_;
            print STDERR "Coords: ", $event->get_coords;
            return;
        });
        $window->add( $img );
        $window->show_all;
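
    For reference, a hedged sketch of one common approach, shown here in PyGTK (the Python binding to the same GTK2 API): image widgets are windowless and don't receive button events themselves, so wrap the image in an event box. "picture.png" is a placeholder filename.

        import gtk

        def on_press(widget, event):
            print("Coords: %s %s" % event.get_coords())
            return False

        img = gtk.Image()
        img.set_from_file("picture.png")   # placeholder file
        box = gtk.EventBox()               # EventBox has its own window, so it receives button events
        box.add(img)
        box.connect("button-press-event", on_press)

        window = gtk.Window()
        window.add(box)
        window.show_all()
        gtk.main()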

    Read the article

  • OpenLayers .containsPoint after pan

    - by Adrian
    I seem to have hit a bug, or I have overlooked something. I've written some code that enumerates through all the vector features on an OpenLayers Vector layer, to check if the mouse is inside a vector feature; if so, it displays some info based on the feature. I had to write my own methods to do this because the existing OpenLayers controls (Select etc.) stop after finding one feature under the mouse, and I need to handle the possibility of several features being stacked on top of one another. My problem is that the .containsPoint method seems to be using coords from before a 'pan'. After zooming in or out, the geometry seems to be in the right place and .containsPoint works correctly when I wave the mouse over the map. Do I need to do something after the map has been panned to update something (the feature's geometry)?

    Read the article

  • How to use Haar wavelet to detect LINES on an image?

    - by Ole Jak
    So I have an image like this. I want to get something like this (I haven't drawn all the lines I want, but I hope you get my idea). I want to use SURF (Speeded Up Robust Features, a robust image descriptor first presented by Herbert Bay et al. in 2006), or something else that is based on sums of 2D Haar wavelet responses and makes efficient use of integral images, for finding all straight lines on an image. I want to get the start and end points of the lines, in pixel coords relative to the picture. So on this picture: find all lines between tiles, and those 2 black lines on top. Is there any such code example (with line-search capability) to start from? I love C and C++, but any other readable code will probably work for me =)
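
    Not SURF-based, but as a possible starting point, a minimal sketch using OpenCV's probabilistic Hough transform, which likewise returns start and end points of line segments in pixel coords; "tiles.png" and the threshold values are assumptions to tune:

        import cv2
        import numpy as np

        img = cv2.imread("tiles.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)  # edge map feeds the line search
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                threshold=80, minLineLength=30, maxLineGap=10)
        for line in lines:
            x1, y1, x2, y2 = line[0]     # endpoints, in image pixel coords
            print((x1, y1), (x2, y2))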

    Read the article

  • 2D World design question

    - by Maciek
    I'm facing a problem which is probably extremely common in game design. Let's assume that we've got a 2D world:
    - The world's size is an M x N rect
    - The world may contain some items in it
    - The items have (x,y) coords
    - The world can be browsed via a window that is physically (m x n) large
    - The browser window can be zoomed in / out
    - The browser window can be panned up/down + left/right, while staying within the extents of the world's rect
    How should I go about implementing this? I'm especially concerned about the browser window. Can anyone recommend any good reads? This is not homework - it's more of a task which I've set myself to complete.
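
    A minimal sketch (all names assumed) of the usual approach: keep the browser window as a camera rect in world units, clamp pans against the world rect, and map item coords through it when drawing:

        class Camera:
            def __init__(self, world_w, world_h, win_w, win_h):
                self.world_w, self.world_h = world_w, world_h
                self.win_w, self.win_h = win_w, win_h
                self.x = self.y = 0.0   # window's top-left corner, in world units
                self.zoom = 1.0         # screen pixels per world unit

            def view_size(self):
                return self.win_w / self.zoom, self.win_h / self.zoom

            def pan(self, dx, dy):
                vw, vh = self.view_size()
                # clamp so the window stays inside the world rect
                self.x = min(max(self.x + dx, 0.0), max(0.0, self.world_w - vw))
                self.y = min(max(self.y + dy, 0.0), max(0.0, self.world_h - vh))

            def world_to_screen(self, wx, wy):
                return (wx - self.x) * self.zoom, (wy - self.y) * self.zoom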

    Read the article

  • pgf/tikz: String Symbols as Input Coordinates

    - by red_lynx
    Hi all, I'm new to pgf, so I was trying out some examples from the pgfplots manual. One example is especially relevant for my current task but, alas, it would not compile. Here is the code:

        \documentclass[11pt]{article}
        \usepackage{tikz}
        \usepackage{pgfplots}
        \begin{document}
        \begin{tikzpicture}
        \begin{axis}[symbolic x coords={a,b,c,d,e,f,g,h,i}]
        \addplot+[smooth] coordinates {
            (a,42) (b,50) (c,80) (f,60) (g,62) (i,90)};
        \end{axis}
        \end{tikzpicture}
        \end{document}

    The compiler quits with the following error:

        ! Package PGF Math Error: Could not parse input 'a' as a floating point number, sorry. The unreadable part was near 'a'.

    I have no clue how to correct this behavior. Other plots (smooth, scatter, bar) which contain only numerical data compile fine. Could anybody give me a hint? Cheers K.

    Read the article

  • R: Creating Custom Shapes with ggplot

    - by Brandon Bertelsen
    Full Disclosure: This was also posted to the ggplot2 mailing list. (I'll update if I receive a response.) I'm a bit lost on this one; I've tried messing around with geom_polygon, but successive attempts seem worse than the previous. The image that I'm trying to recreate is this; the colours are unimportant, but the positions are. In addition to creating this, I also need to be able to label each element with text. At this point, I'm not expecting a solution (although that would be ideal), but pointers or similar examples would be immensely helpful. One option that I played with was hacking scale_shape and using 1,1 as coords, but I was stuck with being able to add labels. The reason I'm doing this with ggplot is because I'm generating scorecards on a company-by-company basis. This is only one plot in a 4 x 10 grid of other plots (using pushViewport). Note: The top tier of the pyramid could also be a rectangle of similar size.

    Read the article

  • Making only part of an image a hot spot link.

    - by Rkstarcass
    I have a large image and only want a rectangular portion of it to be a hot spot link that goes "Home". Here is my code that is not working. Any help on what I am missing or have done wrong? Thanks!

        <div id="defaultPic" runat="server">
            <img src="Images/boardwise-home.jpg" alt="Boardwise Home" usemap="#map" />
            <map name="map" id="boardwise-home">
                <area shape="rect" coords="32,23,32,105,275,105,275,23"
                      href="~/default.aspx" title="Boardwise Home" alt="Home" />
            </map>

    Read the article

  • C typedef struct uncertainty.

    - by Emanuel Ey
    Consider the following typedef struct in C:

        21: typedef struct source{
        22:     double ds;              //ray step
        23:     double rx, zx;          //source coords
        24:     double rbox1, rbox2;    //the box that limits the range of the rays
        25:     double freqx;           //source frequency
        26:     int64_t nThetas;        //number of launching angles
        27:     double theta1, thetaN;  //first and last launching angle
        28: } source_t;

    I get the error:

        globals.h:21: error: redefinition of 'struct source'
        globals.h:28: error: conflicting types for 'source_t'
        globals.h:28: note: previous declaration of 'source_t' was here

    I've tried using other formats for this definition:

        struct source{ ... };
        typedef struct source source_t;

    and

        typedef struct{ ... }source_t;

    Both return the same error. Why does this happen? It looks perfectly right to me.

    Read the article

  • I'm an idiot/blind and I can't find why I'm getting a list index error. Care to take a look at these 20 or so lines?

    - by Meff
    Basically it's supposed to take a set of coordinates and return a list of coordinates of its neighbors. However, when it hits here:

        if result[i][0] < 0 or result[i][0] >= board.dimensions:
            result.pop(i)

    when i is 2, it gives me an out-of-index error. I can manage to have it print result[2][0], but at the if statement it throws the error. I have no clue how this is happening, and if anyone could shed any light on this problem I'd be forever in debt.

        def neighborGen(row, col, board):
            """ returns lists of coords of neighbors, in order of up, down, left, right """
            result = []
            result.append([row-1, col])
            result.append([row+1, col])
            result.append([row, col-1])
            result.append([row, col+1])

            # prune off invalid neighbors (such as (0,-1), etc etc)
            for i in range(len(result)):
                if result[i][0] < 0 or result[i][0] >= board.dimensions:
                    result.pop(i)
                if result[i][1] < 0 or result[i][1] >= board.dimensions:
                    result.pop(i)
            return result
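
    For what it's worth, a sketch of the usual diagnosis and fix: result.pop(i) shrinks the list while the loop still runs over the original range(len(result)), so later indices (and the second if on a just-popped index) run past the end. Building a filtered list avoids mutating while iterating; names follow the question's code:

        def neighborGen(row, col, board):
            """ returns coords of neighbors, in order of up, down, left, right """
            candidates = [[row - 1, col], [row + 1, col],
                          [row, col - 1], [row, col + 1]]
            # keep only coords inside the board
            return [[r, c] for r, c in candidates
                    if 0 <= r < board.dimensions and 0 <= c < board.dimensions]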

    Read the article

  • dynamically changing the selector text

    - by kelly
    I have a 'next' button which fades out a div, shows another, changes out a graphic and then... I want it to change the actual ID of the 'next' button, but neither .html nor replaceWith seem to be working. I have this:

        $(document).ready(function(){
            $('#portfolio').fadeTo(500, 0.25);
            $('#account')
                .animate({width:"10.1875em", height:"11.1875em", duration:'medium'});
            $('#next2_btn').click(function(){
                $('#content').fadeTo(300, 0.0, function() {
                    $('#content2').show(300, function() {
                        $('#next2_btn').$('#next2_btn'.value).html('<area shape="rect" coords="826,935,906,971" id="next3_btn" href="#">')
                        $('#account').fadeTo(500, 1.0)
                            .animate({marginLeft:"220px", width:"2em", height:"2em", duration:'medium'})
                            .animate({marginLeft:"400px", marginTop:"35px", width:"7em", height:"7em", duration:"medium"},
                                     'medium', 'linear', function() {
                                $('#statusGraphic').replaceWith('<img src="state2_138x28.gif">');
                            })
                            .fadeTo(500, 0.5);
                        $('#portfolio')
                            .fadeTo(500, 1.5)
                            .animate({marginLeft:"-175px", width:"2em", height:"2.5em", duration:'medium'})
                            .animate({marginLeft:"-330px", width:"8.5em", height:"9.9375em", duration:'medium'});
                    });
                })
            })

    Read the article

  • Creating ActionEvent object for CustomButton in Java

    - by Crystal
    For a hw assignment, we were supposed to create a custom button to get familiar with Swing and responding to events. We were also to make this button an event source, which confuses me. I have an ArrayList to keep track of listeners that would register to listen to my CustomButton. What I am getting confused on is how to notify the listeners. My teacher hinted at having a notify method and overriding actionPerformed, which I tried doing, but then I wasn't sure how to create an ActionEvent object looking at the constructor documentation. The source, id, and string all confuse me. Any help would be appreciated. Thanks! code:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;
        import java.util.List;
        import java.util.ArrayList;

        public class CustomButton {
            public static void main(String[] args) {
                EventQueue.invokeLater(new Runnable() {
                    public void run() {
                        CustomButtonFrame frame = new CustomButtonFrame();
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                    }
                });
            }

            public void addActionListener(ActionListener al) {
                listenerList.add(al);
            }

            public void removeActionListener(ActionListener al) {
                listenerList.remove(al);
            }

            public void actionPerformed(ActionEvent e) {
                System.out.println("Button Clicked!");
            }

            private void notifyListeners() {
                ActionEvent event = new ActionEvent(CONFUSED HERE!!!!;
                for (ActionListener action : listenerList) {
                    action.actionPerformed(event);
                }
            }

            List<ActionListener> listenerList = new ArrayList<ActionListener>();
        }

        class CustomButtonFrame extends JFrame {
            // constructor for CustomButtonFrame
            public CustomButtonFrame() {
                setTitle("Custom Button");
                CustomButtonSetup buttonSetup = new CustomButtonSetup();
                this.add(buttonSetup);
                this.pack();
            }
        }

        class CustomButtonSetup extends JComponent {
            public CustomButtonSetup() {
                ButtonAction buttonClicked = new ButtonAction();
                this.addMouseListener(buttonClicked);
            }

            // because frame includes borders and insets, use this method
            public Dimension getPreferredSize() {
                return new Dimension(200, 200);
            }

            public void paintComponent(Graphics g) {
                Graphics2D g2 = (Graphics2D) g;

                // first triangle coords
                int x[] = new int[TRIANGLE_SIDES];
                int y[] = new int[TRIANGLE_SIDES];
                x[0] = 0;   y[0] = 0;
                x[1] = 200; y[1] = 0;
                x[2] = 0;   y[2] = 200;
                Polygon firstTriangle = new Polygon(x, y, TRIANGLE_SIDES);

                // second triangle coords
                x[0] = 0;   y[0] = 200;
                x[1] = 200; y[1] = 200;
                x[2] = 200; y[2] = 0;
                Polygon secondTriangle = new Polygon(x, y, TRIANGLE_SIDES);

                g2.drawPolygon(firstTriangle);
                g2.setColor(firstColor);
                g2.fillPolygon(firstTriangle);
                g2.drawPolygon(secondTriangle);
                g2.setColor(secondColor);
                g2.fillPolygon(secondTriangle);

                // draw rectangle 10 pixels off border
                int s1[] = new int[RECT_SIDES];
                int s2[] = new int[RECT_SIDES];
                s1[0] = 5;   s2[0] = 5;
                s1[1] = 195; s2[1] = 5;
                s1[2] = 195; s2[2] = 195;
                s1[3] = 5;   s2[3] = 195;
                Polygon rectangle = new Polygon(s1, s2, RECT_SIDES);
                g2.drawPolygon(rectangle);
                g2.setColor(thirdColor);
                g2.fillPolygon(rectangle);
            }

            private class ButtonAction implements MouseListener {
                public void mousePressed(MouseEvent e) {
                    System.out.println("Click!");
                    firstColor = Color.GRAY;
                    secondColor = Color.WHITE;
                    repaint();
                }
                public void mouseReleased(MouseEvent e) {
                    System.out.println("Released!");
                    firstColor = Color.WHITE;
                    secondColor = Color.GRAY;
                    repaint();
                }
                public void mouseEntered(MouseEvent e) {}
                public void mouseExited(MouseEvent e) {}
                public void mouseClicked(MouseEvent e) {}
            }

            public static final int TRIANGLE_SIDES = 3;
            public static final int RECT_SIDES = 4;
            private Color firstColor = Color.WHITE;
            private Color secondColor = Color.DARK_GRAY;
            private Color thirdColor = Color.LIGHT_GRAY;
        }

    Read the article

  • How to save coordinates with Google Map?

    - by Pavel
    Hey everyone. I'm currently developing an app which uses tabs and a Google map. What I want to do is get the GPS positions, say 3 of them, store them in the SQL db (which I'm already doing), and then display them on the map. I already created a canvas and added it to an overlay, but those points disappear when I'm changing tabs, so I thought: is there a way to somehow store those coords with the Google map, so I can retrieve them and display them nicely whenever I'm clicking the "map tab"? Please, can anyone help?

    Read the article

  • How do I quiet image_submit_tag from params hash?

    - by Alan S
    Does anyone know how to eliminate the x and y params when you use image_submit_tag with a GET method? I have a simple search form, and I'm using GET to pass the value in the URL. When I use image_submit_tag, it also appends the x and y coords, so I get URLs like http://example.com?q=somesearchterm&x=15&y=12. When I have used submit_tag, I can use the :name => nil attribute (it was in one of Ryan Bates' Railscasts), but it doesn't seem to work for image_submit_tag. Granted, it doesn't affect functionality, but I don't need them and would like them quieted.

    Read the article

  • A 3-D grid of regularly spaced points

    - by Jack
    I want to create a list containing the 3-D coords of a grid of regularly spaced points, each as a 3-element tuple. I'm looking for advice on the most efficient way to do this. In C++, for instance, I would simply use three nested loops, one for each coordinate. In Matlab, I would probably use the meshgrid function (which would do it in one command). I've read about meshgrid and mgrid in Python, and I've also read that using numpy's broadcasting rules is more efficient. It seems to me that using the zip function in combination with the numpy broadcast rules might be the most efficient way, but zip doesn't seem to be overloaded in numpy.
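
    A minimal sketch of two routes, assuming a 0.1 spacing on [0, 1): itertools.product is the direct analogue of the three nested C++ loops, and numpy's meshgrid gives the broadcasting version:

        import itertools
        import numpy as np

        xs = ys = zs = np.arange(0.0, 1.0, 0.1)  # assumed spacing

        # pure Python: a list of 3-element tuples, like the triple loop
        points = list(itertools.product(xs, ys, zs))

        # numpy: stack the meshgrid output into an (N, 3) array
        gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
        points_np = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=-1)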

    Read the article

  • OpenGL multitexture tessellation

    - by user1715296
    I have to tessellate some surface in OpenGL with rectangular textures. Let it be a single triangle for simplicity. The textures touch each other at their sides and do not overlap. That is done by setting GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_BORDER and adjusting texture coords properly. Everything goes fine while GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER are set to GL_NEAREST, but when I want to apply GL_LINEAR filtering and/or anisotropic filtering, the following artifact appears: the textures' border pixels' alpha gradually falls to transparent, so that a line of background color is visible between neighbouring textures. How can I avoid this artifact, without merging the multiple textures into one, while linear filtering is preserved?
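
    One common workaround (not from the question) is to inset the texture coords by half a texel on each side, so GL_LINEAR never samples across the clamped border; a sketch of the adjustment, with all names assumed:

        def inset_uv(u0, v0, u1, v1, tex_w, tex_h):
            """Shrink a tile's texture rect by half a texel per side."""
            du, dv = 0.5 / tex_w, 0.5 / tex_h
            return u0 + du, v0 + dv, u1 - du, v1 - dv

        # e.g. a full 256x256 tile: sample [0.5/256 .. 1 - 0.5/256] instead of [0 .. 1]
        print(inset_uv(0.0, 0.0, 1.0, 1.0, 256, 256))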

    Read the article

  • Drawing a texture at the end of a trace (crosshair?) UDK

    - by Dave Voyles
    I'm trying to draw a crosshair at the end of my trace. If my crosshair does not hit a pawn or static mesh (e.g. just the skybox), then the crosshair stays locked on a certain point of that actor - I want to say its origin. Example: run it across a pawn, and it turns yellow and stays on that pawn. If it runs across the skybox, then it stays at one point on the box. Weird? How can I get my crosshair to stay consistent? I've included two images for reference, to help illustrate. Note: The wrench is actually my crosshair. The "X" is just a debug crosshair. Ignore that. /// Image 1 /// /// Image 2 ///

        /***************************************************************************
         * Draws the crosshair
         ***************************************************************************/
        function bool CheckCrosshairOnFriendly()
        {
            local float CrosshairSize;
            local vector HitLocation, HitNormal, StartTrace, EndTrace, ScreenPos;
            local actor HitActor;
            local MyWeapon W;
            local Pawn MyPawnOwner;

            /** Sets the PawnOwner */
            MyPawnOwner = Pawn(PlayerOwner.ViewTarget);

            /** Sets the Weapon */
            W = MyWeapon(MyPawnOwner.Weapon);

            /** If we don't have an owner, then get out of the function */
            if ( MyPawnOwner == None )
            {
                return false;
            }

            /** If we have a weapon... */
            if ( W != None)
            {
                /** Values for the trace */
                StartTrace = W.InstantFireStartTrace();
                EndTrace = StartTrace + W.MaxRange() * vector(PlayerOwner.Rotation);
                HitActor = MyPawnOwner.Trace(HitLocation, HitNormal, EndTrace, StartTrace, true, vect(0,0,0),, TRACEFLAG_Bullet);
                DrawDebugLine(StartTrace, EndTrace, 100,100,100,);

                /** Projection for the crosshair to convert 3d coords into 2d */
                ScreenPos = Canvas.Project(HitLocation);

                /** If we haven't hit any actors... */
                if ( Pawn(HitActor) == None )
                {
                    HitActor = (HitActor == None) ? None : Pawn(HitActor.Base);
                }
            }

            /** If our trace hits a pawn... */
            if ((Pawn(HitActor) == None))
            {
                /** Draws the crosshair for no one - Grey */
                CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX / 1024);
                Canvas.SetDrawColor(100,100,128,255);
                Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y - (CrosshairSize * 0.5f));
                Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27);
                return false;
            }

            /** Draws the crosshair for friendlies - Yellow */
            CrosshairSize = 28 * (Canvas.ClipY / 768) * (Canvas.ClipX / 1024);
            Canvas.SetDrawColor(255,255,128,255);
            Canvas.SetPos(ScreenPos.X - (CrosshairSize * 0.5f), ScreenPos.Y - (CrosshairSize * 0.5f));
            Canvas.DrawTile(class'UTHUD'.default.AltHudTexture, CrosshairSize, CrosshairSize, 600, 262, 28, 27);
            return true;
        }

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
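
    A sketch of one way to fake the effect, derived only from the question's own projection (Y and Z scaled by the .707 perspective factor): project the aim direction and its ground-plane perpendicular to the screen, then read a rotation plus a length/width scale off the projected vectors. As noted in the question, an axis-aligned Vector2 scale can't capture the skew, but this does give the longer-when-sideways, wider-when-north-south behaviour. All names are assumptions; sign conventions (y-down screen space) may need flipping.

        import math

        def aimer_transform(theta, tilt=math.radians(45)):
            # theta: aim angle on the ground (XZ) plane, 0 = facing right/+X
            k = math.cos(tilt)                              # the ~.707 factor
            fx, fy = math.cos(theta), math.sin(theta) * k   # projected aim direction
            px, py = -math.sin(theta), math.cos(theta) * k  # projected perpendicular
            rotation = math.atan2(fy, fx)     # rotation to hand to spritebatch
            scale = (math.hypot(fx, fy),      # scale along the arrow
                     math.hypot(px, py))      # scale across the arrow
            return rotation, scale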

    Read the article

  • Converting openGl code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question on @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with openGl, but I can imagine openGl has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            //get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            //Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            //Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            //calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();
            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him, of course. This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY,
                                          XMVECTOR& pickRayInWorldSpacePos,
                                          XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4 * 3.14;

            PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
            PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform 3D ray from view space to 3D ray in world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // Inverse of view space matrix is world space matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes:
    - The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have).
    - I hadn't used any far or near plane before, so I just made some arbitrary numbers up for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers.
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++.

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light, and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel, and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space, get the 3D ray in screen space. I did implement a simple version of the idea, which you can see in the following video: video here. Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;
            vec2 pixOrigin = pass_TexCoord; //tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);
            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;
            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);
                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;
                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible, and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.

    Read the article

  • how can I specify interleaved vertex attributes and vertex indices

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords, etc.), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex {
        public:
            float m_Position[3];   // x, y, z        // offset 0, size = 3*sizeof(float)
            float m_TexCoords[2];  // u, v           // offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];     // nx, ny, nz
            float colour[4];       // r, g, b, a
            float padding[20];     // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is said to be interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw() {
            if( ! m_bInitialized ) return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the Vertex Array Object, though. My code for creating the Vertex Array Object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n ++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n ++ ) {
            glEnableVertexAttribArray(n);
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *) NULL + vShaderArgs[n].nFieldOffset
            );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw() {
            using namespace AntiMatter;
            if( ! m_nShaderProgramId || ! m_nVaoHandle ) {
                AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
                return;
            }
            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray(0);
            glUseProgram(0);
        }

    Can anyone see errors or omissions in either the VAO creation code or rendering code? Thanks!

    Read the article

  • Arcball Problems with UDK

    - by opdude
    I'm trying to re-create an arcball example from Nehe, where an object can be rotated in a more realistic way while floating in the air (in my game the object is attached to the player at a distance, like for example the Physics Gun); however, I'm having trouble getting this to work with UDK. I have created an LGArcBall which follows the example from Nehe, and I've compared outputs from it with the example code. I think where my problem lies is what I do with the Quaternion that is returned from the LGArcBall. Currently I am taking the returned Quaternion, converting it to a rotation matrix, getting the product of that and the last rotation (set when the object is first clicked), and then turning that into a Rotator and setting it as the object's rotation. If you could point me in the right direction that would be great; my code can be found below.

        class LGArcBall extends Object;

        var Quat StartRotation;
        var Vector StartVector;
        var float AdjustWidth, AdjustHeight, Epsilon;

        function SetBounds(float NewWidth, float NewHeight)
        {
            AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f);
            AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f);
        }

        function StartDrag(Vector2D startPoint, Quat rotation)
        {
            StartVector = MapToSphere(startPoint);
        }

        function Quat Update(Vector2D currentPoint)
        {
            local Vector currentVector, perp;
            local Quat newRot;

            //Map the new point to the sphere
            currentVector = MapToSphere(currentPoint);

            //Compute the vector perpendicular to the start and current
            perp = StartVector cross currentVector;

            //Make sure our length is larger than Epsilon
            if (VSize(perp) > Epsilon)
            {
                //Return the perpendicular vector as the transform
                newRot.X = perp.X;
                newRot.Y = perp.Y;
                newRot.Z = perp.Z;
                //In the quaternion values, w is cosine (theta / 2),
                //where theta is the rotation angle
                newRot.W = StartVector dot currentVector;
            }
            else
            {
                //The two vectors coincide, so return an identity transform
                newRot.X = 0.0f;
                newRot.Y = 0.0f;
                newRot.Z = 0.0f;
                newRot.W = 0.0f;
            }
            return newRot;
        }

        function Vector MapToSphere(Vector2D point)
        {
            local float x, y, length, norm;
            local Vector result;

            //Transform the mouse coords to [-1..1] and inverse the Y coord
            x = (point.X * AdjustWidth) - 1.0f;
            y = 1.0f - (point.Y * AdjustHeight);
            length = (x * x) + (y * y);

            //If the point is mapped outside of the sphere (length > radius squared)
            if (length > 1.0f)
            {
                norm = 1.0f / Sqrt(length);
                //Return the "normalized" vector, a point on the sphere
                result.X = x * norm;
                result.Y = y * norm;
                result.Z = 0.0f;
            }
            else //It's inside of the sphere
            {
                //Return a vector to the point mapped inside the sphere
                //sqrt(radius squared - length)
                result.X = x;
                result.Y = y;
                result.Z = Sqrt(1.0f - length);
            }
            return result;
        }

        DefaultProperties
        {
            Epsilon = 0.000001f
        }

    I'm then attempting to rotate that object when the mouse is dragged, with the following update code in my PlayerController:

        //Get Mouse Position
        MousePosition.X = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.X;
        MousePosition.Y = LGMouseInterfacePlayerInput(PlayerInput).MousePosition.Y;

        newQuat = ArcBall.Update(MousePosition);
        rotMatrix = MakeRotationMatrix(QuatToRotator(newQuat));
        rotMatrix = rotMatrix * LastRot;

        LGMoveableActor(movingPawn.CurrentUseableObject).SetPhysics(EPhysics.PHYS_Rotating);
        LGMoveableActor(movingPawn.CurrentUseableObject).SetRotation(MatrixGetRotator(rotMatrix));

    Read the article

  • SVG animation along path with Raphael

    - by Toby Hede
    I have a rather interesting issue with SVG animation. I am animating along a circular path using Raphael:

        obj = canvas.circle(x, y, size);
        path = canvas.circlePath(x, y, radius);
        path = canvas.path(path); // generate path from path value string
        obj.animateAlong(path, rate, false);

    The circlePath method is one I have created myself, to generate the circle path in SVG path notation:

        Raphael.fn.circlePath = function(x, y, r) {
            var s = "M" + x + "," + (y-r) + "A"+r+","+r+",0,1,1,"+(x-0.1)+","+(y-r)+" z";
            return s;
        }

    So far, so good. This all works. I have my object (obj) animating along the circular path. BUT: the animation only works if I create the object at the same X, Y coords as the path itself. If I start the animation from any other coordinates (say, half-way along the path), the object animates in a circle of the correct radius; however, it starts the animation from the object's X,Y coordinates rather than along the path as it is displayed visually. Ideally I would like to be able to stop/start the animation - the same problem occurs on restart. When I stop then restart the animation, it animates in a circle starting from the stopped X,Y.

    Read the article

  • jQuery selector issue finding image with usemap attribute

    - by Greg K
    I have an image map in my page:

        <div id="books">
            <img src="images/books.png" width="330" height="298" border="0"
                 usemap="#map_books" />
            <map name="map_books" id="map_books" alt="books">
                <area shape="poly" coords="17,73,81,288,210,248,254,264, ..."
                      href="/about" alt="books" />
            </map>
        </div>

    I have a function that tries to find the image in the document using this map:

        function(elemId) {  // elemId = "#map_books"
            if ($(elemId).attr("tagName") == "MAP") {
                // find image using this map
                var selector = "img [usemap=\\" + elemId + "]";
                var img = $(selector);
                if (img.length == 0) {
                    console.log("Could not find image using " + selector);
                }
            }
        }

    It fails to find the image:

        Could not find image using img [usemap=\#map_books]

    I've escaped the elemId because it starts with a hash, and I've tried variations of selectors:

        $("img [usemap$=" + elemId.substring(1) + "]")
        $("img").find("[usemap=\\" + elemId + "]")

    Neither finds the image. Any ideas?

    Read the article

  • GPGPU programming with OpenGL ES 2.0

    - by Albus Dumbledore
    I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1. I am able to write the GLSL fragment shader for processing the pixels, as I've been trying out different things in an effect designer app. I am not sure, however, how I should do the other part of the task. That is, I'd like to be working on the image in image coords and then outputting the result to a texture. I am aware of the gl_FragCoord variable. As far as I understand it, it goes like this: I need to set up a view (an orthographic one, maybe?) and a quad in such a way that the pixel shader would be applied once to each pixel in the image, and so that it would be rendering to a texture or something. But how can I achieve that, considering there's depth, which may make things somewhat awkward to me... I'd be very grateful if anyone could help me with this rather simple task, as I am really frustrated with myself. UPDATE: It seems I'll have to use an FBO, getting one like this: glBindFramebuffer(...)
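
    A sketch of the FBO render-to-texture setup the update refers to, written with desktop-GL names via PyOpenGL (ES 2.0 uses the same call names); output_texture, image_width and image_height are assumed to exist already. One fragment is shaded per image pixel, and depth can simply be disabled for a full-screen quad:

        from OpenGL.GL import *

        # attach the output texture to a framebuffer object
        fbo = glGenFramebuffers(1)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, output_texture, 0)  # assumed texture id
        assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

        # map fragments 1:1 onto image pixels, no depth needed for a quad
        glViewport(0, 0, image_width, image_height)
        glDisable(GL_DEPTH_TEST)
        # ... now draw the full-screen quad with the processing fragment shader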

    Read the article
