Search Results



  • How can I run my .LÖVE game directly from the Lua interpreter?

    - by jonathan
    I've just started with LÖVE and Lua. I'm interested in LÖVE because I want to play around with something different from my day job (I'm a web developer), and since it uses Lua and is interpreted, I thought it would be a great way to try out the API. However, I couldn't find out how to run my .love game directly from the Lua interpreter, and I find it bothersome to package the game each time I make a little test with the API. Since I couldn't find the answer, I'm asking here, but maybe I'm searching for the wrong terms; if it turns out to be a simple matter like importing a library or setting a global, I'll gladly remove my question.
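    A note on the usual workflow, offered as a hedged aside: LÖVE games are normally launched through the love executable rather than a plain Lua interpreter, and love accepts an unpackaged project folder, so no .love archive is needed while developing. Assuming love is on the PATH and the folder contains main.lua:

        love /path/to/game-folder
        love .    # run from inside the project folder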


  • Creating a frozen bubble clone

    - by Vaughan Hilts
    This image illustrates the environment: http://i.imgur.com/V4wbp.png
    I shoot the cannon, it bounces off the wall, and it's SUPPOSED to stick to the bubble it hits. It does at pretty much every other angle. The problem is always reproduced here, when the shot bounces off the wall into those bubbles. It also exists in other cases, but I'm not sure what triggers it. What actually happens: the ball will sometimes snap to the wrong cell, and my "dropping" code then detects it as a loner and drops it off the stage.
    There are many implementations of "Frozen Bubble" on the web, but I can't for the life of me find a good explanation of how the algorithm for the "bubble sticking" works. I see this: http://www.wikiflashed.com/wiki/BubbleBobble https://frozenbubblexna.svn.codeplex.com/svn/FrozenBubble/ but I can't figure out the algorithms... could anyone explain the general idea behind getting the balls to stick?
    Code in question:
        // Construct our bounding rectangle for use
        var nX = currentBall.x + ballvX * gameTime;
        var nY = currentBall.y - ballvY * gameTime;
        var movingRect = new BoundingRectangle(nX, nY, 32, 32);
        var able = false;
        // Iterate over the cells and draw our bubbles
        for (var x = 0; x < 8; x++) {
            for (var y = 0; y < 12; y++) {
                // Get the bubble at this layout position
                var bubble = bubbleLayout[x][y];
                var rowHeight = 27;
                // If this slot isn't empty, draw
                if (bubble != null) {
                    var bx = 0, by = 0;
                    if (y % 2 == 0) {
                        bx = x * 32 + 270;
                        by = y * 32 + 45;
                    } else {
                        bx = x * 32 + 270 + 16;
                        by = y * 32 + 45;
                    }
                    // Check
                    var targetBox = new BoundingRectangle(bx, by, 32, 32);
                    if (targetBox.intersects(movingRect)) {
                        able = true;
                    }
                }
            }
        }
        cellY = Math.round((currentBall.y - 45) / 32);
        if (cellY % 2 == 0)
            cellX = Math.round((currentBall.x - 270) / 32);
        else
            cellX = Math.round((currentBall.x - 270 - 16) / 32);
    Any ideas are very much welcome. Things I've tried:
    - Flooring and ceiling the values
    - Changing the wall bounce to a lower value
    - Slowing down the ball
    None of these seem to affect it. Is there something in my math I'm not getting?
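    The usual trick in Frozen Bubble-style games is to snap the stopped bubble to whichever cell centre is closest, testing the candidate rows explicitly because the half-cell offset alternates per row; rounding the raw position, as above, can land in the neighbouring row or column at shallow angles. The question's code is JavaScript; below is a rough C++-style sketch of that idea, reusing the 32 px / (270, 45) layout constants from the question and assuming ballX/ballY are the bubble's centre (everything else, names included, is illustrative).

        #include <cmath>

        struct Cell { int col; int row; };

        // Snap to the nearest cell of an odd-row-offset grid by comparing
        // squared distances to candidate cell centres.
        Cell snapToNearestCell(float ballX, float ballY)
        {
            const float cellSize = 32.0f, originX = 270.0f, originY = 45.0f;

            // Rough row guess; test it and its neighbours because the
            // horizontal half-cell offset depends on the row chosen.
            int guessRow = (int)std::floor((ballY - originY) / cellSize);

            Cell best = { 0, 0 };
            float bestDistSq = 1e30f;
            for (int row = guessRow - 1; row <= guessRow + 1; ++row) {
                if (row < 0) continue;
                float rowOffset = (row % 2 == 0) ? 0.0f : cellSize * 0.5f;
                int col = (int)std::floor((ballX - originX - rowOffset) / cellSize);
                if (col < 0) continue;

                // Centre of this candidate cell.
                float cx = originX + rowOffset + col * cellSize + cellSize * 0.5f;
                float cy = originY + row * cellSize + cellSize * 0.5f;
                float dx = ballX - cx, dy = ballY - cy;
                float distSq = dx * dx + dy * dy;
                if (distSq < bestDistSq) { bestDistSq = distSq; best.col = col; best.row = row; }
            }
            return best;
        }

    (Actual hexagonal packing usually spaces rows by roughly cellSize * 0.866, about 27, which is presumably what the unused rowHeight = 27 was for; the same nearest-centre search works either way as long as the draw code and the snap code share the constants.)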


  • Basics of drawing in 2d with OpenGL 3 shaders

    - by davidism
    I am new to OpenGL 3 and graphics programming, and want to create some basic 2D graphics. I have the following scenario of how I might go about drawing a basic (but general) 2D rectangle. I'm not sure if this is the correct way to think about it, or, if it is, how to implement it. In my head, here's how I imagine doing it:
    1. t = make_rectangle(width, height): build a general VBO, centered at 0, 0
    2. optionally: t.set_scale(2)
    3. optionally: t.set_angle(30)
    4. t.draw_at(x, y): calculates some sort of scale/rotate/translate matrix (or matrices), passes the VBO and the matrix to a shader program
    5. Something happens to clip the world to the view visible on screen.
    I'm really unclear on how 4 and 5 will work. The main problem is that all the tutorials I find either use the fixed-function pipeline, are for 3D, or are unclear about how to do something this "simple". Can someone provide me with either a better way to think of / do this, or some concrete code detailing how to perform the transformations in a shader and how to construct and pass the data required for that shader transformation?
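    For steps 4 and 5, a common setup with a math library like GLM is to build a model matrix per rectangle, multiply it by an orthographic projection that maps pixel coordinates to clip space (the GPU then does the clipping), and upload the product as a single uniform. A rough sketch, assuming a loader such as GLEW, a compiled shader program with a "uMVP" uniform, and a VAO holding the unit rectangle (all of those names are assumptions):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>
        #include <GL/glew.h>

        void drawRectangle(GLuint program, GLuint vao, GLsizei vertexCount,
                           float x, float y, float angleDeg, float scale,
                           float screenW, float screenH)
        {
            // Step 5: an orthographic projection maps pixel coordinates to
            // clip space; geometry outside the window is clipped by the GPU.
            glm::mat4 projection = glm::ortho(0.0f, screenW, screenH, 0.0f);

            // Step 4: translate * rotate * scale applied to the unit rectangle.
            glm::mat4 model(1.0f);
            model = glm::translate(model, glm::vec3(x, y, 0.0f));
            model = glm::rotate(model, glm::radians(angleDeg), glm::vec3(0, 0, 1));
            model = glm::scale(model, glm::vec3(scale, scale, 1.0f));

            glm::mat4 mvp = projection * model;

            glUseProgram(program);
            glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"),
                               1, GL_FALSE, glm::value_ptr(mvp));

            glBindVertexArray(vao);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        }

    The vertex shader then only needs gl_Position = uMVP * vec4(position, 0.0, 1.0); the rectangle's VBO never changes, and per-sprite placement lives entirely in the matrix.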


  • Object-Oriented OpenGL

    - by Sullivan
    I have been using OpenGL for a while and have read a large number of tutorials. Aside from the fact that a lot of them still use the fixed pipeline, they usually throw all the initialisation, state changes and drawing in one source file. This is fine for the limited scope of a tutorial, but I’m having a hard time working out how to scale it up to a full game. How do you split your usage of OpenGL across files? Conceptually, I can see the benefits of having, say, a rendering class that purely renders stuff to screen, but how would stuff like shaders and lights work? Should I have separate classes for things like lights and shaders?


  • Is there an application that converts a PC into a video game kiosk/arcade machine?

    - by Rahil627
    Sorry to make the question so vague. What I ultimately want is software that allows people to play independent video games on a PC without having to worry about maintaining it. Imagine a game that was made in a few hours, that does not have a restart button and crashes often. The software should be able to handle these kinds of things and do more! The software should:
    - allow the game to be restarted manually
    - handle game crashes (likely by restarting the game)
    - restrict the user from doing anything crazy
    - later... offer a UI to select the game from a list
    - handle pre-configured key bindings
    - be cross-platform (starting with Windows)
    I just want to know if this exists already before I start creating one. As of now AutoHotKey is being used to do this sloppily. If such software does not exist, then perhaps someone could recommend a general open-source kiosk software? Open Kiosk? I'll take anything. (I also could not find a related tag. Not even sure if this question should be here rather than Stack Overflow.)


  • What is causing these visual artifacts on my OpenGL sprites?

    - by Amplify91
    What could be the cause of the defects in my character's sprite? I am using OpenGL ES 2.0. I draw my sprites in a sprite batch that uses UV coordinates from one large texture atlas. If you look around the character's edges, you'll see two noticeable problems:
    1. The invisible alpha background is not invisible, but shows a strange static-like background.
    2. There are unwanted streaks where the character nears the edge of the frame (only in some frames of the animation; this happened to be one of them).
    Any idea what could be causing these? I will provide related code if asked for, but I'll try to avoid just dumping the entire project and expecting someone to look through it all.
    EDIT: Here's a bit of code. This is how I generate my UV coordinates:
        private float[] createFrameUV(int frameWidth, int frameHeight, int x, int y) {
            float[] uv = new float[4];
            if (numberOfFrames > 1) {
                float width = (float) frameWidth / (float) mBitmap.getWidth();
                float height = (float) frameHeight / (float) mBitmap.getHeight();
                float u = (float) x / (float) mBitmap.getWidth();
                float v = (float) y / (float) mBitmap.getHeight();
                uv[0] = u;
                uv[1] = v;
                uv[2] = u + width;
                uv[3] = v + height;
            } else {
                uv[0] = 0f;
                uv[1] = 0f;
                uv[2] = 1f;
                uv[3] = 1f;
            }
            return uv;
        }
    These are some OpenGL settings:
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
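    One frequent culprit for both symptoms, offered only as a hedged guess, is atlas bleeding: with GL_LINEAR filtering, samples taken right at a frame's border blend in texels from the neighbouring frame, or from uninitialised padding, which can look like static. A common remedy is to inset each frame's UV rectangle by half a texel (or to pad frames with duplicated border pixels). The question's code is Java, but the adjustment is plain arithmetic; a C++-style sketch with assumed names:

        // Inset a frame's UVs by half a texel so GL_LINEAR never samples
        // across the frame boundary. atlasW/atlasH are the atlas dimensions
        // in pixels; the struct and parameter names are illustrative.
        struct FrameUV { float u0, v0, u1, v1; };

        FrameUV insetHalfTexel(int x, int y, int frameW, int frameH,
                               int atlasW, int atlasH)
        {
            const float halfU = 0.5f / (float)atlasW;
            const float halfV = 0.5f / (float)atlasH;
            FrameUV uv;
            uv.u0 = (float)x / atlasW + halfU;
            uv.v0 = (float)y / atlasH + halfV;
            uv.u1 = (float)(x + frameW) / atlasW - halfU;
            uv.v1 = (float)(y + frameH) / atlasH - halfV;
            return uv;
        }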


  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?


  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question along these lines yet; correct me if I'm wrong. Trees (and flora in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online WebGL-based tree generator built on proctree.js, which is some ~500 lines of JavaScript. One could use either of the above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand, or model them from scratch in a 3D modeller, and then randomly mirror/scale them for a few more variants. But I'd rather have a free, linkable tree mesh generator. Possible solutions:
    - Port proctree.js to C++ and deal with the open-source license (it doesn't seem to be GPL, so that could be doable; the author may also be willing to co-operate to make the license even more free).
    - Roll my own based on L-systems.
    - Don't bother; just use offline-generated trees.
    - Use some other method I haven't found yet.


  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, and I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are:
    - Make the HUD stay fixed to the sides no matter what the resolution is; use one size of HUD graphics below a certain resolution and another above a certain larger one.
    - Use large HD sprites/backgrounds whose dimensions are a power of 2, so they scale nicely.
    - Use the player's native resolution. Scale the game area (not the HUD) to fit (zooming in somewhat and cropping the game area's sides if necessary for widescreen, no stretching), but always fill the screen.
    - Have minimum and maximum resolution limits for small and very large displays, where you just change the resolution(?) or scale up/down to fit.
    What I am a bit confused about is what math formula I would use to scale the game area correctly based on the resolution, regardless of aspect ratio, so that it fully fits on a squarer screen and clips a little at the sides on widescreen. Pseudocode would help as well. :)
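    For the fill-and-crop behaviour described above, the usual formula is to take the larger of the two axis ratios, so the 1600 x 900 design area always covers the screen and the excess is cropped evenly on the longer axis. A small C++ sketch of that math, with all names assumed:

        #include <algorithm>

        struct Viewport { float scale, offsetX, offsetY; };

        // "Cover" scaling: the design-space game area always fills the
        // screen; whatever sticks out is cropped equally on both sides.
        Viewport fitGameArea(float screenW, float screenH,
                             float designW = 1600.0f, float designH = 900.0f)
        {
            Viewport v;
            v.scale   = std::max(screenW / designW, screenH / designH);
            v.offsetX = (screenW - designW * v.scale) * 0.5f;  // <= 0 when cropping left/right
            v.offsetY = (screenH - designH * v.scale) * 0.5f;  // <= 0 when cropping top/bottom
            return v;
        }

        // Drawing: screenX = designX * v.scale + v.offsetX (same for Y).
        // Swapping std::max for std::min gives letterboxing instead of cropping.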


  • collision detection problems - Javascript/canvas game

    - by Tom Burman
    Ok here is a more detailed version of my question. What i want to do: i simply want the have a 2d array to represent my game map. i want a player sprite and i want that sprite to be able to move around my map freely using the keyboard and also have collisions with certain tiles of my map array. i want to use very large maps so i need a viewport. What i have: I have a loop to load the tile images into an array: /Loop to load tile images into an array var mapTiles = []; for (x = 0; x <= 256; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "images/prototype/"+x+".jpg"; mapTiles.push(imageObj); } I have a 2d array for my game map: //Array to hold map data var board = [ [1,2,3,4,3,4,3,4,5,6,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [17,18,19,20,19,20,19,20,21,22,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [33,34,35,36,35,36,35,36,37,38,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [49,50,51,52,51,52,51,52,53,54,1,1,1,1,1,1,1,1,1,1,1,1,1,197,198,199,1,1,1,1], [65,66,67,68,146,147,67,68,69,70,1,1,1,1,1,1,1,1,216,217,1,1,1,213,214,215,1,1,1,1], [81,82,83,161,162,163,164,84,85,86,1,1,1,1,1,1,1,1,232,233,1,1,1,229,230,231,1,1,1,1], [97,98,99,177,178,179,180,100,101,102,1,1,1,1,59,1,1,1,248,249,1,1,1,245,246,247,1,1,1,1], [1,1,238,1,1,1,1,239,240,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [216,217,254,1,1,1,1,255,256,1,204,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [232,233,1,1,1,117,118,1,1,1,220,1,1,119,120,1,1,1,1,1,1,1,1,1,1,1,119,120,1,1], [248,249,1,1,1,133,134,1,1,1,1,1,1,135,136,1,1,1,1,1,1,59,1,1,1,1,135,136,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,216,217,1,1,1,1,1,1,60,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,232,233,1,1,1,1,1,1,1,1,1,1,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,248,249,1,1,1,1,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,1,1,216,217,1,1,1], [1,1,1,1,1,1,1,1,1,1,1,1,149,150,151,1,1,1,1,1,1,1,1,1,1,232,233,1,1,1], [12,12,12,12,12,12,12,13,1,1,1,1,165,166,167,1,1,1,1,1,1,119,120,1,1,248,249,1,1,1], [28,28,28,28,28,28,28,29,1,1,1,1,181,182,183,1,1,1,1,1,1,135,136,1,1,1,1,1,1,1], [44,44,44,44,44,15,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,59,1,1,197,198,199,1,1,1,1,119,120,1], [1,1,1,1,1,27,28,29,1,1,216,217,1,1,1,1,1,1,1,1,213,214,215,1,1,1,1,135,136,1], [1,1,1,1,1,27,28,29,1,1,232,233,1,1,1,1,1,1,1,1,229,230,231,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,248,249,1,1,1,1,1,1,1,1,245,246,247,1,1,1,1,1,1,1], [1,1,1,197,198,199,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,213,214,215,28,29,1,1,1,1,1,60,1,1,1,1,204,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,229,230,231,28,29,1,1,1,1,1,1,1,1,1,1,220,1,1,1,1,119,120,1,1,1,1,1], [1,1,1,245,246,247,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,135,136,1,1,60,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], [1,1,1,1,1,27,28,29,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] ]; I have my loop to place the correct tile sin the correct positions: //Loop to place tiles onto screen in correct position for (x = 0; x <= viewWidth; x++){ for (y = 0; y <= viewHeight; y++){ var width = 32; var height = 32; context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); } } I Have my player object : //Place player object context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); I have my viewport setup: //Set viewport pos viewX = playerX - Math.floor(0.5 * viewWidth); if (viewX < 0) viewX = 0; if (viewX+viewWidth > worldWidth) viewX = worldWidth - 
viewWidth; viewY = playerY - Math.floor(0.5 * viewHeight); if (viewY < 0) viewY = 0; if (viewY+viewHeight > worldHeight) viewY = worldHeight - viewHeight; I have my player movement: canvas.addEventListener('keydown', function(e) { console.log(e); var key = null; switch (e.which) { case 37: // Left if (playerY > 0) playerY--; break; case 38: // Up if (playerX > 0) playerX--; break; case 39: // Right if (playerY < worldWidth) playerY++; break; case 40: // Down if (playerX < worldHeight) playerX++; break; } My Problem: I have my map loading an it looks fine, but my player position thinks it's on a different tile to what it actually is. So for instance, i know that if my player moves left 1 tile, the value of that tile should be 2, but if i print out the value it should be moving to (2), it comes up with a different value. How ive tried to solve the problem: I have tried swap X and Y values for the initialization of my player, for when my map prints. If i swap the x and y values in this part of my code: context.drawImage(mapTiles[board[y+viewY][x+viewX]],x*width, y*height); The map doesnt get draw correctly at all and tiles are placed all in random positions or orientations IF i sway the x and y values for my player in this line : context.drawImage(playerImg, (playerX-viewX)*32,(playerY-viewY)*32, 32, 32); The players movements are inversed, so up and down keys move my player left and right viceversa. My question: Where am i going wrong in my code, and how do i solve it so i have my map looking like it should and my player moving as it should as well as my player returning the correct tileID it is standing on or moving too. Thanks Again ALSO Here is a link to my whole code: prototype
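    From the snippets, the draw loop indexes the map as board[y][x] (row first), while the key handler moves playerX for up/down and playerY for left/right, so the two parts are most likely using opposite conventions; the usual fix is to pick one convention and funnel every access through it. A rough C++ sketch of that idea (the original is JavaScript, and every name here is assumed):

        #include <vector>

        struct World {
            // board[y][x]: outer index is the row, inner index is the column,
            // matching the question's draw loop.
            std::vector<std::vector<int>> board;

            int tileAt(int x, int y) const { return board[y][x]; }
        };

        struct Player { int x = 0; int y = 0; };

        // Arrow keys: left/right change x (the column), up/down change y (the row).
        void movePlayer(Player& p, const World& w, int dx, int dy)
        {
            int nx = p.x + dx, ny = p.y + dy;
            if (ny < 0 || ny >= (int)w.board.size()) return;
            if (nx < 0 || nx >= (int)w.board[ny].size()) return;
            // Check w.tileAt(nx, ny) against the walkable tile ids here.
            p.x = nx;
            p.y = ny;
        }

    With that in place, the tile under the player is always w.tileAt(player.x, player.y), and the draw call stays (playerX - viewX) * 32 for the screen x and (playerY - viewY) * 32 for the screen y.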


  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and a world format subsequently. If I were to handle the game "world" being created just as a layered set of structures, either in top or side views, it would be considerably simple to do most things. But, since this editor is meant for 3rd parties, I have no clue how big worlds one will want to make and I need to keep in mind that eventually it will become simply too much to check, handling and comparing stuff that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But, while I know theoretically what to do, I really have a few questions I'd hoped to get answered for some hints about the topic. The logic way to handle the regions is some kind of grid, would you pick evenly distributed blocks with equal sizes or would you let the user subdivide areas by taste with irregular sized rectangles? In case of even grids, would you use some kind of block/chunk neighbouring system to check when the player transposes the limit or just put all those in a simple array? Being a region a different data structure than its owner "game world", when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach? Introducing the subdivision approach to the project, and already having a multi layered scene graph structure on place, how would i make it support the new concept? Would you have the parent node have the layers as children, and replicate in each layer node, a node per region? Or the opposite, parent node owns all the regions possible, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely(compatible with the first suggestion in Q.3) When I say virtually infinite worlds, I mean it of course under the contraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think its sane to think beyond that? Because I think its ok to stick to this limit since it will never be reached so easily.. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is, i will be needing some kind of warps/teleports built in for my game, and there is a chance i will be teleporting a player to a unloaded region far away. How would you approach something like this? Is it sane to load any region to memory which can be teleported to by a warp within a radius from the player? Sorry for the huge question, any answers are helpful!


  • How to move from home page screen to the next menu screen on clicking a particular image in XNA4.0?

    - by Raj
    I'm new to XNA game programming (and to C#). I want to create a main page with some buttons; on clicking a particular button, it should go to another screen where there are more buttons to select from, which should in turn go to the game screen when clicked. Should I put all the code in "Game1.cs", or create a new class for every page? Please help. I've just been through some pages of "Learning XNA 4.0" by O'Reilly; if there are any other good tutorials, please suggest them.


  • Finding shapes in 2D Array, then optimising

    - by assemblism
    I'm new so I can't post an image, but below is a diagram for a game I am working on, moving bricks into patterns. I currently have my code checking for rotated instances of a "T" shape of any colour. The X and O blocks would be the same colour, and my last batch of code would find the "T" shape where the X's are, but what I wanted was more like the second diagram, with two "T"s:

    Current result      Desired result
    [X][O][O]           [1][1][1]
    [X][X][_]           [2][1][_]
    [X][O][_]           [2][2][_]
    [O][_][_]           [2][_][_]

    My code loops through x/y, marks blocks as used, rotates the shape, repeats, changes colour, repeats. I have started trying to fix this checking, with great trepidation. The current idea is to:
    - loop through the grid and make note of all pattern occurrences (NOT marking blocks as used), putting them into an array;
    - loop through the grid again, this time noting which blocks are occupied by which patterns, and therefore which are occupied by multiple patterns;
    - loop through the grid again, this time noting which patterns obstruct which patterns.
    That much feels right... but what do I do now? I think I would have to try various combinations of conflicting shapes, starting with those that obstruct the most other patterns first. How do I approach that? Use the rationale that says: I have 3 conflicting shapes occupying 8 blocks, and the shapes are 4 blocks each, therefore I can only have a maximum of two shapes. (I also intend to incorporate other shapes, and there will probably be score weighting which will need to be considered when going through the conflicting shapes, but that can wait for another day.) I don't think it's a bin-packing problem, but I'm not sure what to look for. Hope that makes sense; thanks for your help.


  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen, and redraw it somewhere else. For example, take a square from the upper left hand corner of the screen and redraw it on the lower right hand corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked in to pixmaps and bytebuffers. Also maybe copying that region from the backbuffer somehow. Wondering the best way to go about this. Any help is appreciated. I am using opengl es and libgdx for what it's worth.
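    One straightforward route, sketched here at the raw GL ES 2.0 level (libgdx wraps the same calls): after the scene is drawn, copy the wanted region of the framebuffer into a texture with glCopyTexSubImage2D, then draw that texture as an ordinary quad in the corner. The 256 x 256 size and all names below are assumptions:

        #include <GLES2/gl2.h>

        GLuint miniTex = 0;

        void createMiniTexture() {
            glGenTextures(1, &miniTex);
            glBindTexture(GL_TEXTURE_2D, miniTex);
            // Allocate storage only; the pixels come from the framebuffer later.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        }

        void captureTopLeft(int screenHeight) {
            glBindTexture(GL_TEXTURE_2D, miniTex);
            // Window coordinates have their origin at the bottom-left, so the
            // upper-left 256 x 256 region starts at y = screenHeight - 256.
            glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                                0, screenHeight - 256, 256, 256);
        }

    The same calls are exposed in libgdx through Gdx.gl. Reading pixels back to the CPU (Pixmap/ByteBuffer) also works, but it stalls the pipeline, so a GPU-side copy is usually preferred for something done every frame.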


  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends and see world records. I was going to generate a table for each track submitted to store the best times of each player who plays the track. However, I can't predict how many will be uploaded and I imagine too many tables might cause problems, or is this a valid method? I considered saving each player's best times in a string in a single table field like so: level1:00.45;level2:00.43;level3:00.12 If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues and server disk space can be increased if needed. But I imagine this would be very slow. To update one players best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.


  • Position Reconstruction from Depth by inverting Perspective Projection

    - by user1294203
    I have had some trouble reconstructing position from depth sampled from the depth buffer. I use the equivalent of gluPerspective in GLM. The code in GLM is:

        template <typename valType>
        GLM_FUNC_QUALIFIER detail::tmat4x4<valType> perspective
        (
            valType const & fovy,
            valType const & aspect,
            valType const & zNear,
            valType const & zFar
        )
        {
            valType range = tan(radians(fovy / valType(2))) * zNear;
            valType left = -range * aspect;
            valType right = range * aspect;
            valType bottom = -range;
            valType top = range;

            detail::tmat4x4<valType> Result(valType(0));
            Result[0][0] = (valType(2) * zNear) / (right - left);
            Result[1][1] = (valType(2) * zNear) / (top - bottom);
            Result[2][2] = - (zFar + zNear) / (zFar - zNear);
            Result[2][3] = - valType(1);
            Result[3][2] = - (valType(2) * zFar * zNear) / (zFar - zNear);
            return Result;
        }

    There don't seem to be any errors in the code. So I tried to invert the projection. The formulas for the z and w coordinates after projection are: [formula images missing], and dividing z' by w' gives the post-projective depth (which lies in the depth buffer), so I need to solve for z, which finally gives: [formula image missing]. Now, the problem is that I don't get the correct position (I have compared the reconstructed position with a rendered one). I then tried using the respective formula I get by doing the same for this matrix: [matrix image missing]. The corresponding formula is: [formula image missing]. For some reason, using that formula gives me the correct position. I really don't understand why this is the case. Have I done something wrong? Could someone enlighten me, please?
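    As a hedged derivation sketch of what the matrix above implies (LaTeX notation; $n$ and $f$ stand for zNear and zFar): for an eye-space point with $w_e = 1$,

        $z_c = -\dfrac{f+n}{f-n}\, z_e - \dfrac{2fn}{f-n}, \qquad w_c = -z_e$

    so the normalized-device depth and its inverse are

        $d_{ndc} = \dfrac{z_c}{w_c} = \dfrac{f+n}{f-n} + \dfrac{2fn}{(f-n)\, z_e} \;\Longrightarrow\; z_e = \dfrac{2fn}{(f-n)\, d_{ndc} - (f+n)}$

    Two convention details commonly explain why a seemingly different formula "works": the depth buffer stores depth in $[0,1]$ by default, so $d_{ndc} = 2d - 1$ must be applied first, and $z_e$ comes out negative for points in front of the camera, so a sign flip is needed if the rest of the reconstruction expects a positive view-space distance.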


  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the samplebrowser, and I already checked that one. However, some questions arise: In the sample: D3DXMATRIXA16 mWorldViewProj; D3DXMATRIXA16 mWorld; D3DXMATRIXA16 mView; D3DXMATRIXA16 mProj; mWorld = g_World; mView = g_View; mProj = g_Projection; mWorldViewProj = mWorld * mView * mProj; VS_CONSTANT_BUFFER* pConstData; g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData ); pConstData->mWorldViewProj = mWorldViewProj; pConstData->fTime = fBoundedTime; g_pConstantBuffer10->Unmap(); They are copying their D3DXMATRIX'es to D3DXMATRIXA16. Checked on msdn, these new matrices are 16 byte aligned and optimised for intel pentium 4. So as my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if no, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times: cbuffer cbNeverChanges { matrix View; }; cbuffer cbChangeOnResize { matrix Projection; }; cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; }; Then how would I set these buffers all at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And do I suppose I have to make sure the constantbuffers are in the same position in the array as the order they appear in the shader?
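    On binding at different times: each cbuffer can be given its own register in HLSL (register(b0), b1, b2), and VSSetConstantBuffers takes a start slot, so the buffers can be bound once, individually or in one call, and only the buffer whose contents changed needs to be mapped in a given frame. A hedged D3D10-style sketch matching the question's API, with every name assumed:

        #include <d3d10.h>
        #include <d3dx10.h>

        // HLSL side (for reference):
        //   cbuffer cbNeverChanges      : register(b0) { matrix View; };
        //   cbuffer cbChangeOnResize    : register(b1) { matrix Projection; };
        //   cbuffer cbChangesEveryFrame : register(b2) { matrix World; float4 vMeshColor; };

        struct CBChangesEveryFrame { D3DXMATRIX World; D3DXVECTOR4 vMeshColor; };

        void bindAll(ID3D10Device* dev, ID3D10Buffer* never,
                     ID3D10Buffer* onResize, ID3D10Buffer* perFrame)
        {
            // Binding is independent of updating: fill slots b0..b2 once.
            ID3D10Buffer* buffers[3] = { never, onResize, perFrame };
            dev->VSSetConstantBuffers(0, 3, buffers);
            // Equivalent per-slot form, e.g. after recreating the resize buffer:
            // dev->VSSetConstantBuffers(1, 1, &onResize);
        }

        void updatePerFrame(ID3D10Buffer* perFrame,
                            const D3DXMATRIX& world, const D3DXVECTOR4& color)
        {
            // Only the buffer whose data changed is mapped this frame.
            CBChangesEveryFrame* data = 0;
            perFrame->Map(D3D10_MAP_WRITE_DISCARD, 0, reinterpret_cast<void**>(&data));
            data->World = world;
            data->vMeshColor = color;
            perFrame->Unmap();
        }

    The struct layout on the C++ side has to respect the 16-byte packing rules of cbuffers, and the slot numbers only have to match the register assignments (or, without explicit registers, typically the declaration order the compiler assigns).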


  • Are there any good guides for making mods for Minecraft?

    - by Pureferret
    I've been coding in Java for 5 months at work now, and I have past experience with programming in other languages and modifying existing code at uni, so I feel like I want to get started on (read: continue learning to program by) modding Minecraft. I know what I need, but not exactly how to do it. I once saw some good guides on the Minecraft forum, but they all explained how to write Java, how the different classes in the code work, etc. I'm more interested in how you decompile the code, write your own code separate from the main 'trunk' of Minecraft, and then package it to install with a tool like 'Magic Loader'. My issue with those guides is that they always relied on being on Windows, but I'm primarily a Linux user, and the guides on the forums only seemed to assume you were on a Windows box. So is there a good walkthrough for modding Minecraft, especially one that assumes, or at least allows for, the fact that you are on Linux?


  • 2D vector graphic html5 framework

    - by Yury
    I am trying to find an HTML5 game framework that meets the following criteria:
    1) Really good performance.
    2) Good support for vector graphics (objects built from canvas primitives: line, rect, bezierCurve, etc.).
    3) Easy to port to mobile.
    Optional: a physics engine.
    So far I have found: 1) Pixi.js, which looks really good, but I didn't find any info about "vector objects" support; and 2) "vector objects" support in paper.js. I need something like these examples: http://paperjs.org/examples/chain/ and http://paperjs.org/examples/path-intersections/. But it looks like paper.js does not perform as well as pixi.js, and it is not a game engine. Is there any good framework that meets these requirements? P.S. I found a similar question here: Which free HTML5-based game engine meets these requirements? But that was a long time ago, and a lot of new things have been created since 2011.


  • determine collision angle on a rotating body

    - by jorb
    update: new diagram and updated description I have a contact listener set up to try and determine the side that a collision happened at relative to the a bodies rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain of the atan2 function html canvas coordinates and the Box2d rotation information. I know I have to account for this somehow... SS below questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following javascript so far: Ship.prototype.onCollide = function (other_ent,cx,cy) { var pos = this.body.GetPosition(); //collision position relative to body var d_cx = pos.x - cx; var d_cy = pos.y - cy; //length of initial vector var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2)); //body angle - can over rotate hence mod 2*Pi var ang = this.body.GetAngle() % (Math.PI * 2); //vector representing body's angle - same magnitude as the first var b_vx = len * Math.cos(ang); var b_vy = len * Math.sin(ang); //dot product of the two vectors var dot_prod = d_cx * b_vx + d_cy * b_vy; //new calculation of difference in angle - NOT WORKING! var d_ang = Math.acos(dot_prod); var side; if (Math.abs(d_ang) < Math.PI/2 ) side = "front"; else side = "back"; console.log("length",len); console.log("pos:",pos.x,pos.y); console.log("offs:",d_cx,d_cy); console.log("body vec",b_vx,b_vy); console.log("body angle:",ang); console.log("dot product",dot_prod); console.log("result:",d_ang); console.log("side",side); console.log("------------------------"); }
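    A hedged note on the angle itself (generic 2D math, not Box2D-specific): the arc cosine in the snippet is applied to a dot product of un-normalised vectors, so its input can fall outside [-1, 1] and the result is not an angle; and even when normalised, acos loses the sign. Taking atan2 of the 2D cross product and the dot product gives a signed angle in (-pi, pi] without any normalisation. The question's code is JavaScript; a C++-style sketch with assumed names:

        #include <cmath>

        // Signed angle from the body's facing direction to the direction of
        // the contact point, in (-pi, pi]. "Front" is |angle| < pi/2.
        float angleToContact(float bodyAngle,           // body rotation, radians
                             float bodyX, float bodyY,  // body position
                             float contactX, float contactY)
        {
            float fx = std::cos(bodyAngle);             // facing direction
            float fy = std::sin(bodyAngle);
            float dx = contactX - bodyX;                // body -> contact
            float dy = contactY - bodyY;

            float cross = fx * dy - fy * dx;            // z of the 2D cross product
            float dot   = fx * dx + fy * dy;
            return std::atan2(cross, dot);              // sign and scale handled
        }

    If the canvas y axis points down while the physics world's y axis points up, negating one of the vectors' y components (consistently) reconciles the two before the front/back comparison.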


  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex position and the uv coordinates. It works fine, but now I want to read for each polygon to which texture it belongs, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for my polygon. My code to read in vertex position and uv coordinates is the following: int i, j, lPolygonCount = pMesh->GetPolygonCount(); FbxVector4* lControlPoints = pMesh->GetControlPoints(); int vertexId = 0; for (i = 0; i < lPolygonCount; i++) { int lPolygonSize = pMesh->GetPolygonSize(i); for (j = 0; j < lPolygonSize; j++) { int lControlPointIndex = pMesh->GetPolygonVertex(i, j); FbxVector4 pos = lControlPoints[lControlPointIndex]; current_model[vertex_index].x = pos.mData[0] - pivot_offset[0]; current_model[vertex_index].y = pos.mData[1] - pivot_offset[1]; current_model[vertex_index].z = pos.mData[2]- pivot_offset[2]; FbxVector4 vertex_normal; pMesh->GetPolygonVertexNormal(i,j, vertex_normal); current_model[vertex_index].nx = vertex_normal.mData[0]; current_model[vertex_index].ny = vertex_normal.mData[1]; current_model[vertex_index].nz = vertex_normal.mData[2]; //read in UV data FbxStringList lUVSetNameList; pMesh->GetUVSetNames(lUVSetNameList); //get lUVSetIndex-th uv set const char* lUVSetName = lUVSetNameList.GetStringAt(0); const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName); if(!lUVElement) continue; // only support mapping mode eByPolygonVertex and eByControlPoint if( lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex && lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint ) return; //index array, where holds the index referenced to the uv data const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect; const int lIndexCount= (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0; FbxVector2 lUVValue; //get the index of the current vertex in control points array int lPolyVertIndex = pMesh->GetPolygonVertex(i,j); //the UV index depends on the reference mode //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex; int lUVIndex = pMesh->GetTextureUVIndex(i, j); lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex); current_model[vertex_index].tu = (float)lUVValue.mData[0]; current_model[vertex_index].tv = (float)lUVValue.mData[1]; vertex_index ++; } } float v1[3], v2[3], v3[3]; v1[0] = current_model[vertex_index - 3].x; v1[1] = current_model[vertex_index - 3].y; v1[2] = current_model[vertex_index - 3].z; v2[0] = current_model[vertex_index - 2].x; v2[1] = current_model[vertex_index - 2].y; v2[2] = current_model[vertex_index - 2].z; v3[0] = current_model[vertex_index - 1].x; v3[1] = current_model[vertex_index - 1].y; v3[2] = current_model[vertex_index - 1].z; collision_model->addTriangle(v1,v2,v3);
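    The per-polygon material index is stored in the mesh's material element, and the material's diffuse property links to the file texture, so the lookup is roughly polygon -> material index -> FbxSurfaceMaterial on the node -> FbxFileTexture -> file name. A hedged sketch from memory of the FBX SDK (2013-era); the exact calls should be checked against the SDK version in use:

        #include <fbxsdk.h>

        // Return the diffuse texture file name used by one polygon, or null.
        const char* textureFileForPolygon(FbxMesh* mesh, int polygonIndex)
        {
            FbxGeometryElementMaterial* matElem = mesh->GetElementMaterial();
            if (!matElem) return 0;

            // eByPolygon: one material index per polygon; eAllSame: a single
            // entry shared by the whole mesh.
            int matIndex = (matElem->GetMappingMode() == FbxGeometryElement::eByPolygon)
                               ? matElem->GetIndexArray().GetAt(polygonIndex)
                               : matElem->GetIndexArray().GetAt(0);

            FbxSurfaceMaterial* material = mesh->GetNode()->GetMaterial(matIndex);
            if (!material) return 0;

            // Follow the diffuse channel to its file texture, if any.
            FbxProperty prop = material->FindProperty(FbxSurfaceMaterial::sDiffuse);
            if (prop.IsValid() && prop.GetSrcObjectCount<FbxFileTexture>() > 0) {
                FbxFileTexture* tex = prop.GetSrcObject<FbxFileTexture>(0);
                return tex->GetFileName();   // path as stored in the .fbx
            }
            return 0;
        }

    Calling this once per polygon inside the existing loop (or once per material, then grouping polygons by material index) gives each triangle its texture name.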


  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first person camera using Bullet--it's a rigid body with a capsule shape. I've only been using Bullet for a few days and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it and it collides perfectly with the world. The only problem is the Y-value moves freely, which I temporarily solved by setting the Y-value of the translation vector to zero before the body is moved. This works for all cases except when falling from a height. When the body drops off a tall object, you can still glide around since the translate vector's Y-value is being set to zero, until you stop moving and fall to the ground (the velocity is only set when moving). So to solve this I would like to try casting a ray down from the body to determine the Y-value of the world, and checking the difference between that value and the Y-value of the camera body, and disable or slow down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y-value of the world where it struck. I've implemented this callback: struct AllRayResultCallback : public btCollisionWorld::RayResultCallback{ AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld) : m_rayFromWorld(rayFromWorld), m_rayToWorld(rayToWorld), m_closestHitFraction(1.0){} btVector3 m_rayFromWorld; btVector3 m_rayToWorld; btVector3 m_hitNormalWorld; btVector3 m_hitPointWorld; float m_closestHitFraction; virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult, bool normalInWorldSpace) { if(rayResult.m_hitFraction < m_closestHitFraction) m_closestHitFraction = rayResult.m_hitFraction; m_collisionObject = rayResult.m_collisionObject; if(normalInWorldSpace){ m_hitNormalWorld = rayResult.m_hitNormalLocal; } else{ m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis() * rayResult.m_hitNormalLocal; } m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld, m_closestHitFraction); return 1.0f; } }; And in the movement function, I have this code: btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position btVector3 to(pos.x, 0, pos.z); // not sure if 0 is correct for Y AllRayResultCallback callback(from, to); Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback); So I have the callback.m_hitPointWorld vector, which seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to just find an example. An example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as a rendering engine, and casting a ray down is quite straightforward with that, however I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.
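    Bullet already ships a ClosestRayResultCallback, so a "height above the ground" query does not need a custom callback; and one hedged guess about m_hitPointWorld tracking the camera is that a ray started 1000 units above the body hits the body's own capsule first, so starting at the body's origin (or filtering the body out) avoids that. A sketch with assumed surrounding names:

        #include <btBulletDynamicsCommon.h>

        // Cast straight down from the camera body and return the distance to
        // whatever the ray hits, or a negative value if nothing is below.
        float distanceToGround(btDynamicsWorld* world, const btRigidBody* camera)
        {
            btVector3 from = camera->getWorldTransform().getOrigin();
            btVector3 to   = from - btVector3(0, 1000, 0);

            btCollisionWorld::ClosestRayResultCallback callback(from, to);
            // (The collision filter group/mask fields on the callback can be
            // used to skip the camera's own capsule if it gets in the way.)
            world->rayTest(from, to, callback);

            if (!callback.hasHit())
                return -1.0f;

            return from.getY() - callback.m_hitPointWorld.getY();
        }

    Comparing that distance against the capsule's half height each frame gives the "am I on the ground" test the question describes, which can then gate the horizontal velocity.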


  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut in the process of optimizing 3D geometry, (degenerate vertices) by 3D graphics software (Blender, ...) when exporting because they aren't needed when reusing a vertex for multiple triangles. (In the current case 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model) Every vertex has a few attributes like position, color, normal, tangent,... But the data for each vertex that is cut through the vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like Bump or Normal mapping require normals/tangents per vertex which are also cut. To use complex shader techniques IBOs must not be used? Or is there a way to use IBOs and retain the data per vertex that was origionally lost?


  • HLSL: Pack 4 values into 32 bit float.

    - by TheBigO
    I can't find any useful information on packing 4 values into a 32-bit float in HLSL. Ideally, what I want to be able to do in HLSL is:
        float4 values = ...; // some values where each component is between 0 and 1
        float packedValues = pack32R(values);
        float4 values2 = unpack32R(packedValues);
    I realize that there will be precision limitations, and performance tradeoffs between different precisions in different methods. I'm just wondering what ideas are out there.
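    The title says 32 bits, but a float's mantissa only represents integers exactly up to 2^24, so the usual compromise is four values at 6 bits each (or three at 8 bits); the packing is plain floor/fmod arithmetic that ports directly to HLSL. A hedged C++ sketch of that idea, with the function names from the question reused only as placeholders:

        #include <cmath>

        // Pack four [0,1] values at 6 bits each (24 bits total) into one float.
        float pack32R(float a, float b, float c, float d)
        {
            float ia = std::floor(a * 63.0f + 0.5f);
            float ib = std::floor(b * 63.0f + 0.5f);
            float ic = std::floor(c * 63.0f + 0.5f);
            float id = std::floor(d * 63.0f + 0.5f);
            return ia + ib * 64.0f + ic * 4096.0f + id * 262144.0f;  // 64^0 .. 64^3
        }

        // Inverse of the above; out receives the four unpacked values.
        void unpack32R(float packed, float out[4])
        {
            float id = std::floor(packed / 262144.0f);
            float ic = std::floor(std::fmod(packed, 262144.0f) / 4096.0f);
            float ib = std::floor(std::fmod(packed, 4096.0f) / 64.0f);
            float ia = std::fmod(packed, 64.0f);
            out[0] = ia / 63.0f;  out[1] = ib / 63.0f;
            out[2] = ic / 63.0f;  out[3] = id / 63.0f;
        }

    In HLSL the same expressions work with floor, fmod and a dot against float4(1, 64, 4096, 262144); the precision cost is that each channel survives with only 64 distinct levels.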


  • Apply bone transforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that only contain a few meshes and one texture. There is no animation within the model; an example would be a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw it correctly, I have to apply the bone transformations manually on each draw, for each mesh and effect, as can be seen here. So there are two questions:
    1. Is there an option during import that applies the bone transformations to all vertices, so that I don't have to do this at draw time?
    2. Is there an option during import that merges all vertices into one vertex and index buffer, so that I can draw the whole model with just one call?
    I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available, or another possibility I missed. The aim is to speed up rendering a little, especially by using instancing, so having one vertex buffer to draw at a time would be pretty nice.

