Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.


  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error:

        D:\java\java\jogl>javac JOGL2Template.java   <== compiles OK
        D:\java\java\jogl>java JOGL2Template         <== execution error
        Exception in thread "main" java.lang.ExceptionInInitializerError
                at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176)
                at JOGL2Template.<init>(JOGL2Template.java:24)
                at JOGL2Template.main(JOGL2Template.java:57)
        Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\java\lib\gluegen-rt-natives-windows-i586.jar
                at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350)
                at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324)
                at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJarCache.java:328)
                at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarCache.java:283)
                at com.jogamp.common.os.Platform$3.run(Platform.java:308)
                at java.security.AccessController.doPrivileged(Native Method)
                at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298)
                at com.jogamp.common.os.Platform.<clinit>(Platform.java:207)
                ... 3 more

    Here is the JOGL2Template.java source code:

        import java.awt.Dimension;
        import java.awt.Frame;
        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLCapabilities;
        import javax.media.opengl.GLEventListener;
        import javax.media.opengl.GLProfile;
        import javax.media.opengl.awt.GLCanvas;
        import com.jogamp.opengl.util.FPSAnimator;
        import javax.swing.JFrame;

        /*
         * JOGL 2.0 Program Template For AWT applications
         */
        public class JOGL2Template extends JFrame implements GLEventListener {
            private static final int CANVAS_WIDTH = 640;  // Width of the drawable
            private static final int CANVAS_HEIGHT = 480; // Height of the drawable
            private static final int FPS = 60;            // Animator's target frames per second

            // Constructor to create the profile, capabilities, drawable and animator,
            // and to initialize the Frame.
            public JOGL2Template() {
                // Get the default OpenGL profile that best reflects your running platform.
                GLProfile glp = GLProfile.getDefault();

                // Specify a set of OpenGL capabilities, based on your profile.
                GLCapabilities caps = new GLCapabilities(glp);

                // Allocate a GLDrawable, based on your OpenGL capabilities.
                GLCanvas canvas = new GLCanvas(caps);
                canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT));
                canvas.addGLEventListener(this);

                // Create an animator that drives the canvas' display() at 60 fps.
                final FPSAnimator animator = new FPSAnimator(canvas, FPS);

                addWindowListener(new WindowAdapter() { // For the close button
                    @Override
                    public void windowClosing(WindowEvent e) {
                        // Use a dedicated thread to run stop() to ensure that the
                        // animator stops before the program exits.
                        new Thread() {
                            @Override
                            public void run() {
                                animator.stop();
                                System.exit(0);
                            }
                        }.start();
                    }
                });

                add(canvas);
                pack();
                setTitle("OpenGL 2 Test");
                setVisible(true);
                animator.start(); // Start the animator
            }

            public static void main(String[] args) {
                new JOGL2Template();
            }

            @Override
            public void init(GLAutoDrawable drawable) {
                // Your OpenGL code to perform one-time initialization tasks,
                // such as setting up lights and display lists.
            }

            @Override
            public void display(GLAutoDrawable drawable) {
                // Your OpenGL rendering code for each refresh.
            }

            @Override
            public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
                // Your OpenGL code to set up the viewport, projection mode and view volume.
            }

            @Override
            public void dispose(GLAutoDrawable drawable) {
                // Hardly used.
            }
        }

    Any ideas what might be the cause of these errors?
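    A hedged note on the error itself: the SecurityException means JOGL's TempJarCache found that gluegen-rt-natives-windows-i586.jar is not signed with the same certificate as the JAR that loaded it, which typically happens when gluegen/jogl JARs from different builds or releases are mixed on the classpath. Two things worth trying (assumptions, not verified against this exact setup): replace all JogAmp JARs with a matching set from a single release archive, or bypass the temp JAR cache via JogAmp's jogamp.gluegen.UseTempJarCache system property:

        java -Djogamp.gluegen.UseTempJarCache=false JOGL2Template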


  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors, and each has either no action or one sequence action. This sequence action contains either: a couple of actions (some are instant, some have duration 0), or a couple of actions followed by a parallel action. My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at position 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors on the stage, but I need the stage width/height for some actions. So something like this won't work:

        Action actionSequence = actor.getActions().get(0);
        Array<Action> actions = ((SequenceAction) actionSequence).getActions();
        for (Action act : actions) {
            if (act.act(0))
                System.out.println("action " + act.toString() + " successfully run");
            else
                System.out.println("action " + act.toString() + " wasn't instant");
        }

    It gets even more complicated when an actor can also have a repeat action instead of the sequence action (because you have to run the duration-0 actions only once, without repeating them, and then start the repeat). Any help is appreciated.
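    One way around the one-frame flash, sketched below under the assumption that the setup actions can be prefixed to the sequence (Actions.hide()/Actions.show() are standard scene2d actions; x, y and the fade are illustrative): keep the actor invisible until the instant actions have been applied, so the frame at 0,0 never renders.

        import com.badlogic.gdx.scenes.scene2d.actions.Actions;

        float x = 100f, y = 200f; // illustrative target position

        // Hide the actor first, apply the instant setup actions, then show it.
        actor.setVisible(false);
        actor.addAction(Actions.sequence(
            Actions.moveTo(x, y),    // instant: duration 0
            Actions.alpha(0f),       // instant: starting alpha
            Actions.show(),          // becomes visible only after setup
            Actions.fadeIn(0.5f)     // ...the rest of the original sequence
        ));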


  • Looking for a small, light scene graph style abstraction lib for shader based OpenGL

    - by Pris
    I'm looking for a 'lean and mean' C/C++ scene graph library for OpenGL that doesn't use any deprecated functionality. It should be cross-platform (strictly speaking I just dev on Linux, so no love lost if it doesn't work on Windows), and it should be possible to deploy to mobile targets (i.e. OpenGL ES 2, and no crazy mandatory dependencies that wouldn't port well to modern mobile frameworks like iOS, Android, etc.), with a license that's compatible with closed-source software (LGPL or more liberal). Specific nice-to-haves would be:

        - Cameras and viewers (trackball, fly-by, etc.)
        - Object transform hierarchies (if B is a child of A, and you move A, B has the same transform applied to it)
        - Simple animation
        - Scene optimization (frustum culling, use of VBOs, minimizing state changes, etc.)
        - Text

    I've played around with OpenSceneGraph a lot and it's pretty amazing for fixed-function pipeline stuff, but I've had a few problems using it with the programmable pipeline, and after going through their mailing list, it seems several people have had similar issues (going back years). Kitware's VES looks neat (http://www.vtk.org/Wiki/VES), but VES + VTK is pretty heavy. VTK is also typically for analyzing scientific data, and I've read that it's not that appropriate for a general use case (not that great at rendering a lot of objects on scene, etc.). I'm currently looking at VisualizationLibrary (http://www.visualizationlibrary.org/documentation/pag_gallery.html), which looks like it offers some of the functionality I'd like, but it doesn't explicitly support mobile targets. Other solutions like Ogre, Horde3D, Irrlicht, etc. tend to be full-on game engines, and that's not really what I'm looking for. I'd like some suggestions for other libraries that I may have missed... please note I'm not willing to roll my own solution from scratch.


  • Getting FEATURE_LEVEL_9_3 to work in DX11

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11-compatible computer) by setting the feature level to D3D_FEATURE_LEVEL_10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine, as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, D3DX11CompileFromFile always fails to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the vertex/pixel shader files themselves being incompatible with the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below.

    Vertex shader:

        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        struct VertexInputType
        {
            float4 position : POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        PixelInputType LightVertexShader(VertexInputType input)
        {
            PixelInputType output;

            // Change the position vector to be 4 units for proper matrix calculations.
            input.position.w = 1.0f;

            // Calculate the position of the vertex against the world, view, and projection matrices.
            output.position = mul(input.position, worldMatrix);
            output.position = mul(output.position, viewMatrix);
            output.position = mul(output.position, projectionMatrix);

            // Store the texture coordinates for the pixel shader.
            output.tex = input.tex;

            // Calculate the normal vector against the world matrix only.
            output.normal = mul(input.normal, (float3x3)worldMatrix);

            // Normalize the normal vector.
            output.normal = normalize(output.normal);

            return output;
        }

    Pixel shader:

        Texture2D shaderTexture;
        SamplerState SampleType;

        cbuffer LightBuffer
        {
            float4 ambientColor;
            float4 diffuseColor;
            float3 lightDirection;
            float padding;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        float4 LightPixelShader(PixelInputType input) : SV_TARGET
        {
            float4 textureColor;
            float3 lightDir;
            float lightIntensity;
            float4 color;

            // Sample the pixel color from the texture at this texture coordinate location.
            textureColor = shaderTexture.Sample(SampleType, input.tex);

            // Set the default output color to the ambient light value for all pixels.
            color = ambientColor;

            // Invert the light direction for calculations.
            lightDir = -lightDirection;

            // Calculate the amount of light on this pixel.
            lightIntensity = saturate(dot(input.normal, lightDir));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color and the light intensity.
                color += (diffuseColor * lightIntensity);
            }

            // Saturate the final light color.
            color = saturate(color);

            // Multiply the texture pixel and the final diffuse color to get the final pixel color result.
            color = color * textureColor;

            return color;
        }
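    A hedged note (standard Direct3D behaviour, not taken from the question): when driving 9-level hardware through the Direct3D 11 API, shaders are compiled with the downlevel profiles "vs_4_0_level_9_3" and "ps_4_0_level_9_3" (or the _9_1 variants), not with the plain Shader Model 2/3 targets like "vs_3_0" or "ps_2_0". So passing the older profile strings to D3DX11CompileFromFile under D3D_FEATURE_LEVEL_9_3 would be expected to fail exactly as described.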


  • Adaptive Characters: AI Solution Needs a Problem

    - by Roger F. Gay
    Have sophisticated adaptive programming, will travel, so to speak. I'm part of a group that developed sophisticated learning/adaptive software for robotics. The system "thinks" via its simulator, building and adapting code on its own, and then carries out the best solution. The software can also adapt to new situations, etc. http://mensnewsdaily.com/2007/05/16/robobusiness-robots-with-imagination/ It's easy to imagine using it with automated game characters that adapt to the player's moves and style; the easiest example would be fighting. The more the simulated fighter fights with the human player, the more it learns to counter that player's fighting skills. But there should be more. Anyone have any ideas as to how adaptive characters might be interesting in games?


  • Pixmaps, ByteBuffers, and Textures....Oh my

    - by odaymichael
    My ultimate goal is to take a specific region of the screen and redraw it somewhere else. For example, take a square from the upper left-hand corner of the screen and redraw it in the lower right-hand corner, so that it is basically a copy of that screen section; kind of like a minimap, but at the same scale as the original. I have looked into pixmaps and byte buffers, and also maybe copying that region from the back buffer somehow. Wondering the best way to go about this. Any help is appreciated. I am using OpenGL ES and libgdx, for what it's worth.
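    A hedged sketch of one libgdx approach (assuming an older ScreenUtils API where getFrameBufferTexture(x, y, w, h) is available; batch is assumed to be your SpriteBatch and the 128x128 region is illustrative): copy the region out of the framebuffer after the scene is drawn, then draw it back at the new position.

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;
        import com.badlogic.gdx.utils.ScreenUtils;

        // After the scene has been drawn: grab a 128x128 region from the top-left
        // corner (the framebuffer origin is bottom-left, hence the height math).
        int w = Gdx.graphics.getWidth();
        int h = Gdx.graphics.getHeight();
        TextureRegion copy = ScreenUtils.getFrameBufferTexture(0, h - 128, 128, 128);

        // ...and draw it again in the bottom-right corner.
        batch.begin();
        batch.draw(copy, w - 128, 0);
        batch.end();

        // Note: each call allocates a fresh backing texture, so dispose it
        // (copy.getTexture().dispose()) when it is no longer needed.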


  • What are cons of usage only non-member functions and POD?

    - by Miro
    I'm creating my own game engine. I've read these articles and this question about DOD, where it was written not to use member functions and classes. I've also heard some criticism of that idea. I could write it using member functions or non-member functions; it would be similar. So what are the benefits/cons of each approach? As the project grows, does either approach give clearer, more manageable code? With POD & non-member functions I don't have to make struct members public: I can still use an object id outside of the engine, like OpenGL does with all its stuff, so it's not about encapsulation.

    POD: plain old data. DOD: data-oriented design.


  • Ray Intersecting Plane Formula in C++/DirectX

    - by user4585
    I'm developing a picking system that will use rays that intersect volumes, and I'm having trouble with ray intersection versus a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've tried to understand various sources and get hung up on some of the variables used within their explanations. Here is a snippet of my code:

        bool Picking()
        {
            D3DXVECTOR3 vec;
            D3DXVECTOR3 vRayDir;
            D3DXVECTOR3 vRayOrig;
            D3DXVECTOR3 vROO, vROD; // ray origin and ray direction in object space
            D3DXMATRIX m;
            D3DXMATRIX mInverse;
            D3DXMATRIX worldMat;

            // Obtain projection matrix
            D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj();

            // Obtain mouse position
            D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos;

            // Get window width & height
            float w = CDirectXRenderer::GetInstance()->GetWidth();
            float h = CDirectXRenderer::GetInstance()->GetHeight();

            // Transform vector from screen to 3D space
            vec.x = (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11;
            vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22;
            vec.z = 1.0f;

            // Create a view inverse matrix
            D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View());

            // Determine our ray's direction
            vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31;
            vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32;
            vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33;

            // Determine our ray's origin
            vRayOrig.x = m._41;
            vRayOrig.y = m._42;
            vRayOrig.z = m._43;

            D3DXMatrixIdentity(&worldMat);
            //worldMat = aliveActors[0]->GetTrans();
            D3DXMatrixInverse(&mInverse, NULL, &worldMat);

            D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse);
            D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse);
            D3DXVec3Normalize(&vROD, &vROD);

    When using this code I'm able to detect a ray intersection via a sphere, but I have questions when determining an intersection via a plane. First off, should I be using my vRayOrig & vRayDir variables for the plane intersection tests, or should I be using the new vectors that are created for use in object space? When looking at a site like this, for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html I'm curious as to what D is in the equation AX + BY + CZ + D = 0, and how it factors into determining a plane intersection. Any help will be appreciated, thanks.
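    For reference, the standard math (not specific to the article linked above): in Ax + By + Cz + D = 0, the vector (A, B, C) is the plane normal N, and D = -N . P0 for any point P0 on the plane, i.e. D is the plane's signed offset from the origin along N. Substituting the ray O + t*Dir into the plane equation and solving for t gives t = -(N . O + D) / (N . Dir). As for which vectors to use: the object-space pair (vROO/vROD) is for testing geometry defined in an object's local space; a plane defined in world space should be tested against vRayOrig/vRayDir. A minimal sketch in Java (names are illustrative):

        // Ray/plane intersection: plane is N.P + D = 0, ray is O + t*dir.
        // Returns t >= 0 on a hit, or -1 if the ray is parallel or points away.
        static double intersectRayPlane(double[] n, double d, double[] o, double[] dir) {
            double denom = n[0]*dir[0] + n[1]*dir[1] + n[2]*dir[2]; // N . dir
            if (Math.abs(denom) < 1e-9) return -1;                  // ray parallel to plane
            double t = -(n[0]*o[0] + n[1]*o[1] + n[2]*o[2] + d) / denom;
            return (t >= 0) ? t : -1;
        }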


  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    Hi folks, I'm writing a shell script (which will become a cron job) that will: 1. dump my production database, and 2. import the dump into my development database. Between steps 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this:

        #!/bin/bash
        time=`date '+%Y-%m-%d'`

        # 1. export (dump) the current production database
        # (note: pg_dump takes the database name as its argument; -U would specify a username)
        pg_dump production_db_name > /backup/dir/backup-${time}.sql

        # missing step, filled in here as a hedged sketch: drop and re-create the
        # development database so it can be re-populated (dropdb/createdb are the
        # standard PostgreSQL client tools; add -U/owner flags as your setup requires)
        dropdb development_db_name
        createdb development_db_name

        # 2. load the backup into the development database
        psql development_db_name < /backup/dir/backup-${time}.sql

    Many thanks in advance! Martin


  • Making a game with responsive resolution

    - by alexandervrs
    I am making a game, however I wish for it to be resolution agnostic. My target resolution, i.e. where things look as intended, is 1600 x 900. My ideas are:

        - Make the HUD stay fixed to the sides no matter what the resolution; use one size of HUD graphics under a certain resolution and another over a certain large one.
        - Use large HD sprites/backgrounds whose dimensions are a power of 2, so they scale nicely.
        - Use the player's native resolution.
        - Scale the game area (not the HUD) to fit (zooming in somewhat and cropping the sides of the game area if necessary for widescreen, no stretching), but always fill the screen.
        - Have a min and max resolution limit for small and very large displays, where you just change the resolution(?) or scale up/down to fit.

    What I am a bit confused about, though, is what math formula I would use to scale the game area correctly based on the resolution, no matter the aspect ratio, so it fully fits a squarer screen and clips the sides a little on widescreen. Pseudocode would help as well. :)
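    A hedged sketch of the usual "scale to fill, crop the overflow" math (plain arithmetic, no particular engine assumed; the 1600x900 design size comes from the question, everything else is illustrative): take the larger of the two axis ratios so the design area always covers the screen, then center the overflow.

        // Design (target) resolution where everything "looks as intended".
        static final float DESIGN_W = 1600f;
        static final float DESIGN_H = 900f;

        // Scale-to-fill: the larger ratio guarantees no empty borders; whatever
        // overflows on the other axis is cropped equally on both sides.
        static float[] fillScale(float screenW, float screenH) {
            float scale = Math.max(screenW / DESIGN_W, screenH / DESIGN_H);
            float offsetX = (screenW - DESIGN_W * scale) / 2f; // <= 0: horizontal crop
            float offsetY = (screenH - DESIGN_H * scale) / 2f; // <= 0: vertical crop
            return new float[] { scale, offsetX, offsetY };
        }

    Using Math.min instead would letterbox (everything visible, with bars), which is the common fallback at the min/max resolution limits mentioned above.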


  • Setting Krypton Light to Screen Pixels

    - by Adam Jerrett
    So a few days back, I started playing around with Krypton XNA for 2D lighting in my game. I noticed that, in general, spawning a light at (0,0) with Krypton causes the light to appear in pretty much the centre of the game screen. Is there any way to change this, so that a Krypton light spawned at [0,0] would appear at the top left of the screen and thus follow standard screen coordinates for position? I ask because I'm currently working on my game, where my spawn point is [512,512]. With hard-coded values, the closest I've got to the light being central to this point is the vector position [12,-20], which makes no sense and is impossible to derive mathematically if I want the light to move with the camera (the position [480,512] maps roughly to [10,-20]). So, is there any way to "normalise" the Krypton lights to use standard screen coordinates? If you can, play around with the demo from the site and see if you can find anything out about it. Documentation on the engine is rather scarce, so it's difficult to find anything relevant to my "pixel-perfect" need. It might also just be something in the code with regards to the matrices that I'm not fully understanding. Any help would be useful. Thanks.


  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first-person camera using Bullet: it's a rigid body with a capsule shape. I've only been using Bullet for a few days, and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it, and it collides perfectly with the world. The only problem is that the Y-value moves freely, which I temporarily solved by setting the Y-value of the translation vector to zero before the body is moved. This works for all cases except when falling from a height. When the body drops off a tall object, you can still glide around, since the translation vector's Y-value is being set to zero, until you stop moving and fall to the ground (the velocity is only set when moving). So to solve this I would like to try casting a ray down from the body to determine the Y-value of the world, checking the difference between that value and the Y-value of the camera body, and disabling or slowing down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y-value of the world where it struck. I've implemented this callback:

        struct AllRayResultCallback : public btCollisionWorld::RayResultCallback
        {
            AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld)
                : m_rayFromWorld(rayFromWorld),
                  m_rayToWorld(rayToWorld),
                  m_closestHitFraction(1.0) {}

            btVector3 m_rayFromWorld;
            btVector3 m_rayToWorld;
            btVector3 m_hitNormalWorld;
            btVector3 m_hitPointWorld;
            float m_closestHitFraction;

            virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult,
                                             bool normalInWorldSpace)
            {
                if (rayResult.m_hitFraction < m_closestHitFraction)
                    m_closestHitFraction = rayResult.m_hitFraction;

                m_collisionObject = rayResult.m_collisionObject;
                if (normalInWorldSpace)
                {
                    m_hitNormalWorld = rayResult.m_hitNormalLocal;
                }
                else
                {
                    m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis()
                                       * rayResult.m_hitNormalLocal;
                }
                m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld, m_closestHitFraction);
                return 1.0f;
            }
        };

    And in the movement function, I have this code:

        btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position
        btVector3 to(pos.x, 0, pos.z);              // not sure if 0 is correct for Y

        AllRayResultCallback callback(from, to);
        Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback);

    So I have the callback.m_hitPointWorld vector, which seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to find even an example. An example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as a rendering engine, and casting a ray down is quite straightforward with that, but I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.
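    A hedged pointer: Bullet ships a ready-made btCollisionWorld::ClosestRayResultCallback, constructed from the same from/to vectors, which fills in m_hitPointWorld and m_hitNormalWorld for the nearest hit (check hasHit() first), so the custom callback above may be unnecessary; the ground height is then callback.m_hitPointWorld.y(). Note also that aiming the ray at Y = 0 only works if the floor never lies below Y = 0; casting down by a fixed distance from the body (e.g. to pos.y - 1000) is the usual assumption. One likely reason the hit point tracks the camera: the ray starts 1000 units above the capsule and hits the capsule itself first, so the camera body should be excluded from the test (ClosestRayResultCallback can be filtered, or start the ray just below the capsule).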


  • Implementing a wheeled character controller

    - by Lazlo
    I'm trying to implement Boxycraft's character controller in XNA (with Farseer), as Bryan Dysmas did (minus the jumping part, yet). My current implementation seems to sometimes glitch between two parallel planes, and it fails to climb 45-degree slopes (YouTube videos in links; the plane glitch is subtle). How can I fix it? From the textual description, I seem to be doing it right. Here is my implementation (it seems like a huge wall of text, but it's easy to read; I wish I could simplify and isolate the problem more, but I can't):

        public Body TorsoBody { get; private set; }
        public PolygonShape TorsoShape { get; private set; }
        public Body LegsBody { get; private set; }
        public Shape LegsShape { get; private set; }
        public RevoluteJoint Hips { get; private set; }
        public FixedAngleJoint FixedAngleJoint { get; private set; }
        public AngleJoint AngleJoint { get; private set; }

        ...

        this.TorsoBody = BodyFactory.CreateRectangle(this.World, 1, 1.5f, 1);
        this.TorsoShape = new PolygonShape(1);
        this.TorsoShape.SetAsBox(0.5f, 0.75f);
        this.TorsoBody.CreateFixture(this.TorsoShape);
        this.TorsoBody.IsStatic = false;

        this.LegsBody = BodyFactory.CreateCircle(this.World, 0.5f, 1);
        this.LegsShape = new CircleShape(0.5f, 1);
        this.LegsBody.CreateFixture(this.LegsShape);
        this.LegsBody.Position -= 0.75f * Vector2.UnitY;
        this.LegsBody.IsStatic = false;

        this.Hips = JointFactory.CreateRevoluteJoint(this.TorsoBody, this.LegsBody, Vector2.Zero);
        this.Hips.MotorEnabled = true;
        this.AngleJoint = new AngleJoint(this.TorsoBody, this.LegsBody);
        this.FixedAngleJoint = new FixedAngleJoint(this.TorsoBody);
        this.Hips.MaxMotorTorque = float.PositiveInfinity;
        this.World.AddJoint(this.Hips);
        this.World.AddJoint(this.AngleJoint);
        this.World.AddJoint(this.FixedAngleJoint);

        ...

        public void Move(float m) // -1, 0, +1
        {
            this.Hips.MotorSpeed = 0.5f * m;
        }


  • How to add a sound that an enemy AI can hear?

    - by Chris
    Given:

        - a 2D top-down game
        - tiles are stored in a plain 2D array
        - every tile has a dampen property (so bricks might be -50 dB, air might be -1)

    From this I want to add it so a sound is generated at point x1, y1 and "ripples out". The image below kind of outlines it better. Obviously the end goal is that the AI enemy can "hear" the sound, but if a wall is blocking it, the sound doesn't travel as far. Red is the wall, which has a dampen of 50 dB. I think by the 3rd game tick I am confusing my maths. What would be the best way of implementing this?
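    A hedged sketch of one standard approach (a breadth-first flood fill; it uses positive attenuation magnitudes, 4-way neighbours for brevity, and illustrative names, none of which come from the question): spread outward from the source, subtracting each tile's dampening, and stop expanding once the level drops to zero. An enemy "hears" the sound if its cell ends up with a positive level.

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Flood-fill sound propagation over a tile grid.
        // dampen[x][y] holds each tile's attenuation (e.g. 50 for brick, 1 for air).
        static float[][] propagate(float[][] dampen, int sx, int sy, float loudness) {
            int w = dampen.length, h = dampen[0].length;
            float[][] level = new float[w][h];
            Queue<int[]> open = new ArrayDeque<>();
            level[sx][sy] = loudness;
            open.add(new int[] { sx, sy });
            int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
            while (!open.isEmpty()) {
                int[] c = open.poll();
                for (int[] d : dirs) {
                    int nx = c[0] + d[0], ny = c[1] + d[1];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    // Sound entering the neighbour loses that tile's dampening.
                    float next = level[c[0]][c[1]] - dampen[nx][ny];
                    if (next > level[nx][ny]) {     // found a louder path to this tile
                        level[nx][ny] = next;
                        if (next > 0) open.add(new int[] { nx, ny });
                    }
                }
            }
            return level;
        }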


  • Turn-based Client-Server Card Game - Unicast (TCP) or Multicast (UDP)

    - by LDM91
    I am currently planning to make a card game project where the clients will communicate with the server in a turn-based and synchronous manner, using messages sent over sockets. The problem I have is how to handle the following scenario (a client takes its turn and sends its action to the server):

        1. The client sends a message telling the server its move for the turn (e.g. plays the card 5 from its hand, which needs to be placed onto the table).
        2. The server receives the message and updates the game state (the server holds all game state).
        3. The server iterates through a list of connected clients and sends a message to tell all of them of the change in state.
        4. The clients all refresh to display the new state.

    This is all based on using TCP, and looking at it now it seems a bit like the Observer pattern. The reason this seems to be an issue to me is that this message doesn't seem to be point-to-point like the others, as I want to send it to all the clients, and it doesn't seem very efficient sending the same message in that way. I was thinking about using multicasting with UDP, as then I could send the message to all the clients at once; however, wouldn't this mean that the clients would in theory be able to message each other? There is of course the synchronous aspect as well, though this could be put on top of UDP, I guess. Basically, I would like to know what would be good practice, as this project is really all about learning, and even though it won't be big enough to encounter performance issues from this, I would like to consider them anyway. However, please note I am not interested in using message-oriented middleware as a solution (I have experience with using MOM, and I'm interested in considering other options excluding MOM, if TCP sockets is a bad idea!).
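    For scale: the per-client "broadcast" in step 3 is just a loop over the connected sockets, which for a card game's handful of clients is entirely adequate over TCP (a hedged sketch; names are illustrative and error/disconnect handling is omitted):

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.util.List;

        // Send one state-update message to every connected client over TCP.
        static void broadcast(List<Socket> clients, byte[] message) throws IOException {
            for (Socket client : clients) {
                OutputStream out = client.getOutputStream();
                out.write(message);
                out.flush();
            }
        }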


  • Are there existing FOSS component-based frameworks?

    - by Tesserex
    The component-based game programming paradigm is becoming much more popular. I was wondering: are there any projects out there that offer a reusable component framework? In any language, I guess; I don't care about that. It's not for my own project, I'm just curious. Specifically, I mean: are there projects that include a base Entity class, a base Component class, and maybe some standard components? It would then be much easier to start a game if you didn't want to reinvent the wheel, or if you wanted a GraphicsComponent that does sprites with Direct3D but figure it's already been done a dozen times. A quick Googling turns up Rusher. Has anyone heard of this / does anyone use it? If there are no popular ones, then why not? Is it too difficult to make something like this reusable, so that it needs heavy customization? In my own implementation I found a lot of boilerplate that could be shoved into a framework.


  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map-maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I've already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst-case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that.

    EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position, like this:

        private HashMap<Integer, Integer>[][]
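    For reference, a minimal run-length encoding sketch over one row of tile ids (illustrative names, not from the question). Each run is stored as a (count, value) pair, which is also the unit you would re-encode when a single tile in the row changes, so an edit touches only the runs overlapping the changed tile, O(width) in the worst case.

        import java.util.ArrayList;
        import java.util.List;

        // Encode one row of tile ids as (count, value) pairs.
        static List<int[]> rleEncode(int[] row) {
            List<int[]> runs = new ArrayList<>();
            int count = 1;
            for (int i = 1; i <= row.length; i++) {
                if (i < row.length && row[i] == row[i - 1]) {
                    count++;
                } else {
                    runs.add(new int[] { count, row[i - 1] });
                    count = 1;
                }
            }
            return runs;
        }

        // Decode back to the flat row.
        static int[] rleDecode(List<int[]> runs, int width) {
            int[] row = new int[width];
            int pos = 0;
            for (int[] run : runs)
                for (int i = 0; i < run[0]; i++)
                    row[pos++] = run[1];
            return row;
        }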


  • What is the practical use of IBOs / degenerate vertex in OpenGL?

    - by 0xFAIL
    Vertices in 3D models CAN get cut in the process of optimizing 3D geometry (degenerate vertices) by 3D graphics software (Blender, ...) when exporting, because they aren't needed when a vertex is reused for multiple triangles. (In the current case, 3D data is exported from Blender as .ply and read by a simple application that displays the 3D model.) Every vertex has a few attributes like position, color, normal, tangent, ... But the data for each vertex that is cut through vertex sharing is lost and is missing in the vertex shader. Modern shader techniques like bump or normal mapping require normals/tangents per vertex, which are also cut. To use complex shader techniques, must IBOs not be used? Or is there a way to use IBOs and retain the per-vertex data that was originally lost?


  • Constrained/penalized distance function

    - by sigma.z.1980
    Assume a character is located on an n-by-n grid and has to reach a certain entry on that grid. Its current position is (x1,y1). Also on the same grid is an enemy with coordinates (x2,y2). Each step, the algorithm randomly generates new candidate locations for the hero (if there are k candidates, then there is a k-by-2 matrix of new potential locations). What I need is some distance objective function to compare the candidates. I'm currently using d1 - c * d2, where d1 is the distance to the objective (measured in number of pixels on each axis), d2 is the distance to the enemy, and c is some coefficient (this is very much like a set-up for a Lagrangian). It's not working very well, though. I'd be quite keen to learn what constrained distance functions are used for similar cases. Any suggestions are very much appreciated.
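    A hedged sketch of one common alternative (a penalty formulation rather than the Lagrangian-style subtraction; the danger radius r and the "lower is better" convention are assumptions, not from the question): only penalize proximity to the enemy inside some radius, so the enemy term cannot dominate candidates that are already far away from it.

        // Score a candidate cell: d1 = distance to objective, d2 = distance to enemy.
        // The enemy contributes only when closer than the danger radius r.
        static double score(double d1, double d2, double c, double r) {
            return d1 + c * Math.max(0.0, r - d2); // lower scores are better
        }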


  • Projectiles in tile mapped turn-based tactics game?

    - by Petteri Hietavirta
    I am planning to make a Laser Squad clone, and I think I have most of the aspects covered. But the major headache is the projectiles being shot or thrown. The easy way would be to figure out the probability of a hit and just mark miss/hit. But I want the projectile to be able to hit something else eventually (collateral damage!). Currently everything is a flat 2D tile map, and there would be full-height (wall, door) and half-height (desk, chair, window) obstacles. My idea is to draw an imaginary line from the shooter to the target and add some horizontal & vertical error based on the player's skills. Then I would trace the modified path until it hits something. This is basically what the original Laser Squad seems to do. Can you recommend any algorithms or other approaches for this?
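    A hedged sketch of the trace step (plain Bresenham line walking over the tile grid; names are illustrative): perturb the target tile by the skill-based error first, then visit tiles along the line until something blocks the shot. Half-height obstacles could block only with some probability instead of always.

        // Walk the tiles from (x0,y0) toward (x1,y1); return the tile the shot hits.
        // blocked[x][y] would come from the map (wall, door, desk, ...).
        static int[] traceShot(boolean[][] blocked, int x0, int y0, int x1, int y1) {
            int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
            int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
            int err = dx - dy;
            while (true) {
                if (blocked[x0][y0]) return new int[] { x0, y0 };      // hit an obstacle
                if (x0 == x1 && y0 == y1) return new int[] { x0, y0 }; // reached target tile
                int e2 = 2 * err;
                if (e2 > -dy) { err -= dy; x0 += sx; }
                if (e2 <  dx) { err += dx; y0 += sy; }
            }
        }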


  • Designing Videogame Character Parodies [duplicate]

    - by David Dimalanta
    This question already has an answer here: Is it legal to add a cameo appearance of a known video game character in my game? Is it okay to make a playable character for a videogame despite its resemblance to an existing one? For example, I'm making a 3rd-person action-platform game, and I have to make a character design resembling Megaman, but not exactly the same as him, with slight alterations in color, details, and facial features.


  • Creating a frozen bubble clone

    - by Vaughan Hilts
    This photo illustrates the environment: http://i.imgur.com/V4wbp.png I'll shoot the cannon, it'll bounce off the wall, and it's SUPPOSED to stick to the bubble. It does at pretty much every other angle. The problem is always reproduced here, when the ball bounces off the wall into those bubbles. It also exists in other cases, but I'm not sure what triggers it. What actually happens: the ball will sometimes set to the wrong cell, and my "dropping" code will detect it as a loner and drop it off the stage. There are many implementations of Frozen Bubble on the web, but I can't for the life of me find a good explanation of how the algorithm for the "bubble sticking" works. I see this: http://www.wikiflashed.com/wiki/BubbleBobble https://frozenbubblexna.svn.codeplex.com/svn/FrozenBubble/ But I can't figure out the algorithms... could anyone explain the general idea behind getting the balls to stick? Code in question:

        // Construct our bounding rectangle for use
        var nX = currentBall.x + ballvX * gameTime;
        var nY = currentBall.y - ballvY * gameTime;
        var movingRect = new BoundingRectangle(nX, nY, 32, 32);
        var able = false;

        // Iterate over the cells and draw our bubbles
        for (var x = 0; x < 8; x++) {
            for (var y = 0; y < 12; y++) {
                // Get the bubble at this layout
                var bubble = bubbleLayout[x][y];
                var rowHeight = 27;

                // If this slot isn't empty, draw
                if (bubble != null) {
                    var bx = 0, by = 0;
                    if (y % 2 == 0) {
                        bx = x * 32 + 270;
                        by = y * 32 + 45;
                    } else {
                        bx = x * 32 + 270 + 16;
                        by = y * 32 + 45;
                    }

                    // Check
                    var targetBox = new BoundingRectangle(bx, by, 32, 32);
                    if (targetBox.intersects(movingRect)) {
                        able = true;
                    }
                }
            }
        }

        cellY = Math.round((currentBall.y - 45) / 32);
        if (cellY % 2 == 0)
            cellX = Math.round((currentBall.x - 270) / 32);
        else
            cellX = Math.round((currentBall.x - 270 - 16) / 32);

    Any ideas are very much welcome. Things I've tried:

        - Flooring and ceiling values
        - Changing the wall bounce to a lower value
        - Slowing down the ball

    None of these seem to affect it. Is there something in my math I'm not getting?
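    A hedged sketch of the snapping idea most clones seem to use (a hypothetical helper, not taken from the linked sources): instead of rounding the moving ball's own position, take the occupied cell it collided with and pick the empty neighbour cell whose centre is closest to the ball. That can never select an isolated cell, so the "loner" drop check stops misfiring. The neighbour offsets below assume the grid layout from the drawing code above (odd rows shifted right by half a cell).

        // Given the grid cell (hitX, hitY) of the bubble the shot collided with,
        // choose the empty neighbour cell whose centre is nearest to the ball.
        static int[] nearestFreeNeighbour(Integer[][] layout, int hitX, int hitY,
                                          float ballX, float ballY) {
            // Neighbour offsets differ between even and odd rows on a hex-style grid.
            int[][] even = { {-1, 0}, {1, 0}, {-1, -1}, {0, -1}, {-1, 1}, {0, 1} };
            int[][] odd  = { {-1, 0}, {1, 0}, {0, -1}, {1, -1}, {0, 1}, {1, 1} };
            int[][] offsets = (hitY % 2 == 0) ? even : odd;
            int[] best = null;
            float bestDist = Float.MAX_VALUE;
            for (int[] o : offsets) {
                int cx = hitX + o[0], cy = hitY + o[1];
                if (cx < 0 || cx >= 8 || cy < 0 || cy >= 12) continue;
                if (layout[cx][cy] != null) continue; // already occupied
                // Centre of the candidate cell, mirroring the drawing code above.
                float px = cx * 32 + 270 + ((cy % 2 == 0) ? 0 : 16);
                float py = cy * 32 + 45;
                float dx = ballX - px, dy = ballY - py;
                float d = dx * dx + dy * dy;
                if (d < bestDist) { bestDist = d; best = new int[] { cx, cy }; }
            }
            return best; // null means no free neighbour (shouldn't happen mid-board)
        }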


  • How to transform mesh components?

    - by Lea Hayes
    I am attempting to transform the components of a mesh directly, using a 4x4 matrix. This is working for the vertex positions, but it is not working for the normals (and probably not the tangents either). Here is what I have:

        // Transform vertex positions - Works like a charm!
        vertices = mesh.vertices;
        for (int i = 0; i < vertices.Length; ++i)
            vertices[i] = transform.MultiplyPoint(vertices[i]);

        // Does not work, lighting is messed up on mesh
        normals = mesh.normals;
        for (int i = 0; i < normals.Length; ++i)
            normals[i] = transform.MultiplyVector(normals[i]);

    Note: the input matrix converts from local to world space and is needed to combine multiple meshes together.
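    For reference (standard math, hedged since the exact matrices in play aren't shown): normals transform with the inverse transpose of the matrix, not the matrix itself, so applying the raw 3x3 part (which is what a MultiplyVector-style call does) is only correct while the transform contains rotation and uniform scale. With non-uniform scale the normals skew away from the surface, and in any case they should be re-normalized after the multiply, since any scale factor changes their length.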


  • Apply bone tranforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that only contain some meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw them correctly, I have to apply the bone transformations manually on each draw, for each mesh and effect, as can be seen here. So there are two questions:

        1. Is there an option during import that applies the bone transformations to all vertices, so that I don't have to do this during the draw call?
        2. Is there an option during import that merges all vertices into one vertex and index buffer, allowing me to draw the whole model with just one call?

    I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available, or another possibility I missed. The aim is to speed up rendering a little, especially by using instancing. So having one vertex buffer to draw at one time would be pretty nice.


  • 2D vector graphic html5 framework

    - by Yury
    I'm trying to find an HTML5 game framework that meets the following criteria:

        1. Really good performance.
        2. Good support for vector graphics (objects which contain canvas elements: lines, rects, bezier curves, etc.).
        3. Easy porting to mobile.

    Optional: a physics engine. I found: 1) Pixi.js, which looks really good, but I didn't find any info about "vector objects" support; 2) "vector objects" support in paper.js. I need something like these: http://paperjs.org/examples/chain/ and http://paperjs.org/examples/path-intersections/. But it looks like paper.js doesn't perform as well as pixi.js, and it is not a game engine. Is there any good framework that meets these requirements? P.S. I found a similar question here: Which free HTML5-based game engine meets these requirements? But that was a long time ago; a lot of new things have been created since 2011.

