Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

  • Attributes and Behaviours in game object design

    - by Brukwa
    Recently I read some interesting slides about game object design by Marcin Chady, Theory and Practice of the Game Object Component Architecture. I prototyped a quick sample that exercises the whole Attribute/Behaviour idea with some sample data. Now I've hit a small problem after adding a RenderingSystem to my prototype application. I created an object with a RenderBehaviour, which listens for messages (an OnMessage function) such as MovedObject in order to mark objects as invalid, and in the OnUpdate pass inserts a new renderable object into the renderer queue. I've noticed that rendering updates should be the last thing done in a frame, and this makes the RenderBehaviour depend on every other Behaviour that changes object position (e.g. the PhysicsSystem and PhysicsBehaviour). I'm not even sure I'm doing this the way it should be done. Do you have any clues that might put me on the right track?
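
    One common way to break that dependency (a sketch of the general idea, not something the slides prescribe) is to give every Behaviour an explicit update phase and have the update loop run the phases in order, so anything that moves objects always runs before the RenderBehaviour within a frame. All names below are hypothetical:

        #include <algorithm>
        #include <vector>

        // Lower phases update earlier in the frame.
        enum class UpdatePhase { Input = 0, Physics = 1, Logic = 2, Render = 3 };

        struct Behaviour {
            virtual ~Behaviour() = default;
            virtual UpdatePhase phase() const = 0;
            virtual void onUpdate(float dt) = 0;
        };

        struct RenderBehaviour : Behaviour {
            UpdatePhase phase() const override { return UpdatePhase::Render; }
            void onUpdate(float /*dt*/) override { /* push renderables into the queue */ }
        };

        void updateAll(std::vector<Behaviour*>& behaviours, float dt) {
            // Stable sort keeps registration order within a phase.
            std::stable_sort(behaviours.begin(), behaviours.end(),
                [](const Behaviour* a, const Behaviour* b) { return a->phase() < b->phase(); });
            for (Behaviour* b : behaviours) b->onUpdate(dt);
        }

    The MovedObject message then only marks renderables dirty, and the render phase is guaranteed to see the frame's final positions.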

  • How can I create a VBO of a type determined at runtime?

    - by lapin
    I've written a Vbo template class to work with OpenGL. I'd like to set the vertex type from a config file at runtime. For example, given

        <vbo type="bump_vt" ... />

    I'd like to end up with

        Vbo* pVbo = new Vbo(bump_vt, ...);

    Is there some way I can do this without a large if-else block such as:

        if( sType.compare("bump_vt") == 0 )
            Vbo* pVbo = new Vbo(bump_vt, ...);
        else if ...

    I'm writing for multiple platforms in C++.
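
    A common answer, sketched under the assumption that the concrete vertex types are known at compile time: template parameters can't be chosen at runtime, so give the VBOs a small non-template base class and register one factory per type string in a map. VboBase and the vertex structs below are illustrative stand-ins for the question's real types:

        #include <functional>
        #include <map>
        #include <memory>
        #include <stdexcept>
        #include <string>

        struct VboBase { virtual ~VboBase() = default; };
        template <typename V> struct Vbo : VboBase { /* ... */ };

        struct bump_vt  { /* position, normal, tangent, ... */ };
        struct plain_vt { /* position, uv, ... */ };

        using VboFactory = std::function<std::unique_ptr<VboBase>()>;

        std::map<std::string, VboFactory>& vboRegistry() {
            static std::map<std::string, VboFactory> registry = {
                { "bump_vt",  [] { return std::make_unique<Vbo<bump_vt>>(); } },
                { "plain_vt", [] { return std::make_unique<Vbo<plain_vt>>(); } },
            };
            return registry;
        }

        std::unique_ptr<VboBase> makeVbo(const std::string& type) {
            auto it = vboRegistry().find(type);
            if (it == vboRegistry().end())
                throw std::runtime_error("unknown vbo type: " + type);
            return it->second();  // invoke the registered factory
        }

    Adding a new vertex type is then one registry entry instead of another else-if branch.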

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK showing how one can attain beautiful results using just one (rather tiny) texture sent to the shader pipeline. The famous link is this one. Basically, the author states that they used just one texture and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to store a half-space normal map using only the RG channels of an RGB texture, but what about the rest of the information? Since it was proven possible, could someone please explain how it was done (the big picture, not the dirty details)? Here's the texture being used. Click to see in full size.
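
    On the normal-map half of the question: a unit-length tangent-space normal genuinely fits in two channels, because the third component is implied by unit length. A minimal sketch of the reconstruction (plain C++ for illustration; in practice this line lives in the shader):

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        // r and g are the two stored channels in [0,1]. The normal's z is
        // implied by unit length; tangent-space normals have z >= 0, which
        // resolves the sign ambiguity.
        Vec3 unpackNormal(float r, float g) {
            float x = r * 2.0f - 1.0f;
            float y = g * 2.0f - 1.0f;
            float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
            return { x, y, z };
        }

    That frees the remaining channels for the grayscale masks that drive the colour blending.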

  • How to Point sprite's direction towards Mouse or an Object [duplicate]

    - by Irfan Dahir
    This question already has an answer here: Rotating To Face a Point (1 answer)

    I need some help with rotating sprites towards the mouse. I'm currently using the Allegro 5.x library. The rotation of the sprite works, but it's constantly inaccurate: it's always a few degrees off from the mouse, to the left. Can anyone please help me with this? Thank you.

    P.S. I got help with the rotating function from here: http://www.gamefromscratch.com/post/2012/11/18/GameDev-math-recipes-Rotating-to-face-a-point.aspx - although it's JavaScript, the maths function is the same. Also, adding

        if(angle < 0)
        {
            angle = 360 - (-angle);
        }

    doesn't fix it. The code:

        #include <allegro5/allegro.h>
        #include <allegro5/allegro_image.h>
        #include <math.h>

        int main(void)
        {
            int width = 640;
            int height = 480;
            bool exit = false;
            int shipW = 0;
            int shipH = 0;

            ALLEGRO_DISPLAY *display = NULL;
            ALLEGRO_EVENT_QUEUE *event_queue = NULL;
            ALLEGRO_BITMAP *ship = NULL;

            if(!al_init())
                return -1;

            display = al_create_display(width, height);
            if(!display)
                return -1;

            al_install_keyboard();
            al_install_mouse();
            al_init_image_addon();
            al_set_new_bitmap_flags(ALLEGRO_MIN_LINEAR | ALLEGRO_MAG_LINEAR); // smoother rotation

            ship = al_load_bitmap("ship.bmp");
            shipH = al_get_bitmap_height(ship);
            shipW = al_get_bitmap_width(ship);
            int shipx = width/2 - shipW/2;
            int shipy = height/2 - shipH/2;
            int mx = width/2;
            int my = height/2;
            al_set_mouse_xy(display, mx, my);

            event_queue = al_create_event_queue();
            al_register_event_source(event_queue, al_get_mouse_event_source());
            al_register_event_source(event_queue, al_get_keyboard_event_source());
            //al_hide_mouse_cursor(display);

            float angle;
            while(!exit)
            {
                ALLEGRO_EVENT ev;
                al_wait_for_event(event_queue, &ev);
                if(ev.type == ALLEGRO_EVENT_KEY_UP)
                {
                    switch(ev.keyboard.keycode)
                    {
                        case ALLEGRO_KEY_ESCAPE: exit = true; break;
                        /*case ALLEGRO_KEY_LEFT:  degree -= 10; break;
                        case ALLEGRO_KEY_RIGHT: degree += 10; break;*/
                        case ALLEGRO_KEY_W: shipy -= 10; break;
                        case ALLEGRO_KEY_S: shipy += 10; break;
                        case ALLEGRO_KEY_A: shipx -= 10; break;
                        case ALLEGRO_KEY_D: shipx += 10; break;
                    }
                }
                else if(ev.type == ALLEGRO_EVENT_MOUSE_AXES)
                {
                    mx = ev.mouse.x;
                    my = ev.mouse.y;
                    angle = atan2(my - shipy, mx - shipx);
                }

                // al_draw_bitmap(ship, shipx, shipy, 0);
                // al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, degree * 3.142/180, 0);
                // The angle is passed straight through because Allegro works in
                // radians; multiplying by 180/3.142 made the rotation go haywire.
                al_draw_rotated_bitmap(ship, shipW/2, shipH/2, shipx, shipy, angle, 0);

                al_flip_display();
                al_clear_to_color(al_map_rgb(0,0,0));
            }

            al_destroy_bitmap(ship);
            al_destroy_event_queue(event_queue);
            al_destroy_display(display);
            return 0;
        }

    EDIT: This was marked as a duplicate by a moderator. I'd like to say that this isn't the same as that question: I'm a total beginner at game programming, I had a look at that other topic, and I had difficulty understanding it. Please understand this, thank you. Also, while printing the angle I got this screenshot: http://img34.imageshack.us/img34/7396/fzuq.jpg - which is weird, because aren't angles supposed to be 360 degrees only?
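
    Two hypotheses worth checking, offered as guesses rather than a confirmed diagnosis: the angle is only recomputed inside ALLEGRO_EVENT_MOUSE_AXES, so moving the ship with WASD leaves a stale angle until the mouse next moves; and artwork that doesn't point along +x at zero rotation adds a constant error. A sketch of the recomputed-every-frame version:

        #include <cmath>

        // Assumption: the ship art faces +x at zero rotation. If it faces up,
        // set this to pi/2 (ALLEGRO_PI / 2) instead.
        const float SPRITE_ART_OFFSET = 0.0f;

        // Call this once per frame with the *current* positions, instead of
        // only updating the angle inside mouse-move events.
        float aimAngle(float shipCenterX, float shipCenterY, float mouseX, float mouseY) {
            // atan2 returns radians in [-pi, pi]; negative values are normal
            // and al_draw_rotated_bitmap accepts them, which is why the printed
            // angles in the edit don't look like 0-360 degrees.
            return std::atan2(mouseY - shipCenterY, mouseX - shipCenterX) + SPRITE_ART_OFFSET;
        }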

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving ENTER. The problem is, after I trigger the string return (currently with ESCAPE) and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it with complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code:

        static volatile StringBuffer command = new StringBuffer();

        @Override
        public void chain(InputPoller poller) {
            this.chain = poller;
        }

        @Override
        public synchronized void poll() {
            // basic testing for modifier keys, to be used later on
            boolean shift = false, alt = false, control = false, superkey = false;

            if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT))
                shift = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU))
                alt = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL))
                control = true;
            if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA))
                superkey = true;

            while(Keyboard.next())
                if(Keyboard.getEventKeyState()) {
                    command.append(Keyboard.getEventCharacter());
                }

            if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
                System.out.println("Escape down");
                System.out.println(command.length() + " characters polled"); // works
                System.out.println(command.toString().length());            // works
                System.out.println(command.toString().charAt(4));           // works
                System.out.println(command.toString().toCharArray());       // blank line!
                System.out.println(command.toString());                     // blank line!
                Framework.disableConsole();
            }

            // TODO: Add command construction and console management after that
        }

    Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, MATE desktop environment. Many thanks.

  • What are some ways to texture map a terrain?

    - by ApocKalipsS
    I'm working with XNA on a 3D game, and I'm trying to build a proper, nice-looking environment. I followed a tutorial to create a terrain from a heightmap. To texture it, I just apply a grass texture and tile it a number of times. What I want is really realistic texturing that can also be generated automatically (for example, if I generate the terrain itself with Perlin noise). I've already learned about multi-texturing and loading a map file with different colors for different textures, but I don't think that's very efficient; for instance, on cliffs or very steep areas it tiles the texture badly, since it's projected from the top. (I also don't know how I'd draw roads or dirt paths with that.) I'm looking for an efficient solution for realistically texture-mapping procedurally generated terrain.
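
    Two techniques that usually come up here, sketched rather than prescribed: derive blend weights procedurally from height and slope instead of a painted map, and use triplanar mapping on steep faces so cliffs sample the texture along their own plane rather than from the top. A toy weight function, with made-up thresholds:

        #include <algorithm>

        struct Weights { float grass, rock, snow; };

        // Per-vertex (or per-pixel) blend weights from height and slope.
        // slope = 1 - normalY: 0 on flat ground, 1 on a vertical cliff.
        // height is normalized to [0,1]; every threshold is a tuning value.
        Weights blendWeights(float height, float normalY) {
            float slope = 1.0f - normalY;
            Weights w;
            w.rock  = std::min(1.0f, slope * 3.0f);             // steep -> rock
            w.snow  = (height > 0.7f) ? (1.0f - w.rock) : 0.0f; // high and flat -> snow
            w.grass = 1.0f - w.rock - w.snow;                   // the rest -> grass
            return w;
        }

    Roads and dirt paths could then come from one extra weight splatted along a spline, rather than from the base color map.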

  • Edge flicker when moving Camera (2D)

    - by Matthias Reisner
    I have an orthographic camera, a fixed landscape texture, and a texture for a movable object. If the object moves to the right, the camera moves with it. I also draw a score text that should have a fixed position on the screen, so its position is updated whenever the camera's position is updated, making it look fixed on screen. But when I do that, I get some edge flickering on the text object. I'm using SpriteBatch. Is there another approach to implementing an object with a fixed position on the screen?
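
    Two usual fixes, offered as likely suspects rather than certainties: draw the HUD in its own SpriteBatch pass with no camera transform, so it lives purely in screen space and never inherits the camera's sub-pixel offset; and snap the camera position to whole pixels before building the view transform, since sub-pixel offsets are a classic cause of shimmering text edges. The snapping is trivial (generic C++ sketch; the same idea applies to any SpriteBatch-style API):

        #include <cmath>

        // Sub-pixel camera positions make text edges shimmer as the camera
        // moves; round to whole pixels before building the view transform.
        float snapToPixel(float v) { return std::floor(v + 0.5f); }

        // e.g. viewX = snapToPixel(cameraX); viewY = snapToPixel(cameraY);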

  • Generating random tunnels

    - by IVlad
    What methods could we use to generate a random tunnel, similar to the one in this classic helicopter game? Other than being smooth, allowing you to navigate through it, and looking as natural as possible (not too symmetric but not overly distorted either), it should also:

    - Most importantly, be infinite and allow me to control its thickness in time - make it narrower or wider as I see fit, when I see fit;
    - Ideally, be possible to generate efficiently with smooth curves, not rectangles as in the above game;
    - Let me know in advance what its bounds are, so I can detect collisions and generate powerups inside the tunnel;
    - Any other properties that let you have more control over it or offer optimization possibilities are welcome.

    Note: I'm not asking which is best or what that game uses, which could spark extended discussion and would be subjective; I'm just asking for some methods that others know about, have used before, or even think might work. That is all, I can take it from there. Also asked on Stack Overflow, where someone suggested I should ask here too. I think it fits in both places, since it's as much an algorithm question as it is a gamedev question, IMO. One possible construction is sketched below.
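
    A minimal sketch of one such method, under the stated requirements: the tunnel is an infinite stream of slices, each with a centre and a half-width, built from cosine-interpolated random keypoints. Bounds are exact (centre plus or minus half-width), and the width can be retargeted at any time. All constants are tuning values:

        #include <cmath>
        #include <cstdlib>

        struct Slice { float center, halfWidth; };

        class Tunnel {
            static constexpr int SEGMENT = 64;   // slices between keypoints
            float from_ = 0.0f, to_ = 0.0f;      // current keypoint pair
            float halfWidth_ = 60.0f, target_ = 60.0f;
            int i_ = 0;

            static float randomCenter() {
                return (std::rand() / (float)RAND_MAX - 0.5f) * 200.0f;
            }
        public:
            void setTargetHalfWidth(float w) { target_ = w; } // narrower/wider on demand

            Slice next() {
                if (i_ == SEGMENT) { from_ = to_; to_ = randomCenter(); i_ = 0; }
                float t = i_++ / (float)SEGMENT;
                float s = (1.0f - std::cos(t * 3.14159265f)) * 0.5f; // cosine ease
                halfWidth_ += (target_ - halfWidth_) * 0.05f;        // smooth width change
                return { from_ + (to_ - from_) * s, halfWidth_ };
            }
        };

        // A point at slice s is inside iff |y - s.center| < s.halfWidth, which
        // gives exact bounds for collision tests and powerup placement.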

  • What is the correct way to implement hit detection with non-rectangular sprites?

    - by hogni89
    What is the correct way to implement hit or touch detection for non-rectangular sprites in Cocos2d? I am working on a jigsaw puzzle, so our sprites have some odd shapes (jigsaw puzzle pieces). As of now, we have implemented the "detection" this way:

        - (void)selectSpriteForTouch:(CGPoint)touchLocation {
            CCSprite *newSprite = nil;
            // Loop over the array of sprites
            for (CCSprite *sprite in movableSprites) {
                // Check if sprite is hit.
                // TODO: Swap if with something better.
                if (CGRectContainsPoint(sprite.boundingBox, touchLocation)) {
                    newSprite = sprite;
                    break;
                }
            }
            if (newSprite != selSprite) {
                // Move along, nothing to see here
                // Not the problem
            }
        }

        - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            CGPoint touchLocation = [self convertTouchToNodeSpace:touch];
            [self selectSpriteForTouch:touchLocation];
            return TRUE;
        }

    I know the problem is the use of sprite.boundingBox. Is there a better way of implementing this, or is it a limitation of sprites based on .pngs? If so, how should I proceed?
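
    Two common refinements, sketched rather than prescribed: keep the boundingBox check as a cheap first pass, then either sample the piece texture's alpha at the touch point, or test against a hand-authored outline polygon per piece. The outline test is the classic ray-casting algorithm (plain C++ for illustration; the Objective-C port is mechanical):

        #include <vector>

        struct Point { float x, y; };

        // Ray-casting point-in-polygon test: cast a ray toward +x and count
        // edge crossings; an odd count means the point is inside. Run it only
        // after the cheap boundingBox check has passed.
        bool pointInPolygon(const std::vector<Point>& poly, Point p) {
            bool inside = false;
            for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
                bool crosses = (poly[i].y > p.y) != (poly[j].y > p.y);
                if (crosses) {
                    float xAtY = poly[j].x + (p.y - poly[j].y) *
                                 (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
                    if (p.x < xAtY) inside = !inside;
                }
            }
            return inside;
        }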

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture through a framebuffer using OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory, this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];

        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer
        gl.glGenTextures(1, renderTex, 0);           // generate texture

        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);

        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0,
                        GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);

        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES,
                                        GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);

        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);

        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
                GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES,
                GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When I render this texture full-screen with a 200x200 light mask placed at (0, 0) and at (texW/2, texH/2), it looks like the coordinate system doesn't start at (0,0): the light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks
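
    One thing worth ruling out first (an assumption, not a confirmed diagnosis): if the viewport and projection still describe the screen while the 1024x512 FBO is bound, everything lands scaled and offset exactly as described, and circles squash into ellipses. Sketched with raw GL ES 1.x calls in C; the gl11ep/GL11ExtensionPack equivalents map one-to-one:

        #define GL_GLEXT_PROTOTYPES 1
        #include <GLES/gl.h>
        #include <GLES/glext.h>

        // While the FBO is bound, the viewport and projection must describe
        // the texture's pixel grid, not the screen's. Also note that GL's
        // origin is the bottom-left corner, not the top-left.
        void beginOffscreenPass(GLuint fbo, int texW, int texH) {
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
            glViewport(0, 0, texW, texH);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(0, (GLfloat)texW, 0, (GLfloat)texH, -1, 1);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

        void endOffscreenPass(int screenW, int screenH) {
            glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); // back to the screen
            glViewport(0, 0, screenW, screenH);
        }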

  • How do I reconstruct depth in deferred rendering using an orthographic projection?

    - by Jeremie
    I've been trying to get the world-space position of my pixel, but I'm missing something. I'm using an orthographic view for a 2.5D game. My depth is linear, and this is my code:

        float3 lightPos = lightPosition;
        float2 texCoord = PostProjToScreen(PSIn.lightPosition) + halfPixel;
        float depth = tex2D(depthMap, texCoord);

        float4 position;
        position.x = texCoord.x * 2 - 1;
        position.y = (1 - texCoord.y) * 2 - 1;
        position.z = depth.r;
        position.w = 1;
        position = mul(position, inViewProjection);
        // position.xyz /= position.w; // commented out, but it doesn't work either way

        float4 normal = (tex2D(normalMap, texCoord) - .5f) * 2;
        normal = normalize(normal);

        float3 lightDirection = normalize(lightPos - position);
        float att = saturate(1.0f - length(lightDirection) / attenuation);
        float lightning = saturate(dot(normal, lightDirection));
        lightning *= brightness;

        return float4(lightColor * lightning * att, 1);

    I'm using a sphere as the light volume, but it's not working the way I want. The texture reprojects onto the sphere properly, but the reconstructed position in the pixel shader seems stuck at zero, even though the light volume updates properly when I move the light.
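
    One thing worth ruling out (an assumption, not a confirmed diagnosis): with an orthographic projection, w stays 1, so skipping the divide is fine, but the matrix bound to inViewProjection must really be the inverse of view times projection. A CPU-side round trip makes that easy to verify (plain C++, row-vector convention to match HLSL's mul):

        #include <cstdio>

        struct Vec4 { float x, y, z, w; };
        struct Mat4 { float m[4][4]; };

        // Row-vector multiply, matching mul(v, M) with XNA-style matrices.
        Vec4 mul(Vec4 v, const Mat4& M) {
            Vec4 r;
            r.x = v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0] + v.w*M.m[3][0];
            r.y = v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1] + v.w*M.m[3][1];
            r.z = v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] + v.w*M.m[3][2];
            r.w = v.x*M.m[0][3] + v.y*M.m[1][3] + v.z*M.m[2][3] + v.w*M.m[3][3];
            return r;
        }

        // Round-trip a known world point: world -> clip via view*projection,
        // then back via the matrix the shader receives. If the result doesn't
        // match, inViewProjection isn't actually inverted, and reconstructed
        // positions collapse toward a constant -- which would look exactly
        // like lighting stuck at zero.
        void checkInverse(const Mat4& viewProj, const Mat4& invViewProj) {
            Vec4 world = { 10.0f, 5.0f, 2.0f, 1.0f };
            Vec4 back = mul(mul(world, viewProj), invViewProj);
            std::printf("%f %f %f %f\n", back.x, back.y, back.z, back.w);
        }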

  • Need the co-ordinates of innerPolygon

    - by user960567
    Say I have this diagram. Given that I have all the coordinates of the outer polygon, and the distance d between the inner and outer polygon is also given, how do I calculate the inner polygon's coordinates?

    Edit: I was able to solve it by taking the midpoints of all the lines. From each midpoint I can move inward by distance d, which gives three points. Those three points, together with the three slopes, give three new line equations; solving them simultaneously yields the three inner vertices.
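
    For reference, the same construction generalises beyond the triangle: shift each edge inward along its normal by d, then intersect adjacent shifted edges. A sketch assuming a convex, counter-clockwise polygon (concave shapes or large d can make the offset self-intersect and need a real polygon-offsetting library):

        #include <cmath>
        #include <vector>

        struct Pt { double x, y; };

        static double cross(Pt a, Pt b) { return a.x * b.y - a.y * b.x; }

        // Inset a convex CCW polygon by distance d: move each edge inward
        // along its left-hand normal, then intersect adjacent shifted edges
        // to recover the inner polygon's vertices. (Adjacent parallel edges
        // would need special-casing.)
        std::vector<Pt> insetPolygon(const std::vector<Pt>& poly, double d) {
            size_t n = poly.size();
            std::vector<Pt> base(n);  // a point on each shifted edge
            std::vector<Pt> dir(n);   // that edge's direction
            for (size_t i = 0; i < n; ++i) {
                Pt a = poly[i], b = poly[(i + 1) % n];
                double ex = b.x - a.x, ey = b.y - a.y;
                double len = std::sqrt(ex * ex + ey * ey);
                base[i] = { a.x - ey / len * d, a.y + ex / len * d }; // move inward
                dir[i]  = { ex, ey };
            }
            std::vector<Pt> inner(n);
            for (size_t i = 0; i < n; ++i) {
                size_t j = (i + n - 1) % n;  // previous edge
                Pt diff = { base[i].x - base[j].x, base[i].y - base[j].y };
                double t = cross(diff, dir[i]) / cross(dir[j], dir[i]);
                inner[i] = { base[j].x + dir[j].x * t, base[j].y + dir[j].y * t };
            }
            return inner;
        }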

  • Open Source Analysis

    - by BluFire
    There is a lot of code in open source projects; looking at all of it is time-consuming and can be confusing to a novice like me. Are there any sections of open-source projects that should be focused on? What should I focus on when I look at code? I'm asking this in general because if I asked it specifically, the question would only apply to one or two projects rather than an entire range of projects covering different types of games and levels of difficulty.

  • Client-side prediction for FPS

    - by newprogrammer
    People who understand client-side prediction and client-side interpolation, I have a question: when I play the game Team Fortress 2 and type cl_predict 1 into the developer's console, it enables client-side prediction. The console also says "6 predictable entities reinitialized". It says this regardless of how many players are on the server, which makes sense, because other players are not predictable entities. I thought client-side prediction was only for the movement of the player. Are there other entities that the client can provide prediction for?

  • Drawing a texture line between two vectors in XNA WP7

    - by Krav
    I want to create a simple graph maker in WP7. The goal is to draw a textured line between two vectors that the user defines with touch. I already implemented the rotation and it works, but not correctly: it doesn't take the line texture's height into account, so there are far too many overlapping textures. It does draw the line, just with too many sprites. How can I calculate it correctly? Here is the code:

        public void DrawLine(Vector2 st, Vector2 dest, NodeUnit EdgeParent, NodeUnit EdgeChild)
        {
            float d = Vector2.Distance(st, dest);
            float rotate = (float)(Math.Atan2(st.Y - dest.Y, st.X - dest.X));
            direction = new Vector2((dest.X - st.X) / (float)d, (dest.Y - st.Y) / (float)d);
            Vector2 _pos = st;

            World.TheHive.Add(new LineHiveMind(linetexture, _pos, rotate, EdgeParent, EdgeChild, new List<LineUnit>()));
            for (int i = 0; i < d; i++)
            {
                World.TheHive.Last()._lines.Add(new LineUnit(linetexture, _pos, rotate, EdgeParent, EdgeChild));
                _pos += direction;
            }
        }

    Here d is the distance between st (the starting node) and dest (the destination node), rotate is the rotation, direction is the unit vector from the starting node to the destination node, and _pos is the advancing draw position. Thanks for any suggestions/help!
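
    The arithmetic fix, sketched in plain C++ (the question's code is C#/XNA): advance _pos by the sprite's drawn length per step instead of one pixel per step, so a line of length d takes about d / spriteLength sprites instead of d of them:

        #include <cmath>

        struct Vec2 { float x, y; };

        // The posted loop advances one pixel per sprite, stacking d copies
        // along a line of length d. Stepping by the sprite's own length
        // places only ceil(d / spriteLength) of them. spriteLength is the
        // drawn size of linetexture measured along the line.
        int segmentCount(Vec2 st, Vec2 dest, float spriteLength) {
            float dx = dest.x - st.x, dy = dest.y - st.y;
            float d = std::sqrt(dx * dx + dy * dy);
            return (int)std::ceil(d / spriteLength);
        }

        // In the loop:  _pos += direction * spriteLength;  once per segment,
        // instead of    _pos += direction;                 once per pixel.
        // Better still: draw one sprite scaled to length d instead of tiling.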

  • Bukkit send a custom placed name plate?

    - by HcgRandon
    Hello, I have been working on a part of my plugin that has waypoints, allowing the user to create, delete, and so on. After using and reading about a couple of the disguise plugins, I got to thinking that maybe I could create a command toggle that shows users where their waypoints are. I know how to do all of this except one thing: I have no idea how to display a name plate to the client at an arbitrary position. I know it's possible, because DisguiseCraft does it; I tried looking through their code but couldn't find much. I believe that to get this effect I need to send packets to the client. If someone can direct me to a list of Bukkit/Minecraft packets, or even a solution for sending the client a custom-located name plate, that would be fantastic! Thanks in advance.

  • ScissorStack LIBGDX example?

    - by user36531
    I can't find a good resource/tutorial on how to do this. I would appreciate it if someone could provide a ScissorStack example driven from an entity class - i.e. using ScissorStack with a PlayerClass so that the map renders only within, say, 5 tiles around the player sprite. That would then let me create a Pawn class and apply the same methodology to give a pawn sprite a smaller radius, like rendering only 1 tile around the pawn's location.

  • What is this type of sound effect called?

    - by Fibericon
    There is a sound typically associated with a bright flash of light, which starts with a lower whirring noise, then breaks into a higher-pitched sound. What is that type of sound called? I'm not sure how to begin searching for it, so a typical name for it would be very helpful. It's similar to what occurs at 0:41 in this YouTube video (here's a link to a few seconds beforehand), where six-tails Naruto transforms into Kyuubei in Naruto Generations.

  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screenshot of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png

    The problem I'm running into is that my game is slowing down due to the amount of texture swapping during my draw call. Walls, characters, floors, and all other objects each live on their own sprite sheet, so drawing tile by tile swaps the loaded texture no less than 3-5+ times per tile as it cycles through the sprites in layer order. I've since grouped all common objects into their respective lists and drawn them sorted by layerDepth, which makes things a lot better, but the new problem has to do with the way my doors/windows are drawn on walls. I was using stencils to clear out a door/window-shaped block in each wall, so that when the wall drew, it would have a door/window-sized hole in it. In the tile-by-tile version, the wall draw went like this:

    - First, check whether a door/window is on this wall; if not, skip all the steps and draw normally. Otherwise:
    - End the current SpriteBatch.
    - Clear the buffers with a transparent color to preserve what was already drawn.
    - Start a new SpriteBatch with stencil settings.
    - Draw the door area.
    - End the SpriteBatch.
    - Start a new SpriteBatch that takes the previously set stencil into account.
    - Draw the wall, which will now be drawn with a hole in it.
    - End that SpriteBatch, then start a new one with the normal settings to continue drawing tiles.

    In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using layerDepth to organize what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, the draw call is considerably faster, but I can't figure out a way to keep the stencil system for masking out the area where a door or window is drawn into a wall. The root of the problem is that when I end a SpriteBatch to change the DepthStencilState, it flattens the current RenderTarget, and there is no longer any depth sorting for anything drawn further down the line. This means walls always get drawn on top of everything, regardless of depth or position in the game world, and even on top of each other, since the stencil pass has to happen once for every wall with a door or window. Does anyone know a way around this? To boil it down: I need to draw with layer-depth sorting while also being able to stencil/mask out portions of specific sprites.

  • Cocos2d: carrom-like game

    - by Raj
    I am working on a carrom-like game using cocos2d + Box2D. I set the world gravity to (0,0), since I want gravity along the z-axis. I use the following values for the coin and striker bodies:

    Coin body:

        density = 20.0f;
        friction = 0.4f;
        restitution = 0.6f;
        shape: circle with radius 15/PTM_RATIO

    Striker body:

        density = 25.0f;
        friction = 0.6f;
        restitution = 0.3f;
        shape: circle with radius 15/PTM_RATIO

    The output is not smooth when I call ApplyLinearImpulse(force, position): the coins look like they're floating in air and take too much time to stop. What values for the coin and striker would make it feel like real carrom?
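
    Since gravity is (0,0), nothing in the world ever slows down by itself, and fixture friction only acts between touching bodies, not against the board. The usual trick for top-down board games is linear and angular damping standing in for board friction. A sketch against the Box2D 2.x API, with the damping numbers being tuning guesses:

        #include <Box2D/Box2D.h>

        b2Body* makeCoin(b2World& world, float x, float y) {
            b2BodyDef bd;
            bd.type = b2_dynamicBody;
            bd.position.Set(x, y);
            bd.linearDamping  = 2.0f;  // "board friction": raise to stop sooner
            bd.angularDamping = 4.0f;  // kills endless spinning
            bd.bullet = true;          // fast strikers tunnel without CCD
            b2Body* body = world.CreateBody(&bd);

            b2CircleShape circle;
            circle.m_radius = 15.0f / 32.0f;  // 15 px at an assumed PTM_RATIO of 32

            b2FixtureDef fd;
            fd.shape = &circle;
            fd.density = 20.0f;
            fd.friction = 0.4f;
            fd.restitution = 0.6f;
            body->CreateFixture(&fd);
            return body;
        }

    Tuning the two damping values against real carrom footage usually gets closer than adjusting density or restitution.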

  • Drawing flaming letters in 3D with OpenGL ES 2.0

    - by Chiquis
    I am a bit confused about how to achieve this. What I want is to "draw with flames". I have achieved this with textures successfully, but now my concern is about doing it with particles to achieve the flaming effect. Am I supposed to create a path along which I add many particle emitters, each emitting flame particles? I understand the concept in 2D, but in 3D, are the particles always supposed to face the user? Something else I'm worried about is the performance hit from having that many particle emitters, because there can be many letters and drawings on screen at the same time, and each of these elements would have many emitters.

    More detailed explanation: I have a path of points, which is my model. Imagine a dotted letter "S", for example. I want to make the "S" be on fire. The "S" is just an example; it can be a circle, a triangle, a line - pretty much any path described by my set of points. To achieve this fire effect I thought about using particles, so I used a program called Particle Designer to create a fire-style particle emitter. This emitter looks perfect in 2D at the iPhone's screen dimensions. I then figured I could draw an "S" (or any other figure) by placing many particle emitters next to each other along the described path. To move from the 2D version to the 3D version, I scale the emitter (with a scale-matrix multiplication in its model matrix) and then move it to a point in my 3D world. I did this and it works, so now I have one particle emitter in the 3D world. My questions: is this how you would achieve a flaming letter? Is it too inefficient if I expect to have many flaming paths in my world? And am I supposed to rotate each particle's quad so that it's always looking at the user? (The last one is because I noticed that if you look at it from the side, the particles start to flatten out.)
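
    On the last question: yes, making each particle quad face the camera is billboarding, and its absence is exactly why the particles flatten when viewed from the side. A minimal sketch of the usual camera-facing basis (plain C++; on ES 2.0 this typically runs in the vertex shader, and GL point sprites give it to you for free at the cost of flexibility):

        #include <cmath>

        struct V3 { float x, y, z; };

        static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
        static V3 cross(V3 a, V3 b) {
            return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        }
        static V3 norm(V3 v) {
            float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
            return { v.x / l, v.y / l, v.z / l };
        }

        // Billboard basis for one particle: right/up axes spanning a plane
        // perpendicular to the view direction. Build each particle quad as
        // center +- right*halfSize +- up*halfSize.
        void billboardAxes(V3 particlePos, V3 cameraPos, V3* right, V3* up) {
            V3 toCam = norm(sub(cameraPos, particlePos));
            V3 worldUp = { 0.0f, 1.0f, 0.0f };  // assumes a y-up world
            *right = norm(cross(worldUp, toCam));
            *up = cross(toCam, *right);
        }

    On efficiency, the common advice is one shared particle system that spawns particles along all active paths, batched into a single draw call, rather than one emitter object per point.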

  • Searching for fast skeletal animation algorithm

    - by igf
    Is it theoretically possible to dynamically animate a scene with 100-150 400-poly character meshes on high-end GL ES 2.0 mobile devices, or should I definitely use prerendered keyframe animation? The scene has only one light source and precalculated shadow maps, viewed from the top as in Warcraft 3. There are no other meshes or objects. 2D collision detection between objects is calculated via spatial hashing. It can be any algorithm besides ragdoll, but it must supply fast and simple skeletal animation per frame for 100+ low-poly meshes. Any ideas?

  • Using a Higher Precision (than 8-bit unsigned integer) Buffered Image for Heightmaps in Java

    - by pl12
    I am generating a heightmap for every quad in my quadtree in OpenCL. The way I was creating the image is as follows:

        DataBufferInt dataBuffer = (DataBufferInt)img.getRaster().getDataBuffer();
        int data[] = dataBuffer.getData(); // img is a BufferedImage

        inputImageMem = CL.clCreateImage2D(
            context, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
            new cl_image_format[]{imageFormat}, size, size,
            size * Sizeof.cl_uint, Pointer.to(data), null);

    This works OK, but the major issue is that as the quads get smaller and smaller, the 8-bit format of the BufferedImage starts to cause intolerable "stepping" artifacts, as seen in the screenshot I attached. I was wondering if there is an alternate way I could go about doing this? Thanks for the time.
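
    One way out, sketched in the plain OpenCL C API (the question's JOCL calls mirror it one-to-one): keep the heights in a 32-bit float image instead of routing them through an 8-bit BufferedImage, and only quantise when displaying. CL_R with CL_FLOAT is an assumption about device support; CL_RGBA with CL_FLOAT is the always-available fallback:

        #include <CL/cl.h>
        #include <vector>

        // A single-channel float image keeps full 32-bit precision, so
        // heights no longer quantise to 256 steps.
        cl_mem createHeightImage(cl_context ctx, int size, std::vector<float>& host) {
            cl_image_format fmt;
            fmt.image_channel_order     = CL_R;      // one channel per texel
            fmt.image_channel_data_type = CL_FLOAT;  // 32-bit float, not 8-bit int
            cl_int err = 0;
            cl_mem img = clCreateImage2D(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                         &fmt, size, size,
                                         size * sizeof(float), host.data(), &err);
            return img;  // check err in real code
        }

        // On the Java side this replaces the DataBufferInt path: keep the
        // heights in a float[] and skip the BufferedImage entirely.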

  • Can there be an Environment that Reacts to Weather changes in-game?

    - by The415
    Just to be straightforward: I am completely new to many aspects of coding, and am searching for specs and guidelines to aid me on my journey to crafting a wonderful game in Epic Games' Unreal Engine 4. I've recently been thinking about the possibility of creating an environment in a game that interacts with weather (rain, snow, storms). Is it possible to make an environment that can simulate weather changes in a game? I've been writing notes on this for weeks now. I was thinking that adjusting the environment's occlusion maps would be necessary for creating the effect of rain on windows, as well as making a flowing liquid surface on windows that is only visible in rain. I was also considering the idea of additive bump maps on meshes for snow, to simulate accumulation. Are these elements dynamic in Unreal 4? Can I implement them?

  • What would be a good game making engine supporting Vector images?

    - by Qqwy
    I want to create a simple platforming game in which you are a square in a wonderful world. I would like the game to be playable in browsers. Basically I am searching for something similar to Flixel, but with the following features:

    - Support for vector graphics;
    - Zooming/rotating objects without producing huge amounts of lag as soon as more objects are in use (because I want to rotate the map around the player) - so, in other words, preferably zooming the viewport/camera instead of the objects themselves.

    Does an engine like that exist?
