Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

  • XAudio2 - Multiple instances of the same sound

    - by Boreal
    Right now, I'm adding a rudimentary sound engine to my game. So far, I am able to load in a WAV file and play it once, then free up the memory when I close the game. However, the game crashes with a nice ArgumentOutOfBoundsException when I try to play another instance of the same sound:

        Specified argument was out of the range of valid values.
        Parameter name: readLength

    I'm following this tutorial pretty much exactly, but I still keep getting the error above. Here's my sound-related code: http://pastebin.com/FgaqfXTs
    The exception occurs on line 156 (of the paste), when I am playing the sound:

        source.SubmitSourceBuffer(buffer);

  • Rain effect looks like snowfall effect?

    - by Nikhil Lamba
    I am making a game in which I want a rain effect. I am a little bit away from it; right now I am doing the following:

        particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
        particleSystem.addParticleInitializer(new AlphaInitializer(0));
        particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
        particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
        particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
        particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
        particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
        particleSystem.addParticleModifier(new ExpireModifier(50, 50));
        scene.attachChild(particleSystem);

    But it looks like a snowfall effect. What changes can I make to turn it into rain? Please correct me.
    EDIT: here is a link to a snapshot: http://i.imgur.com/bRIMP.png
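
    A rough sketch of one way to push this toward rain, reusing the same AndEngine initializers quoted above (the parameter values are illustrative guesses, not tested settings): rain wants a strong, nearly uniform downward velocity, little sideways drift, no rotation, and a short particle lifetime.

        // Hypothetical rain-like setup, same AndEngine classes as above.
        particleSystem.addParticleInitializer(new ColorInitializer(0.7f, 0.8f, 1.0f)); // pale blue drops
        particleSystem.addParticleInitializer(new VelocityInitializer(-5, 5, 250, 350)); // fast, mostly straight down
        particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 0.0f)); // drops don't tumble
        particleSystem.addParticleModifier(new AlphaModifier(1, 0, 0.8f, 1.2f)); // fade out at end of life
        particleSystem.addParticleModifier(new ExpireModifier(1.0f, 1.5f)); // short-lived particles
        scene.attachChild(particleSystem);

    A tall, thin streak texture for the particle sprite arguably does more for the "rain" reading than any initializer tweak; the ScaleModifier and the slow AlphaModifier ramps in the original are a large part of the drifting, snow-like look.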

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    I've been searching for an answer to this question for at least 3 days now. I've tried many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo
    Explanation here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/
    Now I want to implement that behaviour on a sprite in a cocos2d application. I've tried to do this, but it fails to work. It uses a view controller which inherits like this:

        @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate>

    My question is: how would I implement the OneFingerRotationGesture behaviour on a CCSprite in cocos2d 2.0? Interfaces in cocos2d look like this:

        @interface HelloWorldLayer : CCLayer

    I have asked a similar question on Stack Overflow, and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers
    I believe it makes use of gestures (like the first linked project), but not custom gestures. I lack the Obj-C skills to work out and implement the functionality in my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture in a cocos2d 2.0 project. Regards, Lewis.

  • Keeping Screen Aspect Ratio While Staying Centered

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to keep a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and displayed it on screen. Here's the blueberry image example, and it's perfectly round: When I ran it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe the snapshot below; you can see the blueberry is slightly but noticeably not perfectly round: Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained, but another problem occurred. I was expecting a centred view, but instead the view was moved to the right, leaving half the screen black. It looks like this: Here is my code using the suggested screen aspect ratio fix.

    Class fields:

        // Ingredients needed for the screen aspect ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

    render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Resetting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Is this code useful for fixing the aspect-ratio proportion, or is it statically dependent on the actual device's width and height?
    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
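
    The half-black screen usually means the viewport rectangle is never centred. A minimal sketch of the centring step from the linked technique, assuming a resize(width, height) callback and the same Viewport field as above (a sketch of the approach, not drop-in tested code):

        // In resize(int width, int height): letterbox the virtual 720x1280
        // resolution and centre it with the crop offsets.
        float aspectRatio = (float) width / (float) height;
        float scale;
        Vector2 crop = new Vector2(0f, 0f);
        if (aspectRatio > ASPECT_RATIO) {
            // Physical screen is wider than virtual: bars at left/right.
            scale = (float) height / (float) VIRTUAL_HEIGHT;
            crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
        } else if (aspectRatio < ASPECT_RATIO) {
            // Physical screen is taller: bars at top/bottom.
            scale = (float) width / (float) VIRTUAL_WIDTH;
            crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
        } else {
            scale = (float) width / (float) VIRTUAL_WIDTH;
        }
        // Leaving crop at (0, 0) is what shoves the view into a corner
        // and paints the rest black.
        Viewport = new Rectangle(crop.x, crop.y,
                VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);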

  • How do you structure a 2D level format with collisions etc. in Java (Slick 2D)?

    - by liamzebedee
    I am developing a game in Java: a 2D fighter, kind of like the 2D Flash game Raze (http://armorgames.com/play/5395/raze). I am currently using the Slick 2D game library and am researching how to structure my levels. I am stuck on the problem of the level format (e.g. the file format). How do you structure a 2D level with collisions etc.? Level notes: movement will go up, down, left and right. NOTE: I'm new to gamedev.
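
    One common starting point is a plain text tile grid, sketched below with hypothetical names. (Worth noting first: Slick 2D ships a TiledMap loader, org.newdawn.slick.tiled.TiledMap, for the Tiled editor's TMX format, which covers layers and collision properties before you roll your own.)

        // A minimal, hypothetical text-based level format: one character per tile.
        // '#' = solid, '.' = empty. Collision is just a lookup into the grid.
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        public class Level {
            public static final int TILE_SIZE = 32;
            private final boolean[][] solid;

            public Level(String path) throws IOException {
                List<String> rows = Files.readAllLines(Paths.get(path));
                solid = new boolean[rows.size()][];
                for (int y = 0; y < rows.size(); y++) {
                    String row = rows.get(y);
                    solid[y] = new boolean[row.length()];
                    for (int x = 0; x < row.length(); x++) {
                        solid[y][x] = row.charAt(x) == '#';
                    }
                }
            }

            // Is the tile containing world point (px, py) solid?
            public boolean blocked(float px, float py) {
                int tx = (int) (px / TILE_SIZE), ty = (int) (py / TILE_SIZE);
                return ty >= 0 && ty < solid.length
                    && tx >= 0 && tx < solid[ty].length
                    && solid[ty][tx];
            }
        }

    Each character is one tile, so a level file stays human-editable; entities (spawn points, weapons) can live on extra characters or in a separate section of the file.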

  • What is a simple deformer in which vertices deform linearly with control points?

    - by sebf
    In my project I want to deform a complex mesh using a simpler 'proxy' mesh. In effect, each vertex of the proxy/collision mesh will be a control point/bone, which should deform the vertices of the main mesh attached to it depending on weight, but where the weight is not dependent on the absolute distance from the control point but rather on distance relative to the other affecting control points. The point of this is to preserve complex three-dimensional features of the main mesh while using physics implementations which expect something far simpler: low resolution, single surface, etc. Therefore, the vertices must deform linearly with their respective weighted control points (i.e. no falloff fields, or all the mesh features will end up collapsed), as if each vertex were linked to a point on the plane created by the attached control points and deformed with it. I have tried implementing the weight computation algorithm in this paper (page 4), but it is not working as expected, and I am wondering if it is really the best way to do what I want. What is the simplest way to 'skin'* an arbitrary mesh to another arbitrary mesh?
    *By skin I mean I need an algorithm to determine the best control points for a vertex, and their weights.
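
    For reference, the linear relationship described here is standard linear blend skinning (a sketch of the usual formulation, not the cited paper's exact scheme): each mesh vertex $v$ is bound to control points with weights $w_i$ normalized to sum to one, and moves as

        v' = \sum_i w_i\, T_i\, v, \qquad \sum_i w_i = 1,\; w_i \ge 0

    where $T_i$ is the transform carried by control point $i$. The normalization is what makes the weights depend on relative rather than absolute distances; one simple choice consistent with that requirement is normalized inverse-distance (Shepard) weights over the affecting control points, $w_i = d_i^{-p} / \sum_j d_j^{-p}$, computed once at bind time.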

  • Rotate sphere in Javascript / three.js while moving on x/z axes

    - by kaipr
    I have a sphere/ball in three.js which I want to "roll" around on the x/z axes. For the z axis I could simply do this, no matter what the current x and y rotation is:

        sphere.roll_z = function(distance) {
            sphere.position.z += distance;
            sphere.rotation.x += distance > 0 ? 0.05 : -0.05;
        }

    But how can I roll it along the x axis? And how could I do roll_z properly? I've found a lot about quaternions and matrices, but I can't figure out how to use them properly to achieve my (rather simple) goal. I'm aware that I have to update multiple rotations and that I have to calculate how far to rotate the sphere to match the distance, but the "how" is the question. It's probably just a lack of mathematical skills which I should train, but a working example or short explanation would help a lot to start with.
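
    One relation worth having on paper (standard rigid-body rolling, not three.js-specific): a sphere of radius $r$ that rolls a distance $d$ without slipping turns through the angle

        \theta = \frac{d}{r}

    about an axis perpendicular to the motion. With up vector $\hat{u}$ and motion direction $\hat{m}$, the rotation axis is $\hat{u} \times \hat{m}$: for motion along $+z$ that axis is $+x$ (which is why rotation.x works in roll_z above), and for motion along $+x$ it is $-z$. Applying these per frame as world-axis rotations (e.g. a quaternion built from axis and angle, multiplied onto the current orientation) rather than adding to Euler angles keeps the two rolls from interfering.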

  • Need help in determining what, if any, tools can be used to create a free Flash game

    - by ReaperOscuro
    Yes, I proudly (and sadly) declare that I am a complete nincompoop when it comes to Flash, and I have been fishing around the big wide web for information. The reason is that I have been contracted to create a game (or games) for a website, the usual Flash-based-games arrangement. I don't mean games made with those game-generator websites; I mean small yet professional games. The caveat, as always, is that impossible dream: it needs to be done entirely for free. The budget... well, imagine it as not there. Annoyingly, I am a game designer, yes, but with a ridiculously tight deadline I haven't got much time to re-learn everything (ah, the heady days of programming at uni) by the end of March, so I'd like to ask some people who know their stuff rather than keep looking at a gazillion different things. This is my understanding: with the Flash SDK you can create a game, although you need to be pretty programming-savvy. FlashDevelop helps there, yet I am not entirely sure how; even FlashDevelop says to use Flash for the animation/graphics. Yes, it's undeniably powerful, but as I said, there is the unattainable demand of no money. The million-dollar question: what tools, if any, can I use to create a free Flash game?

  • OpenGL 2.1+: Rendering with data returned from the assimp library

    - by Bình Nguyên
    I have just read this tutorial about loading a 3D model file: http://www.lighthouse3d.com/cg-topics/code-samples/importing-3d-models-with-assimp/#comment-14551. Its render routine uses a recursive_render function to scan all nodes. My questions: what does an aiNode struct store, and how does that method differ from the one below?

        for (int i = 0; i < scene->mNumMeshes; ++i)
            draw scene->mMeshes[i];

    Thanks for reading!

  • Simple 2d game pathfinding

    - by Kooi Nam Ng
    I was trying to implement simple pathfinding on iOS, but the outcome seems less satisfactory than what I intended to achieve. The thing is, units in games like Warcraft and Red Alert move in all directions, whereas units in my case move in at most 8 directions, because those 8 directions lead to the next available node. What should I do to achieve the result those games get? Shrink the tile size? A screenshot for illustration: the rocks are the obstacles, the two ends of the green path are the start and end of the path, and the red line is the path that I want to achieve. http://i.stack.imgur.com/lr19c.jpg
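
    A common fix that doesn't require shrinking tiles is to smooth the grid path after A* runs. A rough sketch, assuming a line-of-sight test over the tile grid (Node and hasLineOfSight are placeholders for your own types):

        import java.util.ArrayList;
        import java.util.List;

        // "String pulling": drop every intermediate node of the 8-directional
        // grid path that can be skipped with a clear line of sight, then let
        // the unit walk the remaining straight segments at any angle.
        List<Node> smooth(List<Node> path) {
            List<Node> result = new ArrayList<>();
            if (path.isEmpty()) return result;
            int anchor = 0;
            result.add(path.get(0));
            for (int i = 2; i < path.size(); i++) {
                if (!hasLineOfSight(path.get(anchor), path.get(i))) {
                    anchor = i - 1;               // path[i-1] must be kept
                    result.add(path.get(anchor));
                }
            }
            if (path.size() > 1) result.add(path.get(path.size() - 1));
            return result;
        }

    hasLineOfSight is typically a Bresenham walk across the tiles between the two nodes that fails on the first blocked tile. The result is the red line in the screenshot: node-to-node 8-direction legs collapse into straight runs wherever nothing is in the way.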

  • converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray-intersection code with my world. I am using my own matrix, vector and ray classes, and they all work as expected. However, when I try to convert the ray to world coordinates, my far point always ends up as (0,0,0), so my ray goes from the mouse click to the centre of the object space rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT,vp);
        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX,mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX,p.f);
        const matrix_t inv = (mv*p).inverse();
        const float unit_x = (2.0f*((float)(x-vp[0])/(vp[2]-vp[0])))-1.0f,
                    unit_y = 1.0f-(2.0f*((float)(y-vp[1])/(vp[3]-vp[1])));
        const vec_t near(vec_t(unit_x,unit_y,-1)*inv);
        const vec_t far(vec_t(unit_x,unit_y,1)*inv);
        ray = ray_t(near,far-near);

    What have I got wrong? (How do you unproject the mouse point?)
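
    For reference, the standard unproject has one step that is easy to drop (a sketch in column-vector convention): transforming a normalized-device-coordinate point by the inverse of the combined matrix gives a homogeneous point that still needs dividing by $w$:

        (x_w,\, y_w,\, z_w,\, w)^{\top} = (P\,V)^{-1}\,(x_{ndc},\, y_{ndc},\, z_{ndc},\, 1)^{\top}, \qquad p_{world} = \left(\tfrac{x_w}{w},\; \tfrac{y_w}{w},\; \tfrac{z_w}{w}\right)

    If the vector-times-matrix operator assumes $w = 1$ and never divides by the resulting $w$, points on the far plane (where $w$ is largest) degenerate badly, which is consistent with the symptom described above.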

  • Quaternion Camera Orbiting around a Sphere

    - by jessejuicer
    Background: I'm trying to create a game where the camera is always rotating around a single sphere. I'm using the DirectX D3DX math functions in C++ on Windows.
    The problem: I cannot get both the camera position and orientation working properly at the same time. Either one works, but not both together. Here's the code for my quaternion camera that revolves around a sphere, always looking at the centre point of the sphere, as far as I understand it (but which isn't working properly). (I'm only going to present rotation around the X axis here, to simplify this post.)
    Whenever the UP key is pressed or held down, the camera should rotate around the X axis while looking at the centre point of the sphere (which is at (0,0,0) in the world). So I build a quaternion that represents a small angle of rotation around the x axis like this (where deltaAngle is a small enough number for a slow rotation):

        D3DXVECTOR3 rotAxis;
        D3DXQUATERNION tempQuat;
        tempQuat.x = 0.0f;
        tempQuat.y = 0.0f;
        tempQuat.z = 0.0f;
        tempQuat.w = 1.0f;
        rotAxis.x = 1.0f;
        rotAxis.y = 0.0f;
        rotAxis.z = 0.0f;
        D3DXQuaternionRotationAxis(&tempQuat, &rotAxis, deltaAngle);

    ...and I accumulate the result into the camera's current orientation quaternion, like this:

        D3DXQuaternionMultiply(&cameraOrientationQuat, &cameraOrientationQuat, &tempQuat);

    ...which all works fine. Now I need to build a view matrix to pass to the DirectX SetTransform function, so I build a rotation matrix from the camera orientation quaternion as follows:

        D3DXMATRIXA16 rotationMatrix;
        D3DXMatrixIdentity(&rotationMatrix);
        D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat);

    Now (as seen below), if I just transpose that rotationMatrix and plug it into the 3x3 section of the view matrix, then negate the camera's position and plug it into the translation section of the view matrix, the rotation magically works. Perfectly (even when I add in rotations for all three axes). There's no gimbal lock, just a smooth rotation all around in any direction. BUT: this works even though I never change the camera's position. At all. Which sorta blows my mind. I even display the camera position and can watch it stay constant at its starting point (0.0, 0.0, -4000.0). It never moves, but the rotation around the sphere is perfect. I don't understand that. For proper view rotation, the camera position should be revolving around the sphere. Here's the rest of building the view matrix (I'll talk about the commented code below). Note that the camera starts out at (0.0, 0.0, -4000.0) and m_camDistToTarget is 4000.0:

        /*
        D3DXVECTOR3 vec1;
        D3DXVECTOR4 vec2;
        vec1.x = 0.0f;
        vec1.y = 0.0f;
        vec1.z = -1.0f;
        D3DXVec3Transform(&vec2, &vec1, &rotationMatrix);
        g_cameraActor->pos.x = vec2.x * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.y = vec2.y * g_cameraActor->m_camDistToTarget;
        g_cameraActor->pos.z = vec2.z * g_cameraActor->m_camDistToTarget;
        */
        D3DXMatrixTranspose(&g_viewMatrix, &rotationMatrix);
        g_viewMatrix._41 = -g_cameraActor->pos.x;
        g_viewMatrix._42 = -g_cameraActor->pos.y;
        g_viewMatrix._43 = -g_cameraActor->pos.z;
        g_viewMatrix._44 = 1.0f;
        g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix );

    ...(The world matrix is always an identity, and the perspective projection works fine.) So, without the commented code being compiled, the rotation works fine. But to be proper, for obvious reasons, the camera position should be rotating around the sphere, which it currently is not. That's what the commented code is supposed to do.
    And when I add in that chunk of code to do that, and look at all the data as I hold the keys down (using UP, DOWN, LEFT, RIGHT to rotate in different directions), all the values look correct! The camera position is rotating around the sphere just fine, and I can watch that happen visually too. The problem is that the camera orientation does not look at the centre of the sphere. It always looks straight forward down the z axis (toward positive z) as it revolves around the sphere. Yet the values of both the rotation matrix and the view matrix seem to be behaving correctly. (The view matrix orientation is the same as the rotation matrix, just transposed.) For instance, if I just hold down the key to spin around the x axis, I can watch the values of the three axes represented in the view matrix: the view x-axis stays at (1.0, 0.0, 0.0), and the view y-axis and z-axis both spin around the x axis just fine. All the numbers are changing as they should be... well, almost. As far as I can tell, the position in the view matrix is spinning around the sphere in one direction (say clockwise), and the orientation (the axes in the view matrix) is spinning in the opposite direction (counter-clockwise). Which I guess explains why the orientation appears to stay straight ahead. I know the position is correct. It revolves properly. It's the orientation that's wrong. Can anyone see what I am doing wrong? Am I using these functions incorrectly? Or is my algorithm flawed? As usual, I've been combing my code for simple mistakes for many hours. I'm willing to post the actual code, and a video of the behaviour, but that will take much more effort. Thought I'd ask this way first.
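
    One identity worth checking against the code above (a sketch of the standard relation, offered as a likely explanation rather than a confirmed diagnosis): the view matrix is the inverse of the camera's world transform, so for a camera at position $c$ with camera-to-world rotation $R$, in the row-vector convention D3DX uses,

        V = \begin{pmatrix} R^{\top} & 0 \\ -c\,R^{\top} & 1 \end{pmatrix}

    i.e. the translation row is $-c\,R^{\top}$, not $-c$. For an orbit camera at $c = (0, 0, -d)\,R$ looking at the origin, $-c\,R^{\top} = (0, 0, d)$ is constant, which is why the version with a never-moving position "magically" rotates correctly: it is already the right view matrix. Once the commented block starts rotating $c$, plugging $-c$ straight into _41.._43 applies the translation un-rotated, producing exactly the counter-rotating position/orientation mismatch described.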

  • Auto-tiling with Yoshi's Island style tiles

    - by Boreal
    I'm creating a 2D platformer and I'd like to implement an auto-tiling system. Normally, this wouldn't be particularly difficult. However, I'd like to have tiles like in Yoshi's Island, where the graphics extend past the actual collidable tile's boundaries. Consider this image: Although the eggs and the Piranha Plant are clearly resting on the ground, the flower tiles continue behind them, out of the collidable tile. I know that it would be simple to do by hand, but extremely time consuming. Using an auto-tiling algorithm would save me a lot of time and boredom, but I'm not sure where to start.
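
    A standard way to start is bitmask auto-tiling, sketched below (the tile-index scheme and the two-layer split are illustrative assumptions): encode which neighbours of a cell are solid as a 4-bit mask, use the mask to pick the graphic, and draw the chosen graphic on a decoration layer that is allowed to be bigger than the collidable cell.

        // Pick a tile graphic from the 4-bit mask of solid neighbours.
        // The graphic is drawn on a decoration layer, so it may be larger
        // than the collidable cell and overlap neighbouring cells.
        int autotileIndex(boolean[][] solid, int x, int y) {
            int mask = 0;
            if (isSolid(solid, x, y - 1)) mask |= 1; // north
            if (isSolid(solid, x + 1, y)) mask |= 2; // east
            if (isSolid(solid, x, y + 1)) mask |= 4; // south
            if (isSolid(solid, x - 1, y)) mask |= 8; // west
            return mask; // 16 cases -> 16 tile graphics in the tileset
        }

        boolean isSolid(boolean[][] solid, int x, int y) {
            // Out-of-bounds counts as solid so edges tile cleanly.
            if (y < 0 || y >= solid.length || x < 0 || x >= solid[y].length) return true;
            return solid[y][x];
        }

    Rendering collision and decoration as separate layers is the piece that matches the screenshot: the collidable grid stays rectangular while the flower graphics chosen by the mask bleed past the cell.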

  • Why does multiplying texture coordinates scale the texture?

    - by manning18
    I'm having trouble visualizing this geometrically: why does multiplying the U,V values of a texture coordinate have the effect of scaling that texture by that factor? E.g. if you scaled the texture coordinates by a factor of 3, doesn't this mean that if you had texture coordinates 0.1 and 0.2, you'd be sampling 0.3 and 0.6 in the U,V texture space of 0..1? How does that make it bigger? E.g. in HLSL: tex2D(textureSampler, TexCoords*3). Multiplying by integers makes it smaller; multiplying by fractions below 1 makes it bigger. I mean, I understand intuitively if you added to the U,V coordinates, as that is simply an offset into the sampling range, but what's the case with multiplication? I have a feeling that when someone explains this to me I'm going to be feeling mighty stupid.
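
    A sketch of the relation: the texture image is a fixed function over $[0,1]^2$, and scaling the coordinates changes how fast you sweep across it. If a surface point $p \in [0,1]$ is mapped to

        u = k\,p

    then one full copy of the texture ($u$ running from 0 to 1) is crossed while $p$ only moves $1/k$. So with $k = 3$ the texture repeats three times across the surface and each copy appears $1/3$ the size; with $k = 0.5$ only half the texture spans the whole surface, so it appears twice as large. Multiplication scales the sampling frequency, which inversely scales the apparent size.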

  • How to calculate the turn heading for a missile?

    - by Tony
    I have a missile that is shot from a ship at an angle; the missile then turns towards the target in an arc with a given turn radius. How do I determine the point on the arc at which the turn ends, so that the missile is heading straight for the target?
    EDIT: What I need to do before I launch the missiles is calculate and draw the flight paths. In the attached example, the launch vehicle has a heading of 90 degrees and the targets are behind it. Both missiles are launched at a relative heading of -45 or +45 degrees to the launch vehicle's heading. The missiles initially turn towards the target with a known turn radius. I have to calculate the point at which the turn brings the missile to the heading at which it can fly directly at the target. Obviously, if the target is at or near 45 degrees, there is no initial turn; the missile just goes straight for the target. After the missile is launched, the map will also show the missile tracking along this line as an indication of its flight path. I am working on a simulator which mimics operational software, so I need to draw the calculated flight path before I allow the missile to be launched. In this example the targets are behind the launch vehicle, but the precalculated paths are drawn.
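
    This is the tangent-from-an-external-point construction (a sketch of the 2D geometry, assuming a flat map). Let $C$ be the centre of the turn circle, which lies at distance $r$ from the launch point, perpendicular to the launch heading on the side of the turn; let $T$ be the target and $d = |T - C|$. The missile leaves the arc at the tangent point

        P = C + r\,(\cos\alpha,\; \sin\alpha), \qquad \alpha = \operatorname{atan2}(T_y - C_y,\; T_x - C_x) \pm \arccos\!\left(\frac{r}{d}\right)

    with the sign matching the turn direction. This requires $d \ge r$; a target inside the turn circle cannot be reached from this arc. The pre-drawn flight path is then the arc from the launch point to $P$, followed by the straight segment from $P$ to $T$.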

  • Sending an android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection, and it needs a Context parameter. The JniHelper allows me to call static functions with parameters, but I don't know how to "retrieve" the Cocos2d-x Activity class to use it as the parameter.

        public static boolean isNetworkAvailable(Context context) {
            boolean haveConnectedWifi = false;
            boolean haveConnectedMobile = false;
            ConnectivityManager cm = (ConnectivityManager) context.getSystemService(
                    Context.CONNECTIVITY_SERVICE);
            NetworkInfo[] netInfo = cm.getAllNetworkInfo();
            for (NetworkInfo ni : netInfo) {
                if (ni.getTypeName().equalsIgnoreCase("WIFI"))
                    if (ni.isConnected())
                        haveConnectedWifi = true;
                if (ni.getTypeName().equalsIgnoreCase("MOBILE"))
                    if (ni.isConnected())
                        haveConnectedMobile = true;
            }
            return haveConnectedWifi || haveConnectedMobile;
        }

    and the C++ code is:

        JniMethodInfo methodInfo;
        if ( !JniHelper::getStaticMethodInfo( methodInfo,
                "my/app/TestApp",
                "isNetworkAvailable",
                "(android/content/Context;)V")) {
            //error
            return;
        }
        CCLog( "Method found and loaded!");
        methodInfo.env->CallStaticVoidMethod( methodInfo.classID, methodInfo.methodID);
        methodInfo.env->DeleteLocalRef( methodInfo.classID);
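
    One common pattern sidesteps passing a Context through JNI entirely (a sketch with hypothetical names, not part of the cocos2d-x API): cache the application Context in a static field from the activity's onCreate, and give the checker a no-argument overload.

        import android.content.Context;

        public class TestApp {
            private static Context sContext;

            // Call once from the activity's onCreate().
            public static void init(Context context) {
                sContext = context.getApplicationContext();
            }

            // JNI-friendly entry point: the signature becomes "()Z".
            public static boolean isNetworkAvailable() {
                return isNetworkAvailable(sContext);
            }
        }

    The native call then uses getStaticMethodInfo(methodInfo, "my/app/TestApp", "isNetworkAvailable", "()Z") and CallStaticBooleanMethod. Note that the signature string in the snippet above would in any case need the object form "(Landroid/content/Context;)Z": an L prefix and semicolon around the class name, and Z for the boolean return, where "(android/content/Context;)V" declares a void return.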

  • What is the simplest 3D software for Unity?

    - by kdavis8
    I've heard a lot about DAZ Studio, Poser, Maya, K-3D, Anim8or, Blender, and all the rest. My question is: which one is the best choice in terms of simplicity and quality? Price is not really an issue. I'm programming games in Java for Android mobile devices at the moment, but I will eventually move on to larger platforms. I would like to use Unity3D for the game programming itself and a 3D modelling package just to create the game objects. I just need to know the best one to get started with from scratch, or should I use a combination of several? Any insight would be great, thanks!

  • Game actions that take multiple frames to complete

    - by CantTetris
    I've never really done much game programming before; this is a pretty straightforward question. Imagine I'm building a Tetris game, with the main loop looking something like this:

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    remove all complete rows
                    move rows down so there are no gaps
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    Everything in the game so far happens instantly: things are spawned instantly, rows are removed instantly, etc. But what if I don't want things to happen instantly (i.e. I want to animate them)?

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    ?? animate complete rows disappearing
                       (somehow, wait over multiple frames until the animation is done)
                    ?? animate rows moving downwards (and again, wait over multiple frames)
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    In my Pong clone this wasn't an issue, as every frame I was just moving the ball and checking for collisions. How can I wrap my head around this issue? Surely most games involve some action that takes more than a frame, during which other things halt until the action is done.
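
    The usual answer is to make the waiting explicit as a state machine: the loop still runs every frame, but what a frame does depends on the current phase, and timed phases count down before handing control back. A minimal sketch under that assumption (every helper name below is a placeholder for the existing Tetris logic):

        // Each frame still ticks; multi-frame actions are just states with timers.
        enum Phase { FALLING, CLEARING, COLLAPSING, GAME_OVER }

        Phase phase = Phase.FALLING;
        float phaseTime = 0f; // seconds left in a timed phase

        void update(float dt) {
            switch (phase) {
                case FALLING:
                    if (timeToDrop(dt)) {
                        if (canMoveDown()) moveDown();
                        else if (hasCompleteRows()) {
                            phase = Phase.CLEARING;
                            phaseTime = 0.4f; // clear-animation length (illustrative)
                        } else spawnOrEnd();
                    }
                    break;
                case CLEARING:
                    phaseTime -= dt; // the animation itself renders elsewhere
                    if (phaseTime <= 0f) {
                        removeCompleteRows();
                        phase = Phase.COLLAPSING;
                        phaseTime = 0.2f; // collapse-animation length
                    }
                    break;
                case COLLAPSING:
                    phaseTime -= dt;
                    if (phaseTime <= 0f) {
                        collapseRows();
                        spawnOrEnd();
                    }
                    break;
                case GAME_OVER:
                    break;
            }
        }

        void spawnOrEnd() {
            if (canSpawn()) { spawnBlock(); phase = Phase.FALLING; }
            else phase = Phase.GAME_OVER;
        }

    Nothing actually halts: input and rendering keep running every frame, and "waiting" is just a phase whose update does little more than decrement a timer.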

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to "Texture artifacts on iPad". How does one "clear the alpha of the render-texture framebuffer object"? I've searched around here, Stack Overflow and various search engines, but no luck. I've tried a few things, for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop, but it doesn't seem to make a difference. Any help is appreciated, since I'm still new to OpenGL. Cheers!
    P.S. I read on Stack Overflow and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
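
    A sketch of one way to clear only the alpha channel of the bound render-texture FBO (shown with Android's GLES20 bindings purely for illustration; the same calls exist in iOS's OpenGL ES C API): mask writes down to alpha, clear, then restore the mask.

        // With the render-texture FBO bound: clear alpha to 0, leave RGB intact.
        GLES20.glColorMask(false, false, false, true); // gate writes to alpha only
        GLES20.glClearColor(0f, 0f, 0f, 0f);           // clear value: alpha = 0
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glColorMask(true, true, true, true);    // restore RGBA writes

    glClear at the top of the render loop is still good advice for the on-screen framebuffer (it lets tile-based GPUs discard the old contents), but it only affects whichever framebuffer is bound when it is called, which is why clearing in the main loop didn't touch the render texture.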

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake 3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding' which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing your input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command; this seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed later is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe it helps for journaling purposes? Thanks for any insights!
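
    For concreteness, the relay itself is tiny; a rough generic sketch of the pattern being discussed (not Quake 3's actual code). The commonly cited benefits fall straight out of the shape: binds, console input, scripts and demo journals all feed one queue, and the timestamp lets movement code integrate a held '+forward' from the exact key-down time rather than from the frame on which it was parsed, so the text detour removes rather than adds latency.

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.Map;

        // Key -> command-string relay; commands queue up for the
        // command interpreter to drain once per frame.
        class KeyRelay {
            private final Map<Integer, String> binds = new HashMap<>();
            private final Deque<String> commandQueue = new ArrayDeque<>();

            void bind(int keyCode, String command) {
                binds.put(keyCode, command);
            }

            void onKeyEvent(int keyCode, boolean down, long timeMs) {
                String cmd = binds.get(keyCode);
                if (cmd == null) return;
                if (cmd.startsWith("+")) {
                    // Held actions: emit the matching "-" command on release,
                    // with key and timestamp appended (the Quake-style detail).
                    String verb = down ? cmd : "-" + cmd.substring(1);
                    commandQueue.add(verb + " " + keyCode + " " + timeMs);
                } else if (down) {
                    commandQueue.add(cmd); // one-shot commands fire on press
                }
            }

            String nextCommand() {
                return commandQueue.poll(); // drained by the interpreter
            }
        }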

  • Collision detection of convex shapes on voxel terrain

    - by Dave
    I have some standard convex shapes (cubes, capsules) on a voxel terrain. It is very easy to detect single-vertex collisions. However, it becomes computationally expensive when many vertices are involved. To clarify: currently my algorithm represents a cube as multiple vertices covering every face of the cube, not just the corners. This is because the cubes can be much bigger than the voxels, so multiple sample points (vertices) are required (the distance between sample points must be at most the width of a voxel, or voxels can be missed). This very rapidly becomes intractable. It would be great if there were some standard algorithm(s) for collision detection between convex shapes and arbitrary voxel-based terrain (like there is with OBBs and the separating axis theorem, etc.). Any help much appreciated.
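
    One standard replacement for surface sample points, sketched generically below (isSolid and the voxel indexing are assumptions about the terrain API): iterate the voxel cells overlapped by the shape's bounding box, then run a precise convex-vs-box test only on the solid cells. The cost scales with the shape's volume in voxels, with no sample-spacing tuning.

        // Broad phase: visit exactly the voxels an AABB overlaps.
        // voxelSize and isSolid(x, y, z) are assumed to come from the terrain.
        boolean collides(float minX, float minY, float minZ,
                         float maxX, float maxY, float maxZ,
                         float voxelSize) {
            int x0 = (int) Math.floor(minX / voxelSize), x1 = (int) Math.floor(maxX / voxelSize);
            int y0 = (int) Math.floor(minY / voxelSize), y1 = (int) Math.floor(maxY / voxelSize);
            int z0 = (int) Math.floor(minZ / voxelSize), z1 = (int) Math.floor(maxZ / voxelSize);
            for (int x = x0; x <= x1; x++)
                for (int y = y0; y <= y1; y++)
                    for (int z = z0; z <= z1; z++)
                        if (isSolid(x, y, z))
                            return true; // narrow-phase test vs. this cell goes here
            return false;
        }

    For the narrow phase, each solid voxel is itself a box, so capsule-vs-voxel reduces to closest-point-on-segment against an AABB, and cube-vs-voxel to a box-box test (where SAT does apply).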

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    Just recently I have been trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it.
    1. I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this?
    2. I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level?
    3a. How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case, would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this (raycast maybe)?
    3b. Do the chunks calculate the camera distance themselves?
    4. Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? Example below:
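
    As a reference point for questions 1-3 (a generic chunked-LOD sketch, not Unity-specific; the names are illustrative): chunks are usually built offline or at generation time, one fixed-resolution mesh per quadtree node (so for question 4: yes, typically every chunk is the same grid size, e.g. 32x32, each level just covering a quarter of its parent's area), and each frame a selection pass walks the tree comparing camera distance against node size, using the node's stored AABB directly rather than collision shapes or raycasts.

        // Recursive LOD selection: refine a node while it is close
        // relative to its size, otherwise draw its chunk mesh.
        static final float DETAIL_FACTOR = 2.0f; // quality/performance trade-off

        void select(Node node, Vector3 cameraPos, List<Node> visible) {
            float d = node.bounds.distanceTo(cameraPos); // point-to-AABB distance
            if (node.children != null && d < node.size * DETAIL_FACTOR) {
                for (Node child : node.children)
                    select(child, cameraPos, visible);
            } else {
                visible.add(node); // render this chunk's fixed-resolution mesh
            }
        }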

  • Stack Overflow Error

    - by dylanisawesome1
    I recently created a recursive cave algorithm and would like to have more extensive caves, but I get a stack overflow after recursing a couple of times. Any advice? Here's my code:

        for (int i = 0; i < 100; i++) {
            int rand = new Random().nextInt(100);
            if (rand <= 20) {
                if (curtile.bounds.y - 40 > 500 + new Random().nextInt(20))
                    digDirection(Direction.UP);
            }
            if (rand <= 40 && rand > 20) {
                if (curtile.bounds.y + 40 < m.height)
                    digDirection(Direction.DOWN);
            }
            if (rand <= 60 && rand > 40) {
                if (curtile.bounds.x - 40 > 0)
                    digDirection(Direction.LEFT);
            }
            if (rand <= 80 && rand > 60) {
                if (curtile.bounds.x + 40 < m.width)
                    digDirection(Direction.RIGHT);
            }
        }

        public void digDirection(Direction d) {
            if (new Random().nextInt(100) <= 10) {
                new Miner(curtile, map);
                // try {
                //     Thread.sleep(2);
                // } catch (InterruptedException e) {
                //     // TODO Auto-generated catch block
                //     e.printStackTrace();
                // }
                // Tried this to avoid the stack overflow. Didn't work.
            }
        }
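
    The sleep can't help: the overflow comes from recursion depth, not speed. If each new Miner digs (and can spawn another Miner) from inside its constructor, every spawn adds stack frames that only unwind once the entire cave is finished. The standard fix is an explicit work queue in place of recursion; a rough sketch under that assumption (Tile and the dig step are placeholders for the existing types):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Random;

        // Iterative version: pending dig sites wait in a heap-allocated queue
        // instead of as nested constructor calls on the stack.
        Deque<Tile> pending = new ArrayDeque<>();
        pending.add(startTile);
        Random random = new Random(); // one shared instance beats new Random() per roll
        while (!pending.isEmpty()) {
            Tile tile = pending.remove();
            // ... run the 100-step dig loop above, starting from 'tile' ...
            if (random.nextInt(100) <= 10) {
                pending.add(tile); // was: new Miner(curtile, map);
            }
        }

    The queue can grow as large as the heap allows, so cave size is no longer capped by stack depth; reusing one Random instance also avoids the correlated rolls you can get from constructing a new generator on every call.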

  • OpenGL ES Basic Fragment Shader help with transparency

    - by Chris
    I have just spent my first half hour playing with the shader language. I have modified the basic program I have which renders the texture, to allow me to colour the texture:

        varying vec2 texCoord;
        uniform sampler2D texSampler;

        /* Given the texture coordinates, our pixel shader grabs the corresponding
         * color from the texture. */
        void main()
        {
            //gl_FragColor = texture2D(texSampler, texCoord);
            gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).xyz,1);
        }

    I have noticed how this affects my transparent textures, and I believe I am losing the alpha channel, which would explain why previously transparent areas appear totally black. If I use the following line instead, I am shown the transparent areas:

        gl_FragColor = vec4(0,1,0,1)*vec4(texture2D(texSampler,texCoord).aaa,1);

    How can I retain the transparency after this modification to the colour? I have seen various things about a .w property, and also luminance, but my tweaks with those and the .aaa property are not working XD

  • Why am I getting the same result when deleting target?

    - by XNA
    In the following code we use target in the function:

        moon.mouseEnabled = false;
        sky0.addChild(moon);
        addEventListener(MouseEvent.MOUSE_DOWN, onDrag, false, 0, true);
        addEventListener(MouseEvent.MOUSE_UP, onDrop, false, 0, true);

        function onDrag(evt:MouseEvent):void {
            evt.target.addChild(moon);
            evt.target.startDrag();
        }

        function onDrop(evt:MouseEvent):void {
            stopDrag();
        }

    But if I rewrite this code without evt.target, it still works. So what is the difference? Am I going to get errors later at run time because I didn't use target? If not, then why do some people use target a lot when it works without it?

        function onDrag(evt:MouseEvent):void {
            addChild(moon);
            startDrag();
        }
