Search Results

Search found 35343 results on 1414 pages for 'development tools'.


  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel in UDK showing how one can attain beautiful results using just one (rather tiny) texture that's being sent to the shader pipeline. The famous link is this one. Basically, the author states that they've used just one texture and gives a snapshot of the technique here. I see that each RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. To top that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to store a half-space normal map using only the R and G channels of an RGB texture, but what about the rest of the information? Since it has been proven possible, could someone please explain how it was done (the big picture, not the dirty details)? Here's the texture being used.
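
    Below is a rough illustration of what the big picture is probably about, written in plain Java rather than shader code for readability: each channel acts as a blend mask for a flat material colour, and a normal map stored in two channels can be rebuilt because the normal is unit length. This is only an educated guess at the technique; every name here is made up and none of it is taken from the author's material.

      // Educated guess at channel-packed mask blending (not the author's actual shader).
      final class ChannelBlendSketch {
          // maskR..maskA are the four grayscale values read from one RGBA texel, in [0, 1];
          // colR..colA are four flat material colours (r, g, b) chosen by the artist.
          static float[] blend(float maskR, float maskG, float maskB, float maskA,
                               float[] colR, float[] colG, float[] colB, float[] colA) {
              float[] out = new float[3];
              for (int i = 0; i < 3; i++) {
                  out[i] = maskR * colR[i] + maskG * colG[i] + maskB * colB[i] + maskA * colA[i];
              }
              return out; // a fragment shader would do this per pixel, then apply lighting
          }

          // A normal packed into two channels can drop its z component, since the vector is unit length.
          static float normalZ(float nx, float ny) {
              return (float) Math.sqrt(Math.max(0f, 1f - nx * nx - ny * ny));
          }
      }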

    Read the article

  • Drawing a line using openGL does not work

    - by vikasm
    I am a beginner in OpenGL and tried to write my first program to draw some points and a line. I can see that the window opens with a white background but no line is drawn. I was expecting to see red colored (because glColor3f(1.0, 0.0, 0.0);) dots (pixels) and a line. But nothing is seen. Here is my code. void init2D(float r, float g, float b) { glClearColor(r,g,b,0.0); glMatrixMode(GL_PROJECTION); gluOrtho2D(0.0, 200.0, 0.0, 150.0); } void display() { glClear(GL_COLOR_BUFFER_BIT); glColor3f(1.0, 0.0, 0.0); glBegin(GL_POINTS); for(int i = 0; i < 10; i++) { glVertex2i(10+5*i, 110); } glEnd(); //draw a line glBegin(GL_LINES); glVertex2i(10,10); glVertex2i(100,100); glEnd(); glFlush(); } int main(int argc, char** argv) { //Initialize Glut glutInit(&argc, argv); //setup some memory buffers for our display glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); //set the window size glutInitWindowSize(500, 500); //create the window with the title 'points and lines' glutCreateWindow("Points and Lines"); init2D(0.0, 0.0, 0.0); glutDisplayFunc(display); glutMainLoop(); } I wanted to verify that the GL context was opening properly and used this code: int main(int argc, char **argv) { glutInit(&argc, argv); //setup some memory buffers for our display glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); //set the window size glutInitWindowSize(500, 500); //create the window with the title 'points and lines' glutCreateWindow("Points and Lines"); char *GL_version=(char *)glGetString(GL_VERSION); puts(GL_version); char *GL_vendor=(char *)glGetString(GL_VENDOR); puts(GL_vendor); char *GL_renderer=(char *)glGetString(GL_RENDERER); puts(GL_renderer); getchar(); return 0; } And the output I got was: 3.1.0 - Build 8.15.10.2345 Intel Intel(R) HD Graphics Family Can someone point out what I am doing wrong? Thanks.

    Read the article

  • Move Camera Freely Around Object While Looking at It

    - by Alex_Hyzer_Kenoyer
    I've got a 3D model loaded (a planet) and a camera that I want the user to be able to move freely around it. I have no problem getting the camera to orbit the planet around either the x or y axis; my problem is that I have no idea how to move it around any other axis. I am using OpenGL on Android with the libGDX library. I want the camera to orbit the planet in the direction that the user swipes their finger across the screen.
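
    A rough libGDX sketch of one way to do it (assuming a PerspectiveCamera, and that planetCenter, dragX and dragY come from your own input handling; the sensitivity value is made up): rotate the camera around the planet's centre about the world up axis for horizontal swipes and about the camera's current right vector for vertical swipes.

      import com.badlogic.gdx.graphics.PerspectiveCamera;
      import com.badlogic.gdx.math.Vector3;

      // Call once per frame while the user is dragging; dragX/dragY are the swipe deltas in pixels.
      void orbit(PerspectiveCamera camera, Vector3 planetCenter, float dragX, float dragY) {
          float sensitivity = 0.25f; // degrees per pixel, tune to taste
          // horizontal swipe: orbit around the world up axis
          camera.rotateAround(planetCenter, new Vector3(0f, 1f, 0f), -dragX * sensitivity);
          // vertical swipe: orbit around the camera's current right vector
          Vector3 right = camera.direction.cpy().crs(camera.up).nor();
          camera.rotateAround(planetCenter, right, -dragY * sensitivity);
          camera.lookAt(planetCenter);
          camera.update();
      }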

    Read the article

  • Tutorial on OpenGL texture formats

    - by Cyan
    Looking at the documentation for glGetTexImage(), one can see that there are plenty of available texture targets: GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_1D_ARRAY, GL_TEXTURE_2D_ARRAY, GL_TEXTURE_RECTANGLE, GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. I've only used GL_TEXTURE_2D for the time being. Is there any place or documentation where one can learn about these other targets? PS: and yes, of course, I've googled for it; the results are pretty poor.

    Read the article

  • How can I stop my Jitter physics meshes being offset?

    - by ben1066
    I'm developing a C# game engine and have hit a snag trying to add physics. I'm using XNA for graphics and Jitter for physics. I am trying to split the XNA model into its meshes, then create a ConvexHull for each mesh. I then attempt to combine those into a CompoundObject; this, however, isn't working, and depending upon the model the meshes are offset by different amounts. This is the code I'm currently using, and the meshes it produces come out offset. Any ideas?

    Read the article

  • How to configure background image to be at the bottom OpenGL Android

    - by Maxim Shoustin
    I have a class that draws a white line: public class Line { //private FloatBuffer vertexBuffer; private FloatBuffer frameVertices; ByteBuffer diagIndices; float[] vertices = { -0.5f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; public Line(GL10 gl) { // a float has 4 bytes so we allocate for each coordinate 4 bytes ByteBuffer vertexByteBuffer = ByteBuffer.allocateDirect(vertices.length * 4); vertexByteBuffer.order(ByteOrder.nativeOrder()); // allocates the memory from the byte buffer frameVertices = vertexByteBuffer.asFloatBuffer(); // fill the vertexBuffer with the vertices frameVertices.put(vertices); // set the cursor position to the beginning of the buffer frameVertices.position(0); } /** The draw method for the triangle with the GL context */ public void draw(GL10 gl) { gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(2, GL10.GL_FLOAT, 0, frameVertices); gl.glColor4f(1.0f, 1.0f, 1.0f, 1f); gl.glDrawArrays(GL10.GL_LINE_LOOP , 0, vertices.length / 3); gl.glLineWidth(5.0f); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } It works fine. The problem is that when I add a background image, I don't see the line: glView = new GLSurfaceView(this); // Allocate a GLSurfaceView glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); glView.setRenderer(new mainRenderer(this)); // Use a custom renderer glView.setBackgroundResource(R.drawable.bg_day); // <- BG glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); How can I fix that?
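
    One commonly suggested workaround (a sketch only, not tested against this exact code): don't set the background resource on the GLSurfaceView itself; keep the GL surface translucent and z-ordered on top, clear it with an alpha of 0, and put the image in a normal View behind it. The layout objects below are new; glView, mainRenderer and R.drawable.bg_day are the names from the question.

      glView = new GLSurfaceView(this);
      glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
      glView.setZOrderOnTop(true);                          // GL surface drawn above the window background
      glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
      glView.setRenderer(new mainRenderer(this));           // in the renderer, clear with
                                                            // gl.glClearColor(0f, 0f, 0f, 0f) so empty pixels stay transparent
      FrameLayout root = new FrameLayout(this);
      ImageView background = new ImageView(this);
      background.setImageResource(R.drawable.bg_day);       // the background image lives in an ordinary View
      root.addView(background);
      root.addView(glView);
      setContentView(root);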

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: For each type of object, the values of some keys change more often than others. Bonus constraint 2: Memory usage on the server side is relatively expensive.
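
    One concrete illustration of the usual approach (a sketch; the integer-valued dictionary and the 16-bit key ids are simplifying assumptions): agree on a small numeric id per key, remember the last value sent for each object, and each frame serialize only the entries that changed. Frequently changing keys (bonus constraint 1) then pay only for their values, while stable keys cost nothing; the trade-off is keeping a last-sent copy per object on the server, which matters for bonus constraint 2.

      import java.nio.ByteBuffer;
      import java.util.HashMap;
      import java.util.Map;

      final class DeltaEncoder {
          private final Map<Integer, Integer> lastSent = new HashMap<>(); // keyId -> last value sent

          // Writes (keyId, value) pairs for changed entries only; returns how many entries were written.
          int encode(Map<Integer, Integer> current, ByteBuffer out) {
              int written = 0;
              for (Map.Entry<Integer, Integer> e : current.entrySet()) {
                  Integer prev = lastSent.get(e.getKey());
                  if (prev == null || !prev.equals(e.getValue())) {
                      out.putShort(e.getKey().shortValue()); // 2-byte key id; a varint would be tighter
                      out.putInt(e.getValue());              // use the narrowest type the value allows
                      lastSent.put(e.getKey(), e.getValue());
                      written++;
                  }
              }
              return written;
          }
      }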

    Read the article

  • Frame rate on one of two machines running same code seems to be capped at 60 for no reason

    - by dennmat
    ISSUE I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is on the laptop it will display 300 fps (for example) whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it never actually gets there and only does 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere. I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this. SETUPS Desktop: Visual Studio Express 2012 Windows 7 Ultimate 64-bit Laptop: Visual Studio Express 2010 Windows 7 Ultimate 64-bit The libraries (Allegro, Box2D) are the same versions on both setups. CODE Main Loop: while(!abort) { frameTime = al_get_time(); if (frameTime - lastTime >= 1.0) { lastFps = fps/(frameTime - lastTime); lastTime = frameTime; avgMspf = cumMspf/fps; cumMspf = 0.0; fps = 0; } /** DRAWING/UPDATE CODE **/ fps++; cumMspf += al_get_time() - frameTime; } Note: There is no blocking code in the loop at any point. Where I'm at My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution], so seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same. And I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    Read the article

  • Android Activity access Unity Classes

    - by Anomaly
    I have made my own C# classes in Unity. Is there any way I can access these classes from the Android Activity that starts the UnityPlayer? Example: I have a C# class called testClass in Unity: class testClass{ public static string myString="test string"; } From the Android activity in Java I want to access that class: string str=testClass.myString; Is this possible? If so, how? Or is there some other way to do this? In the end I basically want to communicate between my Android activity and the UnityPlayer object. Thanks in advance. EDIT: OK, so I looked at building Android plugins for Unity, but this wasn't satisfactory to me. I ended up building a socket client-server interface in Unity with C# and another one in Java for the Android app: Unity listens on port X and broadcasts on port Y; the Android activity listens on port Y and broadcasts on port X. This is necessary as both interfaces are running on the same host. So that's how I solved my problem, but I'm open to any suggestions if anyone knows a better way of communicating between the UnityPlayer and your app.
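
    For reference, a minimal sketch of the Android side of such a socket bridge (the port constant and the handler method are placeholders, not code from the actual project): a background thread accepts connections from the UnityPlayer side and reads one line per message.

      // Somewhere in the activity; PORT_FROM_UNITY and handleMessageFromUnity(...) are hypothetical.
      // Needs java.net.ServerSocket/Socket, java.io.* and android.util.Log imports.
      Thread listener = new Thread(new Runnable() {
          @Override public void run() {
              try (ServerSocket server = new ServerSocket(PORT_FROM_UNITY)) {
                  while (!Thread.currentThread().isInterrupted()) {
                      try (Socket client = server.accept();
                           BufferedReader in = new BufferedReader(
                                   new InputStreamReader(client.getInputStream(), "UTF-8"))) {
                          handleMessageFromUnity(in.readLine()); // one message per connection, for simplicity
                      }
                  }
              } catch (IOException e) {
                  Log.e("UnityBridge", "listener stopped", e);
              }
          }
      });
      listener.start();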

    Read the article

  • How do I implement a "sliding out of / into" effect on a settings menu similar to that in Angry Birds?

    - by VictorB
    I'm trying to implement a settings menu component similar to that in Angry Birds - a button control that makes an options menu slide out of it and back into it when clicked on. I use scene2d.ui to build the UI components: a Button in a Table to implement the button control, a Table to implement the options menu, and a Stack to lay these out one on top of the other. At the moment I have the following behavior: when the user hits the button control the first time, the alpha of the table component is set to 1; when the user hits it a second time, the alpha is set to 0; and so on. Any ideas how I can get the slide-out/slide-in effect on user clicks with libgdx, similar to what Angry Birds provides? Maybe using the TweenEngine, actions, interpolations, or combinations of these? Thanks in advance.
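
    scene2d actions alone can do this without the TweenEngine; here is a rough sketch (the positions, the 0.3f duration and the menuOpen flag are made-up names): instead of toggling alpha, give the options Table a moveTo action with an interpolation each time the button is clicked.

      import com.badlogic.gdx.math.Interpolation;
      import com.badlogic.gdx.scenes.scene2d.actions.Actions;

      // Inside the button's ClickListener/ChangeListener; openX/openY/closedX/closedY are hypothetical coordinates.
      if (menuOpen) {
          optionsTable.addAction(Actions.moveTo(closedX, closedY, 0.3f, Interpolation.swingIn));
      } else {
          optionsTable.addAction(Actions.moveTo(openX, openY, 0.3f, Interpolation.swingOut));
      }
      menuOpen = !menuOpen;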

    Read the article

  • Changing the rendering resolution while maintaining the design layout

    - by Coyote
    I would like to increase the FPS of my project. Currently I would like to try reducing the resolution at which the scenes are rendered. Let's say I never want to draw more than 1280*720, whatever the real resolution is. How should I proceed? I tried pEGLView->setFrameSize(1280, 720); but it only reduces the displayed size of the frame on screen (boxing). In my activity I tried setting the size of the "surface", but this seems to completely break the layout (as defined by setDesignResolutionSize). @Override public Cocos2dxGLSurfaceView onCreateView() { Cocos2dxGLSurfaceView surfaceView = new Cocos2dxGLSurfaceView(this); surfaceView.getHolder().setFixedSize(1280, 720); return surfaceView; } Is there a way to simply change the rendering resolution while keeping the design layout?

    Read the article

  • Using OpenCl to jiggle the Pipe

    - by TOAOGG
    I've got the idea to use OpenCL to program a simple renderer. A clear downside is that this approach presumably won't benefit from the dedicated graphics hardware the way the fixed pipeline does (I think). Would it be useful to do this in OpenCL? Let's say we want to cull as early as possible so we won't have many per-vertex operations. Is it correct that culling is done after the vertex shader? For static vertices that won't get affected by the shader, it could be interesting to cull them beforehand. Another idea would be a deferred renderer. So the main question is: would it make sense to program a renderer in OpenCL (aside from the effort)? The resulting picture would be drawn in OpenGL.

    Read the article

  • Repelling a rigidbody in the direction an object is rotating

    - by ndg
    Working in Unity, I have a game object which I rotate each frame, like so: void Update() { transform.Rotate(new Vector3(0, 1, 0) * speed * Time.deltaTime); } However, I'm running into problems when it comes to applying a force to rigidbodies that collide with this game object's sphere collider. The effect I'm hoping to achieve is that objects which touch the collider are thrown in roughly the same direction as the object is rotating. To do this, I've tried the following: Vector3 force = ((transform.localRotation * Vector3.forward) * 2000) * Time.deltaTime; collision.gameObject.rigidbody.AddForce(force, ForceMode.Impulse); Unfortunately this doesn't always match the rotation of the object. To debug the issue, I wrote a simple OnDrawGizmos script, which (strangely) appears to draw the line correctly oriented to the rotation. void OnDrawGizmos() { Vector3 pos = transform.position + ((transform.localRotation * Vector3.forward) * 2); Debug.DrawLine(transform.position, pos, Color.red); } You can see the result of the OnDrawGizmos function below: What am I doing wrong?

    Read the article

  • What is this type of sound effect called?

    - by Fibericon
    There is a sound typically associated with a bright flash of light, which starts with a lower whirring noise, then breaks into a higher pitched sound. What is that type of sound called? I'm not sure how to begin searching for that, so a typical name for it would be very helpful. It's something similar to what occurs at 0:41 in this youtube video (here's a link to a few seconds beforehand), where Naruto 6 tails transforms into Kyuubei in Naruto Generations.

    Read the article

  • XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4

    - by danbystrom
    Beginner question. Synopsis: my water effect does something that causes the drawing of my sky sphere to throw an exception when run in full screen. The exception is: XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4. This happens both when I start in full screen directly and when I switch to full screen from windowed. It does NOT happen, however, if I comment out the drawing of my water. So, what in my water effect could possibly cause the drawing of my sky sphere to choke?

    Read the article

  • What is the most efficient way to add and remove Slick2D sprites?

    - by kirchhoff
    I'm making a game in Java with Slick2D and I want to create planes that shoot: int maxBullets = 40; static int bullet = 0; Missile missile[] = new Missile[maxBullets]; I want to create/move my missiles in the most efficient way and I would appreciate your advice: public void shoot() throws SlickException{ if(bullet<maxBullets){ if(missile[bullet] != null){ missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); }else{ missile[bullet] = new Missile("resources/missile.png", plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); } }else{ bullet = 0; missile[bullet].resetLocation(plane.getCenterX(), plane.getCenterY(), plane.image.getRotation()); } bullet++; } I created the method resetLocation in my Missile class in order to avoid loading the resource again. Is that correct? In the update method I've got this to move all the missiles: if(bullet > 0 && bullet < maxBullets){ float hyp = 0.4f * delta; if(bullet == 1){ missile[0].move(hyp); }else{ for(int x = 0; x<bullet; x++){ missile[x].move(hyp); } } }
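
    Reusing instances like this is reasonable; one possible refinement is to treat the array purely as a pool, so shooting never depends on the bullet counter wrapping and the update never touches dead missiles. This is a sketch only, and it assumes Missile gains an isActive() flag that move() clears when the missile expires (that flag is not in the original code):

      Missile[] pool = new Missile[maxBullets];

      public void shoot(float x, float y, float rotation) throws SlickException {
          for (int i = 0; i < pool.length; i++) {
              if (pool[i] == null) {
                  pool[i] = new Missile("resources/missile.png", x, y, rotation); // created once per slot
                  return;
              }
              if (!pool[i].isActive()) {               // hypothetical flag: recycle an expired missile
                  pool[i].resetLocation(x, y, rotation);
                  return;
              }
          }
          // pool exhausted: ignore the shot or enlarge the pool
      }

      public void update(float hyp) {
          for (Missile m : pool) {
              if (m != null && m.isActive()) {
                  m.move(hyp);
              }
          }
      }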

    Read the article

  • Is it only possible to display 64k vertices on the monitor with 16bit?

    - by Aufziehvogel
    I did the first 3D tutorial over at riemers.net and discovered that my graphics card only supports Shader Model 2.0 (the Reach profile in XNA), which means I can only use Int16 to store the indices (triangle to vertex). This means that I can only address 2^16 = 65536 vertices. Also, I read on the internet that you should prefer 16-bit over 32-bit indices because not all hardware (like mine) supports 32-bit. Yet I am wondering: do all game scenes really get along with so few vertices? I thought faces of people alone used a lot of polygons (which are made up of vertices?). It's not relevant for me yet, but I am interested: Do game scenes use only 65536 vertices? Do you use some trade-off to display more (e.g. 64k in the GPU buffer, the rest in RAM)? Is there some method to get more into the GPU buffer? I already read in some other posts that there seems to be a limit of 64k per mesh too, so maybe you can compact stuff into meshes?
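
    To make the usual workaround concrete (a sketch; the 65536 figure assumes unsigned 16-bit indices): the limit applies per index buffer, i.e. per mesh or draw call, not per scene, so large geometry is simply split into several vertex/index buffer pairs.

      // Returns the first-vertex offset of each sub-mesh when a big vertex array has to be
      // split so that every sub-mesh stays addressable with 16-bit indices.
      static int[] chunkOffsets(int totalVertices) {
          final int MAX_VERTS = 65536;                 // 2^16 values addressable per index buffer
          int chunks = (totalVertices + MAX_VERTS - 1) / MAX_VERTS;
          int[] offsets = new int[chunks];
          for (int i = 0; i < chunks; i++) {
              offsets[i] = i * MAX_VERTS;
          }
          return offsets;
      }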

    Read the article

  • Alternatives to voxel-based terrain

    - by Neomex
    Are there any alternatives to voxel-based terrains? Such terrain should be fully destructible, allow for arches and overhangs, preserve sharp features where needed, and keep consistent topology. "Maybe you can explain the problem that makes you ask this question? Voxel-based terrain is basically just using a 3D grid of data to store data. There are lots of ways to render that data, but it doesn't get much simpler for storing it." – Byte56 Current isosurface extraction methods aren't the most effective or bug-free. Cubical Marching Squares seems to solve most of the issues; however, it is a relatively new method and there aren't too many resources about it (I've found a single university paper). Even if we stick to CMS, when we want to add multi-material support we can either divide the surface into multiple meshes, or pass a texture array or texture atlas to the shaders; then we are limited to a set number of textures and additionally increase memory usage a lot.

    Read the article

  • In a multiplayer game, should I store the list of character names on the Player class?

    - by Gökhan Nas
    I am writing a multiplayer game that has an account system and a character creation system like standard MMORPGs. I have a question about the name creation issue. I think that I can create a static variable on the Player class that keeps the created player names, but it confused me. It would tell me whether a name is valid or invalid depending on whether another player already has that name. Questions: Does this implementation make sense? If I have 1000 players, does that mean it consumes 1000 times the memory of this list, or does it just consume as much as if there were one? Where would you suggest I keep the player name list? A new class?
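
    In Java (and C#), a static field is a single copy shared by the class, so 1000 Player instances would not mean 1000 lists; still, a separate server-side registry is usually cleaner than hanging it off Player. A minimal sketch, with class and method names made up for illustration:

      import java.util.Set;
      import java.util.concurrent.ConcurrentHashMap;

      final class NameRegistry {
          private final Set<String> takenNames = ConcurrentHashMap.newKeySet();

          // Returns true and reserves the name if it was free; false if another player already has it.
          boolean tryReserve(String name) {
              return takenNames.add(name.toLowerCase());
          }

          void release(String name) {
              takenNames.remove(name.toLowerCase());
          }
      }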

    Read the article

  • Start Game Programming [on hold]

    - by vishalpamnani
    I am 23 and working as a Software Developer. Though my work is entirely based on Java and Advanced Java, I know very little about game development, and all my interest is in developing games. I want to make my career in the gaming industry as a Game Programmer. I am not able to figure out the first step to take to get started with game programming. I have zero knowledge of developing games and have never tried even the tiniest of games. Please suggest where to start. Which programming language should I start with? What should I practice? What references should I use? What type of games should I begin with? BTW, my preferred language would be C++ ~Thanks

    Read the article

  • Double Buffering in Panda3D (C++)

    - by jsvcycling
    How would I go about using Double Buffering (to create a loading screen) in Panda3D using C++? I've searched Google and found some forums that talk about the concept of swapping buffers, but I haven't seen any that show any type of source code (specifically Panda3D/C++). I'd like to try and stay away from using pure OpenGL code and work it through Panda3D, but if I have no other choice, then I'll have to go with OpenGL coding.

    Read the article

  • How can I support scrolling when using batched rendering for my tiles?

    - by dardanel
    I have a tiled map of 100*75 tiles, and the tiles are 32*32 pixels. I want to use batching for performance. I can't figure it out, because my game needs scrolling and every frame I draw 22*16 tiles (my screen is 20*16 tiles). I thought about batching the tiles every frame. Is that good, or is there any other suggestion? Edit, to clarify more: I want to use occlusion culling and batching at the same time. I thought about drawing only the visible areas and batching them together, but there is something I couldn't figure out. When scrolling the screen with a translate matrix, if one row becomes invisible, I bind a new row and batch them again. Every batched object needs to be buffered again, so I batch the tiles and buffer them to a VBO every time one row becomes invisible. I don't know whether this way is efficient or not. This is my question, and I am open to any suggestions.
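
    If the renderer happens to offer a dynamic batch (this sketch assumes something like libGDX's SpriteBatch and an OrthographicCamera; the idea carries over to raw VBOs), there is no need to rebuild anything when a row scrolls out: compute the visible tile range from the camera each frame and submit only those tiles to the batch, which uploads its vertices every frame anyway.

      void renderVisibleTiles(SpriteBatch batch, OrthographicCamera cam,
                              TextureRegion[][] tiles, int tileSize) {
          int x0 = Math.max(0, (int) ((cam.position.x - cam.viewportWidth / 2f) / tileSize));
          int y0 = Math.max(0, (int) ((cam.position.y - cam.viewportHeight / 2f) / tileSize));
          int x1 = Math.min(tiles.length - 1, (int) ((cam.position.x + cam.viewportWidth / 2f) / tileSize) + 1);
          int y1 = Math.min(tiles[0].length - 1, (int) ((cam.position.y + cam.viewportHeight / 2f) / tileSize) + 1);

          batch.setProjectionMatrix(cam.combined);
          batch.begin();
          for (int x = x0; x <= x1; x++) {
              for (int y = y0; y <= y1; y++) {
                  batch.draw(tiles[x][y], x * tileSize, y * tileSize); // only the ~22*16 on-screen tiles
              }
          }
          batch.end();
      }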

    Read the article

  • Phone complains that identical GLSL struct definition differs in vert/frag programs

    - by stephelton
    When I provide the following struct definition in linked frag and vert shaders, my phone (Samsung Vibrant / Android 2.2) complains that the definition differs. struct Light { mediump vec3 _position; lowp vec4 _ambient; lowp vec4 _diffuse; lowp vec4 _specular; bool _isDirectional; mediump vec3 _attenuation; // constant, linear, and quadratic components }; uniform Light u_light; I know the struct is identical because it's included from another file. These shaders work on a Linux implementation and on my Android 3.0 tablet. Both shaders declare "precision mediump float;" The exact error is: Uniform variable u_light type/precision does not match in vertex and fragment shader Am I doing anything wrong here, or is my phone's implementation broken? Any advice (other than filing a bug report)?

    Read the article

  • Android Array Lag?

    - by Mike
    I am making a platform game for Android. It is sort of a tile-based game. I added bullets and enemies with AI and a bunch of tile types. I created a simple map with no enemies. Everything was running well and smooth until I shot a bunch of bullets randomly everywhere. A couple of hundred bullets later, the FPS lowered. I made a test to find out if the bullets were the problem, so I made another simple map with just a tile to stand on and left it for a while. Minutes later, I played around with it a bit to check if the FPS changed and it didn't. I reloaded the same map and shot a lot of bullets. Minutes later, the FPS was visibly lower even after the number of bullets was zero. Points to note: Programmed FPS is 30 Tested on a Samsung Galaxy Y and Samsung Galaxy W Any tile, enemy, bullet that is off screen is not drawn to prevent lag Bullets collide with Tiles (if they don't collide within 450 frames, they are removed from the array) I used List<Bullet> bullets = new ArrayList<Bullet>(); I used bullets.add(new Bullet(x, y, params...)); I used for(...){ if(...){ bullets.remove(i); } } Code for the bullets: private void drawBullets(Canvas canvas) { for (int i = 0; i < bullets.size(); i++) { Bullet b = bullets.get(i); b.update(canvas); //updates physics if (b.t > blm) { //if the bullet is past its expiry bullets.remove(i); i--; } else { if (svx((b.x)) > 0 && svx(b.x) < width && svy((b.y)) > 0 && svy(b.y) < height) { // if bullet is not off screen b.draw(canvas); // draw the bullet } } } } I tried searching for solutions and references but I had no luck. I'm guessing that the lag has something to do with the Array and the Bullets or Classes that I've loaded? I'm not sure! Someone please help! Thanks in advance! :)
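
    One common culprit for this pattern (FPS staying low even after the list is empty) is allocation and garbage-collection pressure from creating and discarding many Bullet objects; that is only a guess at the cause, but it is cheap to rule out with a pool. A rough sketch follows, where Bullet.reset(...), the trimmed constructor arguments and onScreen(...) are assumed helpers rather than code from the game:

      private final ArrayList<Bullet> active = new ArrayList<Bullet>();
      private final ArrayList<Bullet> free = new ArrayList<Bullet>();

      void spawnBullet(float x, float y) {
          Bullet b = free.isEmpty() ? new Bullet(x, y) : free.remove(free.size() - 1);
          b.reset(x, y);                                   // hypothetical re-initialiser
          active.add(b);
      }

      void drawBullets(Canvas canvas) {
          for (int i = active.size() - 1; i >= 0; i--) {   // iterate backwards so removal is safe
              Bullet b = active.get(i);
              b.update(canvas);
              if (b.t > blm) {                             // expired: recycle instead of discarding
                  active.remove(i);
                  free.add(b);
              } else if (onScreen(b)) {                    // assumed helper wrapping the svx/svy checks
                  b.draw(canvas);
              }
          }
      }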

    Read the article

  • What would be a good game making engine supporting Vector images?

    - by Qqwy
    I want to create a simple platforming game, in which you are a square in a wonderful world. I would like this game to be playable in browsers. Basically I am searching for something similar to "Flixel", but with the following features: support for vector graphics, and the ability to zoom/rotate objects without producing huge amounts of lag as soon as you are using more objects (because I want to rotate the map around the player); so in other words, preferably zoom the viewport/camera instead of the objects themselves. Does an engine like that exist?

    Read the article
