Search Results


  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (may be .png, .jpg, ...) and I want to convert it in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it gets converted to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now; this is really becoming annoying), I would really like to discover an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to a point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried to add a normal map to the equation but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question would be: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches; the plane is a single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. But peruse at your leisure if you like. Also, if this question is more suited to be somewhere else, i.e. Stack Overflow or the Programming stack, please let me know.

    Read the article

  • Indexed Drawing in OpenGL not working

    - by user2050846
    I am trying to render 2 types of primitives:
    - points (a point cloud)
    - triangles (a mesh)

    I am rendering points simply, without any index arrays, and they are getting rendered fine. To render the meshes I am using indexed drawing, with the face list array holding the indices of the vertices to be rendered as triangles. Vertices and their corresponding vertex colors are stored in their corresponding buffers. But the indexed drawing command does not draw anything. The code is as follows.

    Main display function:

    void display() {
        simple->enable();
        simple->bindUniform("MV", modelview);
        simple->bindUniform("P", projection);

        // rendering point cloud
        glBindVertexArray(vao);
        // vertex buffer, point cloud
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        // color buffer, point cloud
        glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
        // render colored point cloud
        //glDrawArrays(GL_POINTS, 0, model->vertexCount);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        // ---------------- END ---------------------//

        //// floor rendering
        glBindBuffer(GL_ARRAY_BUFFER, fl);
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void *)48);
        glDrawArrays(GL_QUADS, 0, 4);
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        // ---------------- END ---------------------//

        // rendering the meshes
        //////////// PART OF CODE THAT IS NOT DRAWING ANYTHING ////////////////////
        glBindVertexArray(vid);
        for (int i = 0; i < NUM_MESHES; i++)
        {
            glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)(meshes[i]->vertexCount * sizeof(glm::vec3)));
            //glDrawArrays(GL_TRIANGLES, 0, meshes[i]->vertexCount);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
            //cout << gluErrorString(glGetError());
            glDrawElements(GL_TRIANGLES, meshes[i]->faceCount * 3, GL_FLOAT, (void *)0);
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
        }
        glUseProgram(0);
        glutSwapBuffers();
        glutPostRedisplay();
    }

    Point cloud buffer initialization:

    void initGLPointCloud() {
        glGenBuffers(1, &vertexbuffer);
        glGenBuffers(1, &colorbuffer);
        glGenBuffers(1, &fl);
        // populates the position buffer
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->positions[0], GL_STATIC_DRAW);
        // populates the color buffer
        glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
        glBufferData(GL_ARRAY_BUFFER, model->vertexCount * sizeof(glm::vec3), &model->colors[0], GL_STATIC_DRAW);
        model->FreeMemory(); // free memory that is no longer needed; the data has already been
                             // copied to the graphics card and won't be used again
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    Mesh buffer initialization:

    void initGLMeshes(int i) {
        glBindBuffer(GL_ARRAY_BUFFER, mVertex[i]);
        glBufferData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3) * 2, NULL, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->positions[0]);
        glBufferSubData(GL_ARRAY_BUFFER, meshes[i]->vertexCount * sizeof(glm::vec3), meshes[i]->vertexCount * sizeof(glm::vec3), &meshes[i]->colors[0]);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mFace[i]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, meshes[i]->faceCount * sizeof(glm::vec3), &meshes[i]->faces[0], GL_STATIC_DRAW);
        meshes[i]->FreeMemory();
        //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }

    Initializes rendering, loads and creates the shader, and calls the mesh and point cloud initializers:

    void initRender() {
        simple = new GLSLShader("shaders/simple.vert", "shaders/simple.frag");
        // point cloud
        // sets up the VAO
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        initGLPointCloud();
        // floor data
        glBindBuffer(GL_ARRAY_BUFFER, fl);
        glBufferData(GL_ARRAY_BUFFER, sizeof(floorData), &floorData[0], GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
        // meshes
        for (int i = 0; i < NUM_MESHES; i++)
        {
            if (i == 0) // set up the new vertex array state for indexed drawing
            {
                glGenVertexArrays(1, &vid);
                glBindVertexArray(vid);
                glGenBuffers(NUM_MESHES, mVertex);
                glGenBuffers(NUM_MESHES, mColor);
                glGenBuffers(NUM_MESHES, mFace);
            }
            initGLMeshes(i);
        }
        glEnable(GL_DEPTH_TEST);
    }

    Any help would be much appreciated; I have been breaking my head on this problem for 3 days, and it is still unsolved.

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept
    I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ...which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves on draw call counts, given that the logic is something like this...

    Without chunks:
        foreach voxel in myvoxels
            DrawIfVisible()

    With chunks:
        foreach chunk in mychunks
            DrawIfVisible()
    which then does...
        foreach voxel in myvoxels
            DrawIfVisible()

    So surely you saved nothing?!?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk will be able to determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me... the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
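    A minimal, hypothetical sketch of how chunked voxel renderers are typically structured (XNA-style types since the question mentions XNA; all names invented): the per-voxel loop happens once, when a chunk's mesh is (re)built, not every frame, so the frame loop issues one draw call per visible chunk rather than one per voxel.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework.Graphics;

    class Chunk
    {
        public bool Visible;            // set by an "is this chunk visible" test
        public VertexBuffer BakedMesh;  // every visible voxel face in the chunk, merged once when the chunk changes
        public int TriangleCount;
    }

    static class ChunkRenderer
    {
        public static void DrawWorld(GraphicsDevice device, List<Chunk> chunks)
        {
            foreach (var chunk in chunks)
            {
                if (!chunk.Visible) continue;
                device.SetVertexBuffer(chunk.BakedMesh);
                // One draw call covers every voxel in this chunk.
                device.DrawPrimitives(PrimitiveType.TriangleList, 0, chunk.TriangleCount);
            }
        }
    }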

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now I wanted to add some buttons to that. Before adding the buttons everything seemed to work fine, and I was using an ID3DXSprite for the text as well (ID3DXFont). Now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but since it is too big it just doesn't fit well; for example, the client area is 640x360 and the graphic is 440, so I expect 100 pixels on the left and right. The left side is fine [I took a screenshot and "counted" the pixels in Photoshop], but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity):

    // initially viewport was set to width/height of client area
    // clear device
    m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 );
    // begin scene
    m_d3dDevice->BeginScene();
    // render movie surface (just two triangles to which the movie is rendered)
    m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, false);
    m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering
    m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); // ignored
    m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 );
    m_d3dDevice->SetTexture( 0, m_movieTexture );
    m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex));
    m_d3dDevice->SetFVF(Vertex::FVF_Flags);
    m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);
    // render sprites
    m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);
    // text drop shadow
    m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                      &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow );
    // text
    m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(),
                      &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage );
    // control object
    m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this
    m_sprite->End();
    // end scene
    m_d3dDevice->EndScene();

    What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" about 440 pixels wide), everything seems fine; the objects are positioned where I expect them, they are just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axis to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple; I just wanted to draw my 2D objects 1:1, the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article

  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or set up a strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to.

    Rule-based: in this system the player can set up a list of rules in the form [Condition - Action] that are then ordered by priority. Examples:
    - Gambits in Final Fantasy XII
    - Tactics in Dragon Age: Origins & II
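    As a rough illustration of the rule list described above (a sketch only; BattleState and the example conditions are invented placeholders), a gambit-style rule reduces to a predicate plus an action, evaluated in priority order each time a character may act:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class BattleState { /* whatever the battle exposes: HP, targets, cooldowns, ... */ }

    class Rule
    {
        public int Priority;                       // lower value = evaluated first
        public Func<BattleState, bool> Condition;  // e.g. s => AllyHpBelow(s, 0.3f)
        public Action<BattleState> Act;            // e.g. s => UsePotionOnWeakestAlly(s)
    }

    static class AutoBattler
    {
        // Called whenever the character may act: the first matching rule wins.
        public static void TakeTurn(List<Rule> rules, BattleState state)
        {
            var rule = rules.OrderBy(r => r.Priority)
                            .FirstOrDefault(r => r.Condition(state));
            rule?.Act(state);
        }
    }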

    Read the article

  • How many achievements should I include, and of what challenge?

    - by stephelton
    I know this question is fairly broad and subjective, but I'm wondering if there's been any published research into what an optimal number of achievements is and what kind of challenge they should present. The game this question directly relates to is a shoot-em-up, but an ideal answer is fairly theoretical. If there are too few achievements, or they are not challenging, I would expect them to fail in their goal of keeping people playing. If there are too many, or they are unreasonably difficult, I would expect people to quickly give up. I personally witnessed the latter happening in Starcraft 2; a section of the achievements would have you win hundreds of games against the AI opponents (boring!).

    Read the article

  • Flickering problem with world matrix

    - by gnomgrol
    I have a pretty weird problem today. As soon as I try to change the translation or rotation matrix for an object to something other than (0,0,0), the object starts to flicker (scaling works fine). It rapidly and randomly switches between the spot it should be in and a crippled something. I first thought the problem would be z-fighting, but now I'm pretty sure it isn't. I have no clue at all what it could be; here are two screenshots of the two states the plant is switching between. I already used PIX, but couldn't find anything of use (I'm not a very good debugger anyway). I would appreciate any help, thanks a lot! Important code:

    D3DXMatrixIdentity(&World);
    D3DXVECTOR3 rotaxisX = D3DXVECTOR3(1.0f, 0.0f, 0.0f);
    D3DXVECTOR3 rotaxisY = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
    D3DXVECTOR3 rotaxisZ = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
    D3DXMATRIX temprot1, temprot2, temprot3;
    D3DXMatrixRotationAxis(&temprot1, &rotaxisX, 0);
    D3DXMatrixRotationAxis(&temprot2, &rotaxisY, 0);
    D3DXMatrixRotationAxis(&temprot3, &rotaxisZ, 0);
    Rotation = temprot1 * temprot2 * temprot3;
    D3DXMatrixTranslation(&Translation, 0.0f, 10.0f, 0.0f);
    D3DXMatrixScaling(&Scale, 0.02f, 0.02f, 0.02f);
    // Set the object's world space using the transformations
    World = Translation * Rotation * Scale;

    Shader:

    cbuffer cbPerObject
    {
        matrix worldMatrix;
        matrix viewMatrix;
        matrix projectionMatrix;
    };

    // Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;
    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    Read the article

  • How should I organise classes for a space simulator?

    - by Peteyslatts
    I have pretty much taught myself everything I know about programming, so while I know how to teach myself (books, the internet, and reading APIs), I'm finding that there hasn't been a whole lot in the way of learning good program design. I am finishing up learning the basics of XNA and I want to create a space simulator to test my knowledge. This isn't a full-scale simulator, just something that covers everything I learned. It's also going to be modular so I can build on it after I get the basics down. One of the early features I want to implement is AI, and I want to take this into account as I'm designing my classes so I can minimize rewriting code. So my question: how should I design ship classes so that both the player and AI can use them? The only idea I have so far is: create a ship class that contains stats, models, textures, collision data, etc. The player and AI would then have the data for position, rotation, health, etc., and would base their status off of the ship stats.
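    A small sketch of the composition described in that last idea (plain C# with System.Numerics types standing in for XNA's; all names invented): the ship definition holds the shared stats and assets, each instance holds mutable state, and the only part that differs between the player and the AI is the controller.

    using System.Numerics;

    class ShipDefinition            // shared, read-only: stats, model, texture, collision data
    {
        public float MaxSpeed, MaxHealth, TurnRate;
    }

    class ShipInstance              // per-ship mutable state, whoever owns it
    {
        public ShipDefinition Def;
        public float Health;
        public Vector3 Position;
        public Quaternion Rotation;
    }

    interface IShipController       // the player and the AI differ only here
    {
        void Update(ShipInstance ship, float dt);
    }

    class PlayerController : IShipController
    {
        public void Update(ShipInstance ship, float dt) { /* read input, steer within Def limits */ }
    }

    class AiController : IShipController
    {
        public void Update(ShipInstance ship, float dt) { /* run behaviour logic, steer within Def limits */ }
    }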

    Read the article

  • Coordinate spaces and transformation matrices

    - by Belgin
    I'm trying to get an object from object space into projected space using these intermediate matrices. The first matrix (I) is the one that transforms from object space into inertial space, but since my object is not rotated or translated in any way inside the object space, this matrix is the 4x4 identity matrix. The second matrix (W) is the one that transforms from inertial space into world space, which is just a scale transform matrix with factor a = 14.1 on all coordinates, since the inertial space origin coincides with the world space origin.

        /a 0 0 0\
    W = |0 a 0 0|
        |0 0 a 0|
        \0 0 0 1/

    The third matrix (C) is the one that transforms from world space into camera space. This matrix is a translation matrix with a translation of (0, 0, 10), because I want the camera to be located behind the object, so the object must be positioned 10 units along the z axis.

        /1 0 0 0 \
    C = |0 1 0 0 |
        |0 0 1 10|
        \0 0 0 1 /

    And finally, the fourth matrix is the projection matrix (P). Bearing in mind that the eye is at the origin of the world space and the projection plane is defined by z = 1, the projection matrix is:

        /1 0  0  0\
    P = |0 1  0  0|
        |0 0  1  0|
        \0 0 1/d 0/

    where d is the distance from the eye to the projection plane, so d = 1. I'm multiplying them like this: (((P x C) x W) x I) x V, where V is the vertex's coordinates in column vector form:

        /x\
    V = |y|
        |z|
        \1/

    After I get the result, I divide the x and y coordinates by w to get the actual screen coordinates. Apparently, I'm doing something wrong or missing something completely here, because it's not rendering properly. Here's a picture of what is supposed to be the bottom side of the Stanford Dragon: Also, I should add that this is a software renderer, so no DirectX or OpenGL stuff here.

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking...

    The framebuffer objects:
    - FBO 1 has a color attachment and a depth attachment.
    - FBO 2 has a color attachment.

    The render passes:
    1. Render g-buffer: normals and depth (used by the outline & DoF blur shaders); output to FBO no. 1.
    2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO no. 2. (Can all render via a single fragment shader -- I think.)
    3. (optional) DoF blur the scene; output to the default framebuffer, OR ELSE render FBO 2 directly to the default framebuffer.
    4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article

  • Odds For Fighting Game

    - by thinkfuture
    I'm creating a fighting game where two opponents face off against each other in the ring. While I've been able to figure out the odds of a player winning based on previous wins/losses, I have yet to find a formula which modifies those odds based on the opponent. For example:

    Player 1: W:5 L:5 - 1/1 odds
    Player 2: W:5 L:0 - 1/5 odds

    I want to calculate the odds that Player 1 wins against Player 2. Compounding this, the players could be of different levels: if the players are within a few levels of each other, the odds should map closely to wins/losses. However, as the levels diverge, the odds of the lower-level player winning should reduce. As a swag, for Player 1 (W:5 L:5 - 1:1 odds):

    Against a level 8  - 1:2
    Against a level 9  - 2:3
    Against a level 10 - 1:1
    Against a level 11 - 3:2
    Against a level 12 - 2:1

    These are just estimates; my sense is that there is a math formula out there which will calculate this - can anyone point me to what it could be? Thanks...Chris
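    Purely for illustration (every constant below is invented, and this is not presented as a standard formula): one way to express the idea is to smooth each win/loss record into a strength estimate, take the ratio, and then widen the gap by level difference. Established rating systems such as Elo formalize the same intuition more rigorously.

    using System;

    static class FightOdds
    {
        // Returns the probability that player 1 beats player 2. All constants are illustrative.
        public static double WinProbability(int wins1, int losses1, int level1,
                                            int wins2, int losses2, int level2)
        {
            // Laplace-smoothed win rates, so a 5-0 record isn't treated as unbeatable.
            double r1 = (wins1 + 1.0) / (wins1 + losses1 + 2.0);
            double r2 = (wins2 + 1.0) / (wins2 + losses2 + 2.0);

            double p = r1 / (r1 + r2);                   // record-based share of strength
            double odds1 = p / (1.0 - p);                // convert to odds
            odds1 *= Math.Pow(1.25, level1 - level2);    // invented per-level scaling factor

            return odds1 / (1.0 + odds1);                // back to a probability
        }
    }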

    Read the article

  • Collision detection with non-rectangular images

    - by Adam Smith
    I'm creating a game and I need to detect collisions between a character and some parts of the environment. Since my character's frames are taken from a sprite sheet with a transparent background, I'm wondering how I should go about detecting collisions between a wall and my character only if the colliding parts are non-transparent in both images. I thought about checking only if part of the rectangle the character is in touches the rectangle a tile is in, and then comparing the alpha channels, but then I have another choice to make... Either I test every single pixel against every single pixel in the other image and, if one pair overlaps, I detect a collision. That would be terribly inefficient. The other option would be to keep an x,y position of the leftmost, rightmost, etc. non-transparent pixel of each image and compare those instead. The problem with this one might be that, for instance, the character's hand could be above a tile (so it would be in a transparent zone of the tile) but a pixel that is not the rightmost could touch part of the tile without being detected. Another problem would be that in different frames, the rightmost, leftmost, etc. pixels might not be at the same position. Should I not bother with that and just check the collisions on the rectangles? It would be simpler, but I'm afraid people will feel that there are collisions sometimes that shouldn't happen.
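    For reference, the "compare alpha channels, but only inside the overlapping rectangle" idea can be sketched like this (engine-agnostic C#; alphaA/alphaB are assumed to be the per-pixel alpha values of the current character frame and the tile). The cost is proportional to the overlap area, not to the full images.

    using System;

    static class PixelCollision
    {
        // Rects are given as position (x, y) plus size (w, h) in the same screen/world space.
        public static bool Collide(
            int ax, int ay, int aw, int ah, byte[] alphaA,   // character frame
            int bx, int by, int bw, int bh, byte[] alphaB)   // tile
        {
            int left   = Math.Max(ax, bx);
            int top    = Math.Max(ay, by);
            int right  = Math.Min(ax + aw, bx + bw);
            int bottom = Math.Min(ay + ah, by + bh);

            for (int y = top; y < bottom; y++)
            for (int x = left; x < right; x++)
            {
                byte a = alphaA[(y - ay) * aw + (x - ax)];
                byte b = alphaB[(y - by) * bw + (x - bx)];
                if (a > 0 && b > 0) return true;   // both images are non-transparent at this spot
            }
            return false;
        }
    }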

    Read the article

  • How do I prevent a KActor from changing the orientation of its Z-Axis?

    - by Almo
    So I have an object that inherits from KActor that I would like to behave as a dynamic physics object, but I want its Z-axis to remain upright, and very stiffly so. I've tried the bStayUpright flag, which triggers the "Stay Upright Spring". The problem is, it's a spring, and the object in question oscillates into position when I want it to remain oriented properly without wobbling. In the image above, the yellow block has fallen onto the gray box, and it is currently pivoting about the contact point as it tries to right itself. Should I be tweaking the StayUprightMaxTorque and StayUprightTorqueFactor parameters, or should I be using a Constraint of some sort?

    Read the article

  • How to perform game object smoothing in multiplayer games

    - by spaceOwl
    We're developing an infrastructure to support multiplayer games in our game engine. In simple terms, each client (player) engine sends some pieces of data regarding the relevant game objects at a given time interval. On the receiving end, we step the incoming data to the current time (to compensate for latency), followed by a smoothing step (which is the subject of this question). I was wondering how smoothing should be performed. Currently the algorithm is similar to this:

    1. Receive incoming state for an object (position, velocity, acceleration, rotation, custom data like visual properties, etc.).
    2. Calculate a diff between the local object position and the position we have after the previous prediction steps.
    3. If the diff doesn't exceed some threshold value, start a smoothing step: mark the object's CURRENT POSITION and the TARGET POSITION, then linearly interpolate between these values for 0.3 seconds.

    I wonder if this scheme is any good, or if there is any other common implementation or algorithm that should be used? (For example, should I only smooth out the position, or other values as well, such as speed?) Any help will be appreciated.
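    A minimal sketch of step 3 as described (plain C# with System.Numerics; the 0.3 s duration comes from the question, everything else is assumed): remember where the object is currently drawn and where the stepped/authoritative state says it should be, then blend between them over the interval.

    using System.Numerics;

    class SmoothedTransform
    {
        public Vector3 Rendered;        // what we actually draw
        Vector3 _from, _target;
        float _elapsed;
        const float Duration = 0.3f;

        public void BeginSmoothing(Vector3 target)   // call when a new, stepped snapshot arrives
        {
            _from = Rendered;
            _target = target;
            _elapsed = 0f;
        }

        public void Update(float dt)
        {
            _elapsed += dt;
            float t = _elapsed >= Duration ? 1f : _elapsed / Duration;
            // Position only here; velocity or other values could be blended the same way.
            Rendered = Vector3.Lerp(_from, _target, t);
        }
    }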

    Read the article

  • Implementing a switch statement based on user input

    - by Dave Voyles
    I'm trying to delay the time it takes for the main menu screen to pop up after a user has won/lost a match. As it stands, the game immediately displays a message stating "you won/lost" and waits for 6 seconds before loading the menu screen. I would also like players to have the ability to press a key to advance to the menu screen immediately, but thus far my switch statement doesn't seem to do the trick. I've included the switch statement, along with my (theoretical) inputs. What could I be doing wrong here?

    if (gamestate == GameStates.End)
        switch (input.IsMenuDown(ControllingPlayer))
        {
            case true:
                ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                break;

            case false:
                if (screenLoadDelay > 0)
                {
                    screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;
                }
                ScreenManager.AddScreen(new MainMenuScreen(), null); // Draws the MainMenuScreen
                break;
        }

    /// <summary>
    /// Checks for a "menu down" input action.
    /// The controllingPlayer parameter specifies which player to read
    /// input for. If this is null, it will accept input from any player.
    /// </summary>
    public bool IsMenuDown(PlayerIndex? controllingPlayer)
    {
        PlayerIndex playerIndex;
        return IsNewKeyPress(Keys.Down, controllingPlayer, out playerIndex) ||
               IsNewButtonPress(Buttons.DPadDown, controllingPlayer, out playerIndex) ||
               IsNewButtonPress(Buttons.LeftThumbstickDown, controllingPlayer, out playerIndex);
    }
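    For comparison only, a sketch of the intended flow written without the switch, reusing the question's own names (menuShown is a hypothetical flag added so the screen is only ever added once; this assumes IsMenuDown behaves as documented above):

    // Count the delay down every frame while the match is over, and add the
    // menu exactly once: either when a key/button is pressed or when the delay expires.
    if (gamestate == GameStates.End && !menuShown)
    {
        screenLoadDelay -= gameTime.ElapsedGameTime.TotalSeconds;

        if (input.IsMenuDown(ControllingPlayer) || screenLoadDelay <= 0)
        {
            ScreenManager.AddScreen(new MainMenuScreen(), null);
            menuShown = true;   // hypothetical guard flag
        }
    }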

    Read the article

  • Image loaded from TGA texture isn't displayed correctly

    - by Ramy Al Zuhouri
    I have a TGA texture containing this image: The texture is 256x256. So I'm trying to load it and map it to a cube:

    #import <OpenGL/OpenGL.h>
    #import <GLUT/GLUT.h>
    #import <stdlib.h>
    #import <stdio.h>
    #import <assert.h>

    GLuint width = 640, height = 480;
    GLuint texture;
    const char* const filename = "/Users/ramy/Documents/C/OpenGL/Test/Test/texture.tga";

    void init()
    {
        // Initialization
        glEnable(GL_DEPTH_TEST);
        glViewport(-500, -500, 1000, 1000);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45, width/(float)height, 1, 1000);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0, 0, -100, 0, 0, 0, 0, 1, 0);

        // Texture
        char bitmap[256][256][3];
        FILE* fp = fopen(filename, "r");
        assert(fp);
        assert(fread(bitmap, 3*sizeof(char), 256*256, fp) == 256*256);
        fclose(fp);
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bitmap);
    }

    void display()
    {
        glClearColor(0, 0, 0, 0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture);
        glColor3ub(255, 255, 255);
        glBegin(GL_QUADS);
        glVertex3f(0, 0, 0);    glTexCoord2f(0.0, 0.0);
        glVertex3f(40, 0, 0);   glTexCoord2f(0.0, 1.0);
        glVertex3f(40, 40, 0);  glTexCoord2f(1.0, 1.0);
        glVertex3f(0, 40, 0);   glTexCoord2f(1.0, 0.0);
        glEnd();
        glDisable(GL_TEXTURE_2D);
        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
        glutInitWindowPosition(100, 100);
        glutInitWindowSize(width, height);
        glutCreateWindow(argv[0]);
        glutDisplayFunc(display);
        init();
        glutMainLoop();
        return 0;
    }

    But this is what I get when the window loads: So just half of the image is correctly displayed, and also with different colors. Then if I resize the window I get this: Magically the image seems to fix itself, even if the colors are wrong. Why?

    Read the article

  • Can't export Blender model for use in jMonkeyEngine SDK

    - by Nathan Sabruka
    I have a scene rendered in Blender called "civ1.blend" which contains multiple materials (for example, I have one called "white"). I want to use this model in jMonkeyEngine, so I used the OGRE exporter to create .scene and .material files. This gives me, for example, a civ1.scene file and a white.material file. However, when I then try to import civ1.scene into the jMonkeyEngine SDK, I get an error along the lines of "Cannot find material file 'civ1.material'". Like I said, I have a white.material file, but I do not have a civ1.material file. Has anyone encountered this problem? How do I fix it?

    Read the article

  • Store and create game objects at positions along terrain

    - by Alex
    I have a circular character that rolls down terrain like that shown in the picture below. The terrain is created from an array holding 1000 points. The ground is drawn one screen width in front and one screen width behind, so as the character moves, edges are created in front and edges are removed behind. My problem is, I want to create Box2D bodies at certain locations along the path and need a way to store these creator methods or objects. I need some way to store a position at which they are created and some pointer to a function to create them once the character is in range. I guess this would be an array of some sort that is checked each time the ground is updated; then, if in range, the function is executed and removed from the array. But I'm not sure if it's even possible to store pointers to functions with parameters included... any help is much appreciated!
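    Sketched in C# for brevity (the actual project may well be C++ or Objective-C with Box2D, but the idea ports directly to std::function or blocks): a closure captures whatever parameters the creation call needs, so each entry only has to store a position and a parameterless callback.

    using System;
    using System.Collections.Generic;

    class PendingSpawn
    {
        public float X;          // where along the terrain the object belongs
        public Action Create;    // creation function with its parameters already captured
    }

    class SpawnQueue
    {
        readonly List<PendingSpawn> _pending = new List<PendingSpawn>();

        public void Add(float x, Action create)
        {
            _pending.Add(new PendingSpawn { X = x, Create = create });
        }

        // Called as the ground is updated; spawns everything that has come into range.
        public void Update(float rightEdgeOfScreen)
        {
            _pending.RemoveAll(p =>
            {
                if (p.X > rightEdgeOfScreen) return false;
                p.Create();
                return true;
            });
        }
    }

    // Usage (CreateBox and GroundHeightAt are hypothetical helpers):
    // queue.Add(512f, () => CreateBox(world, 512f, GroundHeightAt(512f), 32f, 32f));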

    Read the article

  • Better solution for boolean mixing?

    - by Ruben Nunez
    Sorry if this question has been asked in the past, but searching Google and here didn't yield relevant results, so here goes. I'm working on a fragment shader that implements both conditional/boolean diffuse and bump mapping (that is to say, you don't need a diffuse texture or a normals texture, and if they're not present, they're simply changed to default values). My current solution is to use a uniform float to say "mix amount". For example, computing the diffuse texel works as:

    // Compute diffuse amount scaled by vCol
    // If no texture is present (mDif = 0.0), then DiffuseTexel = vCol
    // kT[0] is the diffuse texture
    // vTex is the texture co-ordinates
    // mDif is the uniform float containing the mix amount (either 0.0 or 1.0)
    vec4 DiffuseTexel = vCol * mix(vec4(1.0), texture2D(kT[0], vTex), mDif);

    While that works great and all, I was wondering if there's a better way of doing this, as I will never have any use for in-between values for funky effects. I know that perhaps the best solution is to simply write separate shaders for mDif=0.0 and mDif=1.0, but I'd like a more elegant solution than splicing shaders before compiling or writing multiple shader files and keeping each one updated. Any ideas are greatly appreciated. =)

    Read the article

  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. And thanks to the answers provided, I now have this nice-looking main branch; *coloured to show the different segments in the final item. Now it is time to branch things off, as discussed in the article: http://drilian.com/2009/02/25/lightning-bolts/. Again, however, I am confused as to the meaning of the following pseudo code:

    splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint;

    I'm unsure how to actually rotate this correctly. In all honesty, I'm a bit unsure what to do at this part: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some form of rotated direction, which is then multiplied by a scale to create length and then added to the midPoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
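    A sketch of what that pseudo code appears to mean (assuming the bolt lives in the XY plane and using System.Numerics; the angle range and lengthScale are illustrative values, not the article's): rotate the segment's direction by a small random angle, scale it, and offset it from the midpoint to get the branch's end point.

    using System;
    using System.Numerics;

    static class BranchSplit
    {
        public static Vector3 SplitEnd(Vector3 midPoint, Vector3 direction, Random rng)
        {
            float angle = ((float)rng.NextDouble() * 2f - 1f) * 0.5f;   // roughly +/- 0.5 rad
            float lengthScale = 0.7f;                                   // illustrative value

            // 2D rotation of the direction vector in the XY plane.
            float cos = (float)Math.Cos(angle);
            float sin = (float)Math.Sin(angle);
            var rotated = new Vector3(
                direction.X * cos - direction.Y * sin,
                direction.X * sin + direction.Y * cos,
                direction.Z);

            return rotated * lengthScale + midPoint;
        }
    }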

    Read the article

  • Problems using easing equations in C# XNA

    - by codinghands
    I'm having some trouble using the easing equations suggested by Robert Penner for ActionScript (http://www.robertpenner.com/easing/, and a Flash demo here) in my C# XNA game. Firstly, what is the definition of the following variables passed in as arguments to each equation?

    float t, float b, float c, float d

    I'm currently calculating the new X position of a sprite in the Update() loop; however, even for the linear tween equation I'm getting some odd results. I'm using the following values:

    float t = gameTime.TotalGameTime.TotalMilliseconds;
    float d = 8000f;
    float b = x.Position.X;
    float c = (ScreenManager.Game.GraphicsDevice.Viewport.Width >> 1) - (x.Position.X + x.frameSize.X / 2);

    And this equation for linear easing:

    float val = c*t/d + b;
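    For reference, Penner's equations conventionally use t = time elapsed since the tween started, b = the starting value, c = the total change in value (end minus start), and d = the total duration, with t and d in the same units. A small sketch under those conventions, accumulating elapsed time per tween rather than reading a global clock:

    using System;

    class LinearTween
    {
        float _elapsed;                                 // t
        readonly float _start, _change, _duration;      // b, c, d

        public LinearTween(float start, float end, float durationSeconds)
        {
            _start = start;
            _change = end - start;
            _duration = durationSeconds;
        }

        // Call once per frame with the frame's delta time; returns the eased value.
        public float Update(float dtSeconds)
        {
            _elapsed = Math.Min(_elapsed + dtSeconds, _duration);
            return _change * _elapsed / _duration + _start;   // c*t/d + b
        }
    }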

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf The process for doing distance field rendering is clear to me, but how it actually works is not. It looks like the distance field pixels created around the original pixels may affect the 2D texture sampling interpolation, but I can't understand that interpolation process. I've read that distance field rendering is done with nearest-neighbour interpolation. If that is true, shouldn't distance field rendering produce a non-interpolated result? To my mind, it should look like retro-style pixel art. What am I misunderstanding in this process? So far, I see no difference from alpha testing: both throw away all pixels that are not inside the shape.

    Read the article

  • Is it possible to construct a cube with fewer than 24 vertices

    - by Telanor
    I have a cube-based world like Minecraft and I'm wondering if there's a way to construct a cube with fewer than 24 vertices so I can reduce memory usage. It doesn't seem possible to me for two reasons: the normals wouldn't come out right, and per-face textures wouldn't work. Is this the case, or am I wrong? Maybe there's some fancy new DX11 tech that can help? Edit: Just to clarify, I have 2 requirements: I need surface normals for each cube face in order to do proper lighting, and I need a way to address a different index in a texture array for each cube face.

    Read the article

  • Rotation of viewplatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program for rotating a pyramid around the y-axis. This was done just by adding a RotationInterpolator R to the TransformGroup above the pyramid. Then I thought: hey, can I now remove the RotationInterpolator from this TransformGroup and add it to the TransformGroup above my ViewPlatform leaf? This should work if I have understood how things work: adding the RotationInterpolator to this TransformGroup should make the children of this TransformGroup rotate, and the ViewPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch graph.

    import java.awt.*;
    import java.awt.event.*;
    import javax.media.j3d.*;
    import javax.vecmath.*;

    public class UniverseBuilder {

        // User-specified canvas
        Canvas3D canvas;

        // Scene graph elements to which the user may want access
        VirtualUniverse universe;
        Locale locale;
        TransformGroup vpTrans;
        View view;

        public UniverseBuilder(Canvas3D c) {
            this.canvas = c;

            // Establish a virtual universe that has a single hi-res Locale
            universe = new VirtualUniverse();
            locale = new Locale(universe);

            // Create a PhysicalBody and PhysicalEnvironment object
            PhysicalBody body = new PhysicalBody();
            PhysicalEnvironment environment = new PhysicalEnvironment();

            // Create a View and attach the Canvas3D and the physical
            // body and environment to the view.
            view = new View();
            view.addCanvas3D(c);
            view.setPhysicalBody(body);
            view.setPhysicalEnvironment(environment);

            // Create a BranchGroup node for the view platform
            BranchGroup vpRoot = new BranchGroup();

            // Create a ViewPlatform object, and its associated
            // TransformGroup object, and attach it to the root of the
            // subgraph. Attach the view to the view platform.
            Transform3D t = new Transform3D();
            Transform3D s = new Transform3D();
            t.set(new Vector3f(0.0f, 0.0f, 10.0f));
            t.rotX(-Math.PI/4);
            s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change values here to change the viewing position
            t.mul(s);
            ViewPlatform vp = new ViewPlatform();
            vpTrans = new TransformGroup(t);
            vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

            // Rotator stuff
            Transform3D yAxis = new Transform3D();
            //yAxis.rotY(Math.PI/2);
            Alpha rotationAlpha = new Alpha(
                -1, Alpha.INCREASING_ENABLE,
                0, 0, 4000, 0, 0, 0, 0, 0);
            RotationInterpolator rotator = new RotationInterpolator(
                rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI*2.0f);
            RotationInterpolator rotator2 = new RotationInterpolator(
                rotationAlpha, vpTrans);
            BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
            rotator.setSchedulingBounds(bounds);
            vpTrans.addChild(rotator);
            vpTrans.addChild(vp);
            vpRoot.addChild(vpTrans);
            view.attachViewPlatform(vp);

            // Attach the branch graph to the universe, via the
            // Locale. The scene graph is now live!
            locale.addBranchGraph(vpRoot);
        }

        public void addBranchGraph(BranchGroup bg) {
            locale.addBranchGraph(bg);
        }
    }

    Read the article
