Search Results

Search found 21194 results on 848 pages for 'game state'.


  • Unity: problem with colliding instances of the same object

    - by Kuba Sienkiewicz
    I want to check whether one object's instance is overlapping with another instance (any spawned object with any other spawned object, not necessarily the same prefab). I'm doing this by detecting collisions between bodies, but I have a problem: spawned objects (instances) detect collisions with everything except other spawned objects. I've checked collision layers etc. All spawned objects have rigidbodies and mesh colliders. Also, when I attach my script to another body and touch that body with an instanced object, the collision is detected, so the problem shows up only in collisions between spawned objects. One more piece of information: the script, rigidbody and collider are attached to a child of the main object.

        using UnityEngine;
        using System.Collections;

        public class CantPlace : MonoBehaviour {
            public bool collided = false;

            // Use this for initialization
            void Start () {
            }

            // Update is called once per frame
            void Update () {
                //Debug.Log (collided);
            }

            void OnTriggerEnter(Collider collider) {
                //if (true) {
                //foreach (Transform child in this.transform) {
                //    if (child.name == "Cylinder") {
                //collided = true;
                Color c;
                c = this.renderer.material.color;
                c.g = 0f;
                c.b = 1f;
                c.r = 0f;
                this.renderer.material.color = c;
                Debug.Log (collider.name);
                //}
                //    }
                //}
                //foreach (ContactPoint contact in collision.contacts) {
                //    Debug.DrawRay(contact.point, contact.normal, Color.red, 15f);
                //}
            }
        }
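    For reference, a minimal C# sketch of an overlap check between spawned instances; the class name is invented for illustration, and it assumes each prefab carries a collider marked Is Trigger plus a Rigidbody. Two points from Unity's collision matrix are worth checking against the symptom: trigger messages require at least one Rigidbody in the colliding pair, and no collision (non-trigger) messages are sent between two kinematic rigidbodies.

        using UnityEngine;

        public class OverlapReporter : MonoBehaviour {
            // Fired once when another collider starts overlapping this trigger.
            void OnTriggerEnter(Collider other) {
                Debug.Log(name + " overlaps " + other.name);

                // Tint the object to make the overlap visible, as the question attempts.
                Renderer r = GetComponent<Renderer>();
                if (r != null)
                    r.material.color = new Color(0f, 1f, 1f);
            }
        }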

    Read the article

  • How can I convert an image from raw data in Android without any munging?

    - by stephelton
    I have raw image data (it may be .png, .jpg, ...) and I want it decoded in Android without changing its pixel depth (bpp). In particular, when I load a grayscale (8 bpp) image that I want to use as alpha (glTexImage() with GL_ALPHA), it gets converted to 16 bpp (presumably 5_6_5). While I do have a plan B (actually, I'm probably on plan 'E' by now; this is really becoming annoying), I would really like to find an easy way to do this using what is readily available in the API. So far, I'm using BitmapFactory.decodeByteArray(). While I'm at it: I'm doing this from a native environment via JNI (passing the buffer in from C, and a new buffer back to C from Java). Any portable solution in C/C++ would be preferable, but I don't want to introduce anything that might break in future versions of Android, etc.

    Read the article

  • Per fragment lighting with OpenGL 4.x tessellated model

    - by Finlaybob
    I'm experienced with OpenGL 3+. I'm dabbling with tessellation shaders and have now got to a point where I have a nicely tessellated teapot/plane demo (quick look here). As can be seen from the screenshots, the lighting is broken (though admittedly it doesn't look too bad in the image). I've tried adding a normal map to the equation, but it still doesn't come out right; I can calculate the normals, tangents and binormals per triangle in the geometry shader, but it still looks wrong. I think the question would be: how do I add per-fragment lighting to a tessellated model? The teapot is 32 16-point patches; the plane is a single 16-point patch. The shaders are here, but they are a complete mess, so I don't blame anyone who can't make sense of them. Peruse them at your leisure if you like. Also, if this question is better suited somewhere else, i.e. Stack Overflow or the Programmers stack, please let me know.

    Read the article

  • Difference between Sound and Music

    - by Southpaw Hare
    What are the key differences between the Sound and Music classes in Pygame? What are the limitations of each? In what situations would one use one or the other? Is there a benefit to using them in an unintuitive way, such as using Sound objects to play music files or vice versa? Are there issues specifically with channel limitations, and do one or both have the potential to be dropped from their channel unreliably? What are the risks of playing music as a Sound?

    Read the article

  • How do I get the point coords of a rotated SFML RectangleShape?

    - by user15498
    I am trying to get bullet collisions working, and am using SFML. Right now I compute the positions of the rectangle's corner points myself for collision tests, but since the shape is a rectangle and SFML already stores its points, I suspect there's a way to get them directly from SFML instead. Is there a way to do that, through a combination of getPoint() and getGlobalBounds() maybe? While on this topic, is it better to use rectangle shapes or sprites? I used to use only sprites, but with the addition of textures and more low-level stuff I think it would be best to switch to using rectangles and setting their size.
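    In case it helps, a C# sketch of the underlying math (sf::Transform applies the same rotation internally; in SFML's C++ API the idiom getTransform().transformPoint(getPoint(i)) maps each local point to world space): rotate the rectangle's local corners by the shape's angle, then add its position. The origin is assumed to be the top-left corner, SFML's default.

        using System;

        struct Vec2 { public float X, Y; public Vec2(float x, float y) { X = x; Y = y; } }

        static class RectCorners {
            // World-space corners of a w-by-h rectangle positioned at (px, py),
            // rotated by angleDegrees around its top-left corner.
            public static Vec2[] Get(float px, float py, float w, float h, float angleDegrees) {
                float rad = angleDegrees * (float)Math.PI / 180f;
                float cos = (float)Math.Cos(rad), sin = (float)Math.Sin(rad);

                Vec2[] local = { new Vec2(0, 0), new Vec2(w, 0), new Vec2(w, h), new Vec2(0, h) };
                var world = new Vec2[4];
                for (int i = 0; i < 4; i++) {
                    // Standard 2D rotation, then translation to the shape's position.
                    world[i] = new Vec2(px + local[i].X * cos - local[i].Y * sin,
                                        py + local[i].X * sin + local[i].Y * cos);
                }
                return world;
            }
        }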

    Read the article

  • Understanding IDAT chunk of PNG file format

    - by DRapp
    In the sample image below, the yellow border is for display purposes only. The actual .png file is a simple black/white image, 3 pixels by 3 pixels. I originally considered trying a 2x2, but that would not help in telling a low/high from a high/low drawing stream; at least this way I get two black and one white from the top, or one white and two black from the bottom. So I read the chunks of data, get to the IDAT chunk, decode it (zlib) and come up with the following bytes: 00 20 00 40 00 80. So, my question: how do those bytes break down into the 3x3 black and white sample? Also, the file is saved in palette format, and I properly recognize the bit depth of 1 and the two-entry color palette: palette[0] is RGBA all zeros; palette[1] has RGBA of 255, 255, 255, 0. I'll eventually get into the other bit depths later; I just wanted to start with what I expect to be the easiest. Part II: any guidance on handling the other depth formats would help, if there is anything special to consider, especially regarding the alpha channel (which I am already looking for in the palette) that might trip me up.
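    By way of illustration, a minimal C# sketch of how 1-bit-per-pixel PNG scanlines are laid out once the IDAT stream is inflated: each row starts with one filter-type byte, followed by the pixels packed most-significant-bit first. The sketch assumes filter type 0 (None) on every row, which matches the 00 bytes above.

        using System;

        static class Png1Bpp {
            // Unpacks inflated IDAT data for a 1-bpp image into palette indices.
            // Each scanline is [filter byte][pixel bits, packed MSB-first].
            public static byte[,] Unpack(byte[] raw, int width, int height) {
                int stride = 1 + (width + 7) / 8;   // filter byte + packed pixels
                var pixels = new byte[height, width];

                for (int y = 0; y < height; y++) {
                    byte filter = raw[y * stride];
                    if (filter != 0)   // only filter 0 (None) handled in this sketch
                        throw new NotSupportedException("Filtered rows not handled here.");

                    for (int x = 0; x < width; x++) {
                        byte packed = raw[y * stride + 1 + x / 8];
                        pixels[y, x] = (byte)((packed >> (7 - x % 8)) & 1);
                    }
                }
                return pixels;
            }
        }

    Applied to the bytes above (00 20 00 40 00 80), the three rows unpack to palette indices 0,0,1 then 0,1,0 then 1,0,0: a single white pixel stepping diagonally across a black background, given the two palette entries described in the question.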

    Read the article

  • Basic 3D Collision detection in XNA 4.0

    - by NDraskovic
    I have a problem detecting collisions between two models using BoundingSpheres in XNA 4.0. The code I'm using is very simple:

        private bool IsCollision(Model model1, Matrix world1, Model model2, Matrix world2)
        {
            for (int meshIndex1 = 0; meshIndex1 < model1.Meshes.Count; meshIndex1++)
            {
                BoundingSphere sphere1 = model1.Meshes[meshIndex1].BoundingSphere;
                sphere1 = sphere1.Transform(world1);

                for (int meshIndex2 = 0; meshIndex2 < model2.Meshes.Count; meshIndex2++)
                {
                    BoundingSphere sphere2 = model2.Meshes[meshIndex2].BoundingSphere;
                    sphere2 = sphere2.Transform(world2);

                    if (sphere1.Intersects(sphere2))
                        return true;
                }
            }
            return false;
        }

    The problem is that when I call this method from the Update method, the program behaves as if it always returned true (which of course is not correct). The calling code is very simple (although this is only test code):

        if (IsCollision(model1, worldModel1, model2, worldModel2))
        {
            Window.Title = "Intersects";
        }

    What is causing this?
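    One thing worth ruling out (a sketch, not a confirmed diagnosis): ModelMesh.BoundingSphere is stored in mesh-local space, so each sphere also needs its mesh's absolute bone transform applied before the world matrix; without it, every sphere sits at the model's origin and can report far more overlap than expected. A variant with the bone transforms included:

        private static bool Intersects(Model a, Matrix worldA, Model b, Matrix worldB)
        {
            Matrix[] bonesA = new Matrix[a.Bones.Count];
            Matrix[] bonesB = new Matrix[b.Bones.Count];
            a.CopyAbsoluteBoneTransformsTo(bonesA);
            b.CopyAbsoluteBoneTransformsTo(bonesB);

            foreach (ModelMesh meshA in a.Meshes)
            {
                // Bone transform first, then the world transform.
                BoundingSphere sA = meshA.BoundingSphere.Transform(
                    bonesA[meshA.ParentBone.Index] * worldA);

                foreach (ModelMesh meshB in b.Meshes)
                {
                    BoundingSphere sB = meshB.BoundingSphere.Transform(
                        bonesB[meshB.ParentBone.Index] * worldB);

                    if (sA.Intersects(sB))
                        return true;
                }
            }
            return false;
        }

    If it still always intersects, the content pipeline's per-mesh spheres may simply be generous enough to overlap at your model spacing; drawing them with a debug sphere mesh makes that easy to see.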

    Read the article

  • Unable to find good parameters for behavior of a puck in Farseer

    - by Krumelur
    EDIT: I have tried all kinds of variations now. The last one was to adjust the linear velocity in each step: newVel = oldVel * 0.9f. All of this, including your proposals, kind of works; however, in the end, once the velocity is small enough, the puck sticks to the wall and slides either vertically or horizontally, and I can't get rid of this behavior. I want it to bounce, no matter how slow it is. Restitution is 1 for all objects, friction is 0 for all, and there is no gravity. What am I possibly doing wrong? In my battle to learn and understand Farseer, I'm trying to set up a simple air-hockey-like table with a puck on it. The table's border body is made up from:

        Vertices aBorders = new Vertices( 4 );
        aBorders.Add( new Vector2( -fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, -fHalfHeight ) );
        aBorders.Add( new Vector2( -fHalfWidth, -fHalfHeight ) );
        FixtureFactory.AttachLoopShape( aBorders, this );
        this.CollisionCategories = Category.All;
        this.CollidesWith = Category.All;
        this.Position = Vector2.Zero;
        this.Restitution = 1f;
        this.Friction = 0f;

    The puck body is defined with:

        FixtureFactory.AttachCircle( DIAMETER_PHYSIC_UNITS / 2f, 0.5f, this );
        this.Restitution = 0.1f;
        this.Friction = 0.5f;
        this.BodyType = FarseerPhysics.Dynamics.BodyType.Dynamic;
        this.LinearDamping = 0.5f;
        this.Mass = 0.2f;

    I'm applying a linear impulse to the puck:

        this.oPuck.ApplyLinearImpulse( new Vector2( 1f, 1f ) );

    The problem is that the puck and the walls appear to be sticky: below a certain velocity the puck's speed drops to zero too quickly. The puck gets reflected by the walls a couple of times and then just sticks to the left wall and slides down along it, which looks very unrealistic. What I'm really looking for is a puck-wall collision that slows the puck down only a tiny little bit. After tweaking all the values left and right, I'm wondering if I'm doing something wrong. Maybe an expert can comment on these parameters?
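    One parameter that is easy to miss here, offered as a guess rather than a verified fix: like Box2D, Farseer treats any contact below Settings.VelocityThreshold (1 m/s by default) as perfectly inelastic, which produces exactly this stick-and-slide behavior at low speeds regardless of restitution. A sketch, assuming a Farseer 3.x build where the threshold is a writable static field (in some versions it is a constant in Settings.cs and has to be edited there); the class and method names are invented:

        using FarseerPhysics;
        using FarseerPhysics.Dynamics;
        using FarseerPhysics.Factories;
        using Microsoft.Xna.Framework;

        public static class AirHockey {
            public static Body CreatePuck() {
                // Keep slow impacts bouncy: below this speed restitution is
                // ignored and the contact is resolved as perfectly inelastic.
                Settings.VelocityThreshold = 0f;

                World world = new World(Vector2.Zero);   // no gravity on a table

                Body puck = BodyFactory.CreateCircle(world, 0.5f, 1f);
                puck.BodyType = BodyType.Dynamic;
                puck.Restitution = 1f;   // pair restitution is the max of the two bodies
                puck.Friction = 0f;
                puck.LinearDamping = 0f; // damping bleeds off speed between bounces
                return puck;
            }
        }

    Note also that the LinearDamping of 0.5 in the question slows the puck everywhere on the table, not just at the walls.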

    Read the article

  • Periodic updates of an object in Unity

    - by Blue
    I'm trying to make a collider appear every second, but I can't get the code right. I tried enabling the collider in the Update function and adding a yield to make it update every second or so, but it's not working; it gives me an error: "Update() cannot be a coroutine." How would I fix this? Would I need a timer system to toggle the collider?

        var waitTime : float = 1;
        var trigger : boolean = false;

        function Update () {
            if (!trigger) {
                collider.enabled = false;
                yield WaitForSeconds(waitTime);
            }
            if (trigger) {
                collider.enabled = true;
                yield WaitForSeconds(waitTime);
            }
        }
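    As the error says, Update() itself can never yield; the usual fix is to move the toggling into a coroutine. A minimal C# sketch of that approach (the class name is invented; the UnityScript equivalent is to put the loop in Start(), which is allowed to be a coroutine):

        using System.Collections;
        using UnityEngine;

        public class ColliderBlinker : MonoBehaviour {
            public float waitTime = 1f;

            // In Unity, a Start() declared as IEnumerator runs as a coroutine.
            IEnumerator Start() {
                Collider col = GetComponent<Collider>();
                while (true) {
                    col.enabled = !col.enabled;            // toggle once per interval
                    yield return new WaitForSeconds(waitTime);
                }
            }
        }

    InvokeRepeating or a simple timer accumulated in Update would work just as well.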

    Read the article

  • Why does distance field text rendering have a clear outline?

    - by jinhwan
    http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
    The whole process of distance field rendering is clear to me, except for how it actually works. It looks like the distance field pixels created around an original pixel may affect the 2D texture sampling interpolation, but I can't understand the interpolation process. I've read that distance field rendering is done with nearest-neighbour interpolation. If that is true, shouldn't distance field rendering produce a non-interpolated result? In my mind it should look like retro-style pixel art. Where am I misunderstanding this process? So far I see no difference from an alpha test: both throw away all pixels that are not inside. How do the extra distance field pixels affect rendering under nearest-neighbour interpolation?
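    For what it's worth, the magnification trick in the Valve paper relies on bilinear (not nearest-neighbour) sampling of the distance values; thresholding the interpolated distance at 0.5 is what reconstructs a sharp edge at sub-texel positions. A tiny one-dimensional C# illustration of that idea (values chosen arbitrarily):

        static class SdfDemo {
            // Bilinear sampling reduces to linear interpolation along one axis.
            static float Sample1D(float d0, float d1, float t) => d0 + (d1 - d0) * t;

            // Two neighbouring distance texels: 0.2 (outside) and 0.8 (inside).
            // The interpolated ramp crosses the 0.5 threshold at t = 0.5,
            // so the reconstructed edge lands between the texels: a crisp
            // outline instead of blocky nearest-neighbour texels.
            static float Alpha(float t) => Sample1D(0.2f, 0.8f, t) >= 0.5f ? 1f : 0f;
        }

    With plain alpha testing of a color texture there is no smooth ramp to threshold, which is why the two approaches look so different at high magnification.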

    Read the article

  • Creating a 2D Line Branch (Part 2)

    - by Danran
    Yesterday I asked this question on how to create a 2D line branch: Creating a 2D Line Branch. Thanks to the answers provided, I now have a nice-looking main branch (coloured to show the different segments in the final item). Now it's time to branch things off, as discussed in the article http://drilian.com/2009/02/25/lightning-bolts/. Again, however, I am confused by the meaning of the following pseudocode:

        splitEnd = Rotate(direction, randomSmallAngle) * lengthScale + midPoint;

    I'm unsure how to actually do the rotation correctly. In all honesty I'm a bit unsure what to do at this part at all: "splitEnd" will be a Vector3, so whatever happens in the Rotate function must return some rotated direction, which is then multiplied by a scale to create length and then added to midPoint. I'm not sure. If someone could explain what I'm meant to be doing in this part, I would be really grateful.
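    A minimal C# sketch of one way to read that line (2D case; angles in radians; the helper names are made up for illustration): normalise the segment's direction, rotate it by a small random angle with the standard 2D rotation formula, scale it, and offset it from the midpoint.

        using System;

        static class Branching {
            // Standard 2D rotation of a direction vector by 'radians'.
            static (float X, float Y) Rotate((float X, float Y) dir, float radians) {
                float cos = (float)Math.Cos(radians), sin = (float)Math.Sin(radians);
                return (dir.X * cos - dir.Y * sin, dir.X * sin + dir.Y * cos);
            }

            // splitEnd = Rotate(direction, smallAngle) * lengthScale + midPoint
            public static (float X, float Y) SplitEnd((float X, float Y) direction,
                                                      (float X, float Y) midPoint,
                                                      float smallAngle, float lengthScale) {
                var r = Rotate(direction, smallAngle);
                return (r.X * lengthScale + midPoint.X, r.Y * lengthScale + midPoint.Y);
            }
        }

    With Vector3s lying in a 2D plane, the same operation is a rotation around the axis perpendicular to that plane, e.g. Vector3.Transform with Matrix.CreateRotationZ in XNA.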

    Read the article

  • Heightmap and Textures

    - by Robert
    I'm trying to find the "best way" to apply a texture to a heightmap with OpenGL 3.x. It's really hard to find anything on Google because the tutorials are old and they all use different methods; I'm really lost and I don't know what to use at all. Here is the (basic) code that generates the heightmap:

        float[] vertexes = null;
        float[] textureCoords = null;

        for(int x = 0; x < this.m_size.width; x++)
        {
            for(int y = 0; y < this.m_size.height; y++)
            {
                vertexes ~= [x, 1.0f, y];
                textureCoords ~= [cast(float)x / 50, cast(float)y / 50];
            }
        }

    As you can see, I don't know how to apply the texture at all (I was using / 50 for my tests). Result of that code: [screenshot]. I would like to have something very basic like: [screenshot] (you can find more pics in his blog). Edit: my texture size is 1024x1024.
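    In case it helps, a short C# sketch of the two usual options for heightmap UVs (whether you stretch one texture across the whole terrain or tile a detail texture is a design choice, not an API requirement):

        static class HeightmapUV {
            // Stretch the texture exactly once across the whole terrain.
            public static (float u, float v) Stretched(int x, int y, int width, int height) =>
                (x / (float)(width - 1), y / (float)(height - 1));

            // Tile the texture every 50 quads (the question's /50 variant);
            // coordinates above 1.0 need the wrap mode set to GL_REPEAT.
            public static (float u, float v) Tiled(int x, int y) =>
                (x / 50f, y / 50f);
        }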

    Read the article

  • Deferred rendering order?

    - by Nick Wiggill
    There are some effects for which I must do multi-pass rendering. I've got the basics set up (FBO rendering etc.), but I'm trying to get my head around the most suitable setup. Here's what I'm thinking...

    The framebuffer objects:
      - FBO 1 has a color attachment and a depth attachment.
      - FBO 2 has a color attachment.

    The render passes:
      1. Render g-buffer: normals and depth (used by the outline & DoF blur shaders); output to FBO no. 1.
      2. Render solid geometry, bold outlines (as in a toon shader), and fog; output to FBO no. 2. (These can all render via a single fragment shader -- I think.)
      3. (optional) DoF-blur the scene; output to the default framebuffer. Otherwise, render FBO 2 directly to the default framebuffer.
      4. (optional) Mesh wireframes; composite over what's already in the default framebuffer.

    Does this order seem viable? Any obvious mistakes?

    Read the article

  • What forms of non-interactive RPG battle systems exist?

    - by Landstander
    I am interested in systems that allow players to develop a battle plan or set up a strategy for the party or characters prior to entering battle. During the battle the player either cannot input commands or can choose not to. One example:

    Rule based: the player can set up a list of rules of the form [Condition - Action], which are then ordered by priority. Examples: Gambits in Final Fantasy XII, Tactics in Dragon Age: Origins and Dragon Age II.
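    For illustration, the rule-based variant reduces to a priority-ordered list of predicate/command pairs evaluated each turn. A C# sketch with invented names:

        using System;
        using System.Collections.Generic;

        record Rule(Func<Unit, bool> Condition, Action<Unit> Action);

        class Unit { public int Hp; }

        static class Gambits {
            // First rule whose condition holds wins; list order encodes priority.
            // Example rule: new Rule(u => u.Hp < 30, u => UsePotion(u)),
            // where UsePotion is whatever command the engine exposes (hypothetical).
            public static void Run(Unit self, List<Rule> rules) {
                foreach (var r in rules)
                    if (r.Condition(self)) { r.Action(self); return; }
            }
        }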

    Read the article

  • What are the factors that determine the default frequency of a shader call?

    - by user827992
    After playing for a few days with various vertex and fragment shaders, it seems clear to me that these programs are invoked by the GPU on each and every rendering cycle. The problem is that I can't really quantify this frequency, and I can't tell whether it is based on some default values or not, because I don't have a big collection of hardware right now to run extensive tests. For all I know the answer could be as trivial as "it's the same as the refresh rate of your monitor", but I would like some good answers to be clear on this. For instance, it looks really odd to me that all the FPS-limiting techniques I have seen so far call the GLUT function glutGet(GLUT_ELAPSED_TIME) to retrieve a value in ms for when rendering started, and then rely on the CPU to do the math. Why can't I set an FPS value in OpenGL itself, if OpenGL clearly has a counter and a timer/clock? PS: I'm referring to OpenGL 3.0+.

    Read the article

  • How do I prevent a KActor from changing the orientation of its Z-Axis?

    - by Almo
    So I have an object that inherits from KActor which I would like to behave as a dynamic physics object, except that I want its Z-axis to remain upright, and very stiffly so. I've tried the bStayUpright flag, which triggers the "Stay Upright Spring". The problem is that it's a spring, so the object oscillates into position, when what I want is for it to remain properly oriented without wobbling. In the image above, the yellow block has fallen onto the gray box and is currently pivoting about the contact point as it tries to right itself. Should I be tweaking the StayUprightMaxTorque and StayUprightTorqueFactor parameters, or should I be using a Constraint of some sort?

    Read the article

  • 3ds Max CAT to XNA

    - by user12214
    Has anyone successfully used the CAT bone system in 3ds Max and exported the file into XNA? If so, what was your method of doing so? There are apparently a number of methods for this, but the ones I've tried have not worked. I used the Panda Exporter, which creates a .X file. This seems to be the latest way of going about it, but when the file is loaded in XNA, there is an error saying something about the bone weights. This happens whether I export with or without CAT bones.

    Read the article

  • Whole map design vs. tiles array design

    - by Mikalichov
    I am working on a 2D RPG, which will feature the usual dungeon/town maps (pre-generated). I am using tiles that I will then combine to make the maps. My original plan was to assemble the tiles in Photoshop, or some other graphics program, into one big picture that I could then use as a map. However, I have read in several places about people using arrays to build their maps in the engine (you give an array of tile indices to your engine, and it assembles them into a map; see the sketch below). I can understand how it's done, but it seems a lot more complicated to implement, and I can't see any obvious advantages. What is the most common method, and what are the advantages/disadvantages of each?
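    For concreteness, a minimal C# sketch of the array approach (the tile size, map contents and DrawTile helper are all made up for illustration): the map is just a grid of indices into a tileset, and the engine draws tile (x, y) at (x * tileSize, y * tileSize).

        static class TileMapDemo {
            const int TileSize = 32;

            // 0 = grass, 1 = road, 2 = tree, 3 = water: indices into a tileset texture.
            static readonly int[,] Map = {
                { 0, 0, 1, 0 },
                { 0, 2, 1, 0 },
                { 0, 0, 1, 3 },
            };

            public static void Draw() {
                for (int y = 0; y < Map.GetLength(0); y++)
                    for (int x = 0; x < Map.GetLength(1); x++)
                        DrawTile(Map[y, x], x * TileSize, y * TileSize);
            }

            // Stand-in for the engine's blit call (hypothetical helper).
            static void DrawTile(int tileIndex, int px, int py) { /* engine-specific */ }
        }

    The advantages usually claimed for this layout are memory (one small tileset instead of one huge bitmap per map) and the fact that the same grid doubles as collision and pathfinding data.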

    Read the article

  • How to get this wavefront .obj data onto the frustum?

    - by NoobScratcher
    I've finally figured out how to get the data from a .obj file and store the vertex positions x, y, z in a structure called Point, with members x, y, z of type float. I want to know how to get this data onto the screen. Here is my attempt at doing so:

        //make a file object and store list and the index of that list in a C string
        ifstream file (list[index].c_str() );
        std::vector<int> faces;
        std::vector<Point> points;
        points.push_back(Point());
        Point p;
        int face[4];

        while ( !file.eof() )
        {
            char modelbuffer[10000];
            //Get lines and store them in the line string
            file.getline(modelbuffer, 10000);
            switch(modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    cout << "Getting Vertex Positions" << endl;
                    cout << "v" << p.x << endl;
                    cout << "v" << p.y << endl;
                    cout << "v" << p.z << endl;
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3 );
                    cout << face[0] << endl;
                    cout << face[1] << endl;
                    cout << face[2] << endl;
                    cout << face[3] << endl;
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }

            GLuint vertexbuffer;
            glGenBuffers(1, &vertexbuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
            glBufferData(GL_ARRAY_BUFFER, points.size(), points.data(), GL_STATIC_DRAW);
            //glBufferData(GL_ARRAY_BUFFER, sizeof(points), &(points[0]), GL_STATIC_DRAW);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glVertexPointer(3, GL_FLOAT, sizeof(points), points.data());
            glIndexPointer(GL_DOUBLE, 0, faces.data());
            glDrawArrays(GL_QUADS, 0, points.size());
            glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
        }

    As you can see, I've clearly failed at the end part, but I really don't know why it isn't rendering the data into the frustum. Does anyone have a solution for this?

    Read the article

  • Rotation of viewplatform in Java3D

    - by user29163
    I have just started with Java3D programming. I thought I had built up some basic intuition about how the scene graph works, but something that should work does not. I made a simple program that rotates a pyramid around the y-axis, done just by adding a RotationInterpolator R to the TransformGroup above the pyramid. Then I thought: hey, can I now remove the RotationInterpolator from this TransformGroup and add it to the TransformGroup above my ViewPlatform leaf? This should work if I have understood how things work: adding the RotationInterpolator to this TransformGroup should make the children of this TransformGroup rotate, and the ViewPlatform is a child of the TransformGroup. Any ideas on where my reasoning is flawed? Here is the code for setting up the universe and the view branch group.

        import java.awt.*;
        import java.awt.event.*;
        import javax.media.j3d.*;
        import javax.vecmath.*;

        public class UniverseBuilder {

            // User-specified canvas
            Canvas3D canvas;

            // Scene graph elements to which the user may want access
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans;
            View view;

            public UniverseBuilder(Canvas3D c) {
                this.canvas = c;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view = new View();
                view.addCanvas3D(c);
                view.setPhysicalBody(body);
                view.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                Transform3D s = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 10.0f));
                t.rotX(-Math.PI/4);
                s.set(new Vector3f(0.0f, 0.0f, 10.0f)); // change values here to alter the viewing position
                t.mul(s);

                ViewPlatform vp = new ViewPlatform();
                vpTrans = new TransformGroup(t);
                vpTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);

                // Rotator stuff
                Transform3D yAxis = new Transform3D();
                //yAxis.rotY(Math.PI/2);
                Alpha rotationAlpha = new Alpha(
                        -1, Alpha.INCREASING_ENABLE,
                        0, 0, 4000, 0, 0, 0, 0, 0);
                RotationInterpolator rotator = new RotationInterpolator(
                        rotationAlpha, vpTrans, yAxis, 0.0f, (float) Math.PI*2.0f);
                RotationInterpolator rotator2 = new RotationInterpolator(
                        rotationAlpha, vpTrans);
                BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 1000.0);
                rotator.setSchedulingBounds(bounds);
                vpTrans.addChild(rotator);
                vpTrans.addChild(vp);
                vpRoot.addChild(vpTrans);

                view.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the
                // Locale. The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }

            public void addBranchGraph(BranchGroup bg) {
                locale.addBranchGraph(bg);
            }
        }

    Read the article

  • Exporting UV coords from Blender

    - by Soapy
    So I have searched on Google and various other websites but have not found an answer; the only ones I did find did not work. My question is: how do I get UV coords out of Blender (2.63)? I'm currently writing my own custom file exporter, and so far I have managed to export vertices and their normals. Is there a way to export the UV coords? N.B. I'm currently trying to figure this out using a simple cube that is unwrapped and has a texture applied to it.

    Read the article

  • Can't export Blender model for use in jMonkeyEngine SDK

    - by Nathan Sabruka
    I have a scene rendered in Blender called "civ1.blend" which contains multiple materials (for example, one called "white"). I want to use this model in jMonkeyEngine, so I used the OGRE exporter to create .scene and .material files. This gives me, for example, a civ1.scene file and a white.material file. However, when I then try to import civ1.scene into the jMonkeyEngine SDK, I get an error along the lines of "Cannot find material file 'civ1.material'". Like I said, I have a white.material file, but I do not have a civ1.material file. Has anyone encountered this problem? How do I fix it?

    Read the article

  • Exact point on a rotating sphere

    - by nkint
    I have a sphere representing the Earth, textured with real pictures. It's rotating about the x-axis, and when the user clicks on it, the app has to show the exact place that was clicked. For example, if the user clicks on Singapore, the system should be able to:

      - understand that the user clicked on the sphere (OK, I'll do that with unProject)
      - understand where on the sphere the user clicked (ray-sphere collision?), taking the rotation transform into account
      - convert the sphere coordinates into some coordinate system suitable for a web API service
      - query the API (OK, this is the simplest part for me ;-)

    Any advice?
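    For the middle two steps, a minimal C# sketch of the standard ray-sphere test (System.Numerics; o is the un-projected ray origin and d its normalized direction). To account for the spin, rotate the hit point by the inverse of the sphere's current rotation before converting to latitude/longitude:

        using System;
        using System.Numerics;

        static class Picking {
            // Nearest intersection of the ray (o + t*d, d normalized) with a sphere
            // (center c, radius r); null if the ray misses or the sphere is behind it.
            public static Vector3? RaySphere(Vector3 o, Vector3 d, Vector3 c, float r) {
                Vector3 m = o - c;
                float b = Vector3.Dot(m, d);
                float cc = Vector3.Dot(m, m) - r * r;
                float disc = b * b - cc;
                if (disc < 0f) return null;

                float t = -b - (float)Math.Sqrt(disc);  // nearest of the two roots
                if (t < 0f) return null;
                return o + t * d;
            }

            // Latitude/longitude (radians) of a point p on a unit sphere at the origin;
            // the exact convention depends on how the texture is wrapped.
            public static (double lat, double lon) ToLatLon(Vector3 p) =>
                (Math.Asin(p.Y), Math.Atan2(p.Z, p.X));
        }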

    Read the article

  • Given a start and end point, how can I constrain the end point so the resulting line segment is horizontal, vertical, or 45 degrees?

    - by GloryFish
    I have a grid of letters. The player clicks on a letter and drags out a selection. Using Bresenham's Algorithm I can create a line of highlighted letters representing the player's selection. However, what I really want is to have the line segment be constrained to 45 degree angles (as is common for crossword-style games). So, given a start point and an end point, how can I find the line that passes through the start point and is closest to the end point? Bonus: To make things super sweet I'd like to get a list of points in the grid that the line passes through, and for super MEGA bonus points, I'd like to get them in order of selection (i.e. from start point to end point).
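    A minimal C# sketch of one way to do the constraining (grid cells assumed square, coordinates are grid columns/rows): snap the drag angle to the nearest multiple of 45 degrees, then project the drag length back onto that direction. Walking from the start point one cell at a time along the snapped direction also yields the selected cells in order, which covers the bonus parts.

        using System;

        static class OctantSnap {
            // Returns an end point such that start->end is horizontal, vertical,
            // or at 45 degrees, closest to the requested drag.
            public static (int X, int Y) Snap((int X, int Y) start, (int X, int Y) end) {
                int dx = end.X - start.X, dy = end.Y - start.Y;
                double step = Math.PI / 4;                        // 45 degrees
                double angle = Math.Round(Math.Atan2(dy, dx) / step) * step;
                int len = (int)Math.Round(Math.Sqrt(dx * dx + dy * dy));
                return (start.X + (int)Math.Round(len * Math.Cos(angle)),
                        start.Y + (int)Math.Round(len * Math.Sin(angle)));
            }

            // Cells from start to end inclusive, in selection order; valid because
            // a snapped segment's per-step deltas are each -1, 0, or +1.
            public static System.Collections.Generic.IEnumerable<(int X, int Y)>
                Cells((int X, int Y) start, (int X, int Y) end) {
                int sx = Math.Sign(end.X - start.X), sy = Math.Sign(end.Y - start.Y);
                var p = start;
                yield return p;
                while (p != end) {
                    p = (p.X + sx, p.Y + sy);
                    yield return p;
                }
            }
        }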

    Read the article

  • Image loaded from TGA texture isn't displayed correctly

    - by Ramy Al Zuhouri
    I have a TGA texture containing this image: [image]. The texture is 256x256. I'm trying to load it and map it onto a cube:

        #import <OpenGL/OpenGL.h>
        #import <GLUT/GLUT.h>
        #import <stdlib.h>
        #import <stdio.h>
        #import <assert.h>

        GLuint width=640, height=480;
        GLuint texture;
        const char* const filename = "/Users/ramy/Documents/C/OpenGL/Test/Test/texture.tga";

        void init()
        {
            // Initialization
            glEnable(GL_DEPTH_TEST);
            glViewport(-500, -500, 1000, 1000);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(45, width/(float)height, 1, 1000);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(0, 0, -100, 0, 0, 0, 0, 1, 0);

            // Texture
            char bitmap[256][256][3];
            FILE* fp = fopen(filename, "r");
            assert(fp);
            assert(fread(bitmap, 3*sizeof(char), 256*256, fp) == 256*256);
            fclose(fp);
            glGenTextures(1, &texture);
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, bitmap);
        }

        void display()
        {
            glClearColor(0, 0, 0, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture);
            glColor3ub(255, 255, 255);
            glBegin(GL_QUADS);
            glVertex3f(0, 0, 0);   glTexCoord2f(0.0, 0.0);
            glVertex3f(40, 0, 0);  glTexCoord2f(0.0, 1.0);
            glVertex3f(40, 40, 0); glTexCoord2f(1.0, 1.0);
            glVertex3f(0, 40, 0);  glTexCoord2f(1.0, 0.0);
            glEnd();
            glDisable(GL_TEXTURE_2D);
            glutSwapBuffers();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
            glutInitWindowPosition(100, 100);
            glutInitWindowSize(width, height);
            glutCreateWindow(argv[0]);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
            return 0;
        }

    But this is what I get when the window loads: [screenshot]. Only half of the image is displayed correctly, and with the wrong colors at that. Then if I resize the window I get this: [screenshot]. Magically the image seems to fix itself, though the colors are still wrong. Why?
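    Two things worth checking, sketched below in OpenTK-style C# for consistency with the other sketches (the calls mirror the C API one-to-one; this is a guess at the cause, not a confirmed diagnosis). First, in immediate mode glTexCoord2f sets the texture coordinate for the *next* glVertex call, so each TexCoord must come before the Vertex it belongs to; in the code above the pairs are swapped. Second, an uncompressed TGA file starts with an 18-byte header and stores its pixels in BGR order, so reading raw bytes from offset 0 and uploading them as GL_RGB both shifts the image and swaps the channels.

        using OpenTK.Graphics.OpenGL;

        static class QuadSketch {
            public static void DrawTexturedQuad() {
                GL.Begin(PrimitiveType.Quads);
                // TexCoord first, then the vertex it applies to.
                GL.TexCoord2(0.0, 0.0); GL.Vertex3(0f, 0f, 0f);
                GL.TexCoord2(1.0, 0.0); GL.Vertex3(40f, 0f, 0f);
                GL.TexCoord2(1.0, 1.0); GL.Vertex3(40f, 40f, 0f);
                GL.TexCoord2(0.0, 1.0); GL.Vertex3(0f, 40f, 0f);
                GL.End();
            }

            // When uploading a 24-bit TGA: skip the 18-byte header and pass
            // PixelFormat.Bgr so GL reorders the channels, e.g.:
            // GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgb,
            //               256, 256, 0, PixelFormat.Bgr, PixelType.UnsignedByte, pixels);
        }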

    Read the article
