Search Results

Search found 4320 results on 173 pages for 'vertex arrays'.

Page 12 of 173

  • Proper Usage of Arrays and Functions [closed]

    - by Ssegawa Victor
    Can someone help me write a C program that solves the following problem?

    PROBLEM: Consider the faculty registrar who has to process results for 1st year, 1st semester students. Students offer five courses: CSC 1100, CSK 1101, CSC 1104, CSC 1105 and CSC 1106, with credit units 4, 4, 4, 3 and 3 respectively. Lecturers provide coursework and exam marks. For each course, coursework constitutes 40% of the final mark while the exam constitutes 60%. The role of the registrar is to:

    - Compute the final mark for each student for each course. The final mark must be a whole number.
    - Compute the grade and grade point of the students for each course they offered. According to senate regulations, grades and grade points are awarded to final marks according to the following criteria:

          Range      Grade   Grade Point
          90 - 100   A+      5.0
          80 - 89    A       5.0
          75 - 79    B+      4.5
          70 - 74    B       4.0
          65 - 69    C+      3.5
          60 - 64    C       3.0
          55 - 59    D+      2.5
          50 - 54    D       2.0
          45 - 49    E       1.5
          40 - 44    E-      1.0
          0 - 39     F       0.0

    - Put a comment 'Retake' on a student for every course where the grade point is less than 2.0.
    - Compute the cumulative grade point average (CGPA) for each student. The senate formula for the CGPA is

          CGPA = ( sum_{i=1..N} CU_i x GP_i ) / ( sum_{i=1..N} CU_i )

      where CU_i and GP_i are the credit units and grade point of course i, and N is the number of courses offered.
    - Put a comment "Progress" on any student whose CGPA is greater than 2 and "Stay Put" on a student whose CGPA is less than 2.

    You are required to create a C program that considers a class of 25 students and:

    1. Initializes an array 'student' which stores student names.
    2. Initializes arrays for coursework and exam marks for each course: 'cw_csc_1100' and 'ex_csc_1100' store the coursework and exam marks (respectively) for CSC 1100, and the same approach applies to all other courses.
    3. Initializes the coursework and exam mark arrays with marks between 0 and 99.
    4. Writes appropriate functions that generate the final marks, grades, grade points, cumulative grade points, comments for students and comments for courses per student.
    5. Creates appropriate arrays for the final marks and fills them using the appropriate functions.
    6. Without creating any extra arrays, uses the functions above to generate a report per student that looks like the one below:

          Student Name: Ngubiri
          Course Unit   Final mark   Grade   Grade Point   Course Comment
          CSC 1100      43           E-      1.0           Retake
          CSK 1101      50           D       2.0
          CSC 1104      59           D+      2.5
          CSC 1105      70           B       4.0
          CSC 1106      65           C+      3.5
          CGPA: 2.47    Overall Comment: Progress

    NB: It is advisable that the indices are used to identify the owners, e.g. if student[x] is John, then cw_csc_1100[x] should be a mark for John, since the index is the same.
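
    For reference, a minimal C-style sketch of the computations described above (the function names are illustrative, not part of the assignment):

        /* Final mark: 40% coursework + 60% exam, truncated to a whole number. */
        int final_mark(int cw, int ex)
        {
            return (int)(0.4 * cw + 0.6 * ex);
        }

        /* Grade point from the senate table above. */
        double grade_point(int mark)
        {
            if (mark >= 80) return 5.0;   /* A+ and A both carry 5.0 */
            if (mark >= 75) return 4.5;
            if (mark >= 70) return 4.0;
            if (mark >= 65) return 3.5;
            if (mark >= 60) return 3.0;
            if (mark >= 55) return 2.5;
            if (mark >= 50) return 2.0;
            if (mark >= 45) return 1.5;
            if (mark >= 40) return 1.0;
            return 0.0;
        }

        /* CGPA = sum(CU_i * GP_i) / sum(CU_i) over the N courses offered. */
        double cgpa(const int cu[], const double gp[], int n)
        {
            double num = 0.0, den = 0.0;
            for (int i = 0; i < n; i++) {
                num += cu[i] * gp[i];
                den += cu[i];
            }
            return num / den;
        }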

    Read the article

  • Manual TRIM Windows 7 on OCZ VERTEX 2 SATA II 2.5" SSD

    - by INTPnerd
    I have an OCZ VERTEX 2 SATA II 2.5" SSD with Windows 7 Professional installed on it. I am pretty sure TRIM is not working, because my motherboard is the Asus M2N-SLI (not the Deluxe model), which does not support AHCI mode for the drive. Is there a utility compatible with this drive that I could run once a day to do something similar to a manual TRIM, in order to keep the drive performance up? I could not find one specifically for this drive on the OCZ website. I did find a User-Initiated Garbage Collection wiper tool, but it is for a Vertex drive, not a Vertex 2. I tried to run it, but it said that the wiper could not be run for all the drives on this system.

    Read the article

  • Bitmap font rendering, UV generation and vertex placement

    - by jack
    I am generating a bitmap font, but I am not sure how to generate the UVs and do the placement. I had a thread like this once before, but it was too loosely worded as to what I was trying to do. What I am doing right now is creating a large 1024x1024 image with characters evenly placed every 64 pixels. Here is an example of what I mean. I then save the bitmap X/Y information to a file (all multiples of 64). However, I am not sure how to properly use this information and the bitmap to render. This falls into two different categories: UV generation and kerning. I believe I know how to do both of these; however, when I attempt to couple them together I get horrendous results. For example, I am trying to render two different text arrays, "123" and "njfb". Ignoring the texture quality (I will be increasing the texture size to provide more detail once I fix this issue), here is what it looks like when I try to render them: http://img64.imageshack.us/img64/599/badfontrendering.png

    Now for the algorithm. I am doing my letter placement with both GetABCWidth and GetKerningPairs. I am using GetABCWidth for the width of the characters, and then I am using the kerning information to adjust the characters. Does anyone have any suggestions on how I can implement my own bitmap font renderer? I am trying to do this without using external libraries such as the Angel bitmap tool or FreeType. I also want to stick to the way the bitmap font sheet is generated so I can do extra effects in the future.

    Rendering algorithm:

        for(U32 c = 0, vertexID = 0, i = 0; c < numberOfCharacters; ++c, vertexID += 4, i += 6)
        {
            ObtainCharInformation(fontName, m_Text[c]);
            letterWidth = (charInfo.A + charInfo.B + charInfo.C) * scale;

            if(c != 0)
            {
                DWORD BytesReq = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, 0, 0, &mat);
                U8 * glyphImg = new U8[BytesReq];
                DWORD r = GetGlyphOutlineW(dc, m_Text[c], GGO_GRAY8_BITMAP, &gm, BytesReq, glyphImg, &mat);

                for (int k = 0; k < nKerningPairs; k++)
                {
                    if ((kerningpairs[k].wFirst == previousCharIndex) && (kerningpairs[k].wSecond == m_Text[c]))
                    {
                        letterBottomLeftX += (kerningpairs[k].iKernAmount * scale);
                        break;
                    }
                }
                letterBottomLeftX -= (gm.gmCellIncX * scale);
            }

            SetVertex(letterBottomLeftX, 0.0f, zFight, vertexID);
            SetVertex(letterBottomLeftX, letterHeight, zFight, vertexID + 1);
            SetVertex(letterBottomLeftX + letterWidth, letterHeight, zFight, vertexID + 2);
            SetVertex(letterBottomLeftX + letterWidth, 0.0f, zFight, vertexID + 3);
            zFight -= 0.001f;

            float BottomLeftX = (F32)(charInfo.bitmapXOrigin) / (float)m_BitmapWidth;
            float BottomLeftY = (F32)(charInfo.bitmapYOrigin + charInfo.charBitmapHeight) / (float)m_BitmapWidth;
            float TopLeftX = BottomLeftX;
            float TopLeftY = (F32)(charInfo.bitmapYOrigin) / (float)m_BitmapWidth;
            float TopRightX = (F32)(charInfo.bitmapXOrigin + charInfo.B - charInfo.C) / (float)m_BitmapWidth;
            float TopRightY = TopLeftY;
            float BottomRightX = TopRightX;
            float BottomRightY = BottomLeftY;

            SetTextureCoordinate(TopLeftX, TopLeftY, vertexID + 1);
            SetTextureCoordinate(BottomLeftX, BottomLeftY, vertexID + 0);
            SetTextureCoordinate(BottomRightX, BottomRightY, vertexID + 3);
            SetTextureCoordinate(TopRightX, TopRightY, vertexID + 2);

            /// index setting
            letterBottomLeftX += letterWidth;
            previousCharIndex = m_Text[c];
        }
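
    For the 64-pixel atlas layout described above, UV generation itself reduces to dividing the saved pixel coordinates by the atlas size; a minimal sketch (the cell size, atlas size and GlyphUV structure are assumptions for illustration):

        // Each glyph starts at (cellX, cellY), a multiple of 64, in a 1024x1024 atlas.
        struct GlyphUV { float u0, v0, u1, v1; };

        GlyphUV CellToUV(int cellX, int cellY, int glyphWidth, int glyphHeight,
                         float atlasSize = 1024.0f)
        {
            GlyphUV uv;
            uv.u0 = cellX / atlasSize;                   // left
            uv.v0 = cellY / atlasSize;                   // top
            uv.u1 = (cellX + glyphWidth)  / atlasSize;   // right: use the glyph's actual width, not the full cell
            uv.v1 = (cellY + glyphHeight) / atlasSize;   // bottom
            return uv;
        }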

    Read the article

  • Using PHP Gettext Extension vs PHP Arrays in Multilingual Websites?

    - by janoChen
    So far the only two good things that I've seen about using gettext instead of arrays are that I don't have to create the "greeting" "sub-array" (or whatever it's called), and I don't have to create a folder for the default language. Are there other pros and cons of using gettext versus PHP arrays for multilingual websites?

    USING GETTEXT:

    spanish/messages.po:

        #: test.php:3
        msgid "Hello World!"
        msgstr "Hola Mundo"

    index.php:

        <?php echo _("Hello World!"); ?>

    index.php?lang=spanish:

        <?php echo _("Hello World!"); ?>

    turns into "Hola Mundo".

    USING PHP ARRAYS:

    lang.en.php:

        <?php
        $lang = array(
            "greeting" => "Hello World",
        );
        ?>

    lang.es.php:

        <?php
        $lang = array(
            "greeting" => "Hola Mundo",
        );
        ?>

    index.php:

        <?php echo $lang['greeting']; ?>

    "greeting" turns into "Hello World".

    index.php?lang=spanish:

        <?php echo $lang['greeting']; ?>

    "greeting" turns into "Hola Mundo".

    (I first started with gettext, but it wasn't supported on my free shared hosting, Zymic. I didn't want to use Zend_Translate; I found it too complicated for my simple task, so I finally ended up using PHP define(), but later on someone told me I should use arrays.)

    Read the article

  • Are there arrays in LaTeX? (not asking for a way to typeset them, but to use them for programming)

    - by drasto
    Are there arrays in LaTeX? I don't mean the way to typeset arrays; I mean arrays as a data structure in LaTeX/TeX as a "programming language". I need to store a number of vboxes or hboxes in an array, or it may be something like "an array of macros".

    More details: I have an environment that should typeset songs. I need to store some song paragraphs given as arguments to my macro \songparagraph (so I will not typeset them immediately, just store those paragraphs). As I don't know how many paragraphs there can be in one particular song, I need an array for this. When the environment is closed, all the paragraphs will be typeset -- but they will first be measured and the best placement for each paragraph will be computed (for example, some paragraphs can be put one beside another in two columns to make the song look more compact and save some space).

    Any ideas would be welcome. If you know that there are arrays in LaTeX, please post a link to some basic documentation or a tutorial, or just state the basic commands.

    Read the article

  • How do I create an array of unequal-length arrays?

    - by Faken
    Hello everyone,

    I need to create a sort of 2D array in which each one of the secondary arrays is of a different length. I have a 1D array of known length (which defines the number of arrays to be formed), and each of its elements holds a number that denotes the length of the secondary array in that position. Each of the arrays is fairly large, so I don't want to create a one-size-fits-all "fake" 2D heap array to cover everything. How would I go about doing this? Any 2D array I have made before has always been rectangular. I'm trying to do this so that I can write some code to dynamically generate threads to split up some workload.

    Thanks,
    -Faken
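
    A minimal C++ sketch of the jagged layout described above (the names are illustrative): each secondary array gets its own allocation, sized from the 1D lengths array, so no space is wasted on a rectangular block.

        #include <vector>

        // lengths[i] gives the size of the i-th secondary array.
        std::vector<std::vector<double>> MakeJagged(const std::vector<int>& lengths)
        {
            std::vector<std::vector<double>> rows(lengths.size());
            for (size_t i = 0; i < lengths.size(); ++i)
                rows[i].resize(lengths[i]);          // each row gets exactly the length it needs
            return rows;
        }

        // Raw-pointer equivalent: an array of pointers, one heap allocation per row.
        double** MakeJaggedRaw(const int* lengths, int rowCount)
        {
            double** rows = new double*[rowCount];
            for (int i = 0; i < rowCount; ++i)
                rows[i] = new double[lengths[i]];
            return rows;   // caller must later delete[] each row, then delete[] rows
        }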

    Read the article

  • How to return arrays with the biggest elements in C#?

    - by theateist
    I have multiple int arrays:

        1) [1, 202, 4, 55]
        2) [40, 7]
        3) [2, 48, 5]
        4) [40, 8, 90]

    I need to get the array that has the biggest numbers in all positions. In my case that would be array #4. Explanation: arrays #2 and #4 have the biggest number in the 1st position, so after the first iteration these two arrays are kept ([40, 7] and [40, 8, 90]); after comparing the 2nd position of the remaining arrays we get array #4, because 8 > 7; and so on. Can you suggest an efficient algorithm for this? Doing it with LINQ would be preferable.

    UPDATE: There is no limit on the length; as soon as some number in any position is greater, that array is the biggest.
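
    The rule described above (walk the positions left to right; the first position where one array is larger decides) is exactly lexicographic ordering. Not LINQ, but a sketch of that comparison in C++ for reference; with the four arrays above it picks [40, 8, 90]:

        #include <algorithm>
        #include <vector>

        // Returns the "biggest" array in the sense described above:
        // compare position by position; the first differing position decides.
        const std::vector<int>& Biggest(const std::vector<std::vector<int>>& arrays)
        {
            return *std::max_element(arrays.begin(), arrays.end(),
                [](const std::vector<int>& a, const std::vector<int>& b) {
                    return std::lexicographical_compare(a.begin(), a.end(),
                                                        b.begin(), b.end());
                });
        }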

    Read the article

  • Should I convert data arrays to objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow as much.

    At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) where necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors.

    My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?

    Read the article

  • Surface normal to screen angle

    - by Tannz0rz
    I've been struggling to get this working. I simply wish to take a surface normal and convert it to a screen angle. As an example, assuming we're working with the highlighted surface on the sphere below, where the arrow is the normal, the 2D angle would obviously be PI/4 radians. Here's one of the many things I've tried, to no avail:

        float4 A = v.vertex;
        float4 B = v.vertex + float4(v.normal, 0.0);
        A = mul(VP, A);
        B = mul(VP, B);
        A.xy = (0.5 * (A.xy / A.w)) + 0.5;
        B.xy = (0.5 * (B.xy / B.w)) + 0.5;
        o.theta = atan2(B.y - A.y, B.x - A.x);

    I'm finally at my wit's end. Thanks for any and all help.

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's great; I'm actually understanding more about shaders and the non-fixed pipeline in one week than in the two years I spent trying to learn the OpenGL fixed-function pipeline. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO, but the fragment shader is run for each pixel (is that right?), which is a huge number compared to, say, the 3 vertices of a triangle. Now it seems that the out variables of the vertex shader (like colors and such) are passed 1 to 1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is this all executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
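
    For reference, a sketch of what the rasterizer does between the two stages (this is standard pipeline behaviour, not anything specific to one driver): no single vertex is "passed" to a fragment. Each vertex-shader output is interpolated across the triangle, so a fragment with barycentric coordinates (a, b, c) with respect to vertices A, B and C receives roughly

        value = a * out_A + b * out_B + c * out_C,        with a + b + c = 1

    and, with perspective correction (the default for smooth-qualified variables),

        value = (a * out_A / w_A + b * out_B / w_B + c * out_C / w_C)
                / (a / w_A + b / w_B + c / w_C)

    If an output is declared with the flat qualifier, no interpolation happens and every fragment of the triangle gets the value from the provoking vertex instead.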

    Read the article

  • How to run Cg vertex/fragment shader on CPU?

    - by Andy
    Hi all, I'm playing about with some vertex and fragment shaders using Cg on my little netbook (running Linux). Clearly I'm going to hit resource limits for my graphics controller frequently, so I was wondering if there's a nice way to run the shaders on the CPU, just to test them. Something like D3D's refrast... TIA, Andy

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
    I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBOs and VAOs are the way to go for this approach, but nowhere have I found a guide that shows how to get Data -> Video Memory -> Shader. Can someone give me a quick rundown of how this works?

    EDIT: So far, I have this:

        struct Vertex
        {
            public Vector3 position;
            public Vector3 normal;
            public Vector3 color;

            public static int memSize = 9 * sizeof(float);
            public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) };
        }

        class Mesh
        {
            private uint vbo;
            private uint ibo;

            // stores the numbers of vertices and indices
            private int numVertices;
            private int numIndices;

            public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices)
            {
                // set numbers
                this.numVertices = numVertices;
                this.numIndices = numIndices;

                // generate buffers
                GL.GenBuffers(1, out vbo);
                GL.GenBuffers(1, out ibo);
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo);

                // send data to the buffers
                GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw);
                GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw);
            }

            public void Render()
            {
                // bind buffers
                GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
                GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo);

                // define offsets
                GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0]));
                GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1]));
                GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2]));

                // draw
                GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0);
            }
        }

        class Application : GameWindow
        {
            Mesh triangle;

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);

                GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f);
                GL.Enable(EnableCap.DepthTest);
                GL.Enable(EnableCap.VertexArray);
                GL.Enable(EnableCap.NormalArray);
                GL.Enable(EnableCap.ColorArray);

                Vertex v0 = new Vertex();
                v0.position = new Vector3(-1.0f, -1.0f, 4.0f);
                v0.normal = new Vector3(0.0f, 0.0f, -1.0f);
                v0.color = new Vector3(1.0f, 1.0f, 0.0f);

                Vertex v1 = new Vertex();
                v1.position = new Vector3(1.0f, -1.0f, 4.0f);
                v1.normal = new Vector3(0.0f, 0.0f, -1.0f);
                v1.color = new Vector3(1.0f, 0.0f, 0.0f);

                Vertex v2 = new Vertex();
                v2.position = new Vector3(0.0f, 1.0f, 4.0f);
                v2.normal = new Vector3(0.0f, 0.0f, -1.0f);
                v2.color = new Vector3(0.2f, 0.9f, 1.0f);

                Vertex[] va = { v0, v1, v2 };
                ushort[] ia = { 0, 1, 2 };

                triangle = new Mesh(3, va, 3, ia);
            }

            protected override void OnRenderFrame(FrameEventArgs e)
            {
                base.OnRenderFrame(e);

                GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

                Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY);
                GL.MatrixMode(MatrixMode.Modelview);
                GL.LoadMatrix(ref modelview);

                triangle.Render();

                SwapBuffers();
            }
        }

    It doesn't draw anything.

    Read the article

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single-joint translation and rotation, and multi-joint translation. The problem arises when I try to rotate a multi-joint skeleton.

    The image above shows the current progress. The left image shows how the model should deform; the middle image shows how it deforms in my project; the right one shows a better (still not correct) deform obtained by inverting a certain value, which I explain below.

    The way I get my animation data is by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I parse the animation data, for each frame I check whether the bone has a parent. If not, the data is passed in as-is from the MD5anim file. If it does have a parent, I rotate the bone's position by the parent's orientation and add the parent's translation, and then the parent and child orientations are concatenated. This is covered at this website.

        if (Parent < 0)
        {
            ... // Save this data without editing it
        }
        else
        {
            Math3::vec3 rpos;
            Math3::quat pq = Parent.Quaternion;
            Math3::quat pqi(pq);
            pqi.InvertUnitQuat();
            pqi.Normalise();

            Math3::quat::RotateVector3(rpos, pq, jv);
            Math3::vec3 npos(rpos + Parent.Pos);
            this->Translation = npos;

            Math3::quat nq = pq * jq;
            nq.Normalise();
            this->Quaternion = nq;
        }

    To achieve the image on the right, all I need to do is change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv); -- why is that?

    And this is my skinning shader, SkinningShader.vert:

        #version 330 core

        smooth out vec2 vVaryingTexCoords;
        smooth out vec3 vVaryingNormals;
        smooth out vec4 vWeightColor;

        uniform mat4 MV;
        uniform mat4 MVP;
        uniform mat4 Pallete[55];
        uniform mat4 invBindPose[55];

        layout(location = 0) in vec3 vPos;
        layout(location = 1) in vec2 vTexCoords;
        layout(location = 2) in vec3 vNormals;
        layout(location = 3) in int vSkeleton[4];
        layout(location = 4) in vec3 vWeight;

        void main()
        {
            vec4 wpos = vec4(vPos, 1.0);
            vec4 norm = vec4(vNormals, 0.0);
            vec4 weight = vec4(vWeight, (1.0f - (vWeight[0] + vWeight[1] + vWeight[2])));
            normalize(weight);

            mat4 BoneTransform;
            for(int i = 0; i < 4; i++)
            {
                if(vSkeleton[i] != -1)
                {
                    if(i == 0)
                    {
                        // These are interchangeable for some reason
                        // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                        BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                    }
                    else
                    {
                        // These are interchangeable for some reason
                        // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                        BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                    }
                }
            }

            wpos = BoneTransform * wpos;

            vWeightColor = weight;
            vVaryingTexCoords = vTexCoords;
            vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV));
            gl_Position = wpos * MVP;
        }

    The Pallete matrices are the matrices calculated using the code above (a rotation and a translation matrix are created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file.

    Update 1: I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same, so now I'm checking whether there's a problem with matrix creation...

    Update 2: Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

    Read the article

  • Per-vertex animation with VBOs: VBO per character or VBO per animation?

    - by charstar
    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation-curve solver (reinventing Blender vertex groups and IPOs), if possible.

    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). In either case, the animations are known in full at load time; there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will likely be at different frames of the same animation at the same time). Assume color and texture coordinate buffers are static.

    Current Considerations (see the sketch of option 2 below):
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known Trade-Offs:
    - In 1 above: each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    - In 2 above: there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.

    Are there any pitfalls to number 2 (aside from finite memory)? I've found a lot of information on what you can do, but no real best practices. Are there other considerations or methods that I am missing?
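
    For what it's worth, under option 2 the per-character state really is just a byte offset into one large static VBO; a rough sketch of that draw path (fixed-function client arrays for brevity; all names are illustrative):

        #include <GL/glew.h>

        // One static VBO holds every frame of one animation, concatenated:
        //   [frame 0: positions, normals][frame 1: positions, normals]...
        // Each character only remembers which frame of which animation it is on.
        void DrawCharacterFrame(GLuint animationVbo, int frameNumber,
                                int numVertices, int numNormals)
        {
            const GLsizeiptr frameBytes =
                (GLsizeiptr)(numVertices + numNormals) * 3 * sizeof(GLfloat);
            const GLsizeiptr base = frameNumber * frameBytes;   // the "pointer offset" from option 2

            glBindBuffer(GL_ARRAY_BUFFER, animationVbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)base);
            glNormalPointer(GL_FLOAT, 0, (const GLvoid*)(base + numVertices * 3 * sizeof(GLfloat)));

            glDrawArrays(GL_TRIANGLES, 0, numVertices);

            glDisableClientState(GL_NORMAL_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }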

    Read the article

  • WCF Operations and Multidimensional Arrays

    - by JoshReuben
    You can't pass multidimensional arrays across the wire using WCF -- you need to pass jagged arrays. Here are two extension methods that let you convert prior to serialization and convert back after deserialization:

        public static T[,] ToMultiD<T>(this T[][] jArray)
        {
            int i = jArray.Count();
            int j = jArray.Select(x => x.Count()).Aggregate(0, (current, c) => (current > c) ? current : c);

            var mArray = new T[i, j];
            for (int ii = 0; ii < i; ii++)
            {
                for (int jj = 0; jj < j; jj++)
                {
                    // guard against jagged rows shorter than the longest row
                    if (jj < jArray[ii].Length)
                        mArray[ii, jj] = jArray[ii][jj];
                }
            }
            return mArray;
        }

        public static T[][] ToJagged<T>(this T[,] mArray)
        {
            var cols = mArray.GetLength(0);
            var rows = mArray.GetLength(1);
            var jArray = new T[cols][];
            for (int i = 0; i < cols; i++)
            {
                jArray[i] = new T[rows];
                for (int j = 0; j < rows; j++)
                {
                    jArray[i][j] = mArray[i, j];
                }
            }
            return jArray;
        }

    Enjoy!

    Read the article

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help: I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me -- http://tinypic.com/r/fnh440/6

    Here is a minimal example showing my problem (the ship model that this example uses is from the chase camera sample on this site -- http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera -- in case anyone wants to try it out ;)):

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Model model;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                // TODO: use this.Content to load your game content here
                model = Content.Load<Model>("ship");
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect be in mesh.Effects)
                    {
                        be.EnableDefaultLighting();
                        be.FogEnabled = true;
                        be.FogColor = Color.CornflowerBlue.ToVector3();
                        be.FogStart = 10;
                        be.FogEnd = 30;
                    }
                }
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                // TODO: Add your drawing code here
                model.Draw(Matrix.Identity * Matrix.CreateScale(0.01f) * Matrix.CreateRotationY(3 * MathHelper.PiOver4),
                           Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up),
                           Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f / 9f, 1, 100));

                base.Draw(gameTime);
            }
        }

    Read the article

  • Access Violation when trying to bind a Vertex Array Object

    - by Paul
    I've just started digging into OpenGL and I've run into a problem trying to set up a VAO. It gives me a run-time error of:

        An unhandled exception of type 'System.AccessViolationException'

    at

        // Create and bind a VAO
        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

    I have searched the internet high and low for a solution and I haven't found one. The rest of my function looks like this:

        int main(array<System::String ^> ^args)
        {
            // Initialise GLFW
            if( !glfwInit() )
            {
                fprintf( stderr, "Failed to initialize GLFW\n" );
                return -1;
            }

            glfwOpenWindowHint(GLFW_FSAA_SAMPLES, 0); // 4x antialiasing
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3); // We want OpenGL 3.3
            glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 3);
            glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); // We don't want the old OpenGL

            // Open a window and create its OpenGL context
            if( !glfwOpenWindow( 800, 600, 0,0,0,0, 32,0, GLFW_WINDOW ) )
            {
                fprintf( stderr, "Failed to open GLFW window\n" );
                glfwTerminate();
                return -1;
            }

            // Initialize GLEW
            if (glewInit() != GLEW_OK)
            {
                fprintf(stderr, "Failed to initialize GLEW\n");
                return -1;
            }

            glfwSetWindowTitle( "Game Engine" );

            // Create and bind a VAO
            GLuint vao;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            glfwEnable( GLFW_STICKY_KEYS );

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or on how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code:

        static const GLuint g_vertex_buffer_data[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };

        int main(void)
        {
            GLFWwindow* window;

            if(!glfwInit())
            {
                fprintf(stderr, "Failed to initialize GLFW.");
                return -1;
            }

            glfwWindowHint(GLFW_SAMPLES, 4);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

            window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL);
            if(!window)
            {
                glfwTerminate();
                fprintf(stderr, "Failed to create a GLFW window");
                return -1;
            }

            glfwMakeContextCurrent(window);

            glewExperimental = GL_TRUE;
            GLenum err = glewInit();
            if(err != GLEW_OK)
            {
                glfwTerminate();
                fprintf(stderr, "Failed to initialize GLEW");
                fprintf(stderr, (char*)glewGetErrorString(err));
                return -1;
            }

            GLuint VertexArrayID;
            glGenVertexArrays(1, &VertexArrayID);
            glBindVertexArray(VertexArrayID);

            GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl");

            GLuint vertexBuffer;
            glGenBuffers(1, &vertexBuffer);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
            glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

            while(!glfwWindowShouldClose(window))
            {
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

                glUseProgram(programID);

                glEnableVertexAttribArray(0);
                glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

                glDrawArrays(GL_TRIANGLES, 0, 3);

                glDisableVertexAttribArray(0);

                glfwSwapBuffers(window);
                glfwPollEvents();
            }

            glDeleteBuffers(1, &vertexBuffer);
            glDeleteProgram(programID);

            glfwDestroyWindow(window);
            glfwTerminate();
            exit(EXIT_SUCCESS);
        }

    I know it is not my shaders; they are super simple and I've checked them against GLFW 2.7, so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). In either case, the animations are known in full at load time; there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation-curve solver (reinventing Blender vertex groups and IPOs).

    Current Considerations:
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Each character then simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known Trade-Offs:
    - In 1 above: each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    - In 2 above: there would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.

    Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?

    Read the article

  • Set vertex position

    - by user1806687
    Can anyone tell me how to set the positions of model vertices? I want to be able to change the position of some of the vertices of a Model. Is there any way to make that happen, and to make the change visible at that moment?

    EDIT: Well, the thing is, I have a model, a cube, that is made up of four "thin" cubes (top, bottom, left side, right side), so I get a cube with a "hole" in the middle. I want to scale it on the Y axis. If I do Scale(0,2,0) it will scale the whole object, meaning it will double the Y size of the left and right sides, but also double the size of the top and bottom cubes, which I do not want. Same for the X axis: I want to double the size of the top and bottom cubes but not the left and right ones. Hope you can help.

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically:

    - The 1st would be represented by a line.
    - The 2nd would be represented by a square.
    - The 3rd would be represented by a cube.

    Simple enough until we get to the 4th -- it is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time.

    The Question: Now, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: how does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc... arrays in many different programming languages, but I want to know how that works.
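
    For what it's worth, to the machine there is nothing exotic about the 4th (or Nth) index: an N-dimensional array is just an array of (N-1)-dimensional arrays, stored flat in memory. A small C++ sketch (the sizes are arbitrary):

        #include <cstdio>

        int main()
        {
            // A 4D array is simply one more level of indexing.
            int a[2][3][4][5] = {};        // 2*3*4*5 = 120 ints, stored contiguously
            a[1][2][3][4] = 42;            // four indices select one element

            // The compiler turns the four indices into a single flat offset:
            //   offset(i, j, k, l) = ((i * 3 + j) * 4 + k) * 5 + l
            int* flat = &a[0][0][0][0];
            std::printf("%d\n", flat[((1 * 3 + 2) * 4 + 3) * 5 + 4]);   // prints 42
        }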

    Read the article

  • Vertex array with GLUTessellator

    - by user146780
    I notice that every tutorial on setting up the GLU tessellator and its callbacks uses glBegin, glEnd and a vertex callback which calls glVertex. It seems to me like this makes it run in immediate mode, right? That seems inefficient to me. How could I set it up to use a vertex array instead? Could I just take the output from the vertex callback, build a vertex array from that data, and then submit it the way I usually would? Or is there another, better way of doing this? Thanks
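
    Collecting the tessellator output into a client-side array and drawing it afterwards is a workable approach; below is a rough C++ sketch under the assumption that the tessellation is forced to emit independent triangles (registering an edge-flag callback makes GLU output GL_TRIANGLES only, no fans or strips). On Windows the callbacks would also need the CALLBACK/APIENTRY calling convention.

        #include <GL/glu.h>
        #include <vector>

        static std::vector<GLdouble> g_verts;   // x,y,z triples collected from the tessellator

        // Called once per emitted vertex; record it instead of calling glVertex.
        static void tessVertex(void* data)
        {
            const GLdouble* v = static_cast<const GLdouble*>(data);
            g_verts.insert(g_verts.end(), v, v + 3);
        }

        // Registering an edge-flag callback forces independent triangles.
        static void tessEdgeFlag(GLboolean) {}

        void TessellateAndDraw(GLUtesselator* tess, GLdouble (*contour)[3], int count)
        {
            gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)())tessVertex);
            gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)())tessEdgeFlag);

            g_verts.clear();
            gluTessBeginPolygon(tess, nullptr);
            gluTessBeginContour(tess);
            for (int i = 0; i < count; ++i)
                gluTessVertex(tess, contour[i], contour[i]);
            gluTessEndContour(tess);
            gluTessEndPolygon(tess);

            // Draw everything from a vertex array in one call.
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_DOUBLE, 0, g_verts.data());
            glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(g_verts.size() / 3));
            glDisableClientState(GL_VERTEX_ARRAY);
        }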

    Read the article

  • QuickGraph - is there an algorithm to find all parents (up to the root vertices) of a set of vertices?

    - by Greg
    Hi,

    In QuickGraph, is there an algorithm to find all parents (up to the root vertices) of a set of vertices? In other words, all vertices which have, somewhere under them (on the way to the leaf nodes), one or more of the input vertices. So if the vertices were nodes and the edges were a "depends on" relationship, find all nodes that would be impacted by a given set of nodes. If not, how hard is it to write one's own algorithm for this?
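
    QuickGraph aside, the underlying computation is just a reverse traversal from the input set; a language-agnostic sketch in C++ (the graph is assumed to be given as a child-to-parents adjacency list):

        #include <queue>
        #include <unordered_map>
        #include <unordered_set>
        #include <vector>

        // parentsOf[v] lists the direct parents of v (i.e. the edges reversed).
        // Returns every vertex that has one of the seed vertices somewhere under it,
        // i.e. every node "impacted by" the given set (the seeds themselves excluded).
        std::unordered_set<int> AllAncestors(
            const std::unordered_map<int, std::vector<int>>& parentsOf,
            const std::vector<int>& seeds)
        {
            std::unordered_set<int> result;
            std::queue<int> frontier;
            for (int s : seeds) frontier.push(s);

            while (!frontier.empty())
            {
                int v = frontier.front();
                frontier.pop();
                auto it = parentsOf.find(v);
                if (it == parentsOf.end()) continue;        // v is a root
                for (int p : it->second)
                    if (result.insert(p).second)            // first time we reach p
                        frontier.push(p);
            }
            return result;
        }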

    Read the article
