Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.


  • Separate shaders from HTML file in WebGL

    - by Chris Smith
    I'm ramping up on WebGL and was wondering what is the best way to specify my vertex and fragment shaders. Looking at some tutorials, the shaders are embedded directly in the HTML. (And referenced via an ID.) For example: <script id="shader_1-fs" type="x-shader/x-fragment"> precision highp float; void main(void) { // ... } </script> <script id="shader_1-vs" type="x-shader/x-vertex"> attribute vec3 aVertexPosition; uniform mat4 uMVMatrix; // ... My question is, is it possible to have my shaders referenced in a separate file? (Ideally as plain text.) I presume this is straightforward in JavaScript. Is there essentially a way to do this: var shaderText = LoadRemoteFileOnServer('/shaders/shader_1.txt');

    Read the article

  • Projection / view matrix: the object is bigger than it should be and depth does not affect vertices

    - by Francesco Noferi
    I'm currently trying to write a C 3D software rendering engine from scratch just for fun and to get some insight into what OpenGL does behind the scenes and what 90's programmers had to do on DOS. I have written my own matrix library and tested it without noticing any issues, but when I tried projecting the vertices of a simple 2x2 cube at 0,0 as seen by a basic camera at 0,0,10, the cube seems to appear way bigger than the application's window. If I scale the vertices' coordinates down by 8 times I can see a proper cube centered on the screen. This cube doesn't seem to be in perspective: when seen from the front, the back vertices perfectly overlap with the front ones, so I'm quite sure it's not correct. This is how I create the view and projection matrices (vec4_initd initializes the vectors with w=0, vec4_initw initializes the vectors with w=1): void mat4_lookatlh(mat4 *m, const vec4 *pos, const vec4 *target, const vec4 *updirection) { vec4 fwd, right, up; // fwd = norm(pos - target) fwd = *target; vec4_sub(&fwd, pos); vec4_norm(&fwd); // right = norm(cross(updirection, fwd)) vec4_cross(updirection, &fwd, &right); vec4_norm(&right); // up = cross(right, forward) vec4_cross(&fwd, &right, &up); // orientation and translation matrices combined vec4_initd(&m->a, right.x, up.x, fwd.x); vec4_initd(&m->b, right.y, up.y, fwd.y); vec4_initd(&m->c, right.z, up.z, fwd.z); vec4_initw(&m->d, -vec4_dot(&right, pos), -vec4_dot(&up, pos), -vec4_dot(&fwd, pos)); } void mat4_perspectivefovrh(mat4 *m, float fovdegrees, float aspectratio, float near, float far) { float h = 1.f / tanf(ftoradians(fovdegrees / 2.f)); float w = h / aspectratio; vec4_initd(&m->a, w, 0.f, 0.f); vec4_initd(&m->b, 0.f, h, 0.f); vec4_initw(&m->c, 0.f, 0.f, -far / (near - far)); vec4_initd(&m->d, 0.f, 0.f, (near * far) / (near - far)); } This is how I project my vertices: void device_project(device *d, const vec4 *coord, const mat4 *transform, int *projx, int *projy) { vec4 result; mat4_mul(transform, coord, &result); *projx = result.x * d->w + d->w / 2; *projy = result.y * d->h + d->h / 2; } void device_rendervertices(device *d, const camera *camera, const mesh meshes[], int nmeshes, const rgba *color) { int i, j; mat4 view, projection, world, transform, projview; mat4 translation, rotx, roty, rotz, transrotz, transrotzy; int projx, projy; // vec4_unity = (0.f, 1.f, 0.f, 0.f) mat4_lookatlh(&view, &camera->pos, &camera->target, &vec4_unity); mat4_perspectivefovrh(&projection, 45.f, (float)d->w / (float)d->h, 0.1f, 1.f); for (i = 0; i < nmeshes; i++) { // world matrix = translation * rotz * roty * rotx mat4_translatev(&translation, meshes[i].pos); mat4_rotatex(&rotx, ftoradians(meshes[i].rotx)); mat4_rotatey(&roty, ftoradians(meshes[i].roty)); mat4_rotatez(&rotz, ftoradians(meshes[i].rotz)); mat4_mulm(&translation, &rotz, &transrotz); // transrotz = translation * rotz mat4_mulm(&transrotz, &roty, &transrotzy); // transrotzy = transrotz * roty = translation * rotz * roty mat4_mulm(&transrotzy, &rotx, &world); // world = transrotzy * rotx = translation * rotz * roty * rotx // transform matrix mat4_mulm(&projection, &view, &projview); // projview = projection * view mat4_mulm(&projview, &world, &transform); // transform = projview * world = projection * view * world for (j = 0; j < meshes[i].nvertices; j++) { device_project(d, &meshes[i].vertices[j], &transform, &projx, &projy); device_putpixel(d, projx, projy, color); } } } This is how the cube and camera are initialized: // test mesh cube = &meshlist[0]; mesh_init(cube, "Cube", 8); 
cube->rotx = 0.f; cube->roty = 0.f; cube->rotz = 0.f; vec4_initw(&cube->pos, 0.f, 0.f, 0.f); vec4_initw(&cube->vertices[0], -1.f, 1.f, 1.f); vec4_initw(&cube->vertices[1], 1.f, 1.f, 1.f); vec4_initw(&cube->vertices[2], -1.f, -1.f, 1.f); vec4_initw(&cube->vertices[3], -1.f, -1.f, -1.f); vec4_initw(&cube->vertices[4], -1.f, 1.f, -1.f); vec4_initw(&cube->vertices[5], 1.f, 1.f, -1.f); vec4_initw(&cube->vertices[6], 1.f, -1.f, 1.f); vec4_initw(&cube->vertices[7], 1.f, -1.f, -1.f); // main camera vec4_initw(&maincamera.pos, 0.f, 0.f, 10.f); maincamera.target = vec4_zerow; and, just to be sure, this is how I compute matrix multiplications: void mat4_mul(const mat4 *m, const vec4 *va, vec4 *vb) { vb->x = m->a.x * va->x + m->b.x * va->y + m->c.x * va->z + m->d.x * va->w; vb->y = m->a.y * va->x + m->b.y * va->y + m->c.y * va->z + m->d.y * va->w; vb->z = m->a.z * va->x + m->b.z * va->y + m->c.z * va->z + m->d.z * va->w; vb->w = m->a.w * va->x + m->b.w * va->y + m->c.w * va->z + m->d.w * va->w; } void mat4_mulm(const mat4 *ma, const mat4 *mb, mat4 *mc) { mat4_mul(ma, &mb->a, &mc->a); mat4_mul(ma, &mb->b, &mc->b); mat4_mul(ma, &mb->c, &mc->c); mat4_mul(ma, &mb->d, &mc->d); }
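
    One thing worth checking, going only by the code quoted above: device_project maps the transformed x and y straight to pixels without dividing by the homogeneous w component, so there is no perspective at all, and the mapping multiplies the coordinate by the full width/height instead of half of it, which would also explain the oversized cube. A minimal sketch of the projection step with the perspective divide added, reusing the same vec4/mat4 helpers (the y flip assumes a y-down framebuffer):

    void device_project(device *d, const vec4 *coord, const mat4 *transform,
                        int *projx, int *projy)
    {
        vec4 result;
        mat4_mul(transform, coord, &result);
        if (result.w != 0.f) {                  /* perspective divide */
            result.x /= result.w;
            result.y /= result.w;
            result.z /= result.w;
        }
        /* x and y are now in [-1, 1]; map them to the viewport */
        *projx = (int)((result.x * 0.5f + 0.5f) * d->w);
        *projy = (int)((1.f - (result.y * 0.5f + 0.5f)) * d->h);
    }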

    Read the article

  • Rotate sprite to face 3D camera

    - by omikun
    I am trying to rotate a sprite so it is always facing a 3D camera. shaders->setUniform("camera", gCamera.matrix()); glm::mat4 scale = glm::scale(glm::mat4(), glm::vec3(5e5, 5e5, 5e5)); glm::vec3 look = gCamera.position(); glm::vec3 right = glm::cross(gCamera.up(), look); glm::vec3 up = glm::cross(look, right); glm::mat4 newTransform = glm::lookAt(glm::vec3(0), gCamera.position(), up) * scale; shaders->setUniform("model", newTransform); In the vertex shader: gl_Position = camera * model * vec4(vert, 1); The object will track the camera if I move the camera up or down, but if I rotate the camera around it, it will rotate in the other direction so I end up seeing its front twice and its back twice as I rotate around it 360. What am I doing wrong?
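
    A hedged observation: glm::lookAt builds a view (world-to-camera) matrix, so reusing it as a model matrix applies the inverse of the rotation you want, which matches the "rotates the wrong way" symptom. One common alternative is to build the model rotation directly from a camera-facing basis; a sketch (spritePos is a hypothetical sprite position, vec3(0) in your case):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build the billboard's rotation from the sprite-to-camera basis instead of
    // reusing a lookAt (view) matrix as a model matrix.
    glm::mat4 billboardModel(const glm::vec3& cameraPos, const glm::vec3& cameraUp,
                             const glm::vec3& spritePos, float scaleFactor)
    {
        glm::vec3 look  = glm::normalize(cameraPos - spritePos);      // sprite -> camera
        glm::vec3 right = glm::normalize(glm::cross(cameraUp, look));
        glm::vec3 up    = glm::cross(look, right);

        glm::mat4 rotation(1.0f);
        rotation[0] = glm::vec4(right, 0.0f);   // columns = the sprite's axes in world space
        rotation[1] = glm::vec4(up,    0.0f);
        rotation[2] = glm::vec4(look,  0.0f);

        return glm::translate(glm::mat4(1.0f), spritePos)
             * rotation
             * glm::scale(glm::mat4(1.0f), glm::vec3(scaleFactor));
    }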

    Read the article

  • *DX11, HLSL* - Colour as 4 floats or one UINT

    - by Paul
    With the DX11 pipeline, would it be much quicker for the vertex buffer to pass one single UINT with one byte per channel to the input assembler, as opposed to four floats? Then the vertex shader would convert the four bytes to four floats, which I guess is the required colour format for the pipeline. In this instance, colour accuracy isn't an issue. The vertex buffer would need to be updated many times per frame, so using a single UINT and saving 12 bytes for every vertex could well be worth it: quicker uploads to VRAM and also less memory used. But the cost is the extra shader work for every vertex to convert each 8 bits of the input UINT into a float. Anyone have an idea if it might be worth doing? Or, is it possible for the pipeline to be set to just internally use a four-byte colour format? The swap chain buffer has been initialised as DXGI_FORMAT_R8G8B8A8_UNORM, so ultimately that's how the colour will be written. Thanks!
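
    For what it's worth, the input assembler can usually do the unpacking for free: declare the colour element as DXGI_FORMAT_R8G8B8A8_UNORM in the input layout, store one packed 32-bit value per vertex, and the vertex shader receives a normalised float4 with no manual conversion. A sketch of such a layout (semantic names and offsets are assumptions):

    #include <d3d11.h>

    // 12-byte float3 position followed by a 4-byte packed colour. The UNORM format
    // makes the input assembler expand the four bytes to a normalised float4
    // before the vertex shader runs, so no per-vertex unpacking code is needed.
    const D3D11_INPUT_ELEMENT_DESC layout[] = {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };
    // Vertex stride drops from 28 bytes (float3 + float4) to 16 bytes.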

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one for each of the N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset, see point 3) When dealing with logical objects (player, tree, cube, etc.), should I always use the same shader or should I customize (or be able to customize) the shaders for each object? Considering an entity class, should I create and define the shader at object initialization? When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define a uniform offset to pass to the shader to calculate its real position? Could you give an example of Data Oriented Design applied to a generic zombie class? Is the following good? Zombielist class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie }
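
    Not an authoritative answer, but a common OpenGL 2.1 pattern is exactly what point 3 hints at: one shared VBO for the zombie model, one shader, and a per-instance uniform updated between draw calls (GL 2.1 has no instancing in core). A rough sketch against the class above; the attribute location, the uniform location and the positions member are assumptions:

    // Draw every zombie from the same VBO, changing only a per-instance uniform.
    // This is still one glDrawArrays() per zombie, but the vertex data is uploaded once.
    void ZombieList::draw(GLuint program, GLint offsetLoc, GLsizei vertexCount)
    {
        glUseProgram(program);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(0);                         // position attribute, bound to 0
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

        for (std::size_t i = 0; i < positions.size(); ++i) {
            // "uOffset" is a hypothetical vec3 uniform added to each vertex in the shader;
            // 'positions' is the per-zombie position vector from the class sketch above.
            glUniform3f(offsetLoc, positions[i].x, positions[i].y, positions[i].z);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        }

        glDisableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }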

    Read the article

  • UTF-8 encoding problem with Flash, MySQL and PHP

    - by alibhp
    Hi, as you may know, I am programming an online game using Flash. I am connecting my Flash 8 movie to a MySQL database through PHP. I am doing very well with that, and I have everything working fine. The problems come when I try to insert (using an INSERT SQL statement) data into the database that is non-English, in other words UTF-8 data. I read a lot of articles about this and found and applied the following: 1. In PHP4 you need to tell PHP to use UTF-8 when calling the xml_parser_create() function, whereas in PHP5 that is done automatically; even so, I told PHP5 to use UTF-8 when calling the function. 2. Added the UTF-8 header to the XML sent to PHP from Flash. 3. Forced Flash to use UTF-8 encoding in the preference options. 4. Set the encoding in MySQL to UTF-8 (utf8_unicode_ci with the InnoDB engine). I can read and insert data in other languages correctly from phpMyAdmin as well. I did all of that in my code, and I still can't insert such data. One more strange thing: when I open the same link that the Flash movie uses, with the XML that the Flash movie creates, in the browser (Google Chrome), the data is inserted into the database correctly! I am about to go crazy over this. What am I missing? What causes the problem? Thank you in advance.

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there was an implementation already available, even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that only two are returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
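
    I don't know of a ready-made routine, but one cheap greedy approach is run growing: take the first uncovered set pixel, extend a run to the right as far as possible, then extend that run downward while every pixel in it is set, and emit the resulting rectangle. It cannot produce overlapping covers (for the cross shape above it yields three rectangles rather than the ideal two), but it is far better than one rectangle per pixel. A sketch, assuming the mask is a row-major grid of 0/1 values:

    #include <vector>

    struct Rect { int x, y, w, h; };

    // Greedy cover: grow a horizontal run, then grow it downwards while the whole
    // run stays set, mark those pixels as used, and emit one rectangle per run.
    std::vector<Rect> coverMask(const std::vector<std::vector<int>>& mask) {
        std::vector<Rect> out;
        int h = (int)mask.size(), w = h ? (int)mask[0].size() : 0;
        std::vector<std::vector<char>> used(h, std::vector<char>(w, 0));
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                if (!mask[y][x] || used[y][x]) continue;
                int runW = 0;                     // extend to the right
                while (x + runW < w && mask[y][x + runW] && !used[y][x + runW]) ++runW;
                int runH = 1;                     // extend downwards while the full run is set
                while (y + runH < h) {
                    bool ok = true;
                    for (int i = 0; i < runW; ++i)
                        if (!mask[y + runH][x + i] || used[y + runH][x + i]) { ok = false; break; }
                    if (!ok) break;
                    ++runH;
                }
                for (int yy = y; yy < y + runH; ++yy)
                    for (int xx = x; xx < x + runW; ++xx) used[yy][xx] = 1;
                out.push_back({x, y, runW, runH});
            }
        }
        return out;
    }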

    Read the article

  • Alchemy-like game for the web, open source. Any ideas for element combinations?

    - by JohnDel
    I created a web game like the Android game Alchemy. It's open source, and in the back-end you can create your own elements, i.e. your own game. I was wondering what element ideas would be good to implement as a prototype / demo? Some ideas are: colors, programming languages, chemical compounds, the same elements as the original Alchemy, or the evolution of biological organisms. What do you think? Any specific combination ideas?

    Read the article

  • Cocos2d-x 3.0 animation frame by frame

    - by Narek
    As I know, animations are actions. Now I need to play an animation frame by frame. Say I have an animation of N frames; each frame should be played after a delay of t. I want to play the animation frame by frame, with each frame advancing the animation's state. How can I do this? And, in general, what about playing actions frame by frame, advancing the state each time? I ask because I use an ECS, and I deal with frames. P.S. I want to do something like this: Action * a = MoveTo(initialPoint, finalPoint, durationOfAnimation); a->play(0.001 seconds); a->play(0.003 seconds); a->play(0.02 seconds); a->play(0.67 seconds); a->play(0.06 seconds); And see the animation.
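
    One hedged possibility in cocos2d-x 3.x: instead of handing the action to the ActionManager, keep a retained pointer and drive it yourself by calling step(dt) from your ECS update; step() advances the elapsed time and calls update(progress) internally. Roughly (sprite, finalPoint and dtInSeconds are placeholders):

    // Bind the action to a node without scheduling it, then advance it manually.
    auto* move = cocos2d::MoveTo::create(durationOfAnimation, finalPoint);
    move->retain();                     // keep it alive outside the ActionManager
    move->startWithTarget(sprite);      // 'sprite' is whatever Node the ECS controls

    // ... once per frame, inside your own update loop:
    move->step(dtInSeconds);            // advances the action by dt
    if (move->isDone())
        move->release();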

    Read the article

  • Resolution independent physics

    - by user46877
    I'm making a game like Doodlejump but don't know how to make the physics scale on multiple resolutions. I also can't find anything related to this on Google. Right now I'm scaling the game using letterboxing and tested scaling the jump height with this code: gravity = graphics.getHeight() * 0.001f; jumpVel = graphics.getHeight() * -0.04f; ... velY += gravity; y += velY; But if I test this on my smartphone or emulator with different resolutions, I always get a slightly different jump height. I know that Farseer is resolution independent. How can I replicate this in my game? Thanks in advance.

    Read the article

  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) using OpenGL with lighting, but about half of it is drawn in black. I set up the light as such: glEnable(GL_LIGHTING); glShadeModel(GL_SMOOTH); GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f}; glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor); glEnable(GL_LIGHT0); GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f}; GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f}; glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0); glLightfv(GL_LIGHT0, GL_POSITION, lightPos0); The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO, and they are calculated using lib3ds_mesh_calculate_vertex_normals: std::vector<std::array<float, 3>> normals; for (std::size_t i = 0; i < model->nmeshes; ++i) { auto& mesh = *model->meshes[i]; std::vector<float[3]> vertex_normals(mesh.nfaces * 3); lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data()); for (std::size_t j = 0; j < mesh.nfaces; ++j) { auto& face = mesh.faces[j]; normals.push_back(make_array(vertex_normals[j])); } } glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_); glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(decltype(normals)::value_type), normals.data(), GL_STATIC_DRAW); The problem isn't the vertices; the model is drawn correctly when drawing it as a wireframe. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?
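
    One thing that stands out in the snippet above (treat this as a guess): lib3ds_mesh_calculate_vertex_normals fills nfaces * 3 normals, one per face corner, which is exactly what the vector is sized for, but the loop only pushes one normal per face, so the normal buffer ends up a third of the size of the vertex buffer and falls out of step with it. A possible fix for the indexing, keeping the rest of the setup the same:

    // Store all three corner normals of every face so the normal VBO lines up
    // one-to-one with the vertex VBO. (std::array also avoids the problematic
    // std::vector of raw float[3] arrays.)
    std::vector<std::array<float, 3>> normals;
    for (std::size_t i = 0; i < model->nmeshes; ++i) {
        auto& mesh = *model->meshes[i];
        std::vector<std::array<float, 3>> faceNormals(mesh.nfaces * 3);
        lib3ds_mesh_calculate_vertex_normals(
            &mesh, reinterpret_cast<float(*)[3]>(faceNormals.data()));
        for (std::size_t j = 0; j < mesh.nfaces * 3u; ++j)
            normals.push_back(faceNormals[j]);   // three normals per face, one per corner
    }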

    Read the article

  • How can I move along an angled collision at a constant speed?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (In actuality, the right side of a parallelogram). My collision detection and resolution code works fine for the purposes of preventing a gameobject from entering into the space of the Triangle, instead directing the movement along the edge. The trouble is, the maximum speed along the x and y axis is not equivalent in my game, and moving along the Y axis (up or down) should take twice as long as an equivalent distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path above progresses twice as fast. What can I do in my collision resolution to make sure that the speedlimit for Y axis movement is obeyed in the latter case? Collision Resolution for this case below (vecInput and velocity are the position and velocity vectors of the game object): // y = mx+c lowY = 2*vecInput.x + parag.rightYIntercept ; ... else { // y = mx+c // vecInput.y = 2(x) + RightYIntercept // (vecInput.y - RightYIntercept) / 2 = x; //if velocity.Y (positive) greater than velocity.X (negative) //pushing from bottom, so push right. if(velocity.y > -1*velocity.x) { vecInput = new Vector2((vecInput.y - parag.rightYIntercept)/2, vecInput.y); Debug.Log("adjusted rightwards"); } else { vecInput = new Vector2( vecInput.x, lowY); Debug.Log("adjusted downwards"); } }

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • Android, how important is deltaTime?

    - by iQue
    I'm making a game that is getting pretty big, and sometimes my thread has to skip a frame. So far I'm not using deltaTime for setting the speed of my different objects in the game, because it's still not a big enough game for it to matter, imo. But it's getting bigger than I planned, so my question is: how important is delta time? If I should use delta time, there is a problem: since speedX and speedY are integers (they have to be for Eclipse to let you make a rectangle of them), I can't add delta time very cleanly as far as I understand, but I might be wrong. I've tried adding deltaTime to the code below, and sometimes my enemies just don't move after spawning; they just stand there and run in the same place. I'll add some code for how I set / use speed: public void update(int dx, int dy) { double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x); x +=dx * Math.cos(Math.toRadians(theta)); y +=dy * Math.sin(Math.toRadians(theta)); currentFrame = ++currentFrame % BMP_COLUMNS; } public void draw(Canvas canvas) { int srcX = currentFrame * width; int srcY = 1 * height; Rect src = new Rect(srcX, srcY, srcX + width, srcY + height); Rect dst = new Rect(x, y, x + width, y + height); canvas.drawBitmap(bitmap, src, dst, null); } So if someone with some experience with this has any thoughts, please share. Thank you! Changed code: public void update(int dx, int dy, float delta) { double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x); double speedX = delta * dx * Math.cos(Math.toRadians(theta)); double speedY = delta * dy * Math.sin(Math.toRadians(theta)); x += speedX; y += speedY; currentFrame = ++currentFrame % BMP_COLUMNS; } public void draw(Canvas canvas) { int srcX = currentFrame * width; int srcY = 1 * height; Rect src = new Rect(srcX, srcY, srcX + width, srcY + height); Rect dst = new Rect(x, y, x + width, y + height); canvas.drawBitmap(bitmap, src, dst, null); } With this code my enemies move like before, except they won't move to the right (won't increment x); all other directions work.

    Read the article

  • UDP Code client server architecture

    - by GameBuilder
    Hi, I have developed a game on Android. Now I want to play it over WiFi or 3G. I have game packets which I want to send from client 1 (mobile) to the server and then on to client 2 (mobile). I don't know how to write the Java code to send the playPackets continuously to the server and to receive them continuously from the server on the clients. I guess I have to use two threads, one for sending and one for receiving. Can someone help me with the code, or the procedure for writing it? Thanks in advance.

    Read the article

  • Drawing a textured triangle with CPU instead of GPU

    - by Jenko
    I understand the benefits of GPU rendering and such, but for a certain limited application I need to render textured triangles purely on the CPU. I've built a 3D engine capable of object handling, transform, projection, culling and the like ... now all I need is a little code snippet that draws a single textured triangle onto a bitmap... any language accepted! Inputs: Texture bitmap, Triangle U/V/W coords, Triangle X/Y screen coords Output: The textured triangle drawn at the given screen coords I've currently been using a platform function to draw triangles to screen, but I'm looking to handle it myself to speed up the process.
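
    Since any language is accepted, here is a rough C++ sketch of the classic edge-function approach: walk the triangle's bounding box, compute barycentric weights per pixel, and use them to interpolate U/V (affine only, no perspective correction and no W here). The 32-bit RGBA, row-major buffer layouts are assumptions:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    struct Vertex { float x, y, u, v; };

    // Signed "edge function": same sign for all pixels on one side of edge a->b.
    static float edge(const Vertex& a, const Vertex& b, float px, float py) {
        return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
    }

    void drawTexturedTriangle(uint32_t* fb, int fbW, int fbH,
                              const uint32_t* tex, int texW, int texH,
                              Vertex v0, Vertex v1, Vertex v2) {
        float area = edge(v0, v1, v2.x, v2.y);
        if (area == 0.0f) return;                                // degenerate triangle
        if (area < 0.0f) { std::swap(v1, v2); area = -area; }    // enforce a consistent winding

        int minX = std::max(0,       (int)std::floor(std::min({v0.x, v1.x, v2.x})));
        int maxX = std::min(fbW - 1, (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
        int minY = std::max(0,       (int)std::floor(std::min({v0.y, v1.y, v2.y})));
        int maxY = std::min(fbH - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                float px = x + 0.5f, py = y + 0.5f;
                float w0 = edge(v1, v2, px, py);                 // barycentric weights
                float w1 = edge(v2, v0, px, py);
                float w2 = edge(v0, v1, px, py);
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;        // pixel outside the triangle
                w0 /= area; w1 /= area; w2 /= area;
                float u = w0 * v0.u + w1 * v1.u + w2 * v2.u;     // interpolate U/V
                float v = w0 * v0.v + w1 * v1.v + w2 * v2.v;
                int tx = std::min(texW - 1, std::max(0, (int)(u * (texW - 1))));
                int ty = std::min(texH - 1, std::max(0, (int)(v * (texH - 1))));
                fb[y * fbW + x] = tex[ty * texW + tx];           // nearest-neighbour sample
            }
        }
    }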

    Read the article

  • Creating blur with an alpha channel, incorrect inclusion of black

    - by edA-qa mort-ora-y
    I'm trying to do a blur on a texture with an alpha channel. Using a typical approach (two-pass, Gaussian weighting) I end up with a very dark blur. The reason is because the blurring does not properly account for the alpha channel. It happily blurs in the invisible part of the image, which happens to be black, and thus results in a very dark blur. Is there a technique to blur that properly accounts for the alpha channel?
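
    The usual trick is to blur premultiplied colour: weight each tap's RGB by its alpha, accumulate alpha separately, and divide the colour back out at the end, so fully transparent black texels contribute nothing to the colour average. A small CPU-style sketch of one blur tap (the same idea works inside a shader):

    struct RGBA { float r, g, b, a; };

    // One output sample of a separable blur: 'src' holds the n taps under the
    // kernel and 'weight' the matching kernel weights (assumed to sum to 1).
    RGBA blurTap(const RGBA* src, const float* weight, int n) {
        float r = 0, g = 0, b = 0, a = 0;
        for (int i = 0; i < n; ++i) {
            const RGBA& p = src[i];
            r += weight[i] * p.r * p.a;   // premultiply colour by alpha before weighting
            g += weight[i] * p.g * p.a;
            b += weight[i] * p.b * p.a;
            a += weight[i] * p.a;         // blur the alpha channel itself as usual
        }
        if (a > 0.0f) { r /= a; g /= a; b /= a; }   // un-premultiply the result
        return { r, g, b, a };
    }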

    Read the article

  • SDL to SFML simple question

    - by ultifinitus
    Hey! I've been working on a game in c++ for about a week and a half, and I've been using SDL. However, my current engine only needs the following from whatever library I use: enable double buffering load an image from path into something that I can apply to the screen apply an image to the screen with a certain x,y enable transparency on an image (possibly) image clipping, for sprite sheets. I am fairly sure that SFML has all of this functionality, I'm just not positive. Will someone confirm my suspicions? Also I have one or two questions regarding SFML itself. Do I have to do anything to enable hardware accelerated rendering? How quick is SFML at blending alpha values? (sorry for the less than intelligent question!)
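
    For what it's worth, SFML 2.x appears to cover all five points; a quick sketch (the file name is made up). clear/draw/display is inherently double-buffered, PNG alpha is honoured automatically, setTextureRect() does the sprite-sheet clipping, and rendering goes through OpenGL, so it is hardware accelerated without extra setup:

    #include <SFML/Graphics.hpp>

    int main() {
        sf::RenderWindow window(sf::VideoMode(800, 600), "SFML test");
        sf::Texture texture;
        if (!texture.loadFromFile("sheet.png")) return 1;    // load an image from a path
        sf::Sprite sprite(texture);
        sprite.setTextureRect(sf::IntRect(0, 0, 32, 32));    // clip one frame from the sheet
        sprite.setPosition(100.f, 150.f);                    // apply at a certain x, y
        while (window.isOpen()) {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed) window.close();
            window.clear();
            window.draw(sprite);
            window.display();                                // back-buffer flip
        }
    }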

    Read the article

  • Typical Method Of Building Puzzle Levels

    - by Josh Kahane
    Hi, I am designing a puzzle game for the iPhone, and I was wondering about levels, since most puzzle games consist of the player progressing through many of them. For example, Angry Birds has over 100 levels. Once the basis of the game is made, how do developers typically go about building their levels? Do they generally build each one more or less from scratch, work off their own template, or have some other method they use to tailor the levels? I imagine building so many levels is a long process, certainly if each one is built individually. Do they do this, or do they have a method which speeds it up once they have their basis? Thanks.

    Read the article

  • The View-Matrix and Alternative Calculations

    - by P. Avery
    I'm working on a radiosity processor in DirectX 9. The process requires that the camera be placed at the center of a mesh face and a 'screenshot' be taken facing 5 different directions...forward...up...down...left...right... ...The problem is that when the mesh face is facing up( look vector: 0, 1, 0 )...a view matrix cannot be determined using standard trigonometry functions: Matrix4 LookAt( Vector3 eye, Vector3 target, Vector3 up ) { // The "look-at" vector. Vector3 zaxis = normal(target - eye); // The "right" vector. Vector3 xaxis = normal(cross(up, zaxis)); // The "up" vector. Vector3 yaxis = cross(zaxis, xaxis); // Create a 4x4 orientation matrix from the right, up, and at vectors Matrix4 orientation = { xaxis.x, yaxis.x, zaxis.x, 0, xaxis.y, yaxis.y, zaxis.y, 0, xaxis.z, yaxis.z, zaxis.z, 0, 0, 0, 0, 1 }; // Create a 4x4 translation matrix by negating the eye position. Matrix4 translation = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -eye.x, -eye.y, -eye.z, 1 }; // Combine the orientation and translation to compute the view matrix return ( translation * orientation ); } The above function comes from http://3dgep.com/?p=1700... ...Is there a mathematical approach to this problem? Edit: A problem occurs when setting the view matrix to up or down directions, here is an example of the problem when facing down: D3DXVECTOR4 vPos( 3, 3, 3, 1 ), vEye( 1.5, 3, 3, 1 ), vLook( 0, -1, 0, 1 ), vRight( 1, 0, 0, 1 ), vUp( 0, 0, 1, 1 ); D3DXMATRIX mV, mP; D3DXMatrixPerspectiveFovLH( &mP, D3DX_PI / 2, 1, 0.5f, 2000.0f ); D3DXMatrixIdentity( &mV ); memcpy( ( void* )&mV._11, ( void* )&vRight, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._21, ( void* )&vUp, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._31, ( void* )&vLook, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._41, ( void* )&(-vEye), sizeof( D3DXVECTOR3 ) ); D3DXVec4Transform( &vPos, &vPos, &( mV * mP ) ); Results: vPos = D3DXVECTOR3( 1.5, -6, -0.5, 0 ) - this vertex is not properly processed by shader as the homogenous w value is 0 it cannot be normalized to a position within device space...
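
    One low-tech workaround, sketched against the LookAt() helper quoted above: detect when the view direction is (nearly) parallel to the chosen up vector and substitute a different up vector, so the cross products never collapse to zero. A dot() helper is assumed alongside the existing normal() and cross():

    // Fall back to another world-up when looking straight up or down,
    // so the right/up basis vectors stay well defined.
    Matrix4 LookAtSafe(Vector3 eye, Vector3 target, Vector3 up)
    {
        Vector3 fwd = normal(target - eye);
        if (fabsf(dot(fwd, up)) > 0.999f)       // view direction ~parallel to up
            up = Vector3(0, 0, 1);              // any axis not parallel to fwd works
        return LookAt(eye, target, up);
    }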

    Read the article

  • Why doesn't Unity's OnCollisionEnter give me surface normals, and what's the most reliable way to get them?

    - by michael.bartnett
    Unity's on collision event gives you a Collision object that gives you some information about the collision that happened (including a list of ContactPoints with hit normals). But what you don't get is surface normals for the collider that you hit. Here's a screenshot to illustrate. The red line is from ContactPoint.normal and the blue line is from RaycastHit.normal. Is this an instance of Unity hiding information to provide a simplified API? Or do standard 3D realtime collision detection techniques just not collect this information? And for the second part of the question, what's a surefire and relatively efficient way to get a surface normal for a collision? I know that raycasting gives you surface normals, but it seems I need to do several raycasts to accomplish this for all scenarios (maybe a contact point/normal combination misses the collider on the first cast, or maybe you need to do some average of all the contact points' normals to get the best result). My current method: Back up the Collision.contacts[0].point along its hit normal Raycast down the negated hit normal for float.MaxValue, on Collision.collider If that fails, repeat steps 1 and 2 with the non-negated normal If that fails, try steps 1 to 3 with Collision.contacts[1] Repeat 4 until successful or until all contact points exhausted. Give up, return Vector3.zero. This seems to catch everything, but all those raycasts make me queasy, and I'm not sure how to test that this works for enough cases. Is there a better way?

    Read the article

  • Shadow Mapping and Transparent Quads

    - by CiscoIPPhone
    Shadow mapping uses the depth buffer to calculate where shadows should be drawn. My problem is that I'd like some semi transparent textured quads to cast shadows - for example billboarded trees. As the depth value will be set across all of the quad and not just the visible parts it will cast a quad shadow, which is not what I want. How can I make my transparent quads cast correct shadows using shadow mapping?

    Read the article

  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet but I haven't grasped why it is the most common way of dealing with sprites in 2D games. I have started out with 2D sprite rendering in the few demo applications I've made by dealing with each animation frame for any given sprite type as its own texture, and this collection of textures is stored in a dictionary. This seems to work for me, and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual pngs. Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games which just deal with a few dozen 50x100px rectangles? The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted. All I know is that textures are bound to the graphics device when they are loaded, then during the game loop, the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them, I just don't understand why.

    Read the article

  • Can and should a game design be patented?

    - by Christian
    I have an idea for a game that I want to develop and that I feel is unique, and I'm wondering if I should patent it. I read on the web that games can be patented, but just because it can be done doesn't mean that it makes sense to do it. I actually don't really want to patent it (it's expensive, a hassle, and I don't believe in the patenting of ideas... unless it's something truly revolutionary). However, I'm concerned a bigger company, with more experienced game designers and developers, could come along and steal the idea.

    Read the article

  • Arbitrary Rotation about a Sphere

    - by Der
    I'm coding a mechanic which allows a user to move around the surface of a sphere. The position on the sphere is currently stored as theta and phi, where theta is the angle between the z-axis and the xz projection of the current position (i.e. rotation about the y axis), and phi is the angle from the y-axis to the position. I explained that poorly, but it is essentially theta = yaw, phi = pitch Vector3 position = new Vector3(0,0,1); position.X = (float)Math.Sin(phi) * (float)Math.Sin(theta); position.Y = (float)Math.Sin(phi) * (float)Math.Cos(theta); position.Z = (float)Math.Cos(phi); position *= r; I believe this is accurate, however I could be wrong. I need to be able to move in an arbitrary pseudo two dimensional direction around the surface of a sphere at the origin of world space with radius r. For example, holding W should move around the sphere in an upwards direction relative to the orientation of the player. I believe I should be using a Quaternion to represent the position/orientation on the sphere, but I can't think of the correct way of doing it. Spherical geometry is not my strong suit. Essentially, I need to fill the following block: public void Move(Direction dir) { switch (dir) { case Direction.Left: // update quaternion to rotate left break; case Direction.Right: // update quaternion to rotate right break; case Direction.Up: // update quaternion to rotate upward break; case Direction.Down: // update quaternion to rotate downward break; } }
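
    Not XNA-specific, but the quaternion version of this is fairly mechanical: store the player's frame as a single orientation quaternion, rotate it about its own local right/up axes on input, and recover the position by rotating a fixed reference point out to radius r. A sketch using GLM for the math; signs and axes are assumptions and may need flipping for your coordinate conventions:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    enum class Direction { Left, Right, Up, Down };

    // One orientation quaternion is the whole state: it carries both where the
    // player is on the sphere and which way "up" is for them.
    glm::quat orientation = glm::quat(1.0f, 0.0f, 0.0f, 0.0f); // identity (w, x, y, z)
    const float radius = 10.0f;   // sphere radius r
    const float step   = 0.02f;   // radians moved per input event

    void move(Direction dir)
    {
        glm::vec3 right = orientation * glm::vec3(1, 0, 0);   // player's local axes,
        glm::vec3 up    = orientation * glm::vec3(0, 1, 0);   // expressed in world space
        switch (dir) {
            case Direction::Left:  orientation = glm::angleAxis(-step, up)    * orientation; break;
            case Direction::Right: orientation = glm::angleAxis( step, up)    * orientation; break;
            case Direction::Up:    orientation = glm::angleAxis(-step, right) * orientation; break;
            case Direction::Down:  orientation = glm::angleAxis( step, right) * orientation; break;
        }
    }

    // Current position: the reference point (0, 0, r) rotated onto the sphere.
    glm::vec3 position() { return orientation * glm::vec3(0, 0, radius); }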

    Read the article
