Search Results

Search found 28914 results on 1157 pages for 'cloud development'.

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this:

      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
      gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
      gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);

      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
      gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
      gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0);

    I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library takes quite a long time. I'm not sure why this is happening, since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check?

    EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of these simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop then just does this:

      gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
      gl.glEnableClientState(GL.GL_INDEX_ARRAY);
      gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
      gl.glMatrixMode(GL.GL_MODELVIEW);
      gl.glLoadIdentity();
      // ... camera and transform related code ...
      gl.glEnableVertexAttribArray(0);
      gl.glEnable(GL.GL_TEXTURE_2D);
      gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT);
      gl.glEnable(GL.GL_ALPHA_TEST);
      // ... bind texture ...
      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
      gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
      gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);
      gl.glDisable(GL.GL_TEXTURE_2D);
      gl.glDisable(GL.GL_ALPHA_TEST);
      gl.glDisableVertexAttribArray(0);
      gl.glFlush();

    Here the first buffer contains 12 million floats (the x, y, z coords of the 4 million textures) and the second (element) buffer contains 4 million integers; in this simple example they are simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can).

    My buffers are generated by the following code:

      gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
      gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW);
      gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0);

    and:

      gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
      gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW);

    ADDITIONAL INFO: Here is my initialization code:

      gl.setSwapInterval(1); // also tried 0
      gl.glShadeModel(GL.GL_SMOOTH);
      gl.glClearDepth(1.0f);
      gl.glEnable(GL.GL_DEPTH_TEST);
      gl.glDepthFunc(GL.GL_LESS);
      gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST);
      gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0);
      gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0);
      gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0);
      gl.glPointSize(POINT_SIZE);
      gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
      gl.glEnable(GL.GL_POINT_SPRITE);
      gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f);

    Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. That seems obvious and intuitive to me, but I thought it might give a hint to someone else.
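
    One way to verify what the context is actually using is to query the GL identification strings; software pipelines usually identify themselves in the renderer string. A minimal sketch, assuming the JOGL GL object gl is in scope inside the draw loop:

      // Software renderers typically report "GDI Generic" (Windows) or a
      // Mesa software rasterizer here instead of the GPU vendor's name.
      String vendor   = gl.glGetString(GL.GL_VENDOR);
      String renderer = gl.glGetString(GL.GL_RENDERER);
      String version  = gl.glGetString(GL.GL_VERSION);
      System.out.println(vendor + " | " + renderer + " | " + version);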

  • Surface of Revolution with 3D surface

    - by user5584
    I have to use this function to get a surface of revolution (homework):

      newVertex = (oldVertex.y, someFunc1(oldVertex.x, oldVertex.y), someFunc2(oldVertex.x, oldVertex.y));

    As far as I know (FIXME), a surface of revolution means rotating a (2D) curve around an axis in 3D. But this vertex computation gives a 3D plane (FIXME again :D), so the rotation isn't obvious. Am I misunderstanding something?
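
    For reference, a surface of revolution is usually parametrized like this (a general note, not taken from the assignment): rotating the curve (x(t), y(t)) around the x-axis gives

      S(t, \theta) = \left( x(t),\; y(t)\cos\theta,\; y(t)\sin\theta \right), \qquad \theta \in [0, 2\pi)

    Since the homework formula puts oldVertex.y in the first slot, the axes are presumably swapped relative to this convention, with someFunc1 and someFunc2 playing the roles of the cosine and sine terms and the rotation angle supplied per vertex.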

  • Converting from different handedness coordinate systems

    - by SirYakalot
    I am currently porting a demo from XNA to DirectX which, as I understand it, use coordinate systems of opposite handedness. What are the things I need to bear in mind when converting between the two? I understand not everything needs to be changed. Also, I notice that many of the 3D math functions in some of the Direct3D libraries have right-handed and left-handed alternatives. Would it be better to just use these?
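
    For what it's worth, the usual recipe when converting between handednesses (a general note, not specific to either API) is to negate the z components of positions, normals, and translations, flip the triangle winding order, and conjugate each transform with the mirror matrix:

      M_{LH} = S \, M_{RH} \, S, \qquad S = \operatorname{diag}(1, 1, -1, 1)

    Because S is its own inverse, applying it on both sides converts a right-handed transform into its left-handed equivalent.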

  • Taking fixed direction on hemisphere and project to normal (openGL)

    - by Maik Xhani
    I am trying to perform sampling using a hemisphere around a surface normal. I want to experiment with fixed directions (and maybe jitter slightly between frames). So I have these directions:

      vec3 sampleDirections[6] = {vec3(0.0f, 1.0f, 0.0f),
                                  vec3(0.0f, 0.5f, 0.866025f),
                                  vec3(0.823639f, 0.5f, 0.267617f),
                                  vec3(0.509037f, 0.5f, -0.700629f),
                                  vec3(-0.509037f, 0.5f, -0.700629f),
                                  vec3(-0.823639f, 0.5f, 0.267617f)};

    Now I want the first direction to be projected onto the normal and the others accordingly. I tried these two pieces of code; both fail. This is what I used for random sampling (it doesn't seem to work well; the samples seem to be biased towards a certain direction), and I just used one of the fixed directions instead of s (when I used it with a fixed direction I didn't use theta and phi):

      vec3 CosWeightedRandomHemisphereDirection( vec3 n, float rand1, float rand2 )
      {
          float theta = acos(sqrt(1.0f-rand1));
          float phi = 6.283185f * rand2;
          vec3 s = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
          vec3 v = normalize(cross(n,vec3(0.0072, 1.0, 0.0034)));
          vec3 u = cross(v, n);
          u = s.x*u;
          v = s.y*v;
          vec3 w = s.z*n;
          vec3 direction = u+v+w;
          return normalize(direction);
      }

    EDIT: This is the new code:

      vec3 FixedHemisphereDirection( vec3 n, vec3 sampleDir)
      {
          vec3 x;
          vec3 z;
          if(abs(n.x) < abs(n.y)){
              if(abs(n.x) < abs(n.z)){
                  x = vec3(1.0f,0.0f,0.0f);
              }else{
                  x = vec3(0.0f,0.0f,1.0f);
              }
          }else{
              if(abs(n.y) < abs(n.z)){
                  x = vec3(0.0f,1.0f,0.0f);
              }else{
                  x = vec3(0.0f,0.0f,1.0f);
              }
          }
          z = normalize(cross(x,n));
          x = cross(n,z);
          mat3 M = mat3( x.x, n.x, z.x,
                         x.y, n.y, z.y,
                         x.z, n.z, z.z);
          return M*sampleDir;
      }

    So if my n = (0,0,1) and my sampleDir = (0,1,0), shouldn't M*sampleDir be (0,0,1)? Because that is what I was expecting.
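
    One detail worth checking here (an observation about GLSL semantics, not a confirmed fix for this code): GLSL matrix constructors consume their arguments in column-major order, so the mat3 above has columns (x.x, n.x, z.x), (x.y, n.y, z.y), (x.z, n.z, z.z), which is the transpose of the tangent/normal/bitangent basis the code appears to intend. A sketch of the column-based construction:

      // mat3(a, b, c) with vec3 arguments makes a, b, c the COLUMNS.
      mat3 TBN = mat3(x, n, z);      // columns: tangent, normal, bitangent
      vec3 world = TBN * sampleDir;  // with n = (0,0,1), maps (0,1,0) to (0,0,1)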

  • XNA 2D line-of-sight check

    - by bionicOnion
    I'm working on a top-down shooter in XNA, and I need to implement line-of-sight checking. I've come up with a solution that seems to work, but I get the nagging feeling that it won't be efficient enough to do every frame for multiple calls (the game already hiccups slightly at about 10 calls per frame). The code is below, but my general plan was to create a series of rectangles with a width and height of zero to act as points along the sight line, and then check whether any of these rectangles intersects a ClutterObject (an interface I defined for things like walls or other obstacles), after first screening out any that can't possibly be in the line of sight (i.e. behind the viewer) or are too far away (a concession I made for efficiency).

      public static bool LOSCheck(Vector2 pos1, Vector2 pos2)
      {
          Vector2 currentPos = pos1;
          Vector2 perMove = (pos2 - pos1);
          perMove.Normalize();
          HashSet<ClutterObject> clutter = new HashSet<ClutterObject>();
          foreach (Room r in map.GetRooms())
          {
              if (r != null)
              {
                  foreach (ClutterObject c in r.GetClutter())
                  {
                      if (c != null && !(c.GetRectangle().X * perMove.X < 0)
                                    && !(c.GetRectangle().Y * perMove.Y < 0))
                      {
                          Vector2 cVector = new Vector2(c.GetRectangle().X, c.GetRectangle().Y);
                          if ((cVector - pos1).Length() < 1500)
                              clutter.Add(c);
                      }
                  }
              }
          }
          while (currentPos != pos2 && ((currentPos - pos1).Length() < 1500))
          {
              Rectangle position = new Rectangle((int)currentPos.X, (int)currentPos.Y, 0, 0);
              foreach (ClutterObject c in clutter)
              {
                  if (position.Intersects(c.GetRectangle()))
                      return false;
              }
              currentPos += perMove;
          }
          return true;
      }

    I'm sure that there's a better way to do this (or at least a way to make this method more efficient), but I'm not too used to XNA yet, so I figured it couldn't hurt to bring it here. At the very least, is there an efficient way to determine which objects may be in front of the viewer with greater precision than the rather broad 90 degree window I've given myself?
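
    One common replacement for the stepping loop is an exact segment-vs-rectangle test, which turns each clutter check into constant-time math instead of up to 1500 point checks. A minimal sketch using the slab method (the helper name is illustrative, not from the original code):

      // Returns true if the segment p1->p2 overlaps rectangle r.
      public static bool SegmentIntersectsRect(Vector2 p1, Vector2 p2, Rectangle r)
      {
          Vector2 d = p2 - p1;
          float tMin = 0f, tMax = 1f;
          float[] pos = { p1.X, p1.Y };
          float[] dir = { d.X, d.Y };
          float[] lo  = { r.Left, r.Top };
          float[] hi  = { r.Right, r.Bottom };
          for (int i = 0; i < 2; i++)
          {
              if (Math.Abs(dir[i]) < 1e-6f)
              {
                  // Parallel to this slab: reject if outside it.
                  if (pos[i] < lo[i] || pos[i] > hi[i]) return false;
              }
              else
              {
                  float t1 = (lo[i] - pos[i]) / dir[i];
                  float t2 = (hi[i] - pos[i]) / dir[i];
                  if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                  tMin = Math.Max(tMin, t1);
                  tMax = Math.Min(tMax, t2);
                  if (tMin > tMax) return false; // entry/exit intervals don't overlap
              }
          }
          return true;
      }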

  • How do I consistently re-size my game window and elements?

    - by Milo
    In my 2D game, I have a flow layout. Inside the flow layout are tables. I have a slider that lets the user make the tables larger or smaller, which makes the background larger or smaller too. Everything should scale proportionally, which means the background should stay at the same position when I make things larger, and it almost does. When the scrollbar is at 0, it does exactly this. As the scrollbar gets further down, problems arise: I'll toggle the slider maybe 3 times, and on the fourth time the background jumps a little lower on the Y axis. In order to be efficient, I only start rendering the background near the parent of the flow layout. Here it is:

      void LobbyTableManager::renderBG( GraphicsContext* g, agui::Rectangle& absRect, agui::Rectangle& childRect )
      {
          int cx, cy, cw, ch;
          g->getClippingRect(cx,cy,cw,ch);
          g->setClippingRect(absRect.getX(),absRect.getY(),absRect.getWidth(),absRect.getHeight());
          float scale = 0.35f;
          int w = m_bgSprite->getWidth() * getTableScale() * scale;
          int h = m_bgSprite->getHeight() * getTableScale() * scale;
          int numX = ceil(absRect.getWidth() / (float)w) + 2;
          int numY = ceil(absRect.getHeight() / (float)h) + 2;
          float offsetX = m_activeTables[0]->getLocation().getX() - w;
          float offsetY = m_activeTables[0]->getLocation().getY() - h;
          int startY = childRect.getY();
          if(moo) { std::cout << "S=" << startY << ","; }
          int numAttempts = 0;
          while(startY + h < absRect.getY() && numAttempts < 1000)
          {
              startY += h;
              if(moo) { std::cout << startY << ","; }
              numAttempts++;
          }
          if(moo) { std::cout << "\n"; moo = false; }
          g->holdDrawing();
          for(int i = 0; i < numX; ++i)
          {
              for(int j = 0; j < numY; ++j)
              {
                  g->drawScaledSprite(m_bgSprite,0,0,m_bgSprite->getWidth(),m_bgSprite->getHeight(),
                      absRect.getX() + (i * w) + (offsetX),absRect.getY() + (j * h) + startY,w,h,0);
              }
          }
          g->unholdDrawing();
          g->setClippingRect(cx,cy,cw,ch);
      }

    The numeric problem seems to be in the value of startY, so I logged it while only zooming in (log output omitted; pay attention to the final number before each next "S="). What should happen is that the final numbers change linearly, e.g. -40, -38, -36, -34, -32, -30, and so on. The starting numbers do correlate linearly (62k, 64k, 66k, 68k, 70k, ...), but the end result is wrong every third or fourth time. Here is most of the resize code:

      void LobbyTableManager::setTableScale( float scale )
      {
          scale += 0.3f;
          scale *= 2.0f;
          agui::Gui* gotGui = getGui();
          float scrollRel = m_vScroll->getRelativeValue();
          setScale(scale);
          rescaleTables();
          resizeFlow();
          if(gotGui) { gotGui->toggleWidgetLocationChanged(false); }
          updateScrollBars();
          float newVal = scrollRel * m_vScroll->getMaxValue();
          if((int)(newVal + 0.5f) > (int)newVal) { newVal++; }
          m_vScroll->setValue(newVal);
          static int x = 0;
          x++;
          moo = true;
          //std::cout << m_vScroll->getValue() << std::endl;
          if(gotGui) { gotGui->toggleWidgetLocationChanged(true); }
          if(gotGui) { gotGui->_widgetLocationChanged(); }
      }

      void LobbyTableManager::valueChanged( agui::VScrollBar* source, int val )
      {
          if(getGui()) { getGui()->toggleWidgetLocationChanged(false); }
          m_flow->setLocation(0,-val);
          if(getGui()) { getGui()->toggleWidgetLocationChanged(true); getGui()->_widgetLocationChanged(); }
      }
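
    One drift-free alternative for startY (a sketch, under the assumption that the jump comes from repeatedly accumulating the integer-rounded h in the while loop): compute the step count in closed form, so every frame derives startY from the same inputs rather than from an iterated sum.

      // Equivalent to the while loop above in one step: the smallest
      // startY = childRect.getY() + n*h such that startY + h >= absRect.getY().
      // (Needs <cmath> and <algorithm>.)
      float y0 = static_cast<float>(childRect.getY());
      float n  = std::max(0.0f, std::ceil((absRect.getY() - h - y0) / static_cast<float>(h)));
      int startY = static_cast<int>(y0 + n * h);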

  • How much would it cost to create a tile set similar to HoM&M 2?

    - by Alexey Petrushin
    How much would it cost to create a tile set similar to HoM&M 2? I'm mostly interested in the tile graphics only; no animation is needed, and the big images of towns and creatures can be done as quick-and-dirty pencil sketches. The quality and number of tiles should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it would take and how much it would cost?

  • Having a problem with texturing vertices in WebGL; think the parameters are off for the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows:

      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);

    (The source image is omitted here.) On the properties of the image it says it's 32-bit depth, so that should take care of gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB in case the transparency isn't being read. It is a 32x32 pixel image, so it is a power of 2, and I've tried almost all the combinations of formats and types, but I'm not sure whether that is the answer or not. I'm getting these two errors in the Chrome console:

      INVALID_VALUE: texImage2D: invalid image (index):101
      WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have
      incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type
      with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.

    The drawArrays call is simply gl.drawArrays(gl.TRIANGLES, 0, 6), using 6 vertices to make a rectangle.
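
    A common cause of "INVALID_VALUE: texImage2D: invalid image" is uploading before the Image has finished loading, in which case WebGL sees a zero-sized image regardless of format and type. A minimal sketch (the texture variable and path are illustrative):

      var texture = gl.createTexture();
      var textureImage = new Image();
      textureImage.onload = function () {
          gl.bindTexture(gl.TEXTURE_2D, texture);
          gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);
          // LINEAR needs no mipmaps, so the texture is complete immediately;
          // with a 32x32 source you could also call gl.generateMipmap instead.
          gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      };
      textureImage.src = "texture.png"; // placeholder path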

  • Best way to Draw a cube for 3D Picking on a specific face

    - by Kenneth Bray
    Currently I am drawing a cube for a game that I am making, and the cube draw method is below. My question is: what is the best way to draw a cube so that I can easily find the face the cursor is over? My draw method works just fine, but I am getting ready to add picking (this will be used to mold the cubes into other shapes), and I would like to know the best way to find a face of the cube.

      public void Draw() {
          // center point posX, posY, posZ
          float radius = size / 2;

          //top
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(1.0f, 0.0f, 0.0f); // red
              glVertex3f(posX + radius, posY + radius, posZ - radius);
              glVertex3f(posX - radius, posY + radius, posZ - radius);
              glVertex3f(posX - radius, posY + radius, posZ + radius);
              glVertex3f(posX + radius, posY + radius, posZ + radius);
          }
          glEnd();
          glPopMatrix();

          //bottom
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(1.0f, 1.0f, 0.0f); // ?? color
              glVertex3f(posX + radius, posY - radius, posZ + radius);
              glVertex3f(posX - radius, posY - radius, posZ + radius);
              glVertex3f(posX - radius, posY - radius, posZ - radius);
              glVertex3f(posX + radius, posY - radius, posZ - radius);
          }
          glEnd();
          glPopMatrix();

          //right side
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(1.0f, 0.0f, 1.0f); // ?? color
              glVertex3f(posX + radius, posY + radius, posZ + radius);
              glVertex3f(posX + radius, posY - radius, posZ + radius);
              glVertex3f(posX + radius, posY - radius, posZ - radius);
              glVertex3f(posX + radius, posY + radius, posZ - radius);
          }
          glEnd();
          glPopMatrix();

          //left side
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(0.0f, 1.0f, 1.0f); // ?? color
              glVertex3f(posX - radius, posY + radius, posZ - radius);
              glVertex3f(posX - radius, posY - radius, posZ - radius);
              glVertex3f(posX - radius, posY - radius, posZ + radius);
              glVertex3f(posX - radius, posY + radius, posZ + radius);
          }
          glEnd();
          glPopMatrix();

          //front side
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(0.0f, 0.0f, 1.0f); // blue
              glVertex3f(posX + radius, posY + radius, posZ + radius);
              glVertex3f(posX - radius, posY + radius, posZ + radius);
              glVertex3f(posX - radius, posY - radius, posZ + radius);
              glVertex3f(posX + radius, posY - radius, posZ + radius);
          }
          glEnd();
          glPopMatrix();

          //back side
          glPushMatrix();
          glBegin(GL_QUADS);
          {
              glColor3f(0.0f, 1.0f, 0.0f); // green
              glVertex3f(posX + radius, posY - radius, posZ - radius);
              glVertex3f(posX - radius, posY - radius, posZ - radius);
              glVertex3f(posX - radius, posY + radius, posZ - radius);
              glVertex3f(posX + radius, posY + radius, posZ - radius);
          }
          glEnd();
          glPopMatrix();
      }
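
    One approach that fits this immediate-mode setup is color-ID picking: render each face in a unique flat color to the back buffer, read the pixel under the cursor, and map the color back to a face index. A sketch (drawFace and the mouse/window variables are assumed helpers, not part of the original code):

      glDisable(GL_LIGHTING);
      glDisable(GL_TEXTURE_2D);
      for (int face = 0; face < 6; face++) {
          glColor3ub((byte) (face + 1), (byte) 0, (byte) 0); // face id encoded in red
          drawFace(face); // hypothetical: emits the four vertices of one face
      }
      ByteBuffer pixel = BufferUtils.createByteBuffer(4);
      glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
      int pickedFace = (pixel.get(0) & 0xFF) - 1; // -1 means no face under the cursor
      // Clear and render the real scene afterwards so the ID colors never reach the screen.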

  • Atmospheric scattering sky from space artifacts

    - by ollipekka
    I am in the process of implementing atmospheric scattering of a planet seen from space. I have been using Sean O'Neil's shaders from http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html as a starting point. I have pretty much the same problem related to fCameraAngle, except with the SkyFromSpace shader as opposed to the GroundFromSpace shader, as described here: http://www.gamedev.net/topic/621187-sean-oneils-atmospheric-scattering/ I get strange artifacts with the sky-from-space shader when not using fCameraAngle = 1 in the inner loop. What is the cause of these artifacts? They disappear when fCameraAngle is limited to 1. I also seem to lack the hue that is present in O'Neil's sandbox (http://sponeil.net/downloads.htm). [Screenshots omitted: camera at (0, 0, 500) and at (500, 500, 500), GroundFromSpace on the left, SkyFromSpace on the right in each.] I've found that the camera angle seems to be handled very differently depending on the source. In the original shaders, the camera angle in SkyFromSpaceShader is calculated as

      float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;

    whereas in the ground-from-space shader the camera angle is calculated as

      float fCameraAngle = dot(-v3Ray, v3Pos) / length(v3Pos);

    However, various sources online tinker with negating the ray. Why is this? Here is a C# Windows.Forms project that demonstrates the problem and that I've used to generate the images: https://github.com/ollipekka/AtmosphericScatteringTest/

    Update: I have found out from the ScatterCPU project found on O'Neil's site that the camera ray is negated when the camera is above the point being shaded, so that the scattering is calculated from the point to the camera. Changing the ray direction does indeed remove the artifacts, but introduces other problems (screenshot omitted). Furthermore, in the ScatterCPU project, O'Neil guards against situations where the optical depth for light is less than zero:

      float fLightDepth = Scale(fLightAngle, fScaleDepth);
      if (fLightDepth < float.Epsilon) { continue; }

    As pointed out in the comments, along with these new artifacts this still leaves the question: what is wrong with the images where the camera is positioned at (500, 500, 500)? It feels like the halo is focused on a completely wrong part of the planet. One would expect the light to be closer to the spot where the sun hits the planet, rather than where it changes from day to night. The github project has been updated to reflect the changes in this update.

  • MD5 vertex skinning problem extending to multi-jointed skeleton (GPU Skinning)

    - by Soapy
    Currently I'm trying to implement GPU skinning in my project. So far I have achieved single-joint translation and rotation, and multi-joint translation. The problem arises when I try to rotate a multi-jointed skeleton. [Image omitted: the left panel shows how the model should deform, the middle shows how it deforms in my project, and the right shows a better (still not right) deform obtained by inverting a certain value, explained below.] I get my animation data by exporting it to the MD5 format (MD5mesh for mesh data and MD5anim for animation data). When I parse the animation data for each frame, I check whether the bone has a parent. If not, the data is passed in as-is from the MD5anim file. If it does have a parent, I transform the bone's position by the parent's orientation and add this to the parent's translation; then the parent and child orientations get concatenated. This is covered at this website.

      if (Parent < 0){
          ... // Save this data without editing it
      } else {
          Math3::vec3 rpos;
          Math3::quat pq = Parent.Quaternion;
          Math3::quat pqi(pq);
          pqi.InvertUnitQuat();
          pqi.Normalise();
          Math3::quat::RotateVector3(rpos, pq, jv);
          Math3::vec3 npos(rpos + Parent.Pos);
          this->Translation = npos;
          Math3::quat nq = pq * jq;
          nq.Normalise();
          this->Quaternion = nq;
      }

    To achieve the image on the right, all I need to do is change Math3::quat::RotateVector3(rpos, pq, jv); to Math3::quat::RotateVector3(rpos, pqi, jv);. Why is that? And this is my skinning shader, SkinningShader.vert:

      #version 330 core
      smooth out vec2 vVaryingTexCoords;
      smooth out vec3 vVaryingNormals;
      smooth out vec4 vWeightColor;
      uniform mat4 MV;
      uniform mat4 MVP;
      uniform mat4 Pallete[55];
      uniform mat4 invBindPose[55];
      layout(location = 0) in vec3 vPos;
      layout(location = 1) in vec2 vTexCoords;
      layout(location = 2) in vec3 vNormals;
      layout(location = 3) in int vSkeleton[4];
      layout(location = 4) in vec3 vWeight;

      void main()
      {
          vec4 wpos = vec4(vPos, 1.0);
          vec4 norm = vec4(vNormals, 0.0);
          vec4 weight = vec4(vWeight, (1.0f-(vWeight[0] + vWeight[1] + vWeight[2])));
          normalize(weight);
          mat4 BoneTransform;
          for(int i = 0; i < 4; i++)
          {
              if(vSkeleton[i] != -1)
              {
                  if(i == 0)
                  {
                      // These are interchangeable for some reason
                      // BoneTransform = ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                      BoneTransform = ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                  }
                  else
                  {
                      // These are interchangeable for some reason
                      // BoneTransform += ((invBindPose[vSkeleton[i]] * Pallete[vSkeleton[i]]) * weight[i]);
                      BoneTransform += ((Pallete[vSkeleton[i]] * invBindPose[vSkeleton[i]]) * weight[i]);
                  }
              }
          }
          wpos = BoneTransform * wpos;
          vWeightColor = weight;
          vVaryingTexCoords = vTexCoords;
          vVaryingNormals = normalize(vec3(vec4(vNormals, 0.0) * MV));
          gl_Position = wpos * MVP;
      }

    The Pallete matrices are calculated using the code above (a rotation and translation matrix get created from the translation and quaternion). The invBindPose matrices are simply the inverted matrices created from the joints in the MD5mesh file.

    Update 1: I looked at GLM to compare the values I get with my own implementation. They turn out to be exactly the same. So now I'm checking whether there's a problem with matrix creation...

    Update 2: Looked at GLM again to compare matrix creation using quaternions. Turns out that's not the problem either.

  • Axis-Aligned Bounding Boxes vs Bounding Ellipse

    - by Griffin
    Why is it that most, if not all, collision detection algorithms today require each body to have an AABB, for use in the broad phase only? It seems to me that simply placing a circle at the body's centroid and extending the radius until the circle encompasses the entire body would be optimal: it would not need to be updated after the body rotates, and broad-phase overlap calculation would be faster too. Correct? Bonus: would a bounding ellipse be practical for broad-phase calculations as well, since it would better represent long, skinny shapes? Or would it require extensive calculations, defeating the purpose of the broad phase?
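
    For comparison, the circle-vs-circle broad-phase test is a single squared-distance check that needs no update when a body rotates:

      \lVert c_1 - c_2 \rVert^2 \le (r_1 + r_2)^2

    The usual argument for AABBs is that they fit elongated shapes more tightly than a circumscribed circle, so they pass fewer false positives on to the expensive narrow phase, and they pair naturally with sweep-and-prune and similar axis-sorted structures.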

  • Character Stats and Power

    - by Stephen Furlani
    I'm making an RPG game system and I'm having a hard time deciding between detailed and abstract character statistics. These statistics define the character's natural (not learned) abilities. For example:

      Mass Effect: 0 (none that I can see)
      X20 (Xtreme Dungeon Mastery): 1 ("STAT")
      Diablo: 4 (Strength, Magic, Dexterity, Vitality)
      Pendragon: 5 (SIZ, STR, DEX, CON, APP)
      Dungeons & Dragons (3.x, 4e): 6 (Str, Dex, Con, Wis, Int, Cha)
      Fallout 3: 7 (S.P.E.C.I.A.L.)
      RIFTS: 8 (IQ, ME, MA, PS, PP, PE, PB, Spd)
      Warhammer Fantasy Roleplay (1st ed?): 12-ish (WS, BS, S, T, Ag, Int, WP, Fel, A, Mag, IP, FP)
      HERO (5th ed): 14 (Str, Dex, Con, Body, Int, Ego, Pre, Com, PD, ED, Spd, Rec, END, STUN)

    The more stats, the more complex and detailed your character becomes. This comes with a trade-off, however, because you usually have only limited resources to describe your character. D&D made this infamous with the whole min/max-ing thing, where strong characters were typically not also smart. But a character with a high Str typically also has high Con, defenses, and hit points/health; without high numbers in all those other stats, they might as well not be strong, since they wouldn't hold up well in hand-to-hand combat. So things like that force trade-offs within the category of strength. My original (now rejected) idea was to force players to decide between offensive and defensive stats:

      Might / Body
      Dexterity / Speed
      Wit / Wisdom
      Heart
      Soul

    But this left some stats without "opposites" (or with opposites that were hard to define). I'm leaning more towards the following:

      Body (Physical Prowess)
      Mind (Mental Prowess)
      Heart (Social Prowess)
      Soul (Spiritual Prowess)

    This defines a character with just 4 numbers. Everything else is based on these numbers, which makes them pretty important. There won't, however, be a way to describe characters who are fast but not strong, or smart but absent-minded. Instead of defining the character with these numbers, players will detail their characters by buying skills and powers like:

      Quickness: add a +2 bonus to Body rolls when dodging.

    for a character that wants to be faster, or the following for a big, tough character:

      Body Building: add a +2 bonus to Body rolls when lifting, pushing, or throwing objects.

    [EDIT - removed subjectiveness] So my actual question is: what are some pitfalls of a small stat list with a large number of descriptive powers? Is this more difficult to port cross-platform (pen & paper, PC), for example? Are there examples of this being done well or poorly? Thanks,

  • Logging library for (C++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons to look for a library for logging (like, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

      performance
      ease of use (allow streaming or formatting or something like that)
      reliability (no leaks or crashes!)
      cross-platform (at least Windows, MacOSX, Linux/Ubuntu)

    Which logging library would you recommend? Currently I think boost::log is the most flexible one (you can even log remotely!), but it does not have good performance; the update aimed at high performance isn't released yet. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own library for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared these libraries (and more), your advice might be of good use. Games are often performance-demanding yet complex to debug, so it would be good to know which logging libraries, in our specific case, have clear advantages.

  • [JOGL] My program is too slow, how can I profile with Eclipse?

    - by nkint
    My simple OpenGL program is really too slow and not fluid. I'm rendering 30 spheres with simple illumination and simple materials. The only complex computation I do is collision detection between the mouse ray and the spheres (that works OK, and I do it only in mouseMoved). I'm not using any threads, just an animator to move the spheres. How can I profile my JOGL project? Or maybe (most probably...) I have some OpenGL instructions that I don't understand and that make rendering particularly accurate (or back-face rendering that I don't need, or whatever; I don't know exactly, I'm just entering the OpenGL world).

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error, but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders, I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on subsequent frames: the shadow created by the light rotates with the cylinder, giving the impression that the light is rotating when it isn't.

    Camera and light initialization:

      _view = new View3D();
      addChild(_view);
      _view.antiAlias = 4;
      _view.backgroundColor = 0xFFFFFF;
      _view.camera.z = -850;
      _view.camera.y = 0;
      _view.camera.x = 0;
      _view.camera.lookAt(new Vector3D());
      _view.camera.lens = new PerspectiveLens(15);
      _view.mousePicker = PickingType.RAYCAST_BEST_HIT;
      _light = new DirectionalLight();
      _light.z = -850;
      _light.direction = new Vector3D(1, 1, 1);
      _light.color = 0xFFFFFF;
      _light.ambient = 0.1;
      _light.diffuse = 0.7;
      _view.scene.addChild(_light);

    Mesh and material creation:

      var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]), true, false, true);
      material.animateUVs = true;
      material.lightPicker = _lightPicker;
      cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material);
      cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width;
      cylinder.y = y;
      cylinder.mouseEnabled = true;
      cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT;
      cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh);
      cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh);
      cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh);
      _cylinders.push(cylinder);

    Frame handler:

      private function onEnterFrame(event:Event):void
      {
          for each (var mesh:Mesh in _cylinders)
          {
              if (mesh == _mouseOverMesh)
                  continue;
              mesh.rotationY += 0.25;
          }
          _view.render();
      }

  • Something other than Vertex Welding with Texture Atlas?

    - by Tim Winter
    What options (in C# with XNA) are there for texture usage in a procedurally generated 3D world made of cubes, to increase performance? Yes, it's like Minecraft. I've been using a texture atlas and rendering faces individually (4 vertices per face), but I've also read in a couple of places about using texture wrapping with two 1D atlases to merge adjacent faces that share the same texture. If two or more adjacent faces share the same image, it would be quite easy to wrap in this way, reducing the vertex count by a large amount. My problem with this is having too many textures, swapping too often, and many image-related constraints like non-power-of-2 images. Is there a middle-ground option between the 1D texture atlas trick and rendering 4 vertices per cube face? (A wireframe screenshot of the current approach was attached.) 4 vertices per face seems extremely inefficient to me.

  • How do I start game programming in Windows Phone XNA?

    - by Ankit Rathod
    Hello, I am very interested in game programming in XNA. However, during my college days I did not take physics or maths. Does that mean I can't create games in XNA? I just know the basics of trigonometry. Can you point me to a few links where I can learn XNA as well as the basic maths that is bound to be required in most games? Are all game programmers excellent at maths and physics? Thanks in advance :)

  • What is wrong with my Dot Product? [JavaScript]

    - by Clay Ellis Murray
    I am trying to make a pong game, but I wanted to use dot products to do the collisions with the paddles. However, whenever I take the dot product of the two objects, it never changes much from 0.9. This is my code to make vectors:

      vector = {
          make: function (object) {
              return [object.x + object.width / 2, object.y + object.height / 2];
          },
          normalize: function (v) {
              var length = Math.sqrt(v[0] * v[0] + v[1] * v[1]);
              v[0] = v[0] / length;
              v[1] = v[1] / length;
              return v;
          },
          dot: function (v1, v2) {
              return v1[0] * v2[0] + v1[1] * v2[1];
          }
      }

    and this is where I am calculating the dot product in my code:

      vector1 = vector.normalize(vector.make(ball))
      vector2 = vector.normalize(vector.make(object))
      dot = vector.dot(vector1, vector2)

    Here is a JsFiddle of my code; currently the paddles don't move. Any help would be greatly appreciated.
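
    One observation (not a confirmed fix for the fiddle): vector.make returns an absolute screen position, so the dot of the two normalized positions measures the angle between the objects as seen from the screen origin, which stays near 1 while both objects sit in the same general region of the screen. For collision work, the normalized vector between the two centers is usually what's wanted. A sketch reusing the helpers above (the added function name is hypothetical):

      // Vector from the object's center to the ball's center.
      vector.between = function (a, b) {
          var pa = vector.make(a), pb = vector.make(b);
          return [pb[0] - pa[0], pb[1] - pa[1]];
      };
      var toBall = vector.normalize(vector.between(object, ball));
      // Dotting toBall against, e.g., the ball's travel direction now
      // varies meaningfully over the full range -1 to 1.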

  • Game component causes game to freeze

    - by ChocoMan
    I'm trying to add my camera component to the Game1 class constructor like so:

      Camera camera; // from class Camera : GameComponent
      ....
      public Game1()
      {
          graphics = new GraphicsDeviceManager(this);
          this.graphics.PreferredBackBufferWidth = screenWidth;
          this.graphics.PreferredBackBufferHeight = screenHieght;
          this.graphics.IsFullScreen = true;
          Content.RootDirectory = "Content";
          camera = new Camera(this);
          Components.Add(camera);
      }

    Just from adding the last two lines, when I run the game the screen freezes and then gives me this message:

      An unhandled exception of type 'System.ComponentModel.Win32Exception' occurred in System.Drawing.dll
      Additional information: The operation completed successfully

  • GLSL vertex shaders with movements vs. vertices off the screen

    - by user827992
    If I have a vertex shader that manages some movement and variation of the positions of some vertices in my OpenGL context, is OpenGL smart enough to run this shader only on the vertices visible on the screen? This part of the OpenGL programmable pipeline is not clear to me, because the sources are not really clear about it: they talk about fragments and pixels, and I get that, but what about vertex shaders? If you need a reference, I'm reading from this right now, and this online book has a couple of examples about this.

  • 2D Tile-Based Concept Art App

    - by ashes999
    I'm making a bunch of 2D games (now and in the near future) that use a 2D, RPG-like interface. I would like to be able to quickly paint tiles down and drop character sprites in to create concept art. Sure, I could do it in GIMP or Photoshop, but that would require manually adding each tile, layering on more tiles, cutting and pasting particular character sprites, etc., and I really don't need that level of granularity; I need a quick and fast way to churn out concept art. Is there a tool I can use for this? Perhaps some sort of 2D tile editor which lets me place sprites and tiles, given that I can provide the graphics files.

  • Out of Memory when building an application

    - by Jacob Neal
    I have quite a major problem with my Multimedia Fusion 2 game. I finished it months ago; however, the only thing keeping me from finally compiling the game into an executable file is an error message that pops up every time I try, simply saying "Out of Memory". It's highly frustrating to be halted at this point by this message, and I've tried everything I could come up with to fix it, including compressing the runtime and sounds and increasing the priority of MMF2 all the way to realtime in the task manager. I'm begging someone to toss me a bone on this problem; any advice at all would be much appreciated.

  • Single and double jump with a single button

    - by Asad
    I want to make a single jump on a single tap and a double jump on a double tap. My problem is that if I double-tap on the ground, it's fine, but if I make the first tap on the ground and the second tap in the air, the player gains more height than usual (as in image 1; images omitted here). I want to make my jump like in image 2: no matter at which point the user makes the second tap, the player always reaches a specific height. I have used both an impulse and setting the linear velocity to make the jump, but neither solved my problem.
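
    A common pattern for a fixed apex is to overwrite the vertical velocity on the second tap instead of adding to it, so any upward speed left over from the first jump cannot stack. A sketch in engine-agnostic C# (body, jumpSpeed, jumpsUsed, and maxJumps are assumed names):

      void OnTap()
      {
          if (jumpsUsed < maxJumps) // maxJumps = 2 for a double jump
          {
              Vector2 v = body.LinearVelocity;
              // Replace, don't add; the sign of jumpSpeed depends on
              // whether +Y points up or down in your engine.
              body.LinearVelocity = new Vector2(v.X, jumpSpeed);
              jumpsUsed++; // reset to 0 when the player lands
          }
      }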

  • How to restrict paddle movement using Farseer Physics engine 3.2

    - by brainydexter
    I am new to using the Farseer Physics Engine 3.2 (FPE), so please bear with my questions. Also, since FPE 3.2 is based on Box2D, I have been reading the Box2D manual and pieces of code scattered in samples to better understand the terminology and usage. Pong is usually my testbed whenever I try something new. Here is one of the issues I am running into: how can I restrict the paddles to move only along the Y axis? The ball comes in, knocks the paddles off, and everything floats in space afterwards (box = rectangle and ball = circle). I know MKS is the unit system, but is there a recommendation for the sizes/positions to be used? I know this is a very generic question, but it would be good to know a simple set of values one could use for making a game as simple as Pong. Between Box2D and FPE, I have some doubts: what is the recommended way of making a body in FPE? world.CreateBody() does not exist in FPE. The Box2D manual recommends never to "new" a body (since Box2D uses small object allocators), so is there a recommended way in Farseer to create a body (apart from factories)? In Box2D, it is recommended to keep track of the body object, since it is also the parent of the fixture(s). Why is it that in most of the examples the fixture object is tracked instead? Is there a reason why the body is not tracked? Thanks
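
    For the paddle restriction specifically, one option is a fixed prismatic joint, which constrains a body to slide along a single axis. A sketch under the assumption that the FixedPrismaticJoint and factory helpers of the Farseer 3.x series are available and that a World named world exists (sizes in meters, per MKS):

      // Paddle as a dynamic box that can only slide along the Y axis.
      Body paddle = BodyFactory.CreateRectangle(world, 0.5f, 2f, 1f); // width, height, density
      paddle.BodyType = BodyType.Dynamic;
      paddle.FixedRotation = true; // keep the ball from spinning the paddle
      FixedPrismaticJoint joint = JointFactory.CreateFixedPrismaticJoint(
          world, paddle, paddle.Position, new Vector2(0f, 1f)); // axis = Y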
