Search Results

Search found 25660 results on 1027 pages for 'dotnetnuke development'.

Page 388/1027

  • Implementing a multilanguage AI contest platform

    - by Alejandro Piad
    This is a follow-up to this question. To sum up: I'm implementing an AI contest site where each user may submit several AI implementations for different games. Think of Google AI Challenge, but instead of one big event a year I would like it to run more like a league, with all virtual players playing against each other at short, regular intervals. I want to support as many programming languages as possible. I've seen that contest sites (like Codeforces) ask you to submit source code and interact through stdin and stdout. The first question is: what is the best way to support multiple languages? As I see it, I can either ask people to upload a binary/script and interact through stdin/stdout, sockets, or the file system, or ask people to submit source code and wrap it with whatever is necessary for the interaction. I would like to avoid compiling the code myself (on the server, I mean), but I am willing to do it if it's the "best" choice. I need the virtual players to communicate with each other, or better yet, with an intermediary arbiter. The second question is about security. If I'm going to be running user code on my server, I want to enforce strict security conditions, like no file system access, no networking, etc. Otherwise it would be a safe haven for hackers. I will be implementing the engine/arbiter in .NET, and I would like to support at least C#, C++, Java and Python for the users' implementations. I'm willing to write interfaces for each of these languages to simplify the interaction with the system. Thanks in advance.
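
    A minimal sketch of the stdin/stdout option from the arbiter's side, assuming each submission ends up wrapped as an executable the server can launch (the bot path and the line-per-turn protocol below are made up for illustration). Real sandboxing, i.e. blocking file system and network access, still needs OS-level restrictions or containers on top of this:

        using System;
        using System.Diagnostics;

        class Arbiter
        {
            // Launch a bot as a child process, send it the game state on stdin,
            // and read its move from stdout with a per-turn time limit.
            static string QueryBot(string botPath, string state, int timeoutMs)
            {
                var psi = new ProcessStartInfo(botPath)
                {
                    RedirectStandardInput = true,
                    RedirectStandardOutput = true,
                    UseShellExecute = false,   // required for stream redirection
                    CreateNoWindow = true
                };

                using (var bot = Process.Start(psi))
                {
                    bot.StandardInput.WriteLine(state);   // one line of state per turn
                    bot.StandardInput.Flush();

                    var read = System.Threading.Tasks.Task.Run(() => bot.StandardOutput.ReadLine());
                    if (read.Wait(timeoutMs))
                        return read.Result;               // the bot answered in time

                    bot.Kill();                           // enforce the time limit
                    return null;                          // treat as a forfeited turn
                }
            }

            static void Main()
            {
                // Hypothetical bot and state encoding, purely for illustration.
                string move = QueryBot("bots/player1.exe", "BOARD 3x3 .........", 1000);
                Console.WriteLine("Bot played: " + (move ?? "<timeout>"));
            }
        }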

    Read the article

  • How Do I Search For Struct Items In A Vector? [migrated]

    - by Vladimir Marenus
    I'm attempting to create an inventory system using a vector implementation, but I seem to be having some trouble: I'm running into issues using a struct I made. NOTE: This isn't actually game code; it's a separate solution I'm using to test my knowledge of vectors and structs!

        struct aItem
        {
            string itemName;
            int damage;
        };

        int main()
        {
            aItem healingPotion;
            healingPotion.itemName = "Healing Potion";
            healingPotion.damage = 6;

            aItem fireballPotion;
            fireballPotion.itemName = "Potion of Fiery Balls";
            fireballPotion.damage = -2;

            vector<aItem> inventory;
            inventory.push_back(healingPotion);
            inventory.push_back(healingPotion);
            inventory.push_back(healingPotion);
            inventory.push_back(fireballPotion);

            if(find(inventory.begin(), inventory.end(), fireballPotion) != inventory.end())
            {
                cout << "Found";
            }

            system("PAUSE");
            return 0;
        }

    The preceding code gives me the following error: c:\program files (x86)\microsoft visual studio 11.0\vc\include\xutility(3186): error C2678: binary '==' : no operator found which takes a left-hand operand of type 'aItem' (or there is no acceptable conversion). There is more to the error; if you need it, please let me know. I bet it's something small and silly, but I've been thumping at it for over two hours. Thanks in advance!

    Read the article

  • CSM DX11 issues

    - by KaiserJohaan
    I got CSM to work in OpenGL, and now I'm trying to do the same in DirectX. I'm using the same math library and I'm applying the algorithm pretty much straight off. I am using right-handed, column-major matrices from GLM. The light is looking along (-1, -1, -1). The problem I have is twofold:

    1. For some reason, the ground floor is causing a lot of (false) shadow artifacts, like the vast shadowed area you see. I confirmed this by disabling the ground for the depth pass, but that's more of a hack than anything else.
    2. The shadows are inverted compared to the shadow map. If you squint you can see the chair's shadow should be mirrored instead.

    This is the first cascade shadow map, in range of the alien and the chair. I can't figure out why this is. This is the depth pass:

        for (uint32_t cascadeIndex = 0; cascadeIndex < NUM_SHADOWMAP_CASCADES; cascadeIndex++)
        {
            mShadowmap.BindDepthView(context, cascadeIndex);
            CameraFrustrum cameraFrustrum = CalculateCameraFrustrum(degreesFOV, aspectRatio, nearDistArr[cascadeIndex], farDistArr[cascadeIndex], cameraViewMatrix);
            lightVPMatrices[cascadeIndex] = CreateDirLightVPMatrix(cameraFrustrum, lightDir);
            mVertexTransformPass.RenderMeshes(context, renderQueue, meshes, lightVPMatrices[cascadeIndex]);
            lightVPMatrices[cascadeIndex] = gBiasMatrix * lightVPMatrices[cascadeIndex];
            farDistArr[cascadeIndex] = -farDistArr[cascadeIndex];
        }

        CameraFrustrum CalculateCameraFrustrum(const float fovDegrees, const float aspectRatio, const float minDist, const float maxDist, const Mat4& cameraViewMatrix)
        {
            CameraFrustrum ret = {
                Vec4(1.0f, 1.0f, -1.0f, 1.0f), Vec4(1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, -1.0f, -1.0f, 1.0f), Vec4(-1.0f, 1.0f, -1.0f, 1.0f),
                Vec4(1.0f, -1.0f, 1.0f, 1.0f), Vec4(1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, 1.0f, 1.0f, 1.0f), Vec4(-1.0f, -1.0f, 1.0f, 1.0f),
            };

            const Mat4 perspectiveMatrix = PerspectiveMatrixFov(fovDegrees, aspectRatio, minDist, maxDist);
            const Mat4 invMVP = glm::inverse(perspectiveMatrix * cameraViewMatrix);

            for (Vec4& corner : ret)
            {
                corner = invMVP * corner;
                corner /= corner.w;
            }

            return ret;
        }

        Mat4 CreateDirLightVPMatrix(const CameraFrustrum& cameraFrustrum, const Vec3& lightDir)
        {
            Mat4 lightViewMatrix = glm::lookAt(Vec3(0.0f), -glm::normalize(lightDir), Vec3(0.0f, -1.0f, 0.0f));

            Vec4 transf = lightViewMatrix * cameraFrustrum[0];
            float maxZ = transf.z, minZ = transf.z;
            float maxX = transf.x, minX = transf.x;
            float maxY = transf.y, minY = transf.y;
            for (uint32_t i = 1; i < 8; i++)
            {
                transf = lightViewMatrix * cameraFrustrum[i];
                if (transf.z > maxZ) maxZ = transf.z;
                if (transf.z < minZ) minZ = transf.z;
                if (transf.x > maxX) maxX = transf.x;
                if (transf.x < minX) minX = transf.x;
                if (transf.y > maxY) maxY = transf.y;
                if (transf.y < minY) minY = transf.y;
            }

            Mat4 viewMatrix(lightViewMatrix);
            viewMatrix[3][0] = -(minX + maxX) * 0.5f;
            viewMatrix[3][1] = -(minY + maxY) * 0.5f;
            viewMatrix[3][2] = -(minZ + maxZ) * 0.5f;
            viewMatrix[0][3] = 0.0f;
            viewMatrix[1][3] = 0.0f;
            viewMatrix[2][3] = 0.0f;
            viewMatrix[3][3] = 1.0f;

            Vec3 halfExtents((maxX - minX) * 0.5, (maxY - minY) * 0.5, (maxZ - minZ) * 0.5);

            return OrthographicMatrix(-halfExtents.x, halfExtents.x, -halfExtents.y, halfExtents.y, halfExtents.z, -halfExtents.z) * viewMatrix;
        }

    And this is the pixel shader used for the lighting stage:

        #define DEPTH_BIAS 0.0005
        #define NUM_CASCADES 4

        cbuffer DirectionalLightConstants : register(CBUFFER_REGISTER_PIXEL)
        {
            float4x4 gSplitVPMatrices[NUM_CASCADES];
            float4x4 gCameraViewMatrix;
            float4 gSplitDistances;
            float4 gLightColor;
            float4 gLightDirection;
        };

        Texture2D gPositionTexture : register(TEXTURE_REGISTER_POSITION);
        Texture2D gDiffuseTexture : register(TEXTURE_REGISTER_DIFFUSE);
        Texture2D gNormalTexture : register(TEXTURE_REGISTER_NORMAL);
        Texture2DArray gShadowmap : register(TEXTURE_REGISTER_DEPTH);
        SamplerComparisonState gShadowmapSampler : register(SAMPLER_REGISTER_DEPTH);

        float4 ps_main(float4 position : SV_Position) : SV_Target0
        {
            float4 worldPos = gPositionTexture[uint2(position.xy)];
            float4 diffuse = gDiffuseTexture[uint2(position.xy)];
            float4 normal = gNormalTexture[uint2(position.xy)];

            float4 camPos = mul(gCameraViewMatrix, worldPos);

            uint index = 3;
            if (camPos.z > gSplitDistances.x)
                index = 0;
            else if (camPos.z > gSplitDistances.y)
                index = 1;
            else if (camPos.z > gSplitDistances.z)
                index = 2;

            float3 projCoords = (float3)mul(gSplitVPMatrices[index], worldPos);
            float viewDepth = projCoords.z - DEPTH_BIAS;
            projCoords.z = float(index);

            float visibilty = gShadowmap.SampleCmpLevelZero(gShadowmapSampler, projCoords, viewDepth);

            float angleNormal = clamp(dot(normal, gLightDirection), 0, 1);

            return visibilty * diffuse * angleNormal * gLightColor;
        }

    As you can see, I am using a depth bias and a bias matrix. Any hints on why this behaves so weirdly?

    Read the article

  • Quadtree collapsing

    - by Caius Eugene
    Okay, so I've spent a few days learning what a quadtree is and how to implement one. So far I have a quadtree where clicking inside a leaf subdivides it; I'm wondering how I get the previous subdivisions to collapse back up, so that only one area is subdivided at a time. This is what mine looks like: (1. initial mouse click) (2. another mouse click). The aim is to eventually track the position of my mouse and subdivide the area it is in dynamically. The overall aim is to use this to create a terrain mesh and subdivide based on the camera, but I've gone right back to basics to get an understanding of how this will work. Any advice would be grand! - Caius
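
    A minimal sketch of one way to get the "only one area subdivided at a time" behaviour: every time the tracked position changes, walk down from the root, splitting along the path that contains the point and collapsing every sibling branch that does not (types and names below are placeholders, not taken from the question). Calling root.Refine(mousePos, maxDepth) each frame keeps exactly one branch open:

        using System.Drawing;

        class QuadNode
        {
            public RectangleF Bounds;
            public QuadNode[] Children;   // null while this node is a leaf
            public int Depth;

            public QuadNode(RectangleF bounds, int depth) { Bounds = bounds; Depth = depth; }

            public void Refine(PointF target, int maxDepth)
            {
                if (!Bounds.Contains(target) || Depth >= maxDepth)
                {
                    Children = null;              // collapse anything off the path
                    return;
                }

                if (Children == null)
                    Split();                      // subdivide along the path to the point

                foreach (var child in Children)
                {
                    if (child.Bounds.Contains(target))
                        child.Refine(target, maxDepth);   // keep refining toward the cursor
                    else
                        child.Children = null;            // sibling branches fold back up
                }
            }

            void Split()
            {
                float w = Bounds.Width / 2, h = Bounds.Height / 2;
                Children = new[]
                {
                    new QuadNode(new RectangleF(Bounds.X,     Bounds.Y,     w, h), Depth + 1),
                    new QuadNode(new RectangleF(Bounds.X + w, Bounds.Y,     w, h), Depth + 1),
                    new QuadNode(new RectangleF(Bounds.X,     Bounds.Y + h, w, h), Depth + 1),
                    new QuadNode(new RectangleF(Bounds.X + w, Bounds.Y + h, w, h), Depth + 1)
                };
            }
        }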

    Read the article

  • Performance issues with visibility detection and object transparency

    - by maul
    I'm working on a 3D game that has a view similar to classic isometric games (Diablo, etc.). One of the things I'm trying to implement is the effect of turning walls transparent when the player walks behind them. By itself this is not a huge issue, but I'm having trouble determining exactly which walls should be transparent. I can't use a circle or square mask: there are a lot of cases where a wall piece at the same (relative) position has different visibility depending on the surrounding area. With the help of a friend I came up with this algorithm:

    1. Create a grid around the player that contains a lot of "visibility points" (my game is semi tile-based, so I create one point for every tile on the grid) - the size of the square's side is close to the radius where I make objects transparent. I found 6x6 to be a good value, so that's 36 visibility points total.
    2. For every visibility point on the grid, check if that point is in the player's line of sight.
    3. For every visibility point that is in the LOS, cast a ray from the camera to that point and mark all objects the ray hits as transparent.

    This algorithm works - not perfectly, but it only requires some tuning - however, it is very slow. As you can see, it requires 36 ray casts minimum, but most of the time 60-70 depending on the position. That's simply too much for the CPU. Is there a better way to do this? I'm using Unity 3D but I'm not looking for an engine-specific solution.
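
    One cheap way to keep the same algorithm but cut the per-frame cost is to amortise it: keep the grid of visibility points but only cast a handful of rays each frame, cycling through the grid. A minimal Unity C# sketch of that idea (field names, the FadeTarget component and the grid spacing are assumptions; the line-of-sight pre-filter from step 2 is omitted for brevity):

        using UnityEngine;
        using System.Collections.Generic;

        public class OccluderFader : MonoBehaviour
        {
            public Transform player;
            public int raysPerFrame = 8;     // 36-70 rays spread over several frames
            public float checkRadius = 3f;   // roughly the 6x6 tile area

            List<Vector3> points = new List<Vector3>();
            int cursor;

            void Update()
            {
                RebuildGrid();

                for (int i = 0; i < raysPerFrame && points.Count > 0; i++)
                {
                    Vector3 p = points[cursor];
                    cursor = (cursor + 1) % points.Count;

                    Vector3 origin = Camera.main.transform.position;
                    Vector3 dir = p - origin;
                    foreach (var hit in Physics.RaycastAll(origin, dir.normalized, dir.magnitude))
                    {
                        // Hypothetical component that fades a wall out and back in after a timeout.
                        var fader = hit.collider.GetComponent<FadeTarget>();
                        if (fader != null) fader.RequestTransparent();
                    }
                }
            }

            void RebuildGrid()
            {
                points.Clear();
                for (float x = -checkRadius; x <= checkRadius; x += 1f)
                    for (float z = -checkRadius; z <= checkRadius; z += 1f)
                        points.Add(player.position + new Vector3(x, 0f, z));
            }
        }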

    Read the article

  • Pitch camera around model

    - by ChocoMan
    Currently, my camera rotates around my model's Y axis (yaw) perfectly. What I'm having trouble with is rotating around the X axis (pitch) along with it. I've tried the same method as cameraYaw() in the form of a cameraPitch(), adjusting the axis to Vector.Right, but the camera wouldn't pitch at all in response to the Y axis of the controller's stick. Is there a similar way to get the same effect for pitching the camera around the model?

        // Rotates model on its own Y-axis
        public void modelRotMovement(GamePadState pController)
        {
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) Camera around model
        public void cameraYaw(Vector3 axis, float yaw, float pitch)
        {
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axis, yaw)) + ModelLoad.camTarget;
        }

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
        }
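
    A minimal sketch of one way to add the pitch, assuming the same ModelLoad fields used above and that Pitch is read from the right stick's Y axis the same way Yaw is read from X. The usual catch is that pitching around the world X axis only works while the camera sits on that axis; pitching around the camera's current right vector keeps working after the camera has yawed:

        public void cameraPitch(float pitch)
        {
            Vector3 toCamera = ModelLoad.CameraPos - ModelLoad.camTarget;

            // Rebuild the camera's right vector from its current view direction.
            Vector3 forward = Vector3.Normalize(ModelLoad.camTarget - ModelLoad.CameraPos);
            Vector3 right = Vector3.Normalize(Vector3.Cross(forward, Vector3.Up));

            ModelLoad.CameraPos = Vector3.Transform(toCamera,
                Matrix.CreateFromAxisAngle(right, pitch)) + ModelLoad.camTarget;
        }

        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(Pitch);
        }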

    Read the article

  • Unity: Assigning a key to perform an action in the inspector

    - by Marc Pilgaard
    I am trying to write a simple piece of code in JavaScript where a button toggles the activation of a shield, by loading a prefab with Resources.load("ActivateShieldPreFab") and destroying it again (I haven't implemented the destroy part yet). I wish to assign this button through the inspector, so I have created a string variable, which appears as intended in the inspector. However, it doesn't seem to register the inspector input, even though I changed the value through the inspector; it only gives the error "Input Key named: is unknown". When the button name is assigned within the code, there are no issues. Code as follows:

        var ShieldOn = false;
        var stringbutton : String;

        function Start(){
        }

        function Update () {
            if(Input.GetKey(stringbutton) && ShieldOn != true)
            {
                Instantiate(Resources.load("ActivateShieldPreFab"), Vector3 (0, 0, 0), Quaternion.identity);
                ShieldOn = true;
            }
        }
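
    A minimal C# sketch of the same toggle using a KeyCode field instead of a string (the question's script is UnityScript, but the idea is identical): a KeyCode shows up as a dropdown in the inspector, so it is impossible to end up with a name Unity does not recognise, which is where the "Input Key named: is unknown" error comes from:

        using UnityEngine;

        public class ShieldToggle : MonoBehaviour
        {
            public KeyCode shieldKey = KeyCode.E;   // picked in the inspector

            bool shieldOn;
            GameObject shieldInstance;

            void Update()
            {
                if (Input.GetKeyDown(shieldKey) && !shieldOn)
                {
                    // "ActivateShieldPreFab" is the prefab name used in the question.
                    shieldInstance = (GameObject)Instantiate(
                        Resources.Load("ActivateShieldPreFab"), Vector3.zero, Quaternion.identity);
                    shieldOn = true;
                }
                else if (Input.GetKeyDown(shieldKey) && shieldOn)
                {
                    Destroy(shieldInstance);   // the "destroy it again" half of the toggle
                    shieldOn = false;
                }
            }
        }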

    Read the article

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy paste this into the default OpenGl template created in Xcode. Why am I not seeing anything :-) It is strange as the glDrawArrays(GL_TRIANGLES, 0, 3); is working fine, but with glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); Is giving BAD_ACCESS? Copy paste this into Xcode default OpenGl template: ViewController #import "ViewController.h" #define BUFFER_OFFSET(i) ((char *)NULL + (i)) // Uniform index. enum { UNIFORM_MODELVIEWPROJECTION_MATRIX, UNIFORM_NORMAL_MATRIX, NUM_UNIFORMS }; GLint uniforms[NUM_UNIFORMS]; // Attribute index. enum { ATTRIB_VERTEX, ATTRIB_NORMAL, NUM_ATTRIBUTES }; @interface ViewController () { GLKMatrix4 _modelViewProjectionMatrix; GLKMatrix3 _normalMatrix; float _rotation; GLuint _vertexArray; GLuint _vertexBuffer; NSArray* arrayOfVertex; } @property (strong, nonatomic) EAGLContext *context; @property (strong, nonatomic) GLKBaseEffect *effect; - (void)setupGL; - (void)tearDownGL; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; GLKView *view = (GLKView *)self.view; view.context = self.context; view.drawableDepthFormat = GLKViewDrawableDepthFormat24; [self setupGL]; } - (void)dealloc { [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; if ([self isViewLoaded] && ([[self view] window] == nil)) { self.view = nil; [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } self.context = nil; } // Dispose of any resources that can be recreated. } GLuint vertexBufferID; GLuint indexBufferID; static const GLfloat vertices[9] = { -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5 }; static const GLubyte indices[3] = { 0, 1, 2 }; - (void)setupGL { [EAGLContext setCurrentContext:self.context]; // [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); // glGenVertexArraysOES(1, &_vertexArray); // glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &vertexBufferID); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glGenBuffers(1, &indexBufferID); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified. 3, // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4. 
GL_FLOAT, // GL_FALSE, // 0, // BUFFER_OFFSET(0)); // // glBindVertexArrayOES(0); } - (void)tearDownGL { [EAGLContext setCurrentContext:self.context]; glDeleteBuffers(1, &_vertexBuffer); glDeleteVertexArraysOES(1, &_vertexArray); self.effect = nil; } #pragma mark - GLKView and GLKViewController delegate methods - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } int i; - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect { glClearColor(0.65f, 0.65f, 0.65f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // glBindVertexArrayOES(_vertexArray); // Render the object with GLKit [self.effect prepareToDraw]; //glDrawArrays(GL_TRIANGLES, 0, 3); // Render the object again with ES2 // glDrawArrays(GL_TRIANGLES, 0, 3); glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); } @end

    Read the article

  • How to utilize miniMax algorithm in Checkers game

    - by engineer
    I am sorry, as there are too many articles about this already, but I simply can't get it. I am confused about the implementation of the AI. I have generated all possible moves for the computer's pieces, but now I can't decide on the flow: do I need to loop over the possible moves of each piece and assign a score to each one, or is something else to be done? Kindly tell me the proper flow/algorithm for this. Thanks
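
    A minimal sketch of the usual flow (Board, Move and the helper methods are placeholders for whatever the game code already has): for each legal move, apply it, recurse for the opponent, undo, and keep the best score; at the top level, loop over the computer's moves once and pick the move whose subtree scored best:

        using System;

        static int Minimax(Board board, int depth, bool computersTurn)
        {
            if (depth == 0 || board.IsGameOver())
                return board.Evaluate();   // static score from the computer's point of view

            int best = computersTurn ? int.MinValue : int.MaxValue;
            foreach (Move move in board.GenerateMoves(computersTurn))
            {
                board.Apply(move);
                int score = Minimax(board, depth - 1, !computersTurn);
                board.Undo(move);
                best = computersTurn ? Math.Max(best, score) : Math.Min(best, score);
            }
            return best;
        }

        static Move ChooseMove(Board board, int depth)
        {
            Move bestMove = null;
            int bestScore = int.MinValue;
            foreach (Move move in board.GenerateMoves(true))
            {
                board.Apply(move);
                int score = Minimax(board, depth - 1, false);
                board.Undo(move);
                if (score > bestScore) { bestScore = score; bestMove = move; }
            }
            return bestMove;
        }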

    Read the article

  • Corona SDK: Quality of support and resources?

    - by Nick Wiggill
    I've not used the Corona SDK before and am looking into it for a friend. (He is also considering Unity.) I wonder what the support is like for Corona. Unity has a great many customers, so the Unity team often cannot address issues directly, but the community at large is very helpful and there are many excellent resources: tutorials, forum posts, code resources. What is Corona like in this regard, and by comparison?

    Read the article

  • Game window systems and internal frames

    - by 2080
    I don't know if this is a valid question, but: what kind of window manager do games with internal frames (frames inside frames) use? Does this differ between programming languages (e.g. in Java, are the AWT/Swing libraries used to manage these and other graphical elements, such as buttons, or is that too restrictive in terms of speed and graphical possibilities)? A specific example would be EVE Online, where the client can use the in-game windows like on a normal desktop.

    Read the article

  • Homemaking a 2d soft body physics engine

    - by Griffin
    Hey, so I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how the physics work and could be simulated: by giving points, and the connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful examples/sites that could give me the specifics needed to actually make this, such as equations and the required physics knowledge. It would be great if anyone out there would also share their attempts or ideas. Finally, I was wondering if it would be possible to: use the source code of an existing 3D engine such as Bullet and transform it to be 2D based? Or use the source code of a 2D rigid body physics engine such as Box2D as a starting point?
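
    As a starting point for the equations, the core of most mass-spring soft bodies is just Hooke's law plus damping along each connection, followed by a small integration step per point. A language-agnostic sketch (written in C# here; the same few lines port directly to C++), with shape retention, friction and stickiness left out:

        using System;

        struct PointMass { public float X, Y, VX, VY, Mass; }

        struct Spring
        {
            public int A, B;          // indices into the points array
            public float RestLength;
            public float Stiffness;   // "elasticity"
            public float Damping;     // resists relative motion along the spring
        }

        static void Step(PointMass[] pts, Spring[] springs, float dt)
        {
            foreach (var s in springs)
            {
                float dx = pts[s.B].X - pts[s.A].X, dy = pts[s.B].Y - pts[s.A].Y;
                float len = (float)Math.Sqrt(dx * dx + dy * dy);
                if (len < 1e-6f) continue;
                float nx = dx / len, ny = dy / len;

                // Hooke's law F = k * (len - rest), plus damping on the relative velocity.
                float relVel = (pts[s.B].VX - pts[s.A].VX) * nx + (pts[s.B].VY - pts[s.A].VY) * ny;
                float f = s.Stiffness * (len - s.RestLength) + s.Damping * relVel;

                pts[s.A].VX += f * nx / pts[s.A].Mass * dt; pts[s.A].VY += f * ny / pts[s.A].Mass * dt;
                pts[s.B].VX -= f * nx / pts[s.B].Mass * dt; pts[s.B].VY -= f * ny / pts[s.B].Mass * dt;
            }

            // Semi-implicit Euler: velocities first, then positions.
            for (int i = 0; i < pts.Length; i++) { pts[i].X += pts[i].VX * dt; pts[i].Y += pts[i].VY * dt; }
        }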

    Read the article

  • Should I continue reading Frank Luna's Introduction to 3D Game Programming with DirectX 11 book after D3DX and XNA Math Library have been deprecated? [on hold]

    - by milindsrivastava1997
    I recently started learning DirectX 11 (C++) by reading Frank Luna's Introduction to 3D Game Programming with DirectX 11. In it, the author uses D3DX and the XNA Math library. Since they have been deprecated, should I continue using that book? If yes, should I use the deprecated libraries or switch to other libraries? If no, which book should I consult for up-to-date content that doesn't use deprecated libraries? Thanks!

    Read the article

  • Smooth Camera Rotation around 90 degrees

    - by Nicholas
    I'm developing a third person 3D platformer in XNA. My problem is when I try to rotate the camera around the player: I would like to rotate (and animate) the camera 90 degrees around the player, so the camera should rotate until it has reached 90 degrees from the starting position. I cannot figure out how to keep track of the rotation, or how to tell when the rotation has covered the full 90 degrees. Currently my camera's update is:

        public void Update(Vector3 playerPosition)
        {
            if (rotateCamera)
            {
                position = Vector3.Transform(position - playerPosition,
                    Matrix.CreateRotationY(0.1f)) + playerPosition;
            }

            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }

    The initial position of the camera is set in the constructor. The "rotateCamera" bool is set on keypress. Thanks for the help in advance. Cheers.
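
    A minimal sketch of one way to do the bookkeeping, assuming the same position/viewMatrix/rotateCamera fields: accumulate how far the camera has turned since the keypress and clamp the final step so the total lands exactly on 90 degrees (a GameTime parameter is added here so the speed is frame-rate independent):

        float rotatedSoFar;                       // radians turned since the keypress
        float targetAngle = MathHelper.PiOver2;   // 90 degrees
        float rotationSpeed = 2.0f;               // radians per second

        public void Update(Vector3 playerPosition, GameTime gameTime)
        {
            if (rotateCamera)
            {
                float step = rotationSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
                if (rotatedSoFar + step >= targetAngle)
                {
                    step = targetAngle - rotatedSoFar;   // don't overshoot 90 degrees
                    rotateCamera = false;
                    rotatedSoFar = 0f;
                }
                else
                {
                    rotatedSoFar += step;
                }

                position = Vector3.Transform(position - playerPosition,
                    Matrix.CreateRotationY(step)) + playerPosition;
            }

            this.viewMatrix = Matrix.CreateLookAt(position, playerPosition, Vector3.Up);
        }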

    Read the article

  • Push back rectangle where collision happens

    - by Tifa
    I have tile collision in a game I am creating, but the problem is that once a collision happens, for example on the right side, my sprite can't move up or down :( That's because I set the speed to 0, which I think is wrong. Here is my code:

        int startX, startY, endX, endY;
        float pushx = 0, pushy = 0;

        // move player
        if(Gdx.input.isKeyPressed(Input.Keys.LEFT)){
            dx = -1;
            currentWalk = leftWalk;
        }
        if(Gdx.input.isKeyPressed(Input.Keys.RIGHT)){
            dx = 1;
            currentWalk = rightWalk;
        }
        if(Gdx.input.isKeyPressed(Input.Keys.DOWN)){
            dy = -1;
            currentWalk = downWalk;
        }
        if(Gdx.input.isKeyPressed(Input.Keys.UP)){
            dy = 1;
            currentWalk = upWalk;
        }

        sr.setProjectionMatrix(camera.combined);
        sr.begin(ShapeRenderer.ShapeType.Line);

        Rectangle koalaRect = rectPool.obtain();
        koalaRect.set(player.getX(), player.getY(), pw, ph / 2);

        float oldX = player.getX(), oldY = player.getY(); // THIS LINE WAS ADDED
        player.setXY(player.getX() + dx * Gdx.graphics.getDeltaTime() * 4f,
                     player.getY() + dy * Gdx.graphics.getDeltaTime() * 4f); // THIS LINE WAS MOVED HERE FROM DOWN BELOW

        if(dx > 0) {
            startX = endX = (int)(player.getX() + pw);
        } else {
            startX = endX = (int)(player.getX());
        }
        startY = (int)(player.getY());
        endY = (int)(player.getY() + ph);
        getTiles(startX, startY, endX, endY, tiles);
        for(Rectangle tile : tiles) {
            sr.rect(tile.x, tile.y, tile.getWidth(), tile.getHeight());
            if(koalaRect.overlaps(tile)) {
                //dx = 0;
                player.setX(oldX); // THIS LINE CHANGED
                Gdx.app.log("x", "hit " + player.getX() + " " + oldX);
                break;
            }
        }

        if(dy > 0) {
            startY = endY = (int)(player.getY() + ph);
        } else {
            startY = endY = (int)(player.getY());
        }
        startX = (int)(player.getX());
        endX = (int)(player.getX() + pw);
        getTiles(startX, startY, endX, endY, tiles);
        for(Rectangle tile : tiles) {
            if(koalaRect.overlaps(tile)) {
                //dy = 0;
                player.setY(oldY); // THIS LINE CHANGED
                //Gdx.app.log("y","hit" + player.getY() + " " + oldY);
                break;
            }
        }

        sr.rect(koalaRect.x, koalaRect.y, koalaRect.getWidth(), koalaRect.getHeight() / 2);
        sr.setColor(Color.GREEN);
        sr.end();

    I want to push the sprite back when a collision happens, but I have no idea how :D Please help.
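
    The usual push-back is a minimum-translation resolve: measure how deeply the player rectangle overlaps the tile on each axis and move it out along the shallower axis, which leaves movement on the other axis untouched. A language-agnostic sketch (shown in C#; it ports line for line to the libGDX code above, where the rectangle fields already exist):

        using System;

        struct Rect { public float X, Y, W, H; }

        static void PushOut(ref Rect player, Rect tile)
        {
            float overlapX = Math.Min(player.X + player.W, tile.X + tile.W) - Math.Max(player.X, tile.X);
            float overlapY = Math.Min(player.Y + player.H, tile.Y + tile.H) - Math.Max(player.Y, tile.Y);
            if (overlapX <= 0 || overlapY <= 0) return;   // no collision

            if (overlapX < overlapY)
            {
                // Push left or right, whichever side the player's centre is on.
                player.X += (player.X + player.W / 2 < tile.X + tile.W / 2) ? -overlapX : overlapX;
            }
            else
            {
                // Push down or up.
                player.Y += (player.Y + player.H / 2 < tile.Y + tile.H / 2) ? -overlapY : overlapY;
            }
        }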

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked for the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problem is that only visible objects can cast shadows and objects between the camera and the shadow caster can interfere. Still I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is casted at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o origin, d direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space to get the 3D ray in screen space. I did implement a simple version of the idea which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader: #version 330 core uniform sampler2D DepthMap; uniform vec2 projAB; uniform mat4 projectionMatrix; const vec3 light_p = vec3(-30.0, 30.0, -10.0); noperspective in vec2 pass_TexCoord; smooth in vec3 viewRay; layout(location = 0) out float out_AO; vec3 CalcPosition(void){ float depth = texture(DepthMap, pass_TexCoord).r; float linearDepth = projAB.y / (depth - projAB.x); vec3 ray = normalize(viewRay); ray = ray / ray.z; return linearDepth * ray; } void main(void){ vec3 origin = CalcPosition(); if(origin.z < -60) discard; vec2 pixOrigin = pass_TexCoord; //tex coords vec3 dir = normalize(light_p - origin); vec2 texel_size = vec2(1.0 / 600.0); float t = 0.1; ivec2 pixIndex = ivec2(pixOrigin / texel_size); out_AO = 1.0; while(true){ vec3 ray = origin + t * dir; vec4 temp = projectionMatrix * vec4(ray, 1.0); vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5; ivec2 newIndex = ivec2(texCoord / texel_size); if(newIndex != pixIndex){ float depth = texture(DepthMap, texCoord).r; float linearDepth = projAB.y / (depth - projAB.x); if(linearDepth > ray.z + 0.1){ out_AO = 0.2; break; } pixIndex = newIndex; } t += 0.5; if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break; } } As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.

    Read the article

  • Applying effects to an existing program that uses BasicEffect

    - by Fibericon
    Using the finished product from the tutorial here: is it possible to apply the grayscale effect from "Making entire scene fade to grayscale", or would you basically have to rewrite everything? EDIT: It's doing something now, but the whole grayscale image seems extremely blue; it's like I'm looking at it through dark blue sunglasses. Here's my draw function:

        protected override void Draw(GameTime gameTime)
        {
            device.SetRenderTarget(renderTarget);
            graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

            // Drawing models, bullets, etc.

            device.SetRenderTarget(null);

            spriteBatch.Begin(0, BlendState.Additive, SamplerState.PointWrap,
                DepthStencilState.Default, RasterizerState.CullNone, grayScale);

            Texture2D temp = (Texture2D)renderTarget;
            grayScale.Parameters["coloredTexture"].SetValue(temp);
            grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"];
            foreach (EffectPass pass in grayScale.CurrentTechnique.Passes)
            {
                pass.Apply();
            }

            spriteBatch.Draw(temp,
                new Vector2(GraphicsDevice.PresentationParameters.BackBufferWidth / 2,
                            GraphicsDevice.PresentationParameters.BackBufferHeight / 2),
                null, Color.White, 0f,
                new Vector2(renderTarget.Width / 2, renderTarget.Height / 2),
                1.0f, SpriteEffects.None, 0f);

            spriteBatch.End();

            base.Draw(gameTime);
        }

    Another edit: I figured out what I was doing wrong. I had BlendState.Additive in the spriteBatch call; it should be BlendState.Opaque, or it literally tries to add the blank blue image to the grayscale image.
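
    A minimal sketch of the post-process step with the fix from the second edit folded in (Opaque instead of Additive), assuming the same renderTarget/grayScale/spriteBatch fields. In XNA 4, passing the effect to SpriteBatch.Begin applies it for a single-pass technique, so the manual pass.Apply() loop isn't needed, and the texture is drawn at the origin here for brevity:

        protected override void Draw(GameTime gameTime)
        {
            device.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // ... draw models, bullets, etc. with BasicEffect as before ...

            device.SetRenderTarget(null);

            grayScale.CurrentTechnique = grayScale.Techniques["Grayscale"];
            grayScale.Parameters["coloredTexture"].SetValue((Texture2D)renderTarget);

            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                SamplerState.PointClamp, DepthStencilState.Default,
                RasterizerState.CullNone, grayScale);
            spriteBatch.Draw(renderTarget, Vector2.Zero, Color.White);   // full-screen quad
            spriteBatch.End();

            base.Draw(gameTime);
        }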

    Read the article

  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I am working on optimizing the performance of the physics and collision, and as of now I am having some slowdowns on my other computers from my main. I have well over 3000 btSphereShape Rigidbodies and 2/3 of them do not move at all, but I am noticing(by the profile below) that collision is taking a bit of time to maneuver. Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 10.09 0.65 0.65 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 7.61 1.14 0.49 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.59 1.50 0.36 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 5.43 1.85 0.35 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 4.97 2.17 0.32 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 4.19 2.44 0.27 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 4.04 2.70 0.26 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 3.73 2.94 0.24 Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool) 3.42 3.16 0.22 btTriangleShape::getVertex(int, btVector3&) const 2.48 3.32 0.16 Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const 2.33 3.47 0.15 1246357 0.00 0.00 Gorilla::Layer::setVisible(bool) 2.33 3.62 0.15 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.86 3.74 0.12 btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*) 1.86 3.86 0.12 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.71 3.97 0.11 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.55 4.07 0.10 _Unwind_SjLj_Register 1.55 4.17 0.10 _Unwind_SjLj_Unregister 1.55 4.27 0.10 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.40 4.36 0.09 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.40 4.45 0.09 btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float) 1.24 4.53 0.08 btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&) 1.09 4.60 0.07 408760 0.00 0.00 Living::MapHide() 1.09 4.67 0.07 btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm() 1.09 4.74 0.07 inflate_fast EDIT: Updated to show current Profile. I have only listed the functions using over 1% time from the many functions that are being used. Another thing is that each monster has a certain area that they stay in and are only active when a player is in said area. 
I was wondering if maybe there is a way to deactivate the non-active monsters from bullet(reactivating once in the area again) or maybe theres a different broadphase interface that I should use. The current BPI is btDbvtBroadphase. EDIT: Here is the Profile on the other computer(the top one is my main) Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ms/call ms/call name 12.18 1.19 1.19 SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float) 6.76 1.85 0.66 btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*) 5.83 2.42 0.57 btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const 5.12 2.92 0.50 btConvexTriangleCallback::processTriangle(btVector3*, int, int) 4.61 3.37 0.45 btTriangleShape::getVertex(int, btVector3&) const 4.09 3.77 0.40 _Unwind_SjLj_Register 3.48 4.11 0.34 btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int) 2.46 4.35 0.24 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&) 2.15 4.56 0.21 _Unwind_SjLj_Unregister 2.15 4.77 0.21 SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool) 1.84 4.95 0.18 btTriangleShape::getEdge(int, btVector3&, btVector3&) const 1.64 5.11 0.16 btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&) 1.54 5.26 0.15 btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&) 1.43 5.40 0.14 Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*) 1.33 5.53 0.13 btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float) 1.13 5.64 0.11 btRigidBody::predictIntegratedTransform(float, btTransform&) 1.13 5.75 0.11 btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const 1.02 5.85 0.10 btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*) 1.02 5.95 0.10 btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool) Edited same as other Profile.

    Read the article

  • Why use 3d matrix and camera in 2D world for 2d geometric figures?

    - by Navy Seal
    I'm working in XNA on a 2D isometric world/game and I'm using DrawUserPrimitives to draw some geometric figures... I saw some tutorials about creating dynamic shadows, but I didn't understand why they use a "3D" matrix to control the transformations when the figure I'm drawing is in a 2D perspective. I know I'm drawing a 2D figure in 3D, but I still can't understand whether I really need to work with the matrix. Is there any advantage in using a 3D matrix to control the camera and view? Any reason why I can't just update my vertices' positions with a regular method, since the view is always the same? And since I want to work only with single figures, won't this cause all the geometric figures to have the same transformations simultaneously? To understand better what I mean, here's a video: http://www.youtube.com/watch?v=LjvsGHXaGEA&feature=player_embedded
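
    A minimal sketch of why the matrix is convenient, assuming an XNA BasicEffect and a hypothetical list of 2D figures each holding its own vertex array: the camera lives in one shared view/projection pair (change it once to pan or zoom everything), while each figure keeps its own world matrix, so transforms stay per-figure rather than applying to all of them simultaneously:

        Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 10), Vector3.Zero, Vector3.Up);
        Matrix projection = Matrix.CreateOrthographic(
            GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0.1f, 100f);

        foreach (var figure in figures)   // hypothetical list of 2D shapes
        {
            basicEffect.World = Matrix.CreateRotationZ(figure.Angle)
                              * Matrix.CreateTranslation(figure.X, figure.Y, 0f);
            basicEffect.View = view;        // moving the camera means changing this one matrix
            basicEffect.Projection = projection;

            foreach (var pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList,
                    figure.Vertices, 0, figure.Vertices.Length / 3);
            }
        }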

    Read the article

  • Fog shader camera problem

    - by MaT
    I'm having some difficulties with my vertex-fragment fog shader in Unity. I get a good visual result, but the problem is that the gradient is based on the camera's position: it moves as the camera moves. I don't know how to fix it. Here is the shader code:

        struct v2f
        {
            float4 pos : SV_POSITION;
            float4 grabUV : TEXCOORD0;
            float2 uv_depth : TEXCOORD1;
            float4 interpolatedRay : TEXCOORD2;
            float4 screenPos : TEXCOORD3;
        };

        v2f vert(appdata_base v)
        {
            v2f o;
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            o.uv_depth = v.texcoord.xy;
            o.grabUV = ComputeGrabScreenPos(o.pos);
            half index = v.vertex.z;
            o.screenPos = ComputeScreenPos(o.pos);
            o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex);
            return o;
        }

        sampler2D _GrabTexture;

        float4 frag(v2f IN) : COLOR
        {
            float3 uv = UNITY_PROJ_COORD(IN.grabUV);
            float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
            dpth = LinearEyeDepth(dpth);

            float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay); // Here is the problem but how to fix it

            float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f));
            fogVert *= fogVert;
            fogVert = (exp (-fogVert));
            return fogVert;
        }

    Thanks a lot!

    Read the article

  • Game programming course materials: What should it include?

    - by Esa
    I have been tasked with creating the course materials for a game programming class, and I'd like your opinion on which aspects and areas of game programming, such as game state management, game object storage or simple AI, I should include in it. The course is intended to be the first step into game programming for students with novice programming skills. There will be mathematics as well, but I found that there are already multiple questions, with good answers, on that subject.

    Read the article

  • 3D texture coordinates for a cube

    - by Roshan
    I want to use glTexImage3D with a cube. What should the texture coordinates be for it? I am using GL_TEXTURE_3D as the target. I tried u, v coordinates the same as 2D texture coordinates, with the z component ranging from 0 to depth for each face, but that comes out wrong. How do I apply each layer to each face of the cube with target = GL_TEXTURE_3D? Let's assume I have 8 layers of 2D images in my 3D texture. I want all 8 layers applied to each face of the cube, and not 1 layer per face.

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

    Read the article

  • General visual effects to meshes/entities

    - by Pacha
    I am trying to add some visual effects to some entities, meshes, or whatever you want to call them, as they are looking pretty dull in my game right now. What I want to achieve is this: http://youtu.be/zox8935PLw0?t=36s (the "texture" gets disintegrated and then goes back to normal, covering the whole mesh). I would also like to know the best way to add effects like the one in the video to my game (for example, thunder effects, shattering, etc.). I know that I can do some things with shaders, but I haven't learned them very well and I am still at a beginner level. I am using Ogre3D, and GLSL for shaders. Thanks! (Note: there is a screenshot of my game showing the main character I want to apply the effect from the video to.)

    Read the article

  • How can I create a 3D model in Java without using modeling software?

    - by Galen Nare
    I am a lightly experienced game developer, and this is my first time trying 3D objects in Java. I have recently been creating and updating games using AWT, Swing, and Graphics, but I want to delve further into Java. I have looked into Java3D, but it's not what I want. I want to use images, crop them, and place the respective textures in their respective places. I already know how to do the cropping and 2D image editing, but how do I go 3D?

    Read the article
