Search Results

Search found 16410 results on 657 pages for 'game component'.


  • Using AppendBuffers in Unity for terrain generation

    - by Wardy
    Like many others I figured I would try to make the most of the monster processing power of the GPU, but I'm having trouble getting the basics in place. CPU code:

        using UnityEngine;
        using System.Collections;

        public class Test : MonoBehaviour
        {
            public ComputeShader Generator;
            public MeshTopology Topology;

            void OnEnable()
            {
                var computedMeshPoints = ComputeMesh();
                CreateMeshFrom(computedMeshPoints);
            }

            private Vector3[] ComputeMesh()
            {
                var size = (32*32) * 4; // 4 points added for each x,z pos
                var buffer = new ComputeBuffer(size, 12, ComputeBufferType.Append);
                Generator.SetBuffer(0, "vertexBuffer", buffer);
                Generator.Dispatch(0, 1, 1, 1);
                var results = new Vector3[size];
                buffer.GetData(results);
                buffer.Dispose();
                return results;
            }

            private void CreateMeshFrom(Vector3[] generatedPoints)
            {
                var filter = GetComponent<MeshFilter>();
                var renderer = GetComponent<MeshRenderer>();
                if (generatedPoints.Length > 0)
                {
                    var mesh = new Mesh { vertices = generatedPoints };
                    var colors = new Color[generatedPoints.Length];
                    var indices = new int[generatedPoints.Length];
                    //TODO: build this differently based on the topology of the mesh being generated
                    for (int i = 0; i < indices.Length; i++)
                    {
                        indices[i] = i;
                        colors[i] = Color.blue;
                    }
                    mesh.SetIndices(indices, Topology, 0);
                    mesh.colors = colors;
                    mesh.RecalculateNormals();
                    mesh.Optimize();
                    mesh.RecalculateBounds();
                    filter.sharedMesh = mesh;
                }
                else
                {
                    filter.sharedMesh = null;
                }
            }
        }

    GPU code:

        #pragma kernel Generate

        AppendStructuredBuffer<float3> vertexBuffer : register(u0);

        void genVertsAt(uint2 xzPos)
        {
            //TODO: put some height generation code here.
            // could even run marching cubes / dual contouring code.
            float3 corner1 = float3( xzPos[0],     0, xzPos[1]     );
            float3 corner2 = float3( xzPos[0] + 1, 0, xzPos[1]     );
            float3 corner3 = float3( xzPos[0],     0, xzPos[1] + 1 );
            float3 corner4 = float3( xzPos[0] + 1, 0, xzPos[1] + 1 );
            vertexBuffer.Append(corner1);
            vertexBuffer.Append(corner2);
            vertexBuffer.Append(corner3);
            vertexBuffer.Append(corner4);
        }

        [numthreads(32, 1, 32)]
        void Generate (uint3 threadId : SV_GroupThreadID, uint3 groupId : SV_GroupID)
        {
            uint2 currentXZ = unint2( groupId.x * 32 + threadId.x, groupId.z * 32 + threadId.z );
            genVertsAt(currentXZ);
        }

    Can anyone explain why, when I call "buffer.GetData(results);" on the CPU after the compute dispatch call, my buffer is full of Vector3(0,0,0)? I'm not expecting any y values yet, but I would expect a bunch of thread indexes in the x,z values of the Vector3 array. I'm not getting any errors in any of this code, which suggests it's correct syntax-wise, but maybe the issue is a logical bug. Also: yes, I know I'm generating 4,096 Vector3s (32*32*4) and then basically round-tripping them. However, the purpose of this code is purely to learn how round-tripping works between the CPU and GPU in Unity.

    Read the article

  • Dynamic libraries are not allowed on iOS but what about this?

    - by tapirath
    I'm currently using LuaJIT and its FFI interface to call C functions from Lua scripts. What FFI does is look at a dynamic library's exported symbols and let the developer use them directly from Lua, kind of like Python's ctypes. Obviously, using dynamic libraries is not permitted on iOS for security reasons, so in order to come up with a solution I found the following snippet:

        /*
          (c) 2012 +++ Filip Stoklas, aka FipS, http://www.4FipS.com +++
          THIS CODE IS FREE - LICENSED UNDER THE MIT LICENSE
          ARTICLE URL: http://forums.4fips.com/viewtopic.php?f=3&t=589
        */

        extern "C" {
        #include <lua.h>
        #include <lualib.h>
        #include <lauxlib.h>
        } // extern "C"

        #include <cassert>
        #include <cstdio>

        // Please note that despite the fact that we build this code as a regular
        // executable (exe), we still use __declspec(dllexport) to export
        // symbols. Without doing that FFI wouldn't be able to locate them!
        extern "C" __declspec(dllexport) void __cdecl hello_from_lua(const char *msg)
        {
            printf("A message from LUA: %s\n", msg);
        }

        const char *lua_code =
            "local ffi = require('ffi')                 \n"
            "ffi.cdef[[                                 \n"
            "const char * hello_from_lua(const char *); \n" // matches the C prototype
            "]]                                         \n"
            "ffi.C.hello_from_lua('Hello from LUA!')    \n" // do actual C call
        ;

        int main()
        {
            lua_State *lua = luaL_newstate();
            assert(lua);
            luaL_openlibs(lua);
            const int status = luaL_dostring(lua, lua_code);
            if (status)
                printf("Couldn't execute LUA code: %s\n", lua_tostring(lua, -1));
            lua_close(lua);
            return 0;
        }

        // output:
        // A message from LUA: Hello from LUA!

    Basically, instead of using a dynamic library, the symbols are exported directly inside the executable file. The question is: is this permitted by Apple? Thanks.

    Read the article

  • Kinect hand tracking in XNA

    - by N0xus
    I'm trying to create an application using the Kinect to simulate the following project: Kinect Hand Tracking. I want my project to have similar usability, with the Kinect tracking hand and finger positions for use in a menu system, or to navigate another system. What I would like to know is: is it possible to accomplish the exact same thing in XNA using the Kinect? I know it can be done in WinForms / C#, but I know XNA / C# a lot better and would (ideally) prefer to use that.

    Read the article

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content inside the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular. This means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine the simple ray-marching pseudocode:

        t = 0.f;
        while (t < maxDist) {
            p = rayStart + rayDir * t;
            d = DistanceFunc(p);
            t += d;
            if (d < epsilon) {
                ... // emit p
                return;
            }
        }

    How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?

    Read the article

  • Computer Games Technology or Software Engineering?

    - by Suleman Anwar
    I'm in the last year of my college and going to university next year. Could you tell me what the difference between Software Engineering and Computer Games Technology is? I know a bit about both but don't know the actual difference, so I'm kind of in a dilemma between these two. I want to be a programmer, and I'd love to go into gaming, but I've heard that getting a job at a computer games company is really hard.

    Read the article

  • Win API basic paint program

    - by Tom Burman
    Just trying to learn a bit of the Win API. I'm trying to make a basic drawing app, a bit like MS Paint. For the time being I'm trying to get one function to work: when you left-click and drag the mouse around the screen, a line is drawn behind the mouse. Here's what I have so far, but for some reason: 1) the line starts drawing straight away, rather than waiting for the left click, and 2) the line isn't solid, it's very dotty.

        case WM_MOUSEMOVE:
        {
            if (MK_LBUTTON) {
                hdc = GetDC(hwnd);
                hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, x, y, NULL);
                LineTo(hdc, LOWORD(lParam), HIWORD(lParam));
                ReleaseDC(hwnd, hdc);
            }
            else
                break;
        }

    Thanks for any help!
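    For reference, a minimal sketch of the usual fix, assuming static int lastX, lastY declared in the same window procedure (variable names are illustrative): MK_LBUTTON is a bit mask to test against wParam, not a value that is true on its own, and each segment has to start at the previous mouse position, otherwise you get disconnected dots.

        case WM_LBUTTONDOWN:
        {
            lastX = LOWORD(lParam);   // remember where the drag started
            lastY = HIWORD(lParam);
            return 0;
        }
        case WM_MOUSEMOVE:
        {
            if (wParam & MK_LBUTTON)  // only draw while the left button is held
            {
                HDC hdc = GetDC(hwnd);
                HPEN hPen = CreatePen(PS_SOLID, 5, RGB(0, 0, 255));
                HGDIOBJ oldPen = SelectObject(hdc, hPen);
                int x = LOWORD(lParam);
                int y = HIWORD(lParam);
                MoveToEx(hdc, lastX, lastY, NULL); // start at the previous point...
                LineTo(hdc, x, y);                 // ...so consecutive segments join up
                lastX = x;                         // current point becomes the next start
                lastY = y;
                SelectObject(hdc, oldPen);         // restore state before freeing the pen
                DeleteObject(hPen);
                ReleaseDC(hwnd, hdc);
            }
            return 0;
        }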

    Read the article

  • Circle-rectangle collision in 2D, most efficient way

    - by john smith
    Suppose I have a circle intersecting a rectangle. What is ideally the least CPU-intensive way to test between the two?

    Method A: calculate the rectangle boundaries, then loop through all points of the circle and, for each of those, check whether it is inside the rect.

    Method B: calculate the rectangle boundaries, check where the center of the circle is compared to the rectangle, and make 9 switch/case statements for the following positions: top, bottom, left, right, top left, top right, bottom left, bottom right, and inside the rectangle. Then check only one distance against the circle's radius, depending on where the circle happens to be.

    I know there are other ways that are definitely better than these two, and if you could point me to a link to them, that would be great. But, exactly between those two, which one would you consider to be better, regarding both performance and quality/precision? Thanks in advance.
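    For reference, the standard test avoids both the per-point loop and the nine-region switch: clamp the circle's centre to the rectangle and compare one squared distance. A minimal sketch (names illustrative):

        #include <algorithm>

        // Clamp the centre (cx, cy) to the rectangle to find the nearest point
        // on the rect, then compare the squared distance against r*r.
        bool circleIntersectsRect(float cx, float cy, float r,
                                  float left, float top, float right, float bottom)
        {
            float nearestX = std::max(left, std::min(cx, right));
            float nearestY = std::max(top,  std::min(cy, bottom));
            float dx = cx - nearestX;
            float dy = cy - nearestY;
            return dx * dx + dy * dy <= r * r; // also true when the centre is inside
        }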

    Read the article

  • Vertex Array Object (OpenGL)

    - by Shin
    I've just started out with OpenGL, and I still haven't really understood what Vertex Array Objects (VAOs) are and how they can be employed. If Vertex Buffer Objects are used to store vertex data (such as positions and texture coordinates), and VAOs only contain status flags, where can they be used? What's their purpose? As far as I understood from the (very incomplete and unclear) GL Wiki, VAOs are used to set the flags/status for every vertex, following the order described in the Element Array Buffer, but the wiki was really ambiguous about it, and I'm not really sure what VAOs really do and how I could employ them.
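    A sketch of the usual pattern may help, assuming a current OpenGL 3.x context and an interleaved position+texcoord layout (verts and vertexCount are illustrative): a VAO records which buffers feed which attribute slots, so one bind restores all of that state at draw time.

        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);                  // start recording attribute state
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                              5 * sizeof(float), (void*)0);                  // position
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                              5 * sizeof(float), (void*)(3 * sizeof(float))); // texcoord
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glBindVertexArray(0);

        // later, per frame: one bind brings back the whole configuration
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);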

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic", in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated: nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world, it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI, I looked up several papers/articles on steering behaviour but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case, because the agents follow strictly defined paths) than of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time, where both need to pass it (according to the optimal routes found before) and they need to find a way to let the other one pass first. So basically the main aspect of the problem is predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!

    Read the article

  • Sharing VBO with multiple objects and fixed size buffer data

    - by Mark Ingram
    I'm just messing around with OpenGL, getting some basic structures in place, and my first attempt resulted in each SceneObject class (which just contains vertex information right now) having its own VBO inside it. However, I've read that it might be better to share VBOs across multiple objects. I've also read that you should avoid resizing a VBO (repeated calls to glBufferData with different size parameters) and instead choose a fixed size for the VBO and just use a range of the buffer. I don't think changing the size of the buffer data would happen too often, but surely it would be better to allocate only the data you need? Choosing an arbitrary value seems risky. I'm looking for some advice on working with individual objects in a scene and their associated buffer data.
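    The pattern the question hints at, one big fixed-size VBO shared by several objects, each owning a byte range, looks roughly like this. A sketch only; sharedVbo, POOL_BYTES, Vertex and the obj fields are illustrative names:

        // Allocate one large, fixed-size buffer up front...
        glBindBuffer(GL_ARRAY_BUFFER, sharedVbo);
        glBufferData(GL_ARRAY_BUFFER, POOL_BYTES, nullptr, GL_DYNAMIC_DRAW);

        // ...then give each SceneObject a sub-range and update it in place,
        // which avoids re-calling glBufferData with a new size.
        glBufferSubData(GL_ARRAY_BUFFER, obj.byteOffset,
                        obj.vertexCount * sizeof(Vertex), obj.vertices.data());

        // Draw only that object's slice of the shared pool.
        glDrawArrays(GL_TRIANGLES, obj.byteOffset / sizeof(Vertex), obj.vertexCount);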

    Read the article

  • Comparing angles and working out the difference

    - by Thomas O
    I want to compare angles and get an idea of the distance between them. For this application I'm working in degrees, but it would also work for radians and grads. The problem with angles is that they depend on modular arithmetic, i.e. 0-360 degrees. Say one angle is at 15 degrees and one is at 45. The difference is 30 degrees, and the 45-degree angle is greater than the 15-degree one. But this breaks down when you have, say, 345 degrees and 30 degrees. Although they compare properly, the difference between them is computed as 315 degrees instead of the correct 45 degrees. How can I solve this? I could write algorithmic code:

        if (angle1 > angle2)
            delta_theta = 360 - angle2 - angle1;
        else
            delta_theta = angle2 - angle1;

    But I'd prefer a solution that avoids compares/branches and relies entirely on arithmetic.
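    A common arithmetic-only answer maps the raw difference into the range [-180, 180) with a single fmod; a sketch in C++, assuming inputs in [0, 360):

        #include <cmath>

        // Signed shortest difference from angle1 to angle2, in degrees.
        // Adding 540 keeps the fmod operand positive for any inputs in
        // [0, 360), so the result lands in [-180, 180) with no branches.
        float angleDelta(float angle1, float angle2)
        {
            return std::fmod(angle2 - angle1 + 540.0f, 360.0f) - 180.0f;
        }

    For 345 and 30 this gives +45; for 15 and 45 it gives +30, and the sign tells you which angle is "ahead" along the shortest arc.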

    Read the article

  • A* path finding in HTML5 Canvas

    - by gyhgowvi
    I'm trying to implement A* path finding in my games (which are written with JavaScript and HTML5 Canvas). For this I'm using the A* library I found here: http://46dogs.blogspot.com/2009/10/star-pathroute-finding-javascript-code.html With this library I'm writing a simple test, but I'm stuck on one problem. So far, when I click in the HTML5 canvas, the path is shown up to my mouse.x and mouse.y. Here is a screenshot: http://oi46.tinypic.com/14qxrl.jpg (pink square: player; orange squares: path up to mouse.x/mouse.y). The code for drawing the orange squares up to mouse.x/mouse.y is here: http://pastebin.com/bfq74ybc (sorry, I do not understand how to embed code in my post). My problem is that I do not understand how to move my player along the path to the goal. I've tried http://pastebin.com/nVW3mhUM, but with that code my player is not being drawn. (When I run the code, player.x and player.y are equal to 0, and when I click with the mouse to get the path, the player blinks and disappears.) Maybe anyone knows how to solve this problem? And I'm very, very sorry for my bad English language. :)
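    One common way to consume a path like this, sketched in C++ with hypothetical Player and Node types (the real code would operate on the JavaScript objects from the pastebin): each update, step a fixed distance toward the next node, and advance the index once the node is reached.

        #include <cmath>
        #include <vector>

        struct Node   { float x, y; };
        struct Player { float x, y; std::size_t pathIndex; };

        // Move the player `speed` units along the path per call.
        void followPath(Player &p, const std::vector<Node> &path, float speed)
        {
            if (p.pathIndex >= path.size()) return;   // goal reached
            const Node &target = path[p.pathIndex];
            float dx = target.x - p.x;
            float dy = target.y - p.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            if (dist <= speed) {                      // close enough: snap and advance
                p.x = target.x;
                p.y = target.y;
                p.pathIndex++;
            } else {                                  // otherwise step along the unit direction
                p.x += dx / dist * speed;
                p.y += dy / dist * speed;
            }
        }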

    Read the article

  • Everything turning black when pitching down

    - by Gordon
    Just a quick question about something that's occurring in my world: every time I pitch my camera downward, everything starts turning black, and if I pitch upward, everything sort of intensifies. I'm multiplying my normals by the normal matrix in the shader, and I'm multiplying my light's direction by the model-view matrix. If I leave the normals and light direction in world space, everything ends up fine. I thought putting them both in view space would not cause these weird things to happen?

    Read the article

  • Flood fill algorithm for Go

    - by user1048606
    The flood fill algorithm is used by the bucket tool in MS Paint and Photoshop, but it can also be used for Go and Minesweeper. http://en.wikipedia.org/wiki/Flood_fill In Go you can capture groups of stones; this website portrays it with two stones: http://www.connectedglobe.com/mindy/cap6.html This is my flood fill method in Java. It is not capturing a group of stones, and I have no idea why, because to me it makes sense.

        public void floodfill(int turn, int col, int row) {
            for (int a = col; a < 19; a++) {
                for (int b = row; b < 19; b++) {
                    if (turn == black) {
                        if (stones[col][row] == white) {
                            stones[col][row] = 0;
                            floodfill(black, col - 1, row);
                            floodfill(black, col + 1, row);
                            floodfill(black, col, row - 1);
                            floodfill(black, col, row + 1);
                        }
                    }
                }
            }
        }

    It searches up, down, left and right for all the stones on the board. If the stones are white, it captures them by making them 0, which represents empty.
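    For comparison, note that the loops above iterate a and b but the body only ever tests stones[col][row], and nothing stops the recursion at the board edge. A bounds-checked fill needs neither loop; a minimal sketch in C++ (capturing in Go also requires checking liberties first, which this sketch deliberately leaves out):

        const int SIZE = 19;

        // Recursively zero out the connected group of `target`-colored stones
        // starting at (col, row).
        void floodfill(int stones[SIZE][SIZE], int target, int col, int row)
        {
            if (col < 0 || col >= SIZE || row < 0 || row >= SIZE) return; // off the board
            if (stones[col][row] != target) return;  // not part of the group
            stones[col][row] = 0;                    // capture: mark empty
            floodfill(stones, target, col - 1, row);
            floodfill(stones, target, col + 1, row);
            floodfill(stones, target, col, row - 1);
            floodfill(stones, target, col, row + 1);
        }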

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads; it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5 it runs correctly and connects to the server, but after being published as a SWF movie it doesn't work: it loads the configuration file but can't connect, and gives a connection failure error: ERROR 2048. This is the example I'm talking about.

    Read the article

  • How do game engines stop pixel seams appearing at adjacent mesh boundaries due to FP imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise, so when some meshes are joined at their borders, the graphics card often makes mistakes and decides that some pixels at the seam represent neither object, and unwanted pixels appear. This is natural behaviour on all graphics cards. How are such worries avoided in pro games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? (Screencap of a fix that ended up not working.)

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVIDIA have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know to optimize my drawing.

    Specifically, which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes:

    - Render state changes
    - Buffer changes
    - Shader changes
    - Render target changes

    Correct? What else counts as a "state change"? Does each Draw**Primitive() call constitute a new batch, even if I issue the same call twice with no state changes, or call it once on one part of the buffer and then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch?

    That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in "parallel" by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works: surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of you do, but it saves 16 calls so it's OK?)

    Read the article

  • Saving a list of points into a text file

    - by dylanisawesome1
    I recently posted a question about this but was not really sure where to go. I've made some progress, and have generated some simple noise here: http://pastie.org/5408655 That works well enough for me, but I would really like to be able to save the points into an ASCII text file. Currently it's formatted so that something like this: http://pastie.org/5409311 would create a square. I need to save in this format, with the points (and the lines connecting them) generated in the method above. Essentially, I need to write the array of points created in the first example to a text file formatted like the second example.
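    The pastie links don't reproduce the target layout here, so the exact format below is an assumption (one "x y" pair per line, with consecutive lines implying the connecting segments); the general shape of the task in C++ would be:

        #include <fstream>
        #include <vector>

        struct Point { float x, y; };

        // Write one point per line in plain ASCII. Adjust the output format
        // to match the square example's actual layout.
        void savePoints(const std::vector<Point> &points, const char *path)
        {
            std::ofstream out(path);
            for (const Point &p : points)
                out << p.x << ' ' << p.y << '\n';
        }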

    Read the article

  • List of Open Source Java Games for Android

    - by BluFire
    I'm wondering if there are any more open source games than the ones you can plainly see when you search Google for a list of open source games for Android. That is, is there a good website that has compiled open source games? I don't want an answer of "go google it" or "en.wikipedia.org/wiki/List_of_open_source_Android_applications"; it gets really annoying on posts when people just give lazy answers.

    Read the article

  • Masking OpenGL texture by a pattern

    - by user1304844
    Tiled terrain. The user wants to build a structure. He presses build, and for each tile an "allow" or "disallow" tile sprite is added to the scene. FPS drops right away, since there are 600+ tiles added to the screen. Since the map equals the screen, there is no scrolling. I came to the idea of making an allow grid covering the whole map and masking the disallow fields.

    Approach 1: Create allow and disallow grid textures. Draw a polygon on screen. Pass both textures to the fragment shader. Determine the position inside the polygon and use the color from allowTexture if the fragment belongs to an allow field, disallow otherwise.

    Problem: how do I know I'm on a field that isn't allowed, if I cannot pass the matrix representing the map (enum FieldStatus[][] (Allow / Disallow)) to the shader? Inside the shader I don't know which fragments should be masked.

    Approach 2: Create an allow texture. Create an empty texture buffer the same size as the allow texture. Memset the pixels of the empty texture to the desired color for each pixel that doesn't allow building. Draw a polygon on screen. Pass both textures to the fragment shader. Use texture2's color if its alpha is greater than 0, texture1's color otherwise.

    Problem: I'm not sure what the right way to manipulate pixels on a texture is. Do I just make a buffer of width*height*4 size and memcpy the color[] to the desired coordinates, or is there anything else to it? Would I have to call glTexImage2D after every change to the texture? Another problem with this approach is that it takes a lot more work to get a prettier effect, since I'm manipulating the color pixels instead of just masking two textures.

        varying vec2 TexCoordOut;
        uniform sampler2D Texture1;
        uniform sampler2D Texture2;

        void main(void)
        {
            vec4 allowColor    = texture2D(Texture1, TexCoordOut);
            vec4 disallowColor = texture2D(Texture2, TexCoordOut);
            if (disallowColor.a > 0.0) {
                gl_FragColor = disallowColor;
            } else {
                gl_FragColor = allowColor;
            }
        }

    I'm working with OpenGL on Windows. Any other suggestion is welcome.
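    On the pixel-manipulation question in approach 2, the usual pattern is: allocate the texture once with glTexImage2D, build the RGBA mask in CPU memory, and push changes with glTexSubImage2D rather than re-creating the texture each time. A sketch, assuming an already-created texture maskTex and a boolean map allowed (both hypothetical names):

        #include <vector>

        std::vector<unsigned char> mask(width * height * 4, 0); // starts fully transparent
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                if (!allowed[y][x]) {                 // mark disallowed fields
                    unsigned char *p = &mask[(y * width + x) * 4];
                    p[0] = 255; p[1] = 0; p[2] = 0;   // red...
                    p[3] = 255;                       // ...and opaque, so the shader picks it
                }
        glBindTexture(GL_TEXTURE_2D, maskTex);
        // Re-uses the storage allocated by the original glTexImage2D call,
        // so per-change updates don't reallocate the texture.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, mask.data());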

    Read the article

  • How to Align Gun with Bullets

    - by Shane
    I have a top-down 2D shooter, with an image of a player holding a gun that rotates to face the mouse. Please note that the gun isn't a separate image tethered to the player, but rather part of the player image. Right now, bullets are created at the player's x and y. This works when the player is facing the right way, but not when they rotate: the bullets move in the right direction, but don't come from the gun. How can I fix this? TL;DR: when the player rotates, bullets don't come from the gun.

        public void fire() {
            angle = sprite.getRotation();
            System.out.println(angle);
            x = sprite.getX();
            y = sprite.getY();
            Bullet b = new Bullet(x, y, angle);
            Utils.world.addBullet(b);
        }
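    The usual fix is to measure the muzzle's offset from the sprite's rotation origin in the unrotated image, rotate that offset by the current angle, and spawn the bullet there. A sketch of the math in C++ (muzzleOffsetX/Y are hypothetical values taken from the source image, and this assumes the rotation is in degrees around the sprite's origin):

        #include <cmath>

        const float DEG_TO_RAD = 3.14159265f / 180.0f;

        // Compute the world-space spawn point for the bullet, given the
        // sprite's origin (x, y) and its rotation in degrees.
        void muzzlePosition(float x, float y, float angleDegrees,
                            float muzzleOffsetX, float muzzleOffsetY,
                            float &outX, float &outY)
        {
            float r = angleDegrees * DEG_TO_RAD;
            // standard 2D rotation of the offset vector
            outX = x + muzzleOffsetX * std::cos(r) - muzzleOffsetY * std::sin(r);
            outY = y + muzzleOffsetX * std::sin(r) + muzzleOffsetY * std::cos(r);
        }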

    Read the article

  • OUYA and Unity setup problems

    - by Atkobeau
    I'm having trouble with the Unity / OUYA plugin. I'm using Unity 4 with the latest update on a Windows 7 machine. When I open the starter kit and try to compile the plugin, I get the following error:

        Picked up _JAVA_OPTIONS: -Xmx512M

    And if I try to Build and Run, I get this error:

        Error building Player: ArgumentException: Illegal characters in path.

    I'm stumped; I've gone through lots of forum posts here and on Stack Overflow and I can't seem to resolve it. My environment variables look like this:

        PATH - C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\tools; C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\platform-tools\
        JAVA_HOME - C:\Program Files (x86)\Java\jdk1.6.0_45\

    Everything in the OUYA panel is white. Any ideas?

    Read the article

  • Best practices of texture size

    - by psal
    I wanted to know how I should determine a good texture size. Currently I always create UV textures that are 1024x1024 px, but if I create, for example, a big house with a 1024 px texture, it will look pretty bad. So, should I create different texture sizes (512, 1024, ...) for different mesh sizes? Or is it better to always make a high-resolution texture and then reduce it in the software (i.e. increase the LODBias setting in UDK to reduce the size of the texture)? Thanks for your answer. PS: sorry for my English!

    Read the article

  • Error loading PCX image in FreeImage library

    - by khanhhh89
    I'm using FreeImage in C++ to load a texture from a PCX image. My FreeImage code is as follows:

        FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
        // pointer to the image data
        BYTE* bits(0);

        fif = FreeImage_GetFileType(m_fileName.c_str(), 0);
        if (FreeImage_FIFSupportsReading(fif))
            dib = FreeImage_Load(fif, m_fileName.c_str());

        // retrieve the image data
        bits = FreeImage_GetBits(dib);

        // get the image width and height
        width = FreeImage_GetWidth(dib);
        height = FreeImage_GetHeight(dib);

    My problem is that the width and height variables are both 512, while the bits array is empty, which makes the following OpenGL call fail:

        glTexImage2D(m_textureTarget, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, bits);

    While debugging, I noticed that the fif variable (which contains the detected format of the image) is JPEG, while the image is actually PCX. I wonder whether FreeImage recognizes the wrong format (JPEG instead of PCX), so that the bits array comes back empty. I hope to see your explanation of this problem. Thanks so much.
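    A common loading idiom guards against exactly this kind of misdetection: if the header-based load fails, retry using the file extension, and normalise the bitmap before handing it to OpenGL. A sketch (FreeImage_GetFIFFromFilename and FreeImage_ConvertTo32Bits are standard FreeImage calls; error handling is abbreviated):

        FREE_IMAGE_FORMAT fif = FreeImage_GetFileType(m_fileName.c_str(), 0);
        FIBITMAP *dib = 0;
        if (fif != FIF_UNKNOWN && FreeImage_FIFSupportsReading(fif))
            dib = FreeImage_Load(fif, m_fileName.c_str());

        // If the header sniff picked the wrong format (here: JPEG for a PCX
        // file), the load comes back null. Retry using the file extension.
        if (!dib) {
            fif = FreeImage_GetFIFFromFilename(m_fileName.c_str());
            if (fif != FIF_UNKNOWN && FreeImage_FIFSupportsReading(fif))
                dib = FreeImage_Load(fif, m_fileName.c_str());
        }

        // PCX files are often palettised; expand to 32 bits so the data matches
        // a GL_BGRA/GL_UNSIGNED_BYTE upload (FreeImage stores BGRA on
        // little-endian platforms, so GL_RGBA would swap red and blue).
        FIBITMAP *dib32 = dib ? FreeImage_ConvertTo32Bits(dib) : 0;
        BYTE *bits = dib32 ? FreeImage_GetBits(dib32) : 0;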

    Read the article

  • XNA frame rate spikes in full screen mode

    - by ProgrammerAtWork
    I'm loading a simple texture and rotating it in XNA, and this works. But when I run it in full-screen 1920x1080 mode, I see spikes while my texture is rotating. If I run it windowed at 1920x1080 resolution, I don't get the spikes. The size of the texture does not seem to matter: I tried 512 and 2048 texture sizes and the same thing happens. Spikes in full screen, no spikes windowed; resolution does not seem to matter, and Debug or Release does not seem to change anything either. Anyone got ideas of what could be the problem?

    Edit: I think this problem has something to do with the vertical retrace. Set this property:

        _graphicsDeviceManager.SynchronizeWithVerticalRetrace = false;

    You'll lose vsync, but it will not stutter.

    Read the article
