Search Results

Search found 25550 results on 1022 pages for 'umbraco development'.

Page 365 of 1022

  • Phone crashes when trying to use vibration on Android

    - by Diego Unanue
    I'm developing an app where the phone has to vibrate when you tap a button, but the phone just crashes, saying that I need permission to vibrate. I've already set this permission in build.settings (which generates the Android manifest). Here is the build.settings code: settings = { orientation = { default = "portrait", supported = { "portrait", } }, iphone = { plist= { CoronaUseIOS7LandscapeOnlyWorkaround = true, CoronaUseIOS7IPadPhotoPickerLandscapeOnlyWorkaround = true, CoronaUseIOS6LandscapeOnlyWorkaround = true, CoronaUseIOS6IPadPhotoPickerLandscapeOnlyWorkaround = true, UIApplicationExitsOnSuspend = false, UIPrerenderedIcon = true, UIStatusBarHidden = false, CFBundleIconFile = "Icon.png", CFBundleIconFiles = { "Icon.png", "Icon@2x.png", "Icon-60.png", "Icon-60@2x.png", "Icon-72.png", "Icon-72@2x.png", "Icon-76.png", "Icon-76@2x.png", "Icon-Small.png", "Icon-Small@2x.png", "Icon-Small-40.png", "Icon-Small-40@2x.png", "Icon-Small-50.png", "Icon-Small-50@2x.png", }, }, }, android = { permissions = { { name = ".permission.C2D_MESSAGE", protectionLevel = "signature" }, }, usesPermissions = { "android.permission.INTERNET", "android.permission.VIBRATE", }, }, } The file that uses the vibration contains: local onButtonEvent = function (event ) system.vibrate() end I have read every post on the Corona forums without success. Can I see the generated Android manifest to check whether the permissions are actually there? I've also read that this may be a Corona issue, but I'm not sure.
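
    For reference, two small sketches of the relevant pieces (build.settings and main.lua), assuming Corona's documented system.vibrate() API and widget library; the button itself is a placeholder, not the poster's code:

        -- build.settings (sketch): only the android block matters for vibration
        settings = {
            android = {
                usesPermissions = {
                    "android.permission.INTERNET",
                    "android.permission.VIBRATE",  -- required before system.vibrate() works on device
                },
            },
        }

        -- main.lua (sketch): a placeholder button wired to system.vibrate()
        local widget = require("widget")

        local function onButtonEvent(event)
            if event.phase == "ended" then
                system.vibrate()
            end
            return true
        end

        local vibrateButton = widget.newButton{ label = "Vibrate", onEvent = onButtonEvent }

    To answer the "can I see the manifest" part: the Android SDK's aapt tool can list what actually made it into the build, e.g. aapt dump permissions YourApp.apk, which will show whether android.permission.VIBRATE survived the Corona build step.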

    Read the article

  • What to do with input during movement?

    - by user1895420
    In a concept I'm working on, the player can move from one position in a grid to the next. Once movement starts it can't be changed, and it takes a predetermined amount of time to finish (about a quarter of a second). Even though their movement can't be altered, the player can still press keys (perhaps in anticipation of their next move). What do I do with this input? Possibilities I've thought of: ignore all input during movement; log all input and loop through it, one entry at a time, once movement finishes; or log only the first or last input and move when possible. I'm not really sure which is the most appropriate or feels most natural. Hence my question: what do I do with player input during movement?
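
    A minimal sketch of the "log the last input" option, engine-agnostic plain C#; the type and method names are made up for illustration:

        enum Direction { None, Up, Down, Left, Right }

        class GridMover
        {
            const float MoveDuration = 0.25f;
            private Direction buffered = Direction.None; // most recent key pressed during a move
            private float moveTimer = 0f;                // seconds left in the current move

            public void OnKeyPressed(Direction d)
            {
                if (moveTimer > 0f)
                    buffered = d;          // remember it instead of discarding it
                else
                    StartMove(d);
            }

            public void Update(float dt)
            {
                if (moveTimer <= 0f) return;
                moveTimer -= dt;
                if (moveTimer <= 0f && buffered != Direction.None)
                {
                    StartMove(buffered);   // play the anticipated move as soon as possible
                    buffered = Direction.None;
                }
            }

            private void StartMove(Direction d)
            {
                moveTimer = MoveDuration;  // begin tweening to the next cell here
            }
        }

    Buffering only the last press tends to feel responsive without the "queued up five moves by accident" problem that full input logging can cause.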

    Read the article

  • Economic modelling - Resources for valuing goods

    - by Rushyo
    tl;dr: What economics/computer science books would you suggest for learning about economic valuation of goods and simulations thereof? I'm looking to create an economic model for a game based on procedurally generated goods. Every natural resource and produced good would be procedurally generated, with certain goods being assigned certain uses. Fakesium might be used for the production of Weapon A and produced in Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are in turn the product of Foo and Bar. The problem is not creating the resources and their various production utilities - but getting the game's AI empires and merchants to correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it's worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much Fakesium, which is needed for Weapon A, I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too." This is very naive, because it may be much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another wrinkle is that two different resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that too. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised, and the process would then only trigger rarely, to compensate for major changes (e.g. if the player blows up the world's only Foo mine!). Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for game developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?
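
    As a rough illustration of the iteration described above (not from any particular book - just a common supply/demand price-adjustment sketch with made-up names), each faction could nudge a good's credit value toward the ratio of its demand to its supply every tick:

        using System;
        using System.Collections.Generic;

        class Good
        {
            public string Name;
            public double Price;    // in credits
            public double Supply;   // units the faction holds or can produce this tick
            public double Demand;   // units its production and consumption plans call for
        }

        static class Market
        {
            // More demand than supply pushes the price up; a surplus pushes it down.
            public static void UpdatePrices(IEnumerable<Good> goods, double rate = 0.1)
            {
                foreach (var g in goods)
                {
                    double pressure = (g.Demand - g.Supply) / Math.Max(g.Supply, 1.0);
                    g.Price = Math.Max(0.01, g.Price * (1.0 + rate * pressure));  // keep a small floor
                }
            }
        }

    Demand for an intermediate good like Fakesium would itself be derived from the demand for Weapon A, minus whatever portion can be met more cheaply by buying Dilithium and Widgets directly, which is exactly the feedback loop that needs many iterations to settle.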

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movie clips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is: the bitmap images that were used in Flash, like the head and body; the links between the images, like where the head connects to the body; and the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever. Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that. Re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format but NOT the individual components like the movie clips. So does anyone know of a library I can use, or how I can learn more about the movie clip components and whatnot? (Better-fitting tags might be: transform, export, convert.)

    Read the article

  • Using a heavyweight ORM implementation for lightweight games

    - by Holland
    I'm just about to immerse myself in an MVC/component-based architecture in C#, using MySQL's Connector/NET for data storage, and probably some NHibernate/FluentNHibernate object-relational mapping to map out the data structure. The goal is to build a scalable 2D RPG. Then I think about it... and I can't help but feel this seems a little heavyweight for a 2D RPG, especially one which, while I plan to incorporate a lot of functionality and entertaining gameplay, may be ported to something like Windows Phone or Android in the future. Yet, on the other hand, even a 2D RPG can become very complicated and must therefore incorporate a lot of functionality. While this can be accomplished with text/XML/JSON for data storage, is there a better way? Is something such as object-relational mapping useful in such an application? So, what do you think? Would you say that there is a place for such technologies? I don't know what to think...

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to create and to keep less code in the actual project file. So I started out making a basic XML document, something to just assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; when I added one, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values, and then in the texture tag of my XML document I would set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me to. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My class to load the weapon XML: public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure whether you can load textures through XML at all, but any help would be greatly appreciated.
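
    One common workaround, offered as a sketch rather than the certain fix for this exact project: the XNA content pipeline can only deserialize plain data, so store the texture's asset name as a string in the XML and resolve it to a Texture2D after Content.Load. The field weaponTextureAsset below is a name I've invented; the others are guesses based on the XML above:

        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        public class Weapon
        {
            public int weaponStrength;
            public int damageModifiers;
            public int speed;
            public int magicDefense;
            public string description;
            public string identifier;
            public string weaponTextureAsset;   // an asset name such as "Sword", not a Texture2D

            [ContentSerializerIgnore]           // runtime-only; the XML never mentions this field
            public Texture2D weaponTexture;
        }

        public static class LoadWeaponXML
        {
            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                Weapon w = content.Load<Weapon>(@"Weapons/" + id);
                w.weaponTexture = content.Load<Texture2D>(w.weaponTextureAsset); // resolve the name here
                return w;
            }
        }

    The XML would then carry <weaponTextureAsset>Sword</weaponTextureAsset> instead of a reference to a static field; "element not found" errors usually mean the XML elements don't line up one-for-one with the public fields and properties of ConversationEngine.Weapon.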

    Read the article

  • Facilitating XNA game deployments for non-programmers

    - by Sal
    I'm currently working on an RPG, using the XNA role-playing game starter kit as a base (http://xbox.create.msdn.com/en-US/education/catalog/sample/roleplaying_game). I'm working with a small team (two designers and one music/sound artist), but I'm the only programmer. Currently, we're working with the following (unsustainable) system: the team creates new pictures/sounds to add to the game, or they modify existing sounds/pictures, then they commit their work to a repository, where we keep a current build of everything (code, images, sound, etc.). Every day or so, I create a new installer reflecting the new images, code changes, and sound, and everyone installs it. My issue is this: I want to create a system where the rest of the team can replace the combat sounds, for instance, and immediately see the changes, without having to wait for me to build. The way XNA is set up, if I publish, it encodes all of the image and sound files, so the team can't hot-swap. I can set up Microsoft Visual Studio on everyone's machine and show them how to quickly publish, but I wanted to know if there was a simpler way of doing this. Has anyone come up against this when working with teams using XNA?
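
    One approach that avoids the content pipeline entirely (a sketch, assuming XNA 4.0 and plain PNG/WAV files dropped into a folder next to the executable; the "RawContent" folder name is made up): load raw assets at runtime, so artists can overwrite files and simply restart the game to see their changes.

        using System.IO;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Graphics;

        public static class RawAssets
        {
            // Call from LoadContent(); no pipeline build step is involved for these files.
            public static Texture2D LoadTexture(GraphicsDevice device, string name)
            {
                using (var stream = File.OpenRead(Path.Combine("RawContent", name + ".png")))
                    return Texture2D.FromStream(device, stream);
            }

            public static SoundEffect LoadSound(string name)
            {
                using (var stream = File.OpenRead(Path.Combine("RawContent", name + ".wav")))
                    return SoundEffect.FromStream(stream);   // expects an uncompressed PCM wave file
            }
        }

    The trade-off is losing pipeline-time processing (compression, format conversion), so some teams keep this path for development builds only and switch back to compiled content for release.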

    Read the article

  • Blender DirectX exporter to Panda3D

    - by jakebird451
    I have been experimenting with Panda3D lately. I have a character made in Blender with various bones, and currently with one animation, that I wish to export to the *.x format for Panda3D. My first attempt was to export the model with bones [armatures] by checking the "Export Armatures" button in the export menu (file name: char.x). Since the *.x format is readable as text, I opened the file and it seems to have the same bone structure as the model (with parenting and matrix positional data). The second export was done by selecting Animations - Full Animation, to provide just the animation (file name: char_idle.x). The model exported just fine. I am not sure about the animation yet, but the file seems to be fine as well. This is my code for loading the model in Python and Panda3D: self.model = Actor("char.x",{"char_idle.x"}) When I run the program the command line reports a couple of errors; the main ones of interest are: :Actor(warning): char.x is not a character! and ... File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 284, in __init__ if (type(anims[anims.keys()[0]])==type({})): AttributeError: 'set' object has no attribute 'keys' The first error is the most interesting to me. The model works if I leave the animation dictionary blank. With no animations loaded the character appears in its un-animated T pose, but the Actor warning still shows up. The character should include the various bones from the export, right? I am not that experienced with Blender; I'm just a programmer, so if the problem lies in Blender please try to keep that in mind when posting a reply. I'll try my best to keep up. I also tried to print out the bone structure without any animations loaded, and the line print self.model.listJoints() gives a similar error: File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 410, in listJoints Actor.notify.error("no part named: %s" % (partName)) File "C:\Panda3D-1.8.0\direct\directnotify\Notifier.py", line 132, in error raise exception(errorString) StandardError: no part named: modelRoot I really hope it is a simple exporting fix.
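
    On the second error, a small sketch of the likely fix (based on the traceback rather than on testing this particular model): Actor expects a dict mapping animation names to animation files, but {"char_idle.x"} is a Python set literal, which is why Actor.py fails on anims.keys(). The name "idle" below is arbitrary:

        from direct.actor.Actor import Actor

        # A set literal - Actor tries anims.keys() on it and raises AttributeError:
        # self.model = Actor("char.x", {"char_idle.x"})

        # A dict of {animation name: animation file} is what Actor expects:
        self.model = Actor("char.x", {"idle": "char_idle.x"})
        self.model.loop("idle")   # play the animation by the name given above

    The ":Actor(warning): char.x is not a character!" message looks like a separate, export-side issue (Panda3D not finding a joint hierarchy it recognises in char.x) and is not addressed by this change.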

    Read the article

  • Limit The Game Loop?

    - by user1758938
    How do I make a game run at the same speed on every machine? You probably don't understand what I mean, so here are examples. This would loop too fast: while(true) { GetInput(); Render(); } And this just won't work well (hard to explain): while(true) { GetInput(); Render(); Sleep(16); } Basically, how do I sync the loop to any frame rate and still have input and game logic running at a consistent rate?
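
    The usual answer is a fixed-timestep loop: advance the simulation in constant increments and render as often as the machine allows. A minimal C++ sketch using std::chrono, with GetInput/Update/Render standing in for the poster's functions:

        #include <chrono>

        void GetInput();
        void Update(double dt);
        void Render();

        void RunGame()
        {
            using clock = std::chrono::steady_clock;
            const std::chrono::duration<double> step(1.0 / 60.0);  // simulate 60 updates per second

            auto previous = clock::now();
            std::chrono::duration<double> lag(0);

            while (true)
            {
                auto current = clock::now();
                lag += current - previous;
                previous = current;

                GetInput();
                while (lag >= step)        // catch up in fixed increments, so game speed
                {                          // is independent of how fast the machine renders
                    Update(step.count());
                    lag -= step;
                }
                Render();                  // optionally sleep here to cap the frame rate
            }
        }

    Sleeping for a fixed 16 ms (as in the second example) fails because it ignores how long input and rendering themselves took; measuring elapsed time and updating in fixed steps avoids that.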

    Read the article

  • Alchemy-like game for the web, open source. Any ideas for element combinations?

    - by JohnDel
    I created a web game like the Android game Alchemy. It's open source, and in the back end you can create your own elements - your own game. I was wondering what sets of elements and ideas would be good to implement as a prototype/demo. Some ideas are: colors; programming languages; chemical compounds; the same elements as the original Alchemy; evolution of biological organisms. What do you think? Any specific combination ideas?

    Read the article

  • Pygame surfaces and their Rects

    - by Jaka Novak
    I am trying to understand how pygame surfaces work. I am confused about the Rect position of a Surface object. If I blit a surface onto the screen at some position, the Surface is drawn at the right position, but the Rect of the surface is still at position (0, 0)... I tried writing my own surface class with a new rect, but I am not sure that is the right solution. My goal is to be able to move a surface like an image, with rect.move() or something like that. If there is any solution to do that, I would be happy to read it. Thanks for your time and for reading this awful English. In case it helps, I wrote some code to better illustrate my problem (run it first, then uncomment the two commented lines and run it again to see the difference): import pygame from pygame.locals import * class SurfaceR(pygame.Surface): def __init__(self, size, position): pygame.Surface.__init__(self, size) self.rect = pygame.Rect(position, size) self.position = position self.size = size def get_rect(self): return self.rect def main(): pygame.init() screen = pygame.display.set_mode((640, 480)) pygame.display.set_caption("Screen!?") clock = pygame.time.Clock() fps = 30 white = (255, 255, 255) red = (255, 0, 0) green = (0, 255, 0) blue = (0, 0, 255) surface = pygame.Surface((70,200)) surface.fill(red) surface_re = SurfaceR((300, 50), (100, 300)) surface_re.fill(blue) while True: for event in pygame.event.get(): if event.type == QUIT: return 0 screen.blit(surface, (100,50)) screen.blit(surface_re, surface_re.position) #pygame.draw.rect(screen, white, surface.get_rect()) #pygame.draw.rect(screen, white, surface_re.get_rect()) pygame.display.update() clock.tick(fps) if __name__ == "__main__": main()
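
    For what it's worth, a small sketch of the usual pygame idiom (not a rewrite of the whole program above): a Surface has no position of its own, and Surface.get_rect() always starts at (0, 0), so you keep a Rect alongside the surface, move that, and blit using the rect as the destination:

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        clock = pygame.time.Clock()

        surface = pygame.Surface((70, 200))
        surface.fill((255, 0, 0))
        rect = surface.get_rect(topleft=(100, 50))   # a Rect you own, placed where you want it

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False

            rect = rect.move(2, 0)          # Rect.move returns a new, shifted Rect
            screen.fill((0, 0, 0))
            screen.blit(surface, rect)      # blit accepts a Rect as the destination
            pygame.display.update()
            clock.tick(30)

        pygame.quit()

    Subclassing Surface to bolt a rect on (as in SurfaceR) works, but the pair-of-objects pattern above is what pygame.sprite.Sprite formalises with its image and rect attributes.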

    Read the article

  • Render 2 images that use different shaders

    - by Code Vader
    Based on the giawa/nehe tutorials, how can I render 2 images with different shaders. I'm pretty new to OpenGl and shaders so I'm not completely sure whats happening in my code, but I think the shaders that is called last overwrites the first one. private static void OnRenderFrame() { // calculate how much time has elapsed since the last frame watch.Stop(); float deltaTime = (float)watch.ElapsedTicks / System.Diagnostics.Stopwatch.Frequency; watch.Restart(); // use the deltaTime to adjust the angle of the cube angle += deltaTime; // set up the OpenGL viewport and clear both the color and depth bits Gl.Viewport(0, 0, width, height); Gl.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); // use our shader program and bind the crate texture Gl.UseProgram(program); //<<<<<<<<<<<< TOP PYRAMID // set the transformation of the top_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(top_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(top_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(top_pyramidUV, program, "vertexUV"); Gl.BindBuffer(top_pyramidTrianlges); // draw the textured top_pyramid Gl.DrawElements(BeginMode.Triangles, top_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<< CUBE // set the transformation of the cube program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(cube, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(cubeNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(cubeUV, program, "vertexUV"); Gl.BindBuffer(cubeQuads); // draw the textured cube Gl.DrawElements(BeginMode.Quads, cubeQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<< BOTTOM PYRAMID // set the transformation of the bottom_pyramid program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle * rotate_cube)); program["enable_lighting"].SetValue(lighting); // bind the vertex positions, UV coordinates and element array Gl.BindBufferToShaderAttribute(bottom_pyramid, program, "vertexPosition"); Gl.BindBufferToShaderAttribute(bottom_pyramidNormals, program, "vertexNormal"); Gl.BindBufferToShaderAttribute(bottom_pyramidUV, program, "vertexUV"); Gl.BindBuffer(bottom_pyramidTrianlges); // draw the textured bottom_pyramid Gl.DrawElements(BeginMode.Triangles, bottom_pyramidTrianlges.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); //<<<<<<<<<<<<< STAR Gl.Disable(EnableCap.DepthTest); Gl.Enable(EnableCap.Blend); Gl.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One); Gl.BindTexture(starTexture); //calculate the camera position using some fancy polar co-ordinates Vector3 position = 20 * new Vector3(Math.Cos(phi) * Math.Sin(theta), Math.Cos(theta), Math.Sin(phi) * Math.Sin(theta)); Vector3 upVector = ((theta % (Math.PI * 2)) > Math.PI) ? 
Vector3.Up : Vector3.Down; program_2["view_matrix"].SetValue(Matrix4.LookAt(position, Vector3.Zero, upVector)); // make sure the shader program and texture are being used Gl.UseProgram(program_2); // loop through the stars, drawing each one for (int i = 0; i < stars.Count; i++) { // set the position and color of this star program_2["model_matrix"].SetValue(Matrix4.CreateTranslation(new Vector3(stars[i].dist, 0, 0)) * Matrix4.CreateRotationZ(stars[i].angle)); program_2["color"].SetValue(stars[i].color); Gl.BindBufferToShaderAttribute(star, program_2, "vertexPosition"); Gl.BindBufferToShaderAttribute(starUV, program_2, "vertexUV"); Gl.BindBuffer(starQuads); Gl.DrawElements(BeginMode.Quads, starQuads.Count, DrawElementsType.UnsignedInt, IntPtr.Zero); // update the position of the star stars[i].angle += (float)i / stars.Count * deltaTime * 2 * rotate_stars; stars[i].dist -= 0.2f * deltaTime * rotate_stars; // if we've reached the center then move this star outwards and give it a new color if (stars[i].dist < 0f) { stars[i].dist += 5f; stars[i].color = new Vector3(generator.NextDouble(), generator.NextDouble(), generator.NextDouble()); } } Glut.glutSwapBuffers(); } The same goes for the textures, whichever one I mention last gets applied to both object?
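
    A sketch of the usual per-object pattern, written with the same wrapper calls that already appear in the snippet (crateTexture is assumed from the "bind the crate texture" comment; DrawCube/DrawStars stand for the existing bind-attributes-and-DrawElements code; viewMatrix is a placeholder). With plain OpenGL uniform semantics, glUniform affects the program currently in use, which is why state set before switching programs ends up applying to the wrong object:

        // One frame, structured so each object gets its own program, texture and uniforms
        Gl.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

        // --- objects drawn with the first shader ---
        Gl.UseProgram(program);                 // make the program current BEFORE touching its uniforms
        Gl.BindTexture(crateTexture);           // the texture these objects should sample
        program["model_matrix"].SetValue(Matrix4.CreateRotationY(angle));
        DrawCube();

        // --- objects drawn with the second shader ---
        Gl.UseProgram(program_2);               // switch programs before setting program_2's uniforms
        Gl.BindTexture(starTexture);            // and bind the texture the stars need
        program_2["view_matrix"].SetValue(viewMatrix);
        DrawStars();

    In the posted code, the star texture is bound and program_2's view_matrix is set before Gl.UseProgram(program_2) is called, so reordering those calls per the sketch is the first thing worth trying.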

    Read the article

  • Can and should a game design be patented?

    - by Christian
    I have an idea for a game that I want to develop, one I feel is unique, and I'm wondering if I should patent it. I read on the web that games can be patented, but just because it can be done doesn't mean it makes sense to do it. I actually don't really want to patent it (it's expensive, a hassle, and I don't believe in patenting ideas... unless it's something truly revolutionary). However, I'm concerned that a bigger company, with more experienced game designers and developers, could come along and steal the idea.

    Read the article

  • Setting up cube map texture parameters in OpenGL

    - by KaiserJohaan
    I see a lot of tutorials and sources use the following code snippet when defining each face of a cube map: for (i = 0; i < 6; i++) glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, InternalFormat, size, size, 0, Format, Type, NULL); Is it safe to assume that GL_TEXTURE_CUBE_MAP_POSITIVE_X + i will properly iterate over the remaining cube map targets - GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and so on?
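
    For reference, the six cube map target enums are defined with consecutive values in the OpenGL headers, so the +i idiom visits them in this fixed order (a small C sketch restating that order, not new API):

        #include <GL/gl.h>   /* on some platforms these enums come from glext.h instead */

        /* GL_TEXTURE_CUBE_MAP_POSITIVE_X is 0x8515 and the others follow consecutively,
           so target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + i walks them in this order: */
        static const GLenum cubeFaces[6] = {
            GL_TEXTURE_CUBE_MAP_POSITIVE_X,   /* i = 0 */
            GL_TEXTURE_CUBE_MAP_NEGATIVE_X,   /* i = 1 */
            GL_TEXTURE_CUBE_MAP_POSITIVE_Y,   /* i = 2 */
            GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,   /* i = 3 */
            GL_TEXTURE_CUBE_MAP_POSITIVE_Z,   /* i = 4 */
            GL_TEXTURE_CUBE_MAP_NEGATIVE_Z    /* i = 5 */
        };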

    Read the article

  • Resources for game networking in Java

    - by pudelhund
    I am currently working on a Java multiplayer game. The game itself (single player) already works perfectly fine, and so does the chat. The only thing that is really missing is the multiplayer part. Sadly, I am absolutely clueless about where to start with that. I roughly know that I will have to work with packets, and I also know a fair amount about streams, etc. (the chat is already working). Oh, and according to this article it should be a UDP server. My problem is that I can't find any resources on how to do this. A tutorial (book or website) would be perfect; alternatively, a good example of an open-source client/server (in Java, of course) would be fine as well. If you feel like doing something helpful I'd also really appreciate someone "privately" teaching me via email or some chat program :) Thank you!
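
    Not a tutorial, but a minimal sketch of UDP in the standard library (java.net.DatagramSocket), just to show the moving parts; the port number and message are arbitrary:

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class UdpEcho {
            public static void main(String[] args) throws Exception {
                // "Server": listen on port 5000 and print whatever arrives
                Thread server = new Thread(() -> {
                    try (DatagramSocket socket = new DatagramSocket(5000)) {
                        byte[] buf = new byte[512];
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);   // blocks until a datagram arrives
                        System.out.println(new String(packet.getData(), 0, packet.getLength()));
                    } catch (Exception e) { e.printStackTrace(); }
                });
                server.start();

                // "Client": fire one datagram at the server
                try (DatagramSocket socket = new DatagramSocket()) {
                    byte[] data = "hello from the client".getBytes();
                    socket.send(new DatagramPacket(data, data.length, InetAddress.getLocalHost(), 5000));
                }
                server.join();
            }
        }

    A real game layers message framing, sequencing and (for important messages) acknowledgements on top of this, since UDP itself guarantees neither delivery nor ordering.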

    Read the article

  • Android device - C++ OpenGL 2: eglCreateWindowSurface invalid

    - by ThreaderSlash
    I am trying to debug and run OGLES on Native C++ in my Android device in order to implement a native 3D game for mobile smart phones. The point is that I got an error and see no reason for that. Here is the line from the code that the debugger complains: mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); And this is the error message: Invalid arguments ' Candidates are: void * eglCreateWindowSurface(void *, void *, unsigned long int, const int *) ' --x-- Here is the declaration: android_app* mApplication; EGLDisplay mDisplay; EGLint lFormat, lNumConfigs, lErrorResult; EGLConfig lConfig; // Defines display requirements. 16bits mode here. const EGLint lAttributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_BLUE_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_RED_SIZE, 5, EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_RENDER_BUFFER, EGL_BACK_BUFFER, EGL_NONE }; // Retrieves a display connection and initializes it. packt_Log_debug("Connecting to the display."); mDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY); if (mDisplay == EGL_NO_DISPLAY) goto ERROR; if (!eglInitialize(mDisplay, NULL, NULL)) goto ERROR; // Selects the first OpenGL configuration found. packt_Log_debug("Selecting a display config."); if(!eglChooseConfig(mDisplay, lAttributes, &lConfig, 1, &lNumConfigs) || (lNumConfigs <= 0)) goto ERROR; // Reconfigures the Android window with the EGL format. packt_Log_debug("Configuring window format."); if (!eglGetConfigAttrib(mDisplay, lConfig, EGL_NATIVE_VISUAL_ID, &lFormat)) goto ERROR; ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat); // Creates the display surface. packt_Log_debug("Initializing the display."); mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); --x-- Hope someone here can shed some light on it.
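
    One note, offered as a guess rather than a verified fix: on the Android NDK, eglCreateWindowSurface takes an EGLNativeWindowType, and on Android that type resolves to the native window pointer, so the call as written usually builds with the NDK toolchain even when the IDE's code analysis flags it as "invalid arguments". An explicit cast is a common way to make the types line up visibly; mDisplay, lConfig and mApplication are the same variables as in the snippet above:

        #include <EGL/egl.h>
        #include <android_native_app_glue.h>

        EGLSurface createSurface(EGLDisplay mDisplay, EGLConfig lConfig, android_app* mApplication) {
            // mApplication->window is an ANativeWindow*; the cast to EGLNativeWindowType
            // makes explicit that it is being passed as the platform window handle.
            EGLSurface surface = eglCreateWindowSurface(
                mDisplay, lConfig, (EGLNativeWindowType) mApplication->window, NULL);
            if (surface == EGL_NO_SURFACE) {
                // query eglGetError() here to see why surface creation failed at runtime
            }
            return surface;
        }

    If the project builds from the command line (ndk-build) but the editor still complains, the error is coming from static analysis rather than the compiler.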

    Read the article

  • Cannot convert parameter 1 from 'short *' to 'int *' [closed]

    - by Torben Carrington
    I'm trying to learn pointers, and since I recently learned that a short int takes up less memory [2 bytes, as opposed to the 4 bytes of a long int, which is also the default size for int], I wanted to create a pointer that uses the memory address of a short integer. I'm following a tutorial in my book about pointers, and it uses the Swap function. The problem is that I receive this error the moment I change everything from int to short int: error C2664: 'Swap' : cannot convert parameter 1 from 'short *' to 'int *' 1 Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast Since my code is so small, here is the whole thing: void Swap(short int *sipX, short int *sipY) { short int siTemp = *sipX; *sipX = *sipY; *sipY = siTemp; } int main() { short int siBig = 100; short int siSmall = 1; std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl; Swap(&siBig, &siSmall); std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl; return 0; }
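
    A guess at the cause, since the code as posted uses short int consistently: the compiler is still seeing a declaration of Swap that takes int* (for example a leftover prototype above main, or a copy in a header), and short* does not implicitly convert to int*. For comparison, here is a complete version with a forward declaration that matches the definition; updating any stale declaration the same way should clear C2664:

        #include <iostream>

        // Declaration and definition must agree: both take short int*.
        // A leftover prototype such as  void Swap(int *x, int *y);  is what produces
        // C2664 when the call site passes short* arguments.
        void Swap(short int *sipX, short int *sipY);

        int main()
        {
            short int siBig = 100;
            short int siSmall = 1;
            std::cout << "Pre-Swap: " << siBig << " " << siSmall << std::endl;
            Swap(&siBig, &siSmall);
            std::cout << "Post-Swap: " << siBig << " " << siSmall << std::endl;
            return 0;
        }

        void Swap(short int *sipX, short int *sipY)
        {
            short int siTemp = *sipX;
            *sipX = *sipY;
            *sipY = siTemp;
        }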

    Read the article

  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# Struct with a static method, "Get Shape" which populates a List with the vertices of a polyhedron. Method Signature: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) Adding directly to the vertices list (via vertices.Add(vector3) ), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s, and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) { //List<Vector3> vertices = new List<Vector3>(); int l_blockShape = b.blockShape; int l_blockType = b.blockType; //CheckFace checks if the block is empty //if this block is empty, don't draw anything. int vertexIndex; //only y faces need to be hidden. //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //XY Z+1 face //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) 
uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY face //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY X+1 face // if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX face if((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX + 1 face if((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y+1 , z+.2f)); vertices.Add(new Vector3(x+.8f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.2f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } }
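
    List.AddRange does append, so if the combined list renders nothing, the usual culprit is the index math: triangle indices must be offset by the count of the destination list at merge time, not by the count of the temporary list. A sketch of the merge step under that assumption, using Unity's Vector3/Quaternion (which the mesh-list pattern above suggests); localVerts and localTris are hypothetical names for the temporary lists:

        using System.Collections.Generic;
        using UnityEngine;

        public static class MeshMerge
        {
            // localTris must be 0-based indices into localVerts.
            public static void AppendRotated(List<Vector3> vertices, List<int> triangles,
                                             List<Vector3> localVerts, List<int> localTris,
                                             Quaternion rotation, Vector3 pivot)
            {
                int baseIndex = vertices.Count;   // capture the offset BEFORE adding new vertices

                foreach (Vector3 v in localVerts)
                    vertices.Add(pivot + rotation * (v - pivot));   // per-vertex processing happens here

                foreach (int i in localTris)
                    triangles.Add(baseIndex + i);                   // re-base the indices into the big list
            }
        }

    Building GetShape's faces into local lists and then calling something like AppendRotated keeps the four output lists consistent without needing a struct to return all of them at once.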

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and 1.20 GLSL shaders: When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls per each N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset, see point 3) When dealing with logical object (player, tree, cube etc), should I always use the same shader or should I customize (or be able to customize) the shaders per each object? Considering an entity class, should I create and define the shader at object initialization? When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO object at 0,0 and define an uniform offset to pass to the shader to calculate its real position? Could you make an example of the Data Oriented Design on creating a generic zombie class? Is the following good? Zombielist class: class ZombieList { GLuint vbo; // generic zombie vertex model std::vector<color>; // object default color std::vector<texture>; // objects textures std::vector<vector3D>; // objects positions public: unsigned int create(); // return object id void move(unsigned int objId, vector3D offset); void rotate(unsigned int objId, float angle); void setColor(unsigned int objId, color c); void setPosition(unsigned int objId, color c); void setTexture(unsigned int, unsigned int); ... void update(Player*); // move towards player, attack if near } Example: Player p; Zombielist zl; unsigned int first = zl.create(); zl.setPosition(first, vector3D(50, 50)); zl.setTexture(first, texture("zombie1.png")); ... while (running) { // main loop ... zl.update(&p); zl.draw(); // draw every zombie }
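
    On the "uniform offset" idea, a sketch of the usual OpenGL 2.1 pattern (plain C++ GL calls against a GLSL 1.20 shader; the attribute name matches the code above, the uniform and function names are invented): one shared VBO with the zombie model at the origin, one glDrawArrays per zombie, and each zombie's position uploaded as a uniform just before its draw call.

        #include <GL/glew.h>   // or whichever GL loader the project already uses
        #include <vector>

        struct Zombie { float x, y, z; };

        // program and vbo are created once at startup; vertexCount is the shared model's vertex count
        void drawZombies(GLuint program, GLuint vbo, int vertexCount,
                         const std::vector<Zombie>& zombies)
        {
            glUseProgram(program);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);

            GLint posAttrib = glGetAttribLocation(program, "vertexPosition");
            GLint offsetLoc = glGetUniformLocation(program, "offset");  // "uniform vec3 offset;" in the shader

            glEnableVertexAttribArray(posAttrib);
            glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);

            for (const Zombie& z : zombies)
            {
                glUniform3f(offsetLoc, z.x, z.y, z.z);        // per-instance data goes into uniforms
                glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // same geometry, drawn once per zombie
            }

            glDisableVertexAttribArray(posAttrib);
        }

    This is essentially the data-oriented layout the ZombieList class above describes: one shared model, per-zombie arrays of positions/colors, and a loop that feeds them to the shader one draw call at a time.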

    Read the article

  • Create dynamic buffer SharpDX

    - by fedab
    I want a buffer that is updated every frame, but I can't figure out what I have to do. The only working thing I have is this: mdexcription = new BufferDescription(Matrix.SizeInBytes * Matrices.Length, ResourceUsage.Dynamic, BindFlags.VertexBuffer, CpuAccessFlags.Write, ResourceOptionFlags.None, 0); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); Draw: //Change Matrices (Matrix[]) every frame... instanceBuffer.Dispose(); instanceBuffer = SharpDX.Direct3D11.Buffer.Create(Device, Matrices, mdexcription); vBB = new VertexBufferBinding(instanceBuffer, Matrix.SizeInBytes, 0); DeviceContext.InputAssembler.SetVertexBuffers(1, vBB); I guess Dispose() and creating a new buffer every frame is slow and can be done much faster. I've read about DataStream, but I do not know how to set it up properly. What steps do I have to take to set up a DataStream and achieve fast per-frame updates?
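
    A sketch of the usual dynamic-buffer update in SharpDX's Direct3D11 API, assuming the buffer was created once with ResourceUsage.Dynamic and CpuAccessFlags.Write as in the description above: map with WriteDiscard, write the matrices through the returned DataStream, then unmap. The class and method names here are invented; the types come from SharpDX.

        using SharpDX;
        using SharpDX.Direct3D11;
        using Buffer = SharpDX.Direct3D11.Buffer;

        public static class InstanceBufferUpdater
        {
            // Call once per frame instead of Dispose + Buffer.Create.
            public static void Update(DeviceContext context, Buffer instanceBuffer, Matrix[] matrices)
            {
                DataStream stream;
                // WriteDiscard gives the driver a fresh memory region, so the GPU never stalls
                // waiting for the previous frame's copy of the buffer.
                context.MapSubresource(instanceBuffer, MapMode.WriteDiscard, MapFlags.None, out stream);
                stream.WriteRange(matrices);
                context.UnmapSubresource(instanceBuffer, 0);
                stream.Dispose();
            }
        }

    The VertexBufferBinding only needs to be set once, since the buffer object itself never changes, only its contents.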

    Read the article

  • Geometry shader for multiple primitives

    - by Byte56
    How can I create a geometry shader that can handle multiple primitives? For example when creating a geometry shader for triangles, I define a layout like so: layout(triangles) in; layout(triangle_strip, max_vertices=3) out; But if I use this shader then lines or points won't show up. So adding: layout(triangles) in; layout(triangle_strip, max_vertices=3) out; layout(lines) in; layout(line_strip, max_vertices=2) out; The shader will compile and run, but will only render lines (or whatever the last primitive defined is). So how do I define a single geometry shader that will handle multiple types of primitives? Or is that not possible and I need to create multiple shader programs and change shader programs before drawing each type?
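
    As far as I know the answer is the second option: a geometry shader declares exactly one input and one output primitive type, so points, lines and triangles need separate programs (or separate geometry-shader variants linked into separate programs), with a program switch before each draw call. A sketch of the two GLSL pass-through stubs, with the application-side switch noted in comments:

        // pass_triangles.geom - linked into the program used for triangle draws
        #version 150
        layout(triangles) in;
        layout(triangle_strip, max_vertices = 3) out;
        void main() {
            for (int i = 0; i < 3; i++) { gl_Position = gl_in[i].gl_Position; EmitVertex(); }
            EndPrimitive();
        }

        // pass_lines.geom - linked into a second program used for line draws
        // #version 150
        // layout(lines) in;
        // layout(line_strip, max_vertices = 2) out;
        // void main() {
        //     for (int i = 0; i < 2; i++) { gl_Position = gl_in[i].gl_Position; EmitVertex(); }
        //     EndPrimitive();
        // }

        // Application side: glUseProgram(triangleProgram); draw the triangles;
        //                    glUseProgram(lineProgram);     draw the lines.

    Declaring both input layouts in one shader compiles on some drivers, but only one of them takes effect, which matches the behaviour described above.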

    Read the article

  • Load different levels in XML

    - by Anearion
    My question is more about theory than about a practical way to make things happen. I'm about to start developing a game for Android, text-based, so I won't need sprites or animation, nor a game engine. Let's say it is similar to a sudoku game, where each level is a harder version of sudoku and each level has some questions to be answered about the sudoku itself. I was wondering whether it is better to have only one XML file containing all the different levels, each with its own meta-tags, or whether the different approach of making N XML files, one per level, is preferred. At the moment a level would have these tags: <level> <question>Question_1</question> <hint1>what does it do?</hint1> <hint2>where...</hint2> .... <hintN>how...</hintN> </level> So each level could have some items to read, and that's what made me think that maybe separate files are better, because if I have to load level 10 I can read only the 10.xml file. I hope my question isn't too stupid. Thanks in advance.
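
    Either layout works; as a sketch of the one-file-per-level option (the file name res/xml/level_10.xml is made up, and repeated <hint> elements replace hint1..hintN so the parser does not need to know N in advance):

        <?xml version="1.0" encoding="utf-8"?>
        <!-- res/xml/level_10.xml : one self-contained level, loaded only when the player reaches it -->
        <level number="10">
            <question>Question_1</question>
            <hints>
                <hint>what does it do?</hint>
                <hint>where...</hint>
                <hint>how...</hint>
            </hints>
        </level>

    On Android these files can be read with the platform's XmlPullParser, or via Resources.getXml() if they live under res/xml/, so per-level files stay cheap to load on demand.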

    Read the article

  • OpenGL camera moves faster than player

    - by opiop65
    I have a side scroller game made in OpenGL, and I'm trying to center the player in the viewport when he moves. I know how to do it: cameraX = Width / 2 / TileSize - playerPosX cameraY = Height / 2 / TileSize - playerPosY However, I have a problem. The player and "camera" move, but the player moves faster than the "camera" scrolls. So, the player can actually move out of the screen. Some code, this is how I translate the camera: public Camera(){ } public void update(Player p){ glTranslatef(-p.getPos().x - Main.WIDTH / 64 / 2, -p.getPos().y - Main.HEIGHT / 64 / 2, 1); } Here's how I move the player: public void update(){ if(Keyboard.isKeyDown(Keyboard.KEY_D)){ this.move(MOVESPEED, 0); } if(Keyboard.isKeyDown(Keyboard.KEY_A)){ this.move(-MOVESPEED, 0); } } The move method: public void move(float x, float y){ this.getPos().set(this.getPos().x + x, this.getPos().y + y); } And then after I move the player, I update the player's geometry, which shouldn't matter. What am I doing wrong here, this seems like such a simple problem, yet it doesn't work!
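
    One thing worth checking, offered as a guess from the snippet rather than a certain diagnosis: glTranslatef is cumulative, so calling it every frame without resetting the matrix makes the camera's offset drift relative to the player. A minimal sketch of the camera update with the matrix reset, in the same LWJGL style as the code above (Player is the poster's class; TILE_SIZE is a placeholder for the world-units-to-pixels scale):

        import static org.lwjgl.opengl.GL11.*;

        public class Camera {
            private static final float TILE_SIZE = 32f;   // placeholder scale factor

            public void update(Player p, int screenWidth, int screenHeight) {
                glLoadIdentity();   // start from a clean matrix each frame instead of accumulating
                // shift the world so the player's position lands in the middle of the viewport
                glTranslatef(screenWidth / 2f - p.getPos().x * TILE_SIZE,
                             screenHeight / 2f - p.getPos().y * TILE_SIZE,
                             0f);
            }
        }

    With an accumulating translate, the camera effectively integrates the player's position every frame, which looks exactly like the camera and player moving at different speeds.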

    Read the article

  • SDL to SFML simple question

    - by ultifinitus
    Hey! I've been working on a game in C++ for about a week and a half, and I've been using SDL. However, my current engine only needs the following from whatever library I use: enable double buffering; load an image from a path into something that I can apply to the screen; apply an image to the screen at a certain x, y; enable transparency on an image; and (possibly) image clipping, for sprite sheets. I am fairly sure that SFML has all of this functionality, I'm just not positive. Will someone confirm my suspicions? Also, I have one or two questions regarding SFML itself. Do I have to do anything to enable hardware-accelerated rendering? How quick is SFML at blending alpha values? (Sorry for the less-than-intelligent question!)
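
    For reference, a small sketch of those operations in SFML 2.x (the file name is made up). SFML renders through OpenGL, so this path is hardware accelerated without extra setup, and sf::RenderWindow is double-buffered via display():

        #include <SFML/Graphics.hpp>

        int main()
        {
            sf::RenderWindow window(sf::VideoMode(800, 600), "Sketch");   // double-buffered by default

            sf::Texture texture;
            if (!texture.loadFromFile("sprites.png"))         // load an image from a path
                return 1;

            sf::Sprite sprite(texture);
            sprite.setTextureRect(sf::IntRect(0, 0, 32, 32)); // clip one frame out of a sprite sheet
            sprite.setPosition(100.f, 50.f);                  // draw at a certain x, y
            sprite.setColor(sf::Color(255, 255, 255, 128));   // per-sprite alpha (PNG alpha also works as-is)

            while (window.isOpen())
            {
                sf::Event event;
                while (window.pollEvent(event))
                    if (event.type == sf::Event::Closed)
                        window.close();

                window.clear();
                window.draw(sprite);
                window.display();                             // swaps the buffers
            }
            return 0;
        }

    Because the blending is done on the GPU, alpha-blended sprites are generally not a bottleneck until the sprite count gets very large.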

    Read the article

  • Which algorithm is used in Advance Wars-type turn-based games?

    - by Jan de Lange
    Has anyone tried to develop, or does anyone know of, an algorithm such as is used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search through to a reasonable depth, as one would do in a game with a smaller search space like chess? There is some path-finding needed to engage in combat, harvest, or move to an object, so that in the next move such actions are possible. With this you can build a search tree for each unit, resulting in a large tree for all units. With a cost function one can determine the best moves. Then the board flips over to the player role (min/max) and the computer searches for the best player move, then flips back, etc., up to a number of cycles deep. Finally it has found the best move and now it's the player's turn. But he may be asleep by now... So how is this done in practice? I have found several good sources on A*, DFS, BFS, evaluation/cost functions, etc. But as of yet I do not see how I can put it all together.
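
    One common practical compromise (a sketch of the idea only, not a claim about how Advance Wars itself is implemented): skip deep minimax and greedily pick the best-scoring move per unit with the evaluation function, applying each chosen move before planning the next unit, optionally adding a shallow one-ply look at the opponent's best reply. A generic C# sketch, with all game-specific types abstracted behind delegates so nothing about the real engine is assumed:

        using System;
        using System.Collections.Generic;

        public static class TurnPlanner
        {
            // Greedy per-unit selection: evaluate candidate moves instead of a full game tree.
            public static List<TMove> PlanTurn<TState, TUnit, TMove>(
                TState state,
                IEnumerable<TUnit> units,                                  // e.g. ordered by value or threat
                Func<TState, TUnit, IEnumerable<TMove>> reachableMoves,    // path-finding limits the candidates
                Func<TState, TUnit, TMove, TState> apply,
                Func<TState, double> evaluate)
            {
                var plan = new List<TMove>();
                foreach (var unit in units)
                {
                    TMove best = default;
                    double bestScore = double.NegativeInfinity;
                    foreach (var move in reachableMoves(state, unit))
                    {
                        double score = evaluate(apply(state, unit, move));
                        if (score > bestScore) { bestScore = score; best = move; }
                    }
                    if (bestScore > double.NegativeInfinity)
                    {
                        state = apply(state, unit, best);   // later units plan against the updated board
                        plan.Add(best);
                    }
                }
                return plan;
            }
        }

    The branching factor then grows linearly with the number of units rather than combinatorially, at the cost of the AI never "seeing" coordinated multi-unit sacrifices.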

    Read the article
