Search Results

Search found 5654 results on 227 pages for '3d rendering'.

Page 35 of 227

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires several objects moving around in 3D space (text output of their current position on the grid and heading is fine; I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent position, and I am also using Euler angles. I can translate an entity with a matrix along any axis, and I can alter its heading. For example, if I have an entity at (order is XYZ) 1, 1, 1 with a heading of 0, I can apply a translation matrix to get it to walk to 1, 1, 2 fine. However, if I change its heading to 270, it still walks to 1, 1, 3, instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this?

    Addition: I am using 3D vectors to represent the current position and the heading (using the three Euler angles). For now, all I want is to have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

        10, 10, 10 -> 10, 10, 15
        10, 10, 15 ->  5, 10, 15
         5, 10, 15 ->  5, 10, 10
         5, 10, 10 -> 10, 10, 10

    My 1-unit Z translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [ 0 0 1 0]
        [ 0 1 0 0]
        [-1 0 0 0]
        [ 0 0 0 1]
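
    A minimal sketch of the usual fix (not from the question): rotate a local "forward" step by the heading first, then translate, rather than applying a fixed world-axis translation. Handedness and winding conventions vary, so the sign of the sine term below is an assumption.

        import math

        def step_forward(pos, yaw_degrees, distance=1.0):
            # Rotate the local +Z forward vector about Y by the heading,
            # then translate; heading 0 keeps the entity moving along +Z.
            yaw = math.radians(yaw_degrees)
            dx = math.sin(yaw) * distance  # flip the sign for the opposite winding
            dz = math.cos(yaw) * distance
            return (pos[0] + dx, pos[1], pos[2] + dz)

    In matrix terms this is the same idea: compose the heading rotation with the step translation (so the translation happens in the rotated, object-space frame) instead of multiplying the position by the world-axis translation alone.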

    Read the article

  • How do you dynamically allocate a contiguous 3D array in C?

    - by Derek
    In C, I want to loop through an array in this order:

        for (int z = 0; z < NZ; z++)
            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    array3D[x][y][z] = 100;

    How do I create this array in such a way that array3D[0][1][0] comes right before array3D[0][2][0] in memory? I can get an initialization to work that gives me "z-major" ordering, but I really want y-major ordering for this 3D array. This is the code I have been trying to use:

        char *space;
        char ***Arr3D;
        int y, z;
        ptrdiff_t diff;

        space = malloc(X_DIM * Y_DIM * Z_DIM * sizeof(char));
        Arr3D = malloc(Z_DIM * sizeof(char **));
        for (z = 0; z < Z_DIM; z++) {
            Arr3D[z] = malloc(Y_DIM * sizeof(char *));
            for (y = 0; y < Y_DIM; y++) {
                Arr3D[z][y] = space + (z * (X_DIM * Y_DIM) + y * X_DIM);
            }
        }
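
    A hedged sketch of one way to get that layout (variable names are illustrative): with a char*** the last subscript always has unit stride, so the pointer tables must be arranged as arr[z][x][y], all pointing into one contiguous block that matches the z, x, y loop order.

        #include <stdlib.h>

        char *space = malloc((size_t)X_DIM * Y_DIM * Z_DIM);
        char ***arr = malloc(Z_DIM * sizeof *arr);
        for (int z = 0; z < Z_DIM; z++) {
            arr[z] = malloc(X_DIM * sizeof *arr[z]);
            for (int x = 0; x < X_DIM; x++)
                arr[z][x] = space + ((size_t)z * X_DIM + x) * Y_DIM;
        }
        /* arr[z][x][y] and arr[z][x][y + 1] are now adjacent bytes, so the
           z, x, y loop sweeps 'space' sequentially. Alternatively, skip the
           pointer tables entirely and index
           space[((size_t)z * X_DIM + x) * Y_DIM + y]. */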

    Read the article

  • 3D spherical terrain with an 8-mesh sphere: the edges of the mesh are obviously visible and I'm not sure why

    - by Justin808
    Hi :) I'm working in Unity3D, but my issue is with 3D meshes. I'm hoping someone here can help or point me in the right direction. I have two versions of the code: http://www.pasteit4me.com/695002 (old) and http://www.pasteit4me.com/690003 (new). The old code makes a single-mesh sphere and creates a terrain on it. The new code makes an 8-mesh sphere and creates a terrain on it. In the new version the edges of the meshes are obviously visible, and I'm not sure why. It looks like the edges are adjusted too much, almost 2-3 times more than they should have been. GenerateB() in the old code and Generate() in the new code create the sphere. MakeTerrain() in both creates the terrain. If I don't run the MakeTerrain() function, the new sphere looks like a solid mesh. I'm not sure where to start looking in the MakeTerrain() function in the new code to solve the issue :-/ Any ideas? An image of the issue is at http://img28.imageshack.us/img28/3784/screenshot20100611at850.png.

    Read the article

  • Codeigniter PHP Framework - Need to get query string

    - by Siva
    Hi, I'm creating an e-commerce site using CodeIgniter. My question is: how should I get the query string? I am using the Saferpay payment gateway, and the gateway response looks like this: http://www.test.com/registration/success/?DATA=<IDP+MSGTYPE%3D"PayConfirm"+KEYID%3D"1-0"+ID%3D"KI2WSWAn5UG3vAQv80AdAbpplvnb"+TOKEN%3D"(unused)"+VTVERIFY%3D"(obsolete)"+IP%3D" 123.25.37.43"+IPCOUNTRY%3D"IN"+AMOUNT%3D"832200"+CURRENCY%3D"CHF"+PROVIDERID%3D"90"+PROVIDERNAME%3D"Saferpay+Test+Card"+ACCOUNTID%3D"99867-94913159"+ECI%3D"2"+CCCOUNTRY%3D"XX"%2F>&SIGNATURE=bc8e253e2a8c9ee0271fc45daca05eecc43139be6e7d486f0d6f68a356865457a3afad86102a4d49cf2f6a33a8fc6513812e9bff23371432feace0580f55046c To handle the response I need to get the query string data. Help would be much appreciated. Thanks in advance. - Siva
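
    A hedged sketch (CodeIgniter 1.x-era assumptions, not verified against a specific version): CodeIgniter discards query strings by default, so either enable them in config.php or read the superglobal directly in the controller method handling registration/success.

        // Raw gateway payload, bypassing CI's query-string stripping:
        $data = isset($_GET['DATA']) ? $_GET['DATA'] : '';

        // Or via the Input class, if query strings are enabled:
        $sig = $this->input->get('SIGNATURE');

    Whether $this->input->get() is available and whether the query string survives routing depend on the enable_query_strings and uri_protocol settings for your CI version, so treat those details as things to verify against the docs.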

    Read the article

  • Is there an HTML browser rendering engine for Ruby?

    - by Jose
    Given a URL, I would like to be able to render the returned HTML to know the width and height of each div, the font size of each piece of text, the color of each element, the position of each element on screen, etc. A possible approach could be traversing the DOM tree with Hpricot and checking CSS styles by parsing the associated stylesheet using the css_parser gem, but this would not account for default styles, inheritance, floats, etc. In Java there's Cobra, a Java web renderer, which is able to render a web page and query attributes like width, font size, etc. for each fragment. I could use Cobra with JRuby or similar solutions, but I'd prefer a native Ruby tool. Is there any library like this for Ruby?

    Read the article

  • Rendering videos online using Flash AS3 and AIR: how does it work?

    - by David
    I came across this link that talks about the technology used by Animoto.com, and it seems like they use AIR to export their Flash animations to bitmaps that FFmpeg compiles into a movie. http://labs.animoto.com/2009/06/07/presenting-filmstrip/ "It also takes time to render, so what you’re seeing isn’t realtime. It’s a series of Flash-generated frames that have been saved out using AIR and processed into an MP4 after the fact using a utility called FFmpeg." Does that mean AIR is installed on the server? How would that work to export a dynamically created animation into a series of pics that FFmpeg can then easily convert into a movie? David
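
    A hedged sketch of the second half of that pipeline: once the AIR application has saved numbered frame images to disk, FFmpeg can assemble them into an MP4. The frame pattern and rate here are assumptions, not Animoto's actual values.

        ffmpeg -framerate 30 -i frame%05d.png -c:v libx264 -pix_fmt yuv420p out.mp4

    Nothing in this step requires a GUI, which is presumably the point: a server-side AIR process renders the SWF off-screen, writes each frame out (e.g. via BitmapData plus an image encoder), and a shell command does the encoding.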

    Read the article

  • Separation of game and rendering logic

    - by Qua
    What is the best way to separate rendering code from the actual game engine/logic code? And is it even a good idea to separate them? Let's assume we have a game object called Knight. The Knight has to be rendered on the screen for the user to see. We're now left with two choices: either we give the Knight a Render/Draw method that we can call, or we create a renderer class that takes care of rendering all knights. In the scenario where the two are separated, should the knight still contain all the information needed to render him, or should this be separated as well? In the last project we created, we decided to let all the information required to render an object be stored inside the object itself, but we had a separate component to actually read that information and render the objects. The object would contain information such as size, rotation, scale, and which animation was currently playing, and based on this the renderer object would compose the screen. Frameworks such as XNA seem to think joining the object and rendering is a good idea, but we're afraid of getting tied to a specific rendering framework, whereas building a separate rendering component gives us more freedom to change frameworks at any given time.
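
    A minimal sketch of that split, in XNA-style C# (class and member names are illustrative, not from the question): the knight owns its render state, and a separate renderer consumes it.

        public class RenderState
        {
            public Vector2 Position;
            public float Rotation, Scale;
            public string CurrentAnimation;
        }

        public class Knight
        {
            public RenderState Render = new RenderState();
            public void Update(float dt) { /* game logic only, no drawing */ }
        }

        public class KnightRenderer
        {
            public void Draw(SpriteBatch batch, Knight knight)
            {
                // Look up the current frame for knight.Render.CurrentAnimation
                // and draw it at Position with Rotation and Scale. Swapping
                // rendering frameworks later means rewriting only this class.
            }
        }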

    Read the article

  • suggestions for a 3D graph rendering library?

    - by Sandro
    Hello coders! So I'm not sure how Stack Overflow-friendly this question is, since it doesn't have a quick, clear-cut answer, but here we go... I have a Java program that generates data for a directed graph. Now I need to render this graph. The data needs to be laid out in 3D, and I want to be able to define which plane an edge lives in (each edge will only need to occupy one plane of the 3D space). I also need the ability to navigate around the graph. Since I know that this kind of stuff is hard, I'm going shopping. So far I've looked into (in no particular order):

        JUNG: lacks 3D support
        Cytoscape: not sure how much I'll be able to define edge drawing; I haven't seen a non-bioinformatics application of it yet
        JGraph: I haven't seen any 3D applications yet
        Prefuse: looks promising; does anyone know anything else about it?
        Gephi: documentation looks scarce
        Processing: does this play well with Java?

    I'm also considering doing some combination of OpenGL + Swing rendering to create a 3D graph from multiple 2D graphs. I am also not averse to the idea of linking from another language. Any ideas? Thank you.

    Read the article

  • OpenGLES - Rendering a background image only once and not wiping it

    - by chaosbeaker
    Hello, first time asking a question here, but I've been watching others' answers for a while. My own question is about improving the performance of my program. Currently I'm wiping the viewFramebuffer on each pass through my program, then rendering the background image first followed by the rest of my scene. I was wondering how I go about rendering the background image once, and only wiping the rest of the scene for updating/re-rendering. I tried using a separate buffer, but I'm not sure how to present this new buffer to the render buffer.

        // Set the current EAGLContext and bind to the framebuffer. This will direct all OGL commands to the
        // framebuffer and the associated renderbuffer attachment, which is where our scene will be rendered
        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Define the viewport. Changing the settings for the viewport can allow you to scale the viewport
        // as well as the dimensions etc, so I'm setting it for each frame in case we want to change it
        glViewport(0, 0, screenBounds.size.width, screenBounds.size.height);

        // Clear the screen. If we are going to draw a background image then this clear is not necessary,
        // as drawing the background image will destroy the previous image
        glClearColor(0.0f, 1.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Set up how the images are to be blended when rendered. This could be changed at different points
        // during your render process if you wanted to apply different effects
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        switch (currentViewInt) {
            case 1: {
                [background render:CGPointMake(240, 0) fromTopLeftBottomRightCenter:@"Bottom"];
                // Other rendering code
            }
        }

        // Bind to the renderbuffer and then present this image to the current context
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    Hopefully by solving this I'll also be able to implement another buffer just for rendering particles, as I can set them to always use a black background as their alpha source. Any help is greatly appreciated.
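
    A hedged sketch of one common approach (OpenGL ES 1.1 with the OES framebuffer extension; availability, and the non-power-of-two texture support this assumes, vary by device): render the background once into a texture-backed offscreen framebuffer, then start each frame by drawing that texture as a fullscreen quad instead of re-drawing the background scene.

        // One-time setup: a screen-sized texture attached to an FBO.
        GLuint bgFBO, bgTex;
        glGenFramebuffersOES(1, &bgFBO);
        glGenTextures(1, &bgTex);
        glBindTexture(GL_TEXTURE_2D, bgTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, bgFBO);
        glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                  GL_TEXTURE_2D, bgTex, 0);
        // ... draw the background here, exactly once ...
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

        // Per frame: draw a fullscreen quad textured with bgTex first,
        // then render the rest of the scene on top of it.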

    Read the article

  • Javascript (and HTML rendering) engine without a GUI for automation?

    - by MTsoul
    Are there any libraries or frameworks that provide the functionality of a browser, but do not need to actually render physically onto the screen? I want to automate navigation on web pages (Mechanize does this, for example), but I want the full browser experience, including Javascript. Thus, I'd like to have a virtual browser of some sort, that I can use to "click on links" programmatically, have DOM elements and JS scripts render within it, and manipulate these elements. Solution preferably in Python, but I can manage others.

    Read the article

  • 3D / 2.5D game library for iPhone and PC

    - by Aaron Qian
    I'm trying to make a 2D shooter game with a 3D background. The player and enemies are essentially just quads with textures. The background will be simple 3D polygons with textures and some fog and light. Therefore, I don't need a really powerful 3D library. I tried Unity3D and Torque2D, but I don't like using their GUI editors; I prefer to work with code. So, is there a cross-platform (mainly Windows and iPhone) 3D / 2.5D game library, commercial or open source? I assume it will be limited to C, C++, and Objective-C due to Apple's new ToS.

    Read the article

  • Problem rendering images in a UIImageView in iPhone programming

    - by suse
    Hello, I have a UIViewController and a UIImageView, and on the UIImageView I want to flip between two images, which I'm not able to achieve. This is the code I've written; please correct me if I'm wrong:

        UIViewController *VC = [[UIViewController alloc] init];
        VC.view.backgroundColor = [UIColor redColor];

        UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 400)];
        [imgView setImage:[UIImage imageNamed:@"Image_A.jpg"]];
        [VC.view addSubview:imgView];

        sleep(2);

        [imgView setImage:[UIImage imageNamed:@"Image_B.jpg"]];
        [VC.view addSubview:imgView];

        [window addSubview:VC.view];

    When I execute this project, only Image_B is displayed on screen, while I want Image_A to be displayed first and then, after sleep(2), Image_B. How can I make this work? Please guide me; I've been struggling with this problem for three days. Thank you.
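
    A hedged sketch of the usual fix: sleep() blocks the main thread before the first image is ever drawn (the screen only updates once the run loop turns), so schedule the swap instead of sleeping. The method name here is illustrative.

        // In the setup code, where the views are created:
        [imgView setImage:[UIImage imageNamed:@"Image_A.jpg"]];
        [VC.view addSubview:imgView];   // addSubview only needs to happen once
        [self performSelector:@selector(showSecondImage:)
                   withObject:imgView
                   afterDelay:2.0];

        // Elsewhere in the same class:
        - (void)showSecondImage:(UIImageView *)imgView {
            [imgView setImage:[UIImage imageNamed:@"Image_B.jpg"]];
        }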

    Read the article

  • [Flash 3d engine] Away3d and events (Basic questions)

    - by Simon
    I'd like to play around with 3D in Flash, and I'm wondering how sophisticated the objects I load from 3ds Max can be, since I've read it's possible to load models from 3ds Max. I've read that a popular 3D engine is Away3D (many tutorials), so if there is nothing better I'd like to focus on it. I should mention that I'm not familiar with Flash, but the best way to learn something is to do something interesting with it. :) Main question: can I load an object from 3ds and link parts of that object to actions in Flash? A better example: I'd like to load a car, and when the user clicks on the car's door I'd like to show some information about that door, or pass the event outside, for example to another application in PHP, Java, etc.; and when the user clicks on the car's hood I'd like to raise a different event. Is it possible to create such interaction? Thanks in advance :)
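
    A hedged sketch of the interaction part (Away3D 3.x-era ActionScript; the event class name is from memory, so treat it as an assumption): Away3D dispatches 3D mouse events on individual meshes, and ExternalInterface can forward them out of the SWF to the hosting page.

        // 'door' is a sub-mesh of the loaded car model (hypothetical name).
        door.addEventListener(MouseEvent3D.MOUSE_UP, onDoorClick);

        function onDoorClick(event:MouseEvent3D):void {
            // Show info in Flash, or hand the event to the surrounding page:
            ExternalInterface.call("showDoorInfo", "driverDoor");
        }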

    Read the article

  • Automatic translation of images to 3D models

    - by farrakhov-bulat
    I'm quite interested in automatic translation of images into 3D models. Not really for a commercial product, but from the standpoint of possible academic research and implementation. What I'd like to achieve is an almost transparent-to-the-user process of transforming a series of images (fewer is better) into a 3D model which might be shown in Flash/Silverlight/JavaFX or similar. Consider an online furniture store with 3D models of all items in stock: kinda cool to have the ability to see the product in 3D before purchasing it. I managed to find a few pieces of software, like insight3d, but it couldn't be used in my case, I guess. So, are there any similar projects or tips for me? If it would require writing that piece of software, I'd really love to dig into research in this field.

    Read the article

  • Render rivers in a grid.

    - by Gabriel A. Zorrilla
    I have created a random height map and now I want to create rivers. I've made an algorithm based on A* to make rivers flow from peaks to the sea, and now I'm on a quest to figure out an elegant algorithm to render them. It's a 2D, square map grid. Each cell the river passes through has a simple integer value of the form rivernumber && pointOrder, i.e. 10, 11, 12, 13, 14, 15, 16... 1+N for the first river, 20, 21, 22, 23... 2+N for the second, etc. This is created at map-grid generation time and executed just once, when the world is generated. I wanted to treat each river as a vector, but there is a problem: if the same river has branches (because I add some noise to generate branches), I cannot just connect the points in order. The second alternative is a complex algorithm that analyzes each point, checks whether the next one is a branch, and if so triggers another algorithm that takes care of the branch and then returns to the main river, etc. Very complex and inelegant. Perhaps there is a solution in the world-generation algorithm or in the river-rendering algorithm that is commonly used in these cases and that I'm not aware of. Any tips? Thanks!
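
    A hedged sketch of an alternative encoding that sidesteps the branch problem (names are illustrative): store each river as downstream links instead of a single ordered list. Every branch tip then yields one plain polyline by walking to the sea, with no special-casing while rendering.

        def river_polylines(cells, downstream):
            """cells: iterable of (x, y) river cells; downstream: dict mapping
            each cell to the one it flows into (None once it reaches the sea)."""
            sources = set(cells) - set(downstream.values())  # peaks and branch tips
            for src in sources:
                path, node = [], src
                while node is not None:
                    path.append(node)
                    node = downstream.get(node)
                yield path  # one drawable polyline, source to sea

    Shared lower reaches get drawn more than once, which is harmless for rendering and keeps the traversal trivial.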

    Read the article

  • XNA Quadtree with LOD

    - by Byron Cobb
    I'm looking to create a fairly large environment, and as such would like to implement a quadtree and use LOD on it. I've looked through numerous examples and I get the basic idea of a quadtree: start with a root node whose four vertices cover the whole map, and divide into four child nodes until I meet some criterion (a maximum number of triangles). I'm looking for a very basic algorithm or explanation with respect to drawing the quadtree. What vertices need to be stored per iteration? When do I determine which vertices to draw? When do I update indices and vertices? How do I integrate the bounding frustum? Do I include parent and child vertices? I'm looking for very simple instruction on what to do. I've scoured the internet for days now, but everyone adds extra code and a different spin without explanation. I understand quadtrees, just not with respect to 3D rendering and LOD. A link to an outside source will probably have been read by myself already and won't help. Regards, Byron.
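
    A minimal sketch of the drawing side, under XNA-flavored assumptions (field names and the split threshold are illustrative): every node keeps a mesh at its own detail level, and drawing recurses only while a node is both inside the frustum and close enough to justify finer geometry.

        class QuadNode
        {
            public BoundingBox Bounds;
            public QuadNode[] Children;     // null at leaf nodes
            public float SplitDistance;     // tuned per tree level

            public void Draw(BoundingFrustum frustum, Vector3 cameraPos)
            {
                // Frustum culling: skip this node and its whole subtree.
                if (frustum.Contains(Bounds) == ContainmentType.Disjoint)
                    return;

                float dist = Vector3.Distance(cameraPos,
                    (Bounds.Min + Bounds.Max) * 0.5f);

                if (Children != null && dist < SplitDistance)
                    foreach (QuadNode child in Children)
                        child.Draw(frustum, cameraPos);
                else
                    DrawOwnMesh();          // this node's LOD of the terrain
            }

            void DrawOwnMesh() { /* set vertex/index buffers and draw */ }
        }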

    Read the article

  • Vertex fog producing black artifacts

    - by Nick
    I originally posted this question on the XNA forums but got no replies, so maybe someone here can help: I am rendering a textured model using the XNA BasicEffect. When I enable fog, the model outline is still visible as many small black dots when it should be "in the fog". Why is this happening? Here's what it looks like for me -- http://tinypic.com/r/fnh440/6 Here is a minimal example showing my problem (the ship model that this example uses is from the chase camera sample on this site -- http://xbox.create.msdn.com/en-US/education/catalog/sample/chasecamera -- in case anyone wants to try it out ;)):

        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Model model;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);

                model = Content.Load<Model>("ship");
                foreach (ModelMesh mesh in model.Meshes)
                {
                    foreach (BasicEffect be in mesh.Effects)
                    {
                        be.EnableDefaultLighting();
                        be.FogEnabled = true;
                        be.FogColor = Color.CornflowerBlue.ToVector3();
                        be.FogStart = 10;
                        be.FogEnd = 30;
                    }
                }
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                model.Draw(
                    Matrix.Identity * Matrix.CreateScale(0.01f) * Matrix.CreateRotationY(3 * MathHelper.PiOver4),
                    Matrix.CreateLookAt(new Vector3(0, 0, 30), Vector3.Zero, Vector3.Up),
                    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 16f / 9f, 1, 100));

                base.Draw(gameTime);
            }
        }

    Read the article

  • Mandelbrot set not displaying properly

    - by brainydexter
    I am trying to render the Mandelbrot set using GLSL, and I'm not sure why it's not rendering the correct shape. Does the Mandelbrot calculation require the (x, y) [or (real, imag)] values to be within a certain range? Here is a screenshot. I render a quad as follows:

        float w2 = 6;
        float h2 = 5;
        glBegin(GL_QUADS);
        glVertex3f(-w2, h2, 0.0);
        glVertex3f(-w2, -h2, 0.0);
        glVertex3f(w2, -h2, 0.0);
        glVertex3f(w2, h2, 0.0);
        glEnd();

    My vertex shader:

        varying vec3 Position;

        void main(void)
        {
            Position = gl_Vertex.xyz;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        }

    My fragment shader (where all the meat is):

        uniform float MAXITERATIONS;
        varying vec3 Position;

        void main(void)
        {
            float zoom = 1.0;
            float centerX = 0.0;
            float centerY = 0.0;

            float real = Position.x * zoom + centerX;
            float imag = Position.y * zoom + centerY;

            float r2 = 0.0;
            float iter;
            for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter)
            {
                float tempreal = real;
                real = (tempreal * tempreal) + (imag * imag);
                imag = 2.0 * real * imag;
                r2 = (real * real) + (imag * imag);
            }

            vec3 color;
            if (r2 < 4.0)
                color = vec3(1.0);
            else
                color = vec3(iter / MAXITERATIONS);

            gl_FragColor = vec4(color, 1.0);
        }
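
    For comparison, a textbook Mandelbrot inner loop (z = z^2 + c, not the asker's code): it keeps the constant c separate from the iterated value, subtracts the squared imaginary part, and uses the old real part when updating the imaginary part.

        float cr = Position.x * zoom + centerX;
        float ci = Position.y * zoom + centerY;
        float zr = 0.0;
        float zi = 0.0;
        float r2 = 0.0;
        float iter;
        for (iter = 0.0; iter < MAXITERATIONS && r2 < 4.0; ++iter)
        {
            float t = zr;
            zr = t * t - zi * zi + cr;  // real part from the OLD zr and zi, plus cr
            zi = 2.0 * t * zi + ci;     // uses the old real part t, plus ci
            r2 = zr * zr + zi * zi;
        }

    And yes, range matters: the interesting region is roughly real in [-2.5, 1] and imag in [-1, 1], so a quad spanning roughly plus/minus 6 by 5 with zoom = 1.0 maps most fragments to points that escape immediately.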

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" earlier, but probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered; it's that simple. No need to render a tile that won't be seen. But since you have up to 1000x1000 objects in your map, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not. Question: what is the best way to implement this efficiently, so that it can quickly determine which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points with a texture inside that shape. Question: how do you determine whether this kind of object is inside the view of the camera? It's easy with a 48x48 rectangle: just check whether its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put: how do I manage the code and the data efficiently so as not to loop through a million objects at the same time?
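
    A minimal sketch of the uniform-grid case (field and method names are illustrative): convert the camera rectangle to tile indices once per frame and loop only over that window, whose cost is independent of total map size.

        int x0 = Math.Max(0, camera.Left / tileSize);
        int y0 = Math.Max(0, camera.Top / tileSize);
        int x1 = Math.Min(mapWidth - 1, camera.Right / tileSize);
        int y1 = Math.Min(mapHeight - 1, camera.Bottom / tileSize);

        for (int y = y0; y <= y1; y++)
            for (int x = x0; x <= x1; x++)
                DrawTile(tiles[x, y]);

    For the free-form shapes, the usual answer is the same idea one level up: precompute each shape's axis-aligned bounding box (min/max over its points), test that box against the camera rectangle, and store the shapes in a coarse grid or quadtree so that only nearby boxes are tested at all.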

    Read the article

  • 2D scene graph not transforming relative to parent

    - by Dr.Denis McCracleJizz
    I am currently in the process of coding my own 2D scene graph, which is basically a port of Flash's render engine. The problem I have right now is that my rendering doesn't seem to be working properly. This code creates the localTransform property for each DisplayObject:

        Matrix m_transform = Matrix.CreateRotationZ(rotation)
            * Matrix.CreateScale(scaleX, scaleY, 1)
            * Matrix.CreateTranslation(new Vector3(x, y, z));

    This is my render code:

        float dRotation;
        Vector2 dPosition, dScale;
        Matrix transform;

        transform = this.localTransform;
        if (parent != null)
            transform = localTransform * parent.localTransform;

        DecomposeMatrix(ref transform, out dPosition, out dRotation, out dScale);

        spriteBatch.Draw(this.texture, dPosition, null, Color.White, dRotation,
            new Vector2(originX, originY), dScale, SpriteEffects.None, 0.0f);

    Here is the result when I try to add the stage, then a first DisplayObjectContainer to the stage, and then a second one. It may look fine, but the problem lies in the fact that I add the first DisplayObjectContainer at (400, 400) and the second one within it (that's the smallest one) at position (0, 0). So it should be right over its parent, but it gets rendered within the parent at the same position the parent has (400, 400) for some reason. It's just as if I double the parent's localMatrix and then render the second cat there. This is the code I use to loop through every child:

        base.Draw(spriteBatch);
        foreach (DisplayObject child in _childs)
        {
            child.Draw(spriteBatch);
        }
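
    One hedged observation (an assumption, since the container code isn't shown): the draw composes with the parent's local transform only, so anything higher up the tree is applied at most one level deep, and offsets can appear doubled or dropped as depth changes. A minimal sketch of the usual accumulation:

        // Accumulate through ALL ancestors, child-relative transform first
        // (XNA's row-vector convention: child * parent * grandparent * ...).
        public Matrix GetWorldTransform()
        {
            Matrix world = localTransform;
            for (DisplayObject p = parent; p != null; p = p.parent)
                world = world * p.localTransform;
            return world;
        }

    Passing the parent's world matrix down through the recursive Draw avoids re-walking the chain for every child. It is also worth checking the local-transform order itself: with row vectors, Scale * Rotation * Translation is the conventional composition, and rotating before scaling (as in the snippet above) skews once non-uniform scales appear.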

    Read the article

  • How can I use the dualforward parameter in my unity shader to use lightmaps and normal maps together?

    - by Raphaeltm
    I'm using the free version of Unity and I would like to combine lightmaps with specularity and normal maps. After doing a -bunch- of research, I've figured out that there doesn't seem to be any easy way to do this in the free version of Unity, which doesn't support deferred rendering/easy use of dual lightmaps. However, it looks like it's possible by writing a custom shader: using the "dualforward" parameter in a shader, switching the lightmapping mode to "dual lightmaps", and turning on "Use in forward ren." (basically, writing a shader that specifies the use of dual lightmaps, which should allow for a combination of lightmaps and normal maps). So I downloaded the source code for the default shaders (because all I need is a normal specular bumped shader) and added "dualforward" to the parameters:

        Shader "Bumped Specular Dual Lightmaps" {
        Properties {
            _Color ("Main Color", Color) = (1,1,1,1)
            _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
            _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
            _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            _BumpMap ("Normalmap", 2D) = "bump" {}
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            LOD 400

            CGPROGRAM
            #pragma surface surf BlinnPhong dualforward

            sampler2D _MainTex;
            sampler2D _BumpMap;
            fixed4 _Color;
            half _Shininess;

            struct Input {
                float2 uv_MainTex;
                float2 uv_BumpMap;
            };

            void surf (Input IN, inout SurfaceOutput o) {
                fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
                o.Albedo = tex.rgb * _Color.rgb;
                o.Gloss = tex.a;
                o.Alpha = tex.a * _Color.a;
                o.Specular = _Shininess;
                o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            }
            ENDCG
        }
        FallBack "Specular"
        }

    This, however, doesn't seem to work. When I keep the "dualforward" param, every object that uses it seems to be lit by the one directional light in the scene. When I remove the "dualforward" param, they look like normal lightmapped objects with no normal maps or specularity. I noticed that support for "dualforward" seems to be new in v3.4.2, so I made sure to download it (I was running 3.4.1), but it still doesn't work. Anybody have any advice for me?

    Read the article

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles about the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics for example points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002 however, and I'm wondering if things are different now. There has been some research in performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6), would that be used in a modern game? To summarize, my question is about the state of LOD in modern video games, what algorithms are used and why? In particular, is view dependent continuous simplification used or does the runtime overhead make using discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm used (e.g. vertex clustering) to generate them offline, do artists manually create the models, or perhaps a combination of both methods is used?

    Read the article

  • From a 3D modeler to an iPhone app - what are best practices?

    - by bonkey
    I am quite new to 3D programming on the iPhone, and I would like to ask for hints about organizing work between designers and programmers on that platform. Most of all: what kinds of tools, libraries, or plugins cooperate best on both sides? Although I consider the question as looking for general best-practice advice, I would also like to find a solution for my current situation, which I describe further down. I've already done some research and found the following libraries:

        SIO2
        Khronos OpenGL ES 1.x SDK for PowerVR MBX
        Unity3D
        Oolong Game Engine

    I've checked modellers and plugins for them that produce output formats readable by those tools:

        obj2opengl (Wavefront OBJ to plain header file converter)
        Blender with the SIO2 exporter
        iphonewavefrontloader
        Cheetah3D
        PVRGeoPOD for 3DS / Maya

    Unfortunately I still have no clear vision of how to combine any of these tools to get a designer's work into an application. I'm looking for a way of getting it in as complete a form as possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics), but I still have nothing. And here comes my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly prevents using PVRGeoPOD), and are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible. The basic rule I've already found in some places, "use Wavefront OBJ", does not always work. I haven't gotten any acceptable results with production files, actually. The only things that worked fine were some mere examples. Some of my models imported incomplete, sometimes exporters hung or generated enormous files not really useful on an iPhone, and sometimes enabling textures (with GL_TEXTURE_2D) just crashed the app. I know it might be a problem with overly complicated models or my own mistakes coming from inexperience, but I am not able to find any guidelines for the process that would streamline cooperation with designers. I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid what can be avoided and get the most from the model files. The best would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment I've only got "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang", and so on... Is it really so frustrating, or have I just missed something obvious? Can anybody share their experience in this field and tell me what kind of software they use for making things happen?

    Read the article

  • Effectively implementing a game view using Java

    - by kdavis8
    I am writing a 2D game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. Fire Red, Ruby, Diamond, and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then place individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects, trees, rocks, and bushes, and also houses, castles, NPCs, and so on. But I also need to implement some kind of camera view class that focuses on the player. The camera view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see the arbitrary proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite. On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform across all screen sizes and resolutions. I know this sounds like a handful and may fall under the category of multiple questions, but the questions are all related and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make what I need to do a lot simpler. Also, any algorithmic/design advice would greatly benefit me as well. An example of what I am trying to do, in source code form, is below:

        package myPackage;

        /**
         * The purpose of GameView is to: render a scene using the Scene class,
         * create a clipping pane using the CameraView class, and finally
         * instantiate a coordinate grid using the Path class.
         *
         * Once all of these things have been done, GameView should be
         * instantiated and used jointly with its helper classes. CameraView
         * should be used as the main drawing image; it is the window to the
         * game world. Scene passes data constantly to CameraView so that the
         * entire map flows smoothly. Path uses the x and y coordinates from
         * CameraView to construct cells for pathfinding algorithms.
         */
        public class GameView {
            // Scene is a helper class to GameView. It renders the entire map
            // to memory for the camera view.
            Scene scene;

            // CameraView is a helper class to GameView. It clips the scene
            // into a small image that follows the player's coordinates.
            CameraView camera;

            // Path is a helper class to GameView. It observes and calculates
            // the coordinates of the camera view and divides them into
            // grids/cells for pathfinding algorithms.
            Path path;

            // Represents the player; has a getSprite() method that returns
            // the current frame (column/row combination) of the passed
            // sprite sheet.
            Sprite player;
        }
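
    A minimal sketch of the CameraView idea described above, in plain Java 2D (names and numbers are illustrative): keep everything in world coordinates, derive the visible window from the player each frame, and let java.awt.Rectangle's intersects() do the clipping test.

        import java.awt.Rectangle;

        class CameraView {
            static final int RANGE = 500;   // proximity from the question
            final Rectangle viewport = new Rectangle(0, 0, RANGE * 2, RANGE * 2);

            void follow(int playerX, int playerY, int mapW, int mapH) {
                // Center on the player, clamped so the view stays on the map.
                int x = Math.max(0, Math.min(playerX - RANGE, mapW - viewport.width));
                int y = Math.max(0, Math.min(playerY - RANGE, mapH - viewport.height));
                viewport.setLocation(x, y);
            }

            boolean isVisible(Rectangle spriteBounds) {
                return viewport.intersects(spriteBounds);  // skip everything else
            }
        }

    Drawing then shifts the world by (-viewport.x, -viewport.y) via Graphics2D.translate(). For resolution independence, one common approach is to render the viewport into a fixed-size BufferedImage and scale that single image to the actual window size with drawImage().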

    Read the article
