Search Results

Search found 1000 results on 40 pages for 'scene'.

Page 18 of 40

  • How to make and render a simple game just with 3ds Max?

    - by Sina
    I want to make a simple EXE file where there is one object in the scene and the user can rotate that object with the arrow keys (or the mouse). Is there any way to do this without a game engine, using only 3ds Max script? The reason is that there is a special renderer I want to use: V-Ray, which can produce stereoscopic 3D images for 3D glasses. I am not good at making games and engines, so I want to know if I can do it with 3ds Max scripts (MAXScript) alone?

    Read the article

  • shader coding: calculate screen coordinates of fragment

    - by Jay
    Good morning, I'm new to shader coding and trying to implement some visual-effects code in shaders using billboards. (Yes, I couldn't have picked anything harder to start with, but I'm lucky that way.) Setup: I have rendered the full-screen Z depth to an array of floats in a previous pass. In the fragment shader I need the scene depth where the rendered fragment is displayed (to see if it's occluded). I can use tex2D() to get the depth value if I have the screen coordinates of the point being rendered in the fragment shader. Question: in the fragment shader, how do you calculate the screen coordinates of the pixel (in the range 0-1.0)? Is the position passed to the fragment shader a pixel offset? If so, I guess it would be: float2(position.x / screenWidth, position.y / screenHeight). Thanks for any help!
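
    One way to sanity-check that guess: in GLSL the fragment's window-space position arrives in gl_FragCoord (HLSL offers the VPOS / SV_Position semantic for the same thing), and dividing its xy by the viewport size gives exactly the 0-1.0 coordinates a screen-sized texture lookup expects. A minimal C++ sketch of that mapping, with an assumed viewport size:

      #include <cstdio>

      // Sketch of the same math a fragment shader would do: window-space pixel
      // coordinates (as delivered by GLSL's gl_FragCoord / HLSL's VPOS) mapped
      // to the 0..1 range used for texture lookups. Pixel centres land on
      // half-integers, so the middle pixel maps to roughly (0.5, 0.5).
      int main() {
          const float screenW = 1280.0f, screenH = 720.0f;   // assumed viewport size
          const float fragX = 640.5f, fragY = 360.5f;        // centre pixel of the screen
          float u = fragX / screenW;                         // 0..1 across the screen
          float v = fragY / screenH;                         // 0..1 along the other axis
          std::printf("uv = (%.4f, %.4f)\n", u, v);          // ~ (0.5, 0.5)
          return 0;
      }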

    Read the article

  • Geometry instancing in OpenGL ES 2.0

    - by seahorse
    I am planning to do geometry instancing in OpenGL ES 2.0. Basically I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0? I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass the same mat4 three times, once for each vertex of a triangle (since the model-view remains constant across the vertices of the triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices × 16 floats × the number of triangles). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as per-vertex data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?
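
    For what it's worth, core OpenGL ES 2.0 exposes no instanced draw calls or attribute divisors (those arrive with extensions such as EXT_instanced_arrays, or with ES 3.0), so a per-vertex mat4 attribute really would have to be duplicated for every vertex. A common fallback is one uniform upload plus one draw call per instance, with the chair's buffers bound once; a hedged C-style sketch, names hypothetical:

      #include <GLES2/gl2.h>

      // Pseudo-instancing for core ES 2.0: the chair's vertex and index buffers
      // are assumed already bound; only the model-view matrix changes per chair.
      // One glUniformMatrix4fv + one glDrawElements per instance avoids sending
      // the matrix per vertex. (In ES2 the transpose argument must be GL_FALSE.)
      void drawChairs(GLuint program, GLsizei indexCount,
                      const float* modelViewMatrices, // instanceCount * 16 floats
                      int instanceCount)
      {
          GLint uModelView = glGetUniformLocation(program, "u_modelView");
          for (int i = 0; i < instanceCount; ++i) {
              // One matrix per chair; the geometry itself was uploaded only once.
              glUniformMatrix4fv(uModelView, 1, GL_FALSE, modelViewMatrices + 16 * i);
              glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
          }
      }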

    Read the article

  • Blender Object Appearing Gray when all Lights are Off

    - by celestialorb
    I have an issue with Blender where, when I turn my only light off (a sun lamp) and render the image, my object appears gray rather than black (and thus still visible to the camera). I can't figure out why this is happening. Here's what I just did in my scene: added a new UV Sphere mesh (making a total of two spheres), made it visible to the camera, turned off the sun lamp (by setting its energy to 0), and rendered. The result I obtained is below. I discovered this when attempting to render the first sphere with a material/texture on it, and it was too bright. The materials on the spheres (which are different) are very basic: there's no emit, and diffuse and specular are at their default values. Could there be an issue with the way my camera is set up? Thanks in advance!

    Read the article

  • Dynamic Memory Allocation and Memory Management

    - by Bunkai.Satori
    In an average game there are hundreds or maybe thousands of objects in the scene. Is it completely correct to allocate memory for all objects, including gunshots (bullets), dynamically via the default new()? Should I create a memory pool for dynamic allocation, or is there no need to bother with this? What if the target platform is a mobile device? Is there a need for a memory manager in a mobile game? Thank you. Language used: C++; currently developed under Windows, but planned to be ported later.
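
    For the bullet case specifically, a fixed-size pool is the textbook answer: allocation cost becomes constant and heap fragmentation disappears, which matters most on mobile. A minimal C++ sketch (illustrative, not a full memory manager):

      #include <cassert>
      #include <cstddef>
      #include <new>

      // Fixed-size object pool: every slot is pre-allocated up front, and a
      // free list threaded through the unused slots makes allocate/deallocate
      // O(1) with no calls into the general-purpose heap at runtime.
      template <typename T, std::size_t N>
      class Pool {
          union Slot { Slot* next; alignas(T) unsigned char storage[sizeof(T)]; };
          Slot slots_[N];
          Slot* free_;
      public:
          Pool() : free_(slots_) {
              for (std::size_t i = 0; i + 1 < N; ++i) slots_[i].next = &slots_[i + 1];
              slots_[N - 1].next = nullptr;         // last slot ends the free list
          }
          void* allocate() {                        // O(1): pop from the free list
              if (!free_) return nullptr;           // pool exhausted; caller decides policy
              Slot* s = free_; free_ = free_->next;
              return s->storage;
          }
          void deallocate(void* p) {                // O(1): push back onto the free list
              Slot* s = reinterpret_cast<Slot*>(p);
              s->next = free_; free_ = s;
          }
      };

      struct Bullet { float x, y, vx, vy; };

      int main() {
          Pool<Bullet, 256> bullets;                // capacity chosen up front
          void* p = bullets.allocate();
          assert(p != nullptr);
          Bullet* b = new (p) Bullet{0, 0, 1, 0};   // placement-new into the slot
          b->~Bullet();                             // destroy before returning the slot
          bullets.deallocate(b);
          return 0;
      }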

    Read the article

  • ALT.NET Seattle

    - by GeekAgilistMercenary
    Time to rock the ALT.NET scene and head up to the conference this weekend. I must say, out of all the conferences I have been to, the ALT.NET Conference is by far one of the best. Great minds, great attitudes, awesome chances to learn, awesome chances to expand on one's ideas with others who have hit the same hurdles! All in all, last year was great and I am expecting it to be a great conference this year as well.

    For more information check out the ALT.NET site: http://2010conf.altnetseattle.org/

    To get more involved in the monthly ALT.NET events in Seattle:
    http://groups.google.com/group/altnetseattle
    http://www.facebook.com/group.php?gid=111345965570
    http://www.altnetseattle.org/

    If you are in the Seattle area this weekend, be sure to hit up the conference. For the original entry and other blog entries, check out my personal blog.

    Read the article

  • Exporting .FBX model into XNA - non-orthogonal bones

    - by Sweta Dwivedi
    I created a butterfly model in 3ds Max with some basic animation; however, trying to export it to .FBX format I get the following message. Any idea how I can transform the wings to be orthogonal? "One or more objects in the scene has local axes that are not perpendicular to each other (non-orthogonal). The FBX plug-in only supports orthogonal (or perpendicular) axes and will not correctly import or export any transformations that involve non-perpendicular local axes. This can create an inaccurate appearance with the affected objects: -Right.Wing" I have attached a picture for reference.

    Read the article

  • Tab Sweep - Jazoon aftermath, PaaS press, REST +WebSocket, ...

    - by alexismp
    Recent Tips and News on Java EE 6 & GlassFish:
    • The GlassFish Tale - Oracle Scene (Markus)
    • Notes from Jazoon 2011 (Marek)
    • Jazoon '11 presentations (Jazoon.com)
    • Enterprise Java upgrade geared to PaaS clouds (TechCentral.ie)
    • JavaOne 2011: Content review process and Tips for submissions (Arun)
    • REST + WebSocket applications? Why not using the Atmosphere Framework (Jeanfrancois)
    • Get your Java 7 screensaver! (Duke)

    Read the article

  • How can I create a fast, real-time, fixed length glowing ray?

    - by igf
    Similar to the Disintegrate skill in Diablo 3. It should not light other objects in the scene: just glowing and animated, like in this video: http://www.youtube.com/watch?v=D_c4x6aQAG8. Should I use a pack of pre-computed glow-source textures for each frame of the ray animation, as in this article (http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html), and feed it into a bloom shader? Are there any other efficient ways to achieve this effect? I'm using OpenGL ES 2.0.

    Read the article

  • The European Commission examines Intel's acquisition of McAfee and fears a monopoly position in processor security

    The European Commission is examining Intel's acquisition of McAfee, fearing a monopoly position in processor security. Update of 20.12.2010 by Katleen. Intel's acquisition of McAfee could be in jeopardy: the European Commission is currently reviewing the case and has raised some reservations. While Intel currently occupies a major place on the processor scene, so this acquisition would not in itself grant it a monopoly position in the security field, Brussels wants to look further ahead. The European authorities are thinking of the future and fear that the face of the IT security market could be reshaped. Some specialists think...

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load a big model (around 200,000 vertices) with the function aiImportFileExWithProperties in my software, it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and ASSIMP view are using the DLL version of the library at 64 bit, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software:

      // default pp steps
      unsigned int ppsteps =
          aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
          aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
          aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
          aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
          aiProcess_RemoveRedundantMaterials | // remove redundant materials
          aiProcess_FindDegenerates          | // remove degenerated polygons from the import
          aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
          aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
          aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
          aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
          aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
          aiProcess_OptimizeMeshes           | // join small meshes, if possible
          aiProcess_SplitByBoneCount         | // split meshes with too many bones; necessary for our (limited) hardware skinning shader
          0;

      cout << "Loading " << pFile << "... ";
      aiPropertyStore* props = aiCreatePropertyStore();
      aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
      aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, 80.f);
      aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT);
      aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
      //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

      // Call ASSIMP's C API to load the file
      scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(),
          ppsteps |                        /* default pp steps */
          aiProcess_GenSmoothNormals |     // generate smooth normal vectors if not existing
          aiProcess_SplitLargeMeshes |     // split large, unrenderable meshes into submeshes
          aiProcess_Triangulate |          // triangulate polygons with more than 3 edges
          //aiProcess_ConvertToLeftHanded | // convert everything to D3D left-handed space
          aiProcess_SortByPType |          // make 'clean' meshes which consist of a single type of primitive
          0,
          NULL, props);
      aiReleasePropertyStore(props);
      if(!scene){
          cout << aiGetErrorString() << endl;
          return 0;
      }

    This is the relevant piece of code in ASSIMP view:

      // default pp steps: identical flag list to the one above
      unsigned int ppsteps = aiProcess_CalcTangentSpace | aiProcess_JoinIdenticalVertices |
          aiProcess_ValidateDataStructure | aiProcess_ImproveCacheLocality |
          aiProcess_RemoveRedundantMaterials | aiProcess_FindDegenerates |
          aiProcess_FindInvalidData | aiProcess_GenUVCoords | aiProcess_TransformUVCoords |
          aiProcess_FindInstances | aiProcess_LimitBoneWeights | aiProcess_OptimizeMeshes |
          aiProcess_SplitByBoneCount |
          0;

      aiPropertyStore* props = aiCreatePropertyStore();
      aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
      aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, g_smoothAngle);
      aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0);
      aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
      //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

      // Call ASSIMP's C API to load the file
      g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName,
          ppsteps |                       /* configurable pp steps */
          aiProcess_GenSmoothNormals |    // generate smooth normal vectors if not existing
          aiProcess_SplitLargeMeshes |    // split large, unrenderable meshes into submeshes
          aiProcess_Triangulate |         // triangulate polygons with more than 3 edges
          aiProcess_ConvertToLeftHanded | // convert everything to D3D left-handed space
          aiProcess_SortByPType |         // make 'clean' meshes which consist of a single type of primitive
          0,
          NULL, props);
      aiReleasePropertyStore(props);

    As you can see, the code is nearly identical, because I copied it from ASSIMP view. What could be the reason for such a difference in performance? Both programs use the same DLL, Assimp64.dll (compiled on my computer with VC++ 2010 Express), and the same function, aiImportFileExWithProperties, to load the model, so I assume the actual code employed is the same. How is it possible that aiImportFileExWithProperties is 100 times slower when called by my software than when called by ASSIMP view? What am I missing? I am not good with DLLs and dynamic and static libraries, so I might be missing something obvious.

    UPDATE: I found out why the code was going slower. Basically, I was running my software with "Start Debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010, I get the same performance as ASSIMP view. However, now I have a new question. Why does the DLL perform slower under the VC++ debugger? I compiled it in release mode, without debugging information. Is there any way to make the DLL go fast in debug mode, i.e. without debugging the DLL itself? I am interested in debugging only my own code, not the DLL, which I assume is already working fine. I do not want to wait 2 minutes every time I load my software for debugging. Does this request make sense?
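
    It does make sense, and one well-known cause fits it: when a process is launched by the Visual Studio debugger, Windows enables the debug heap for that process, even for release builds, and allocation-heavy code such as a large import can slow down by orders of magnitude. Setting the environment variable _NO_DEBUG_HEAP=1 in the project's debugging environment settings disables it (stated here as the usual explanation, not a verified diagnosis of this exact case). A small Windows-only check:

      #include <windows.h>
      #include <cstdio>

      // Prints whether a debugger launched us and whether the debug-heap
      // opt-out variable is set. _NO_DEBUG_HEAP must be set in the debugger's
      // environment (e.g. the project's Debugging settings) before launch;
      // setting it from inside the running process is too late.
      int main() {
          std::printf("Debugger attached: %s\n", IsDebuggerPresent() ? "yes" : "no");
          char buf[8];
          DWORD n = GetEnvironmentVariableA("_NO_DEBUG_HEAP", buf, sizeof(buf));
          std::printf("_NO_DEBUG_HEAP set: %s\n", n > 0 ? "yes" : "no");
          return 0;
      }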

    Read the article

  • Need help starting with a game in Flash/AS3

    - by Hossein
    Hi, I am kinda new to Flash/AS3. I have written some stuff, and now I know the basics of ActionScript. I want to develop a game in which my character moves exactly the way Super Mario moves: the background changes and new obstacles come into the way, like an animation you can interact with by climbing, shooting, etc. The one big thing I don't understand is how to make my background move and introduce new obstacles as my character moves forward. Currently my scene is static, which means the background stays the same and the obstacles are also stationary. Can someone point me to a tutorial or give me an answer that resolves my confusion? Thanks
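
    The usual trick is engine-independent: keep the hero near a fixed screen position, draw everything else at its world position minus a camera offset, and spawn obstacles just beyond the right edge of the view. A toy C++ sketch of the bookkeeping (names hypothetical; in AS3 you would typically put the world in a container Sprite and set its x to the negated offset):

      #include <cstdio>

      // Minimal side-scroller camera sketch: the hero "moves" through the world,
      // but on screen it is the background/obstacle layer that shifts left.
      int main() {
          const float screenWidth = 550.0f;      // default Flash stage width
          float heroWorldX = 0.0f;
          float nextObstacleX = 600.0f;          // world position of the next obstacle
          for (int frame = 0; frame < 5; ++frame) {
              heroWorldX += 120.0f;                              // hero runs right
              float cameraX = heroWorldX - screenWidth * 0.3f;   // hero sits 30% from the left
              if (cameraX < 0) cameraX = 0;                      // don't scroll past the start
              // In AS3 terms: worldContainer.x = -cameraX shifts the whole layer.
              std::printf("frame %d: layer at %.0f, obstacle on screen at %.0f\n",
                          frame, -cameraX, nextObstacleX - cameraX);
              if (nextObstacleX - cameraX < screenWidth + 100.0f) // spawn ahead of the view
                  nextObstacleX += 400.0f;
          }
          return 0;
      }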

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:
    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send 1 matrix to the GPU (the projection-model-view combination), thereby decreasing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise GL calls, batching where appropriate
    And possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at StackOverflow.

    Read the article

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to implement shadow mapping in my engine. I started with directional lights because they seemed to be the easiest, but I was wrong :) I have implemented deferred shading, and I retrieve position from depth. I think that's where the biggest problem is, but the code looks OK to me. Now, more about the problem: the shadow map projected onto the meshes looks badly scaled and translated, and some information from the shadow map texture isn't visible. You can see it on this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow map preview with the actual scene. As you can see, the shadows are in the wrong place, and the shadows of the cone and the sphere aren't visible. Could you look at my code and tell me where the mistake is?

      // create shadow map
      if(!_shd) glGenTextures(1, &_shd);
      glBindTexture(GL_TEXTURE_2D, _shd);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
      glDrawBuffer(GL_NONE);

      // setting the camera
      Vector dire = Vector(0,0,1);
      ACamera.setLookAt(dire, Vector(0));
      ACamera.setPerspectiveView(60.0f, 1, 0.1f, 10.0f); // currently needed for proper frustum corners calculation
      Vector min(ACamera._point[0]), max(ACamera._point[0]);
      for(int i=0;i<8;i++){
          max = Max(max, ACamera._point[i]);
          min = Min(min, ACamera._point[i]);
      }
      ACamera.setOrthogonalView(min.x, max.x, min.y, max.y, -max.z, -min.z);
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for the shadow map
      // rendering to depth buffer

      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
      Shaders["DirLight"].set(true);
      Matrix4 bias;
      bias.x.set(0.5, 0.0, 0.0, 0.0);
      bias.y.set(0.0, 0.5, 0.0, 0.0);
      bias.z.set(0.0, 0.0, 0.5, 0.0);
      bias.w.set(0.5, 0.5, 0.5, 1.0);
      Shaders["DirLight"].set("textureMatrix", ACamera.matrix * Projection3D * bias); // order of multiplications is 100% correct; everything gives me the same result as using glm
      glActiveTexture(GL_TEXTURE5);
      glBindTexture(GL_TEXTURE_2D, _shd);
      lightDir(dir); // light calculations

    The vertex shader does nothing related to the shadow calculations. The pixel shader function which calculates whether a pixel is in shadow:

      float readShadowMap(vec3 eyeDir)
      {
          // retrieve depth of pixel
          float z = texture2D(depth, gl_FragCoord.xy/screen).z;
          vec3 pos = vec3(gl_FragCoord.xy/screen, z);
          // transform by the inverse projection and view matrices
          vec4 worldSpace = inverse(View) * inverse(ProjectionMatrix) * vec4(pos*2 - 1, 1);
          worldSpace /= worldSpace.w;
          vec4 coord = textureMatrix * worldSpace;
          float vis = 1.0f;
          if(texture2D(shadow, coord.xy).z < coord.z - 0.001) vis = 0.2f;
          return vis;
      }

    I also have a question specifically about shadows for directional lights. Currently I always look at the 0,0,0 position, and in a further implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want. Maybe that's because of the problems mentioned above, but I would like to hear your opinion.

    EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, ProjectionMatrix is the camera projection. First I try to get the world-space position from the depth map, and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. I can't use the vertex shader to do something like gl_Position = textureMatrix * gl_Vertex and have it interpolated in the fragment shader, because of the deferred rendering, so I want to get it from the depth buffer. EDIT2: I also tried doing it as in the Coding Labs tutorial about shadow mapping with deferred rendering, but unfortunately that also works incorrectly.

    Read the article

  • How to use lemodev highscore plugin for unity?

    - by user3889649
    I am trying to add a server-side highscore system to my game in Unity. I have downloaded the free Lemodev highscore plugin from the Asset Store, but I can't figure out how to use it. I know where to put my server info and so on, but beyond that, what are you supposed to do? I added the main camera prefab that came with the package to my scene, but other than adding an additional camera it did precisely nothing (at least it seems that way). Could anyone look into it and tell me how to use it? The developer's website seems to have no information on the subject.

    Read the article

  • First time shadow mapping problems

    - by user1294203
    I have implemented basic shadow mapping for the first time in OpenGL using shaders, and I'm facing some problems. Below you can see an example of my rendered scene. The shadow mapping process I'm following is: I render the scene to the framebuffer using a view matrix from the light's point of view and the projection and model matrices used for normal rendering. In the second pass, I send the above MVP matrix from the light's point of view to the vertex shader, which transforms the position to light space. The fragment shader does the perspective divide and changes the position to texture coordinates. Here is my vertex shader:

      #version 150 core

      uniform mat4 ModelViewMatrix;
      uniform mat3 NormalMatrix;
      uniform mat4 MVPMatrix;
      uniform mat4 lightMVP;
      uniform float scale;

      in vec3 in_Position;
      in vec3 in_Normal;
      in vec2 in_TexCoord;

      smooth out vec3 pass_Normal;
      smooth out vec3 pass_Position;
      smooth out vec2 TexCoord;
      smooth out vec4 lightspace_Position;

      void main(void){
          pass_Normal = NormalMatrix * in_Normal;
          pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
          lightspace_Position = lightMVP * vec4(scale * in_Position, 1.0);
          TexCoord = in_TexCoord;
          gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
      }

    And my fragment shader:

      #version 150 core

      struct Light{
          vec3 direction;
      };

      uniform Light light;
      uniform sampler2D inSampler;
      uniform sampler2D inShadowMap;

      smooth in vec3 pass_Normal;
      smooth in vec3 pass_Position;
      smooth in vec2 TexCoord;
      smooth in vec4 lightspace_Position;

      out vec4 out_Color;

      float CalcShadowFactor(vec4 lightspace_Position){
          vec3 ProjectionCoords = lightspace_Position.xyz / lightspace_Position.w;
          vec2 UVCoords;
          UVCoords.x = 0.5 * ProjectionCoords.x + 0.5;
          UVCoords.y = 0.5 * ProjectionCoords.y + 0.5;
          float Depth = texture(inShadowMap, UVCoords).x;
          if(Depth < (ProjectionCoords.z + 0.001))
              return 0.5;
          else
              return 1.0;
      }

      void main(void){
          vec3 Normal = normalize(pass_Normal);
          vec3 light_Direction = -normalize(light.direction);
          vec3 camera_Direction = normalize(-pass_Position);
          vec3 half_vector = normalize(camera_Direction + light_Direction);
          float diffuse = max(0.2, dot(Normal, light_Direction));
          vec3 temp_Color = diffuse * vec3(1.0);
          float specular = max(0.0, dot(Normal, half_vector));
          float shadowFactor = CalcShadowFactor(lightspace_Position);
          if(diffuse != 0 && shadowFactor > 0.5){
              float fspecular = pow(specular, 128.0);
              temp_Color += fspecular;
          }
          out_Color = vec4(shadowFactor * texture(inSampler, TexCoord).xyz * temp_Color, 1.0);
      }

    One of the problems is self-shadowing: as you can see in the picture, the crate has its own shadow cast on itself. What I have tried is enabling polygon offset (i.e. glEnable(GL_POLYGON_OFFSET_FILL) and glPolygonOffset(GLfloat, GLfloat)), but it didn't change much. As you see in the fragment shader, I have put a static offset value of 0.001, but I have to change the value depending on the distance of the light to get more desirable effects, which is not very handy. I also tried using front-face culling when I render to the framebuffer; that didn't change much either. The other problem is that pixels outside the light's view frustum get shaded. The only object that is supposed to be able to cast shadows is the crate. I guess I should pick more appropriate projection and view matrices, but I'm not sure how to do that. What are some common practices? Should I pick an orthographic projection? From googling around a bit, I understand that these issues are not that trivial.
    Does anyone have easy-to-implement solutions to these problems? Could you give me some additional tips? Please ask me if you need more information on my code. Here is a comparison, with and without shadow mapping, of a close-up of the crate. The self-shadowing is more visible.
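
    On the self-shadowing specifically, a constant offset such as 0.001 rarely suits every angle; a common remedy is a slope-scaled bias that grows as the surface tilts away from the light, applied in the depth comparison. A small C++ sketch of the usual formula (the same expression works inside the fragment shader, with NdotL = dot(Normal, light_Direction); the exact constants are assumptions to tune):

      #include <algorithm>
      #include <cmath>
      #include <cstdio>

      // Slope-scaled depth bias: a surface nearly parallel to the light rays
      // needs a larger offset than one facing the light head-on. NdotL is the
      // clamped cosine between the surface normal and the light direction.
      float shadowBias(float NdotL) {
          float bias = 0.005f * std::tan(std::acos(NdotL)); // grows as the surface tilts away
          return std::min(std::max(bias, 0.0f), 0.01f);     // clamp to a sane maximum
      }

      int main() {
          const float ns[] = {1.0f, 0.7f, 0.3f, 0.1f};
          for (float n : ns)
              std::printf("NdotL=%.1f -> bias=%.5f\n", n, shadowBias(n));
          return 0;
      }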

    Read the article

  • Using model tools as map editor

    - by cooky451
    I want to make a game which would require a 3D map editor. Of course, I would like to avoid creating such an editor. My idea is to use modeling tools (3ds Max, Maya, Blender) to create the map, and to give game-specific objects specified names. This way I'd just need to write a COLLADA-to-native-map-format converter. But I'm not sure if this is possible the way I imagine it, so I'd like to hear your thoughts on the matter. Are modeling tools suitable for creating big open-world maps? Can this naming-convention idea for game-specific objects work? Are the modeling tools able to export a scene in chunks / in a way that occlusion culling and collision detection can be properly done? If not: is there a way to build a suitable data structure from the exported data?

    Read the article

  • as3 3D camera lookat

    - by Johannes Jensen
    I'm making a 3D camera scene in Flash, drawn using drawTriangles() and rotated and translated using a Matrix3D. I've got the camera to look at a specific point, but only around the Y axis, using the x and z coordinates. Here is my code so far:

      var dx:Number = camera.x - lookAt.x;
      var dy:Number = camera.y - lookAt.y;
      var dz:Number = camera.z - lookAt.z;
      camera.rotationY = Math.atan2(dz, dx) * (180 / Math.PI) + 270;

    So no matter the x or z position, the point is always in the middle of the screen, but only if its y matches the camera's. So what I need is to calculate rotationX (which is measured in degrees, not radians), and I was wondering how I would do this?
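
    A plausible missing piece (hedged, since sign conventions differ between setups): rotationY came from the ground-plane components dx and dz, and the pitch is the angle between dy and the length of that ground projection. In C-style code (Math.atan2 and Math.sqrt are the AS3 equivalents):

      #include <cmath>
      #include <cstdio>

      // Pitch from a look vector: project onto the XZ ground plane, then take
      // the angle between the vertical component and that projection's length.
      int main() {
          double dx = 3.0, dy = 2.0, dz = 4.0;              // camera - lookAt, as in the question
          double ground = std::sqrt(dx*dx + dz*dz);         // look vector projected onto XZ
          double rotationX = std::atan2(dy, ground) * (180.0 / 3.14159265358979);
          std::printf("rotationX = %.2f degrees\n", rotationX); // sign/offset may need adjusting
          return 0;
      }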

    Read the article

  • Question about Target parameter of Matrix.CreateLookAt

    - by manning18
    I have a newbie question that's causing me a bit of confusion when experimenting with cameras and reading other people's implementations: does this parameter represent a point or a vector? In some examples I've seen people treat it like a specific point they are looking at (e.g. a position in the world); other times I see people caching the orientation of the camera in a rotation matrix and simply using the Matrix.Forward property as the "target"; other times it's a vector that's the result of targetPos - camPos; and I also saw a camPos + orientation.Forward. I was also just playing around with hard-coded target positions of the same direction, e.g. 1 to 10000, with no discernible difference in what I saw in the scene. Is the "Target" parameter actually a position or a direction (irrespective of magnitude)? Are there any subtle differences in behavior, common mistakes, or gotchas associated with what values you provide, or HOW you provide this parameter? Are all the methods I mentioned above equivalent? (Sorry, I've only recently started and my math is still catching up.)

    Read the article

  • Caption Competition 8 – Captions Take Manhattan

    - by Simple-Talk Editorial Team
    Update: Congratulations to Dimitrios for winning this week's caption competition. It's that time again. We present a bucolic scene, you tell us what you think is happening, and a grand time is had by all. Something to do with computing would be nice, but we're honestly not making it easy on you. A few suggested bons mots to get you on your way:
    - It certainly wasn't the best corporate teambuilding day, but it wasn't the worst either.
    - Prior art is discovered for Google's driverless car, including a military application.
    - As he opened fire, Nigel thought back to a more innocent time, before anyone made changes in his production database.
    - Fresh air and exercise were more exciting before the current obsession with health and safety.
    Leave your entries in the comments below; the funniest will win a $50 Amazon voucher.

    Read the article

  • How to make rigid bodies collide with Apex Clothing in PhysX for Maya

    - by b1nary.atr0phy
    According to the "Colliding with Rigid Bodies" part of the [Apex] Clothing Overview section of the documentation: "Rigid bodies present in your scene will push clothing around roughly as you might expect." Well, I beg to differ. The Apex cloth collides with the floor just fine, but that's about the only thing it collides with (unless I add a ragdoll to the same skeleton that the cloth is attached to). So, for example, if I try to bounce a ball (a dynamic rigid body) into the cloth, it simply passes through it. If I try to walk an actor with a ragdoll through it, he simply clips through it as well. Does anyone have any insight on this?

    Read the article

  • Rain effect looks like snowfall effect?

    - by Nikhil Lamba
    I am making a game, and in that game I want a rain effect. I am a little bit far from this right now. I am doing it like below:

      particleSystem.addParticleInitializer(new ColorInitializer(1, 1, 1));
      particleSystem.addParticleInitializer(new AlphaInitializer(0));
      particleSystem.setBlendFunction(GL10.GL_SRC_ALPHA, GL10.GL_ONE);
      particleSystem.addParticleInitializer(new VelocityInitializer(2, 2, 20, 10));
      particleSystem.addParticleInitializer(new RotationInitializer(0.0f, 30.0f));
      particleSystem.addParticleModifier(new ScaleModifier(1.0f, 2.0f, 0, 150));
      particleSystem.addParticleModifier(new ColorModifier(1, 1, 1, 1f, 1, 1, 1, 3));
      particleSystem.addParticleModifier(new ColorModifier(1, 1, 1f, 1, 1, 1, 1, 6));
      particleSystem.addParticleModifier(new AlphaModifier(0, 1, 0, 3));
      particleSystem.addParticleModifier(new AlphaModifier(1, 0, 1, 125));
      particleSystem.addParticleModifier(new ExpireModifier(50, 50));
      scene.attachChild(particleSystem);

    But it looks like a snowfall effect. What changes can I make to turn it into a rain effect? Please correct me. EDIT: here is a link to a snapshot: http://i.imgur.com/bRIMP.png

    Read the article

  • Unity3D: How To Smoothly Switch From One Camera To Another

    - by www.Sillitoy.com
    The question is basically self-explanatory. I have a scene with many cameras, and I'd like to smoothly switch from one to another. I am not looking for a cross-fade effect, but rather for the camera to move and rotate the view in order to reach the next camera's point of view, and so on. To this end I have tried the following code:

      firstCamera.transform.position.x = Mathf.Lerp(firstCamera.transform.position.x, nextCamera.transform.position.x, Time.deltaTime*smooth);
      firstCamera.transform.position.y = Mathf.Lerp(firstCamera.transform.position.y, nextCamera.transform.position.y, Time.deltaTime*smooth);
      firstCamera.transform.position.z = Mathf.Lerp(firstCamera.transform.position.z, nextCamera.transform.position.z, Time.deltaTime*smooth);
      firstCamera.transform.rotation.x = Mathf.Lerp(firstCamera.transform.rotation.x, nextCamera.transform.rotation.x, Time.deltaTime*smooth);
      firstCamera.transform.rotation.z = Mathf.Lerp(firstCamera.transform.rotation.z, nextCamera.transform.rotation.z, Time.deltaTime*smooth);
      firstCamera.transform.rotation.y = Mathf.Lerp(firstCamera.transform.rotation.y, nextCamera.transform.rotation.y, Time.deltaTime*smooth);

    But the result is actually not that good.
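
    A likely reason the result looks off: transform.rotation is a quaternion, and lerping its fields individually does not produce a valid rotation; also, Time.deltaTime*smooth as the blend factor eases out forever without ever arriving. The standard recipe is a vector lerp for position plus a quaternion slerp for rotation, driven by a 0-1 parameter (Unity exposes Vector3.Lerp and Quaternion.Slerp for exactly this). A self-contained C++ sketch of what slerp actually does:

      #include <cmath>
      #include <cstdio>

      // Spherical linear interpolation between two unit quaternions: constant
      // angular speed along the shortest arc, which is what a camera rotation
      // blend needs (and what per-component lerping of x/y/z cannot give).
      struct Quat { float w, x, y, z; };

      Quat slerp(Quat a, Quat b, float t) {
          float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
          if (dot < 0) { b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z; dot = -dot; } // take the short way around
          if (dot > 0.9995f) { // nearly parallel: fall back to a normalized lerp
              Quat r{ a.w + t*(b.w-a.w), a.x + t*(b.x-a.x), a.y + t*(b.y-a.y), a.z + t*(b.z-a.z) };
              float n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
              return { r.w/n, r.x/n, r.y/n, r.z/n };
          }
          float theta = std::acos(dot);
          float sa = std::sin((1 - t) * theta) / std::sin(theta);
          float sb = std::sin(t * theta) / std::sin(theta);
          return { sa*a.w + sb*b.w, sa*a.x + sb*b.x, sa*a.y + sb*b.y, sa*a.z + sb*b.z };
      }

      int main() {
          Quat from{1, 0, 0, 0};             // identity orientation
          Quat to{0.7071f, 0, 0.7071f, 0};   // 90 degrees around Y
          for (float t = 0; t <= 1.001f; t += 0.25f) {
              Quat q = slerp(from, to, t);
              std::printf("t=%.2f  w=%.3f y=%.3f\n", t, q.w, q.y);
          }
          return 0;
      }

    In Unity terms the per-frame equivalent would advance a single t from 0 to 1 and apply Vector3.Lerp to transform.position and Quaternion.Slerp to transform.rotation with that t.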

    Read the article

  • Intel NUC Video Blur

    - by donopj2
    I recently purchased the D34010WYKH NUC, and I figured this would be a great time to make the jump to a Linux-based system. I'm running Ubuntu 14.04, and I'm having an issue with video rendering that is driving me mad. Essentially, videos (all 1080p MKV files) appear to blur slowly, and it's most noticeable when the camera remains on a scene for a long period of time. Then, all of a sudden, the video will correct the blur and the image will be sharp, only for it to begin happening again, followed by more sudden and noticeable corrections. I have seen the exact same issue in both VLC and XBMC, and across several different videos. I have installed the latest Intel graphics drivers and searched the web, but to be honest I'm not sure how to accurately describe this problem. I'm also quite new to the OS, so my experience tinkering is limited. Has anyone experienced this type of issue before? Can it be resolved?

    Read the article

  • Unity3D - Projection matrix camera frustum

    - by MulletDevil
    I've used off-centre projection to create a custom projection matrix for my camera. When I run the game I can see the scene correctly in the game view, but in the editor view the camera frustum is not correct: it still shows the original frustum shape, not the new one. It also appears that Unity is using the original frustum for frustum culling and not the new one, as I can see objects being culled which are visible to the new frustum but would not be visible in the old one. Am I wrong in thinking that a custom projection matrix would alter the view frustum? Or am I missing something else?

    Read the article
