Search Results

Search found 2967 results on 119 pages for '3d meshes'.


  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game set in a 3D world with 2D sprites only (like the game Don't Starve), using OpenGL ES2 with C++. Currently I order elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to reduce draw calls. Here is what I have at the moment:
    - Order all elements of my scene back to front.
    - Send the ordered list of elements to the renderer.
    - The renderer checks its batch manager for a batch matching the element's material. If no batch exists, it creates a new one; if a batch exists for that material, it adds the sprite to the batch.
    - Build one big mesh from all the sprites of each batch (1 material type = 1 batch).
    - When all batches are ready, the batch manager computes draw commands for the renderer.
    - The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).
    But I've hit a problem: objects in one batch can be behind objects in another batch, so merging everything that shares a material breaks the back-to-front order. How can I handle this? Thanks!
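    One standard answer (a sketch, not from the post; written in C# to match the other snippets on this page, with hypothetical Sprite, Material and Batch types standing in for the poster's framework): keep the global back-to-front order, and cut a new batch whenever the material changes along that order. Runs of consecutive same-material sprites collapse into single draw calls while blending stays correct.

        using System.Collections.Generic;

        class Material { }
        class Sprite { public Material Material; public float Depth; }
        class Batch
        {
            public Material Material;
            public List<Sprite> Sprites = new List<Sprite>();
            public Batch(Material m) { Material = m; }
        }

        static class BatchBuilder
        {
            // Sort back to front, then start a new batch at every material
            // change along that order. Drawing the batches in list order
            // preserves the depth ordering that alpha blending requires.
            public static List<Batch> Build(List<Sprite> sprites)
            {
                sprites.Sort((a, b) => b.Depth.CompareTo(a.Depth));
                var batches = new List<Batch>();
                Batch current = null;
                foreach (var s in sprites)
                {
                    if (current == null || s.Material != current.Material)
                    {
                        current = new Batch(s.Material);
                        batches.Add(current);
                    }
                    current.Sprites.Add(s);
                }
                return batches;
            }
        }

    In the worst case (materials alternating in depth order) this degenerates back to one draw call per sprite, so it also pays to merge materials where possible, e.g. by packing sprite textures into a shared atlas.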

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles, and while I have some success with this:

        Vector3 gunToCam = cam.cameraPosition - getWorld.Translation;
        Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam);
        beamRight.Normalize();
        Vector3 beamUp = Vector3.Cross(beamRight, torpDirection);
        shipeffect.beamWorld = Matrix.Identity;
        shipeffect.beamWorld.Forward = (torpDirection) * 1f;
        shipeffect.beamWorld.Right = beamRight;
        shipeffect.beamWorld.Up = beamUp;
        shipeffect.beamWorld.Translation = shipeffect.beamPosition;

    (Note: logic not written by me, I just found it rather useful.) It seems to only face the camera at certain angles. For example, if I place the camera behind the plane, it only rolls around its axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? (Angles are not my specialty.) shipeffect is an object that holds these class variables:

        public Texture2D altBeam;
        public Model beam;
        public Matrix beamWorld;
        public Matrix[] gunTransforms;
        public Vector3 beamPosition;
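    A sketch of an alternative worth trying (assuming the fields from the question): XNA ships with billboard helpers, and for a beam that must stay aligned to its flight direction the constrained variant is the usual fit — it rotates the quad about torpDirection to face the camera instead of rolling freely:

        // Keeps the beam's long axis on torpDirection and spins it about that
        // axis toward the camera; null lets XNA derive the forward vectors.
        shipeffect.beamWorld = Matrix.CreateConstrainedBillboard(
            shipeffect.beamPosition,   // object position
            cam.cameraPosition,        // camera position
            torpDirection,             // fixed axis of the billboard
            null,
            null);

    For a sprite that should face the camera fully (no fixed axis), Matrix.CreateBillboard with the same position arguments is the unconstrained equivalent.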

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object (this object could be another player). The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:
    - Normal packet timing; everything is okay.
    - The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
    - The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?
    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all: if we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
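    One technique that fits this description (an assumption, not taken from the post) is error-offset smoothing: never snap the rendered position; fold each correction into a visual offset and decay that offset to zero over a short window. Because the offset sits on top of whatever the simulation produces, it works unchanged for linear, quadratic or cubic interpolation. A minimal C#/XNA-style sketch:

        using System;
        using Microsoft.Xna.Framework;

        class SmoothedTarget
        {
            Vector3 simulated;   // output of normal interpolation/extrapolation
            Vector3 offset;      // rendered minus simulated; decays toward zero

            public Vector3 RenderPosition { get { return simulated + offset; } }

            // Call when a late/corrected packet rewrites the authoritative path.
            public void ApplyCorrection(Vector3 authoritative)
            {
                offset += simulated - authoritative;  // keeps RenderPosition continuous
                simulated = authoritative;
            }

            // Call every frame with the freshly interpolated/extrapolated position.
            public void Update(Vector3 newSimulated, float dt)
            {
                simulated = newSimulated;
                offset *= (float)Math.Pow(0.5, dt / 0.1);  // ~100 ms half-life
            }
        }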

    Read the article

  • vector rotations for branches of a 3d tree

    - by freefallr
    I'm attempting to create a 3D tree procedurally. I'm hoping that someone can check my vector rotation maths, as I'm a bit confused. I'm using an L-system (a recursive algorithm for generating branches). The trunk of the tree is the root node; its orientation is aligned to the y axis. In the next iteration of the tree (e.g. the first branches), I might create a branch that is oriented, say, by +10 degrees in the X axis and a similar amount in the Z axis, relative to the trunk. I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch. My questions then:
    - For the trunk, is the rotation matrix just the identity matrix * initial orientation vector?
    - For the first branch (and subsequent branches), I'll "inherit" the rotation matrix of the parent branch, and apply x and z rotations to that also. E.g.:

        using glm::normalize;
        using glm::rotateX;
        using glm::vec4;
        using glm::mat4;
        using glm::rotate;

        vec4 vYAxis = vec4(0.0f, 1.0f, 0.0f, 0.0f);
        vec4 vInitial = normalize( rotateX( vYAxis, 10.0f ) );

        mat4 mRotation = mat4(1.0);
        // trunk rotation matrix = identity * initial orientation vector
        mRotation *= vInitial;
        // first branch = parent rotation matrix * this branch's rotations
        mRotation *= rotate( 10.0f, 1.0f, 0.0f, 0.0f ); // x rotation
        mRotation *= rotate( 10.0f, 0.0f, 0.0f, 1.0f ); // z rotation

    Are my maths and approach correct, or am I completely wrong? Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of x rotation and z rotation important?
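    A sketch of the usual accumulation (written with XNA-style matrices to match the other snippets on this page; the glm version is analogous). Two observations, offered as assumptions rather than a definitive review: the trunk's rotation should be plain identity, since multiplying a matrix by an orientation vector is not a defined matrix-matrix operation; and yes, the order of the x and z rotations matters, because matrix multiplication does not commute.

        using Microsoft.Xna.Framework;

        static class BranchMath
        {
            // Child rotation = local rotations composed onto the parent's matrix.
            public static Matrix BranchRotation(Matrix parent, float xDeg, float zDeg)
            {
                Matrix local = Matrix.CreateRotationX(MathHelper.ToRadians(xDeg))
                             * Matrix.CreateRotationZ(MathHelper.ToRadians(zDeg));
                return local * parent;  // swapping the factors gives a different branch
            }

            // A branch's world direction is the trunk axis pushed through its matrix.
            public static Vector3 BranchDirection(Matrix rotation)
            {
                return Vector3.TransformNormal(Vector3.Up, rotation);
            }
        }

    Usage: the trunk is Matrix.Identity; each child computes BranchRotation(parentRotation, 10f, 10f) and passes its own result down the recursion.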

    Read the article

  • Which Flash 3D particle engine generates such an XML file?

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from Away3D's, which use 'root' as the root element of the XML.

        <effect pos="0 0 0">
          <property cache="1" lifetime="10000"/>
          <mesh blendmode="add">
            <path>
              <frame y="100" durtime="1000" x="0" z="0"/>
            </path>
            <scale>
              <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
              <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
            </scale>
          </mesh>
          <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
          <quad billboard="false">
          </quad>
          <particle global="false" pos="">
            <scale>
              <frame y="1" durtime="0" x="1" z="1"/>
              <frame y="1" durtime="2000" x="1.5" z="1.5"/>
            </scale>
          </particle>
        </effect>

    Read the article

  • Getting Unity 3D working on legacy Nvidia card

    - by user69545
    I installed the latest nVIDIA drivers for my FX5500 card. I understand that the X server version does not officially support this driver or card, but I was wondering what I can do to get Compiz running. I have researched this issue for hours but cannot come up with an answer myself. I might be doing all this for nothing, but I wanted to at least try. Here is the output of my test:

        mike@mike-linux-box:~$ /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string: 2.1.2 NVIDIA 173.14.35
        Not software rendered: yes
        Not blacklisted: no
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    So I was wondering, what is the "Not blacklisted" test? Is this the Nouveau blacklisting? The nVIDIA driver did that automatically. Does this need to be removed? Any help would be appreciated. I just want to run Compiz effects. Thanks.

    Read the article

  • Cursor seems to freeze on the first attempt at typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, no matter whether it's right after startup, five minutes later, or any time after that. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 seconds to show what was typed. Afterwards, everything is normal, even in other applications. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application; turning off Zeitgeist didn't help. It occurs only with Unity 3D - tested with Unity 2D, everything is fine. I tried changing some Compiz settings and nothing worked, although I haven't tested every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6GHz, 2GB RAM, ATI graphics - Ubuntu 12.04, 64 bits.
    Update 1: the cursor blinks normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get back to normal. Very annoying, especially when you want to access login sites.
    Update 2: I tested on a different and older machine (Athlon 64 4800 x2, 4GB RAM) - no problems; it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure. Anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • ATI Radeon 5800 series dual monitor unity not 3D accelerated

    - by Victor S
    When I had a single-monitor setup, without Xinerama, with my current setup of Ubuntu 11.10 and an ATI 5800 series card, Unity showed transparencies, shadows, etc. (although graphics was reported as 'Standard' in the control/settings panel). Having switched to a dual-monitor setup (a Dell 24" UltraSharp and a smaller Acer monitor), Unity shows only as 2D, even though I'm not logging in to the 2D session. WebGL performance is very sluggish; I get the impression that the processor is doing all the work and the card isn't even being used, even though the drivers are installed (from the Ubuntu repository - I did not compile custom drivers). Any tips on how to enable full 3D acceleration and video card support? Here is my xorg.conf file:

        Section "Monitor"
            Identifier "0-DFP3"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1680x1050"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Monitor"
            Identifier "0-DFP4"
            Option "VendorName" "ATI Proprietary Driver"
            Option "ModelName" "Generic Autodetecting Monitor"
            Option "DPMS" "true"
            Option "PreferredMode" "1920x1200"
            Option "TargetRefresh" "60"
            Option "Position" "0 0"
            Option "Rotate" "normal"
            Option "Disable" "false"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
            SubSection "Display"
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "amdcccle-Screen[1]-0"
            Device "amdcccle-Device[1]-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "amdcccle-Screen[1]-1"
            Device "amdcccle-Device[1]-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Module"
            Load "glx"
        EndSection

        Section "ServerLayout"
            Identifier "amdcccle Layout"
            Screen 0 "amdcccle-Screen[1]-0" 0 0
            Screen "amdcccle-Screen[1]-1" 1920 0
        EndSection

        Section "Device"
            Identifier "amdcccle-Device[1]-0"
            Driver "fglrx"
            Option "Monitor-DFP4" "0-DFP4"
            BusID "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "amdcccle-Device[1]-1"
            Driver "fglrx"
            Option "Monitor-DFP3" "0-DFP3"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "ServerFlags"
            Option "Xinerama" "on"
        EndSection

    More info: fglrxinfo

        display: :0  screen: 0
        OpenGL vendor string: ATI Technologies Inc.
        OpenGL renderer string: ATI Radeon HD 5800 Series
        OpenGL version string: 4.1.11005 Compatibility Profile Context

    Read the article

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem roomier to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the scale property of the model in the properties window in VS). I've also tried to apply the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure which part of the code to include, since I don't know if it is an issue with my model, camera, or something else. Any hints at what I'm doing wrong?

    Camera:

        Projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f );
        Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch )
                            * Matrix.CreateRotationY( _cameraYaw );
        Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix );
        _cameraTarget = transCamRef + CameraPosition;
        Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix );
        View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector );

    Model:

        World = Matrix.CreateTranslation( Position );
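    One thing worth checking (an assumption from the symptoms, not a confirmed diagnosis): if the blocks sit on a unit grid, scaling each block's mesh without also scaling the spacing between block positions makes every block grow into its neighbours, so corridor width never changes. Scaling the translation by the same factor spreads the maze out:

        // Hypothetical 'scale' factor; Position is the block's grid position.
        float scale = 2f;
        World = Matrix.CreateScale( scale )
              * Matrix.CreateTranslation( Position * scale );

    With the world genuinely larger, the camera's unchanged speed then covers less of the maze per second, which matches the "slower" observation.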

    Read the article

  • How to determine if a 3D voxel-based room is sealed, efficiently

    - by NigelMan1010
    I've been having some issues with efficiently determining whether large rooms are sealed in a voxel-based 3D game. I'm at the point where I have tried my hardest to solve the problem without asking for help, but have not tried enough to give up, so I'm asking. To clarify: sealed means there are no holes in the room. There are oxygen sealers, which check whether the room is sealed and seal it depending on the oxygen input level. Right now, this is how I'm doing it:
    - Starting at the block above the sealer tile (the vent is on the sealer's top face), recursively loop through all 6 adjacent directions.
    - If the adjacent tile is a full, non-vacuum tile, continue through the loop.
    - If the adjacent tile is not full, or is a vacuum tile, check its adjacent blocks recursively.
    - Each time a tile is checked, decrement a counter.
    - If the count hits zero and the last block is adjacent to a vacuum tile, return that the area is unsealed.
    - If the count hits zero and the last block is not a vacuum tile, or the recursive loop ends (no vacuum tiles left) before the counter hits zero, the area is sealed.
    If the area is not sealed, run the loop again with some changes:
    - Check adjacent blocks for "breathable air" tiles instead of vacuum tiles.
    - Instead of using a decrementing counter, continue until no adjacent "breathable air" tiles are found.
    - Once the loop is finished, set each checked block to a vacuum tile.
    Here's the code I'm using: http://pastebin.com/NimyKncC
    The problem: I'm running this check every 3 seconds. Sometimes a sealer will have to loop through hundreds of blocks, and in a large world with many oxygen sealers, these multiple recursive loops every few seconds can be very hard on the CPU. I was wondering if anyone with more experience with optimization can give me a hand, or at least point me in the right direction. Thanks a bunch.
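    For comparison, a sketch of the common reshaping of this check (assumptions: a volume cap and caller-supplied predicates, written in C# like the other snippets here): flood-fill from the vent with an explicit queue rather than recursion, return "unsealed" the moment a vacuum tile is reached or the cap is exceeded, and mark visited tiles so each one is expanded exactly once. The recursion overhead disappears, and the visited set doubles as the room volume for the oxygen pass. The other common win is re-running the check only when a block inside the room actually changes, instead of on a 3-second timer.

        using System;
        using System.Collections.Generic;

        static class RoomCheck
        {
            public static bool IsSealed(Func<(int x, int y, int z), bool> isSolid,
                                        Func<(int x, int y, int z), bool> isVacuum,
                                        (int x, int y, int z) vent, int maxVolume)
            {
                var frontier = new Queue<(int x, int y, int z)>();
                var seen = new HashSet<(int x, int y, int z)>();
                frontier.Enqueue(vent);
                seen.Add(vent);
                var dirs = new (int x, int y, int z)[]
                    { (1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1) };
                while (frontier.Count > 0)
                {
                    if (seen.Count > maxVolume) return false;  // too big: treat as leaking
                    var c = frontier.Dequeue();
                    if (isVacuum(c)) return false;             // reached space: unsealed
                    foreach (var d in dirs)
                    {
                        var n = (c.x + d.x, c.y + d.y, c.z + d.z);
                        if (!isSolid(n) && seen.Add(n))        // expand each tile once
                            frontier.Enqueue(n);
                    }
                }
                return true;  // frontier exhausted without touching vacuum
            }
        }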

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent: FreeSpace, it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1 but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz that I then transformed to clip space and put a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted this question. From what I can tell, the x found in the XoZ plane and the x found in the XoY plane are not the same, so something must be wrong.

        float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y)
                - ENG_Math.sqr(projectileSpeed);
        float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y);
        float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y);
        ENG_Math.solveQuadraticEquation(a, b, c, collisionTime);

    The first time, targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is:

        crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f,
                          minTime * finalEntityVelocity.z + finalTargetPos4D.z);

    and after XoY:

        crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y;

    Is my approach of separating into 2 planes and calculating any good? Or is there a whole different approach for 3D? (sqr() is square, not sqrt - avoiding confusion.)
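    The 2D derivation actually generalizes to 3D unchanged, with no plane-splitting — which is also where the mismatch comes from, since each planar solve yields its own intercept time. With relative target position p, target velocity v and projectile speed s, solving |p + v t| = s t gives (v·v - s²)t² + 2(p·v)t + p·p = 0 using full 3D dot products, and the aim point is p + v t. A sketch with C#/XNA-style vectors (assumed names, not the poster's ENG_Math API):

        using System;
        using Microsoft.Xna.Framework;

        static class Intercept
        {
            // Returns the aim point relative to the shooter, or null if the
            // projectile can never catch the target.
            public static Vector3? AimPoint(Vector3 relPos, Vector3 targetVel, float speed)
            {
                // (a ~ 0 when target speed equals projectile speed; guard if needed)
                float a = Vector3.Dot(targetVel, targetVel) - speed * speed;
                float b = 2f * Vector3.Dot(relPos, targetVel);
                float c = Vector3.Dot(relPos, relPos);
                float disc = b * b - 4f * a * c;
                if (disc < 0f) return null;
                float root = (float)Math.Sqrt(disc);
                float t0 = (-b - root) / (2f * a);
                float t1 = (-b + root) / (2f * a);
                float t = Math.Min(t0, t1);        // earliest positive hit time
                if (t < 0f) t = Math.Max(t0, t1);
                if (t < 0f) return null;
                return relPos + targetVel * t;
            }
        }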

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the unity support test below as well.

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks!

    Edit 1: Here is the output of lshw -C display:

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

  • Can't log in to Unity 3D after briefly enabling Xinerama

    - by Amir Adar
    Today I connected a second monitor to my computer. I set it up using nVidia's control panel, and all was working quite well, so I figured it wouldn't be a problem to try Xinerama, just to see the difference between it and TwinView. After enabling Xinerama and restarting the X session, I saw that I was logged into a Unity 2D session. I thought it was a problem with Xinerama, so I switched back to TwinView, but it still logged me into Unity 2D. I tried disconnecting the second monitor: no luck, still Unity 2D. I tried changing GPU drivers and installing drivers from a separate PPA, and still I was logged into Unity 2D. Up until this point, I never had any problem logging into Unity 3D; it only started after I tried using Xinerama. I should note that I was doing all this while updates were going on in the background, so it could be something related to that, though I can't imagine what (I tried booting with another kernel, but no luck). So what exactly happened? Did changing the mode to Xinerama trigger some other change that I'm not aware of? Did the updates cause some malfunction in the driver? Is it something else?

    Read the article

  • Why is it so hard to find a C++ 3d game tutorial

    - by Dave
    I'm planning on learning 3D game development for the iPhone using a 3D engine, but because of the lack of iPhone tutorials I was planning on using C++ game tutorials and making the necessary changes. The problem is that I've had limited success: when searching for things such as 'c++ 3d fps tutorial' I don't really get anything useful. Are there any 3D C++ tutorials you can recommend?

    Read the article

  • How to read 3D model data with DirectX?

    - by MemoryLeak
    I am reading an open source project, and I found a function which reads 3D data (let's say a character) from an .obj file and draws it. The source code:

        List<Vertex3f> verts = new List<Vertex3f>();
        List<Vertex3f> norms = new List<Vertex3f>();
        Groups = new List<ToothGroup>();
        //ArrayList ALf = new ArrayList(); //faces always part of a group
        List<Face> faces = new List<Face>();
        MemoryStream stream = new MemoryStream(buffer);
        using (StreamReader sr = new StreamReader(stream))
        {
            String line;
            Vertex3f vertex;
            string[] items;
            string[] subitems;
            Face face;
            ToothGroup group = null;
            while ((line = sr.ReadLine()) != null)
            {
                if (line.StartsWith("#")          // comment
                    || line.StartsWith("mtllib")  // material library. We build our own.
                    || line.StartsWith("usemtl")  // use material
                    || line.StartsWith("o"))      // object. There's only one object
                {
                    continue;
                }
                if (line.StartsWith("v "))        // vertex
                {
                    items = line.Split(new char[] { ' ' });
                    vertex = new Vertex3f();      // float[3];
                    if (flipHorizontally)
                    {
                        vertex.X = -Convert.ToSingle(items[1], CultureInfo.InvariantCulture);
                    }
                    else
                    {
                        vertex.X = Convert.ToSingle(items[1], CultureInfo.InvariantCulture);
                    }
                    vertex.Y = Convert.ToSingle(items[2], CultureInfo.InvariantCulture);
                    vertex.Z = Convert.ToSingle(items[3], CultureInfo.InvariantCulture);
                    verts.Add(vertex);
                    continue;
                }

    Why does it need to read the data manually in DirectX? As far as I know, in XNA programming we just need to call a function to load the resource. Is this because in DirectX there is no function to read the resource? If so, how should I prepare the 3D resource? In XNA we just draw the 3D model in other software and export it, but what should I do in DirectX?

    Read the article

  • (Ogre3D) Using a 3D texture slice as a 2D texture input.

    - by ~mech
    Hello, I am trying to do something with Ogre using 3D textures. I would like to update a 3D-texture by going through it slice-by-slice and recalculating the color values. However, in each step I also need to access the previous slice somehow to read the values. Setting up a slice as a render target is easy, but is it possible to feed such a slice as a 2D-texture input to a shader, or do I need to explicitly copy it into a separate 2D texture? Thanks.

    Read the article

  • How do I construct a 3D model of a room from 2 stereo cameras? What is the determining factor for an accurate reconstruction?

    - by yasumi
    Currently, I have extracted depth points to construct a 3D model from 2 stereo cameras. The methods I have used are OpenCV's graph-cut method and software from http://sourceforge.net/projects/reconststereo/. However, the generated 3D models are not very accurate, which leads me to these questions:
    1) What is the problem with the pixel-based method?
    2) Should I change my pixel-based method to a feature-based or object-recognition-based method? Is there a best method?
    3) Are there any other ways to do such a reconstruction?
    Additionally, the depth extracted comes from only 2 images. What if I turn the camera 360 degrees to obtain a video? I'm looking forward to suggestions on how to combine this depth information. Thank you very much :)

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT: I have been able to determine that the problem is not Compiz but is actually Xorg. I don't know why, but by quickly maximizing a terminal with top running and taking a screenshot before the problem went away, I can see that Xorg takes up 72% of the CPU, with bamfdaemon taking 18% and compiz taking 14%. The nVidia drivers seem to be to blame; I will play more with the settings and perhaps do a clean nvidia-current install to try to fix the problem.
    I'm having a very annoying problem with high CPU usage, running 12.04 with the latest drivers and nvidia-current installed. I had no issues for days; now I have a strange problem. Unity 3D runs great most of the time: 1-2% CPU usage with only Transmission running in the background, and windows open and close smoothly. However, no matter what programs are open, if I minimize all open programs to the Unity bar on the left, my CPU jumps to about 80% and all maximize and minimize effects slow down. Mouse movement stays smooth the whole time, but Unity becomes unresponsive for up to 30 seconds at times. Hitting Alt+Tab to bring up even a single window fixes the problem; the window I bring back up doesn't even have to be maximized. Hitting the Super key to bring up the Dash makes the CPU drop back to idle until I close it; then the high CPU usage resumes. I believe the problem is Compiz, but even with only a terminal running top, I have to minimize it to the tray for the problem to show, so I can't see the offending process. I can only detect the high CPU usage using indicator-sysmonitor. I even tried quitting the indicator, but I can still tell performance is very poor in all applications when everything is minimized. I reset Compiz back to defaults, tried the post-release-update nVidia drivers, and played with vsync settings in both the nVidia settings and Compiz. I even forced the refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D. Specs: Core 2 Duo 2.0GHz, 4GB DDR2 RAM, 2x 320GB HDDs in RAID 0, and an Nvidia GTX 260M graphics card.

    Read the article

  • Model won't render in my XNA game

    - by Daniel Lopez
    I am trying to create a simple 3D game but things aren't working out as they should. For instance, the model will not display. I created a class that does the rendering, so I think that is where the problem lies. P.S. I am using models from the MSDN website, so I know the models are compatible with XNA. Code:

        class ModelRenderer
        {
            private float aspectratio;
            private Model model;
            private Vector3 camerapos;
            private Vector3 modelpos;
            private Matrix rotationy;
            float radiansy = 0;

            public ModelRenderer(Model m, float AspectRatio, Vector3 initial_pos, Vector3 initialcamerapos)
            {
                model = m;
                if (model.Meshes.Count == 0)
                {
                    throw new Exception("Invalid model because it contains zero meshes!");
                }
                modelpos = initial_pos;
                camerapos = initialcamerapos;
                aspectratio = AspectRatio;
                return;
            }

            public Vector3 CameraPosition
            {
                set { camerapos = value; }
                get { return camerapos; }
            }

            public Vector3 ModelPosition
            {
                set { modelpos = value; }
                get { return modelpos; }
            }

            public void RotateY(float radians)
            {
                radiansy += radians;
                rotationy = Matrix.CreateRotationY(radiansy);
            }

            public float AspectRatio
            {
                set { aspectratio = value; }
                get { return aspectratio; }
            }

            public void Draw()
            {
                Matrix world = Matrix.CreateTranslation(modelpos) * rotationy;
                Matrix view = Matrix.CreateLookAt(this.CameraPosition, this.ModelPosition, Vector3.Up);
                Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), this.AspectRatio, 1.0f, 10000f);
                model.Draw(world, view, projection);
            }
        }

    If you need more code just make a comment.
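    One likely culprit, offered as an assumption from the quoted code alone: rotationy is a field, and in C# an uninitialized Matrix field defaults to the all-zero matrix, not identity. If Draw() runs before RotateY() has ever been called, world = translation * zero collapses every vertex to the origin and nothing appears. Initializing the field guards that path:

        // Default to identity so Draw() works even before RotateY() is called.
        private Matrix rotationy = Matrix.Identity;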

    Read the article

  • What is the best way to "carve" a terrain created from a heightmap?

    - by tigrou
    I have a 3D landscape created from a heightmap. I'd like to "carve" some holes in that terrain, which will let me create bridges, caverns and tunnels inside it. The operation will be done in the game editor, so it doesn't need to be real-time. In the end, rendering is done using traditional polygons. What would be the best/easiest way to do that? I have already thought of several solutions (a sketch of Solution 1's first steps follows below):
    Solution 1
    1) Create voxels from the heightmap (very easy). In other words, fill a 3D array like voxels[32][32][32] from the heightmap values.
    2) Carve holes in the voxels as I want (easy too).
    3) Convert the voxels to polygons using some iso-surface extraction technique (like marching cubes).
    4) Reduce (decimate) the polygons created in 3).
    This technique seems the most promising for giving good results (untested). However, the problem with marching cubes is that it tends to produce lots of polygons, so reducing them is mandatory. Implementing 4) also seems far from trivial; I have read several papers on the web and it seems pretty complex. I was also unable to find an example, code snippet or anything to start writing a triangle mesh decimation algorithm from. Maybe there is a special (simpler) decimation algorithm for meshes created from marching cubes?
    Solution 2
    1) Create a triangle mesh from the heightmap (easy).
    2) Apply several 3D boolean operations (e.g. subtraction with a sphere) to carve the mesh.
    3) Apply some procedure to reduce polygons (optional).
    Operation 2) seems very complex and, to be honest, I have no idea how to do it. Applying many boolean operations also seems slow and might degrade the triangle mesh every time one is applied.
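    For scale, steps 1 and 2 of Solution 1 amount to very little code. A sketch under stated assumptions (heights already in voxel units, a cubic grid, and XNA-style Vector3 to match the other snippets on this page):

        using Microsoft.Xna.Framework;

        class VoxelTerrain
        {
            const int N = 32;
            public bool[,,] Solid = new bool[N, N, N];

            // Step 1: solid below the heightmap surface, empty above it.
            public void FromHeightmap(float[,] height)
            {
                for (int x = 0; x < N; x++)
                    for (int z = 0; z < N; z++)
                        for (int y = 0; y < N; y++)
                            Solid[x, y, z] = y <= height[x, z];
            }

            // Step 2: clearing voxels inside a sphere carves a cavern or tunnel.
            public void CarveSphere(Vector3 center, float radius)
            {
                for (int x = 0; x < N; x++)
                    for (int y = 0; y < N; y++)
                        for (int z = 0; z < N; z++)
                            if (Vector3.DistanceSquared(new Vector3(x, y, z), center)
                                <= radius * radius)
                                Solid[x, y, z] = false;
            }
        }

    For step 4, the standard starting points are vertex clustering (simple) and quadric error metric edge collapse (Garland-Heckbert, better quality); running marching cubes on a coarser grid, or switching to dual contouring, also cuts the triangle count at the source.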

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is to learn the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment is that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge transfers to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first.
    After I made this decision, I did some more research and found out about a dichotomy that I was somehow unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice to learn shader-based OpenGL, since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/
    I read through the introductory bits, and in the "About This Book" section the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." Yet at the same time he also makes the case that fixed functionality provides an easier, more immediate learning curve for beginners, stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline."
    Naturally, you can see why I might be conflicted about which paradigm to learn: do I spend a lot of time learning (and then later unlearning) the ways of fixed functionality, or do I start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case.
    TL;DR: As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed functionality or modern shader-based programming?

    Read the article

  • Converting a DrawModel() using BasicEffect to one using Effect

    - by Fibericon
    Take this DrawModel() provided by MSDN:

        private void DrawModel(Model m)
        {
            Matrix[] transforms = new Matrix[m.Bones.Count];
            float aspectRatio = graphics.GraphicsDevice.Viewport.Width / graphics.GraphicsDevice.Viewport.Height;
            m.CopyAbsoluteBoneTransformsTo(transforms);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
            Matrix view = Matrix.CreateLookAt(new Vector3(0.0f, 50.0f, Zoom), Vector3.Zero, Vector3.Up);
            foreach (ModelMesh mesh in m.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.EnableDefaultLighting();
                    effect.View = view;
                    effect.Projection = projection;
                    effect.World = gameWorldRotation * transforms[mesh.ParentBone.Index] * Matrix.CreateTranslation(Position);
                }
                mesh.Draw();
            }
        }

    How would I apply a custom effect to a model with that? Effect doesn't have View, Projection, or World members. This is what they recommend replacing the foreach loop with:

        foreach (ModelMesh mesh in terrain.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                mesh.Draw();
            }
        }

    Of course, that doesn't really work. What else needs to be done?
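    A sketch of the usual bridge (assuming the custom .fx file declares matrix parameters named World, View and Projection — the strings must match whatever the effect actually declares): a generic Effect exposes its uniforms through the Parameters collection rather than typed properties, so the matrices are pushed with SetValue before drawing.

        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (Effect effect in mesh.Effects)
            {
                // Same matrices as the BasicEffect version, set by parameter name.
                effect.Parameters["World"].SetValue(
                    gameWorldRotation
                    * transforms[mesh.ParentBone.Index]
                    * Matrix.CreateTranslation(Position));
                effect.Parameters["View"].SetValue(view);
                effect.Parameters["Projection"].SetValue(projection);
            }
            mesh.Draw();
        }

    This also assumes the custom effect has already been assigned to the mesh parts (via the content processor or by setting each part's Effect), since mesh.Draw() renders with each part's current effect.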

    Read the article

  • For photography use, is Unity overheating my laptop? Should I try openSUSE instead?

    - by SoT
    I am a complete noob here in the Linux world; previously I was using Windows 7. Mine is an HP laptop - Intel Core 2 Duo T5470 @ 1.60GHz × 2 / 965GM with 2GB RAM. I installed Ubuntu 12.04 LTS and quite like its display; I recognized it was 3D before even knowing it was the Unity 3D interface. My uses are image editing, home use, downloads, browsing etc. - no video editing or gaming at all. Being a photography enthusiast, I use image editing programs fairly heavily. But I now feel my laptop is getting a bit overheated - both the processor and the hard disk. I tried lm-sensors and could not make much out of it. I installed Xsensors; it gives the same output as lm-sensors: temperatures for four sensors (temp1 through temp4) under "acpitz". Please guide me on this. I also wanted to ask something more: which one is better for working with images - photography, I mean - openSUSE 12.1 or Ubuntu with Unity 3D? Can I get the same display quality with the openSUSE distribution? I've heard that openSUSE uses power more efficiently on laptops; is there any truth to that? Please suggest whether I should try openSUSE or not, and if so, with which GUI: KDE or GNOME? Thanks in advance. Regards, SoT

    Read the article

  • Unity 3d support for multiple X-screens

    - by stewbond
    I installed Ubuntu 12.04 this weekend and have had problems getting Unity 3D to work with my triple-monitor setup. I've installed the latest nVidia drivers for my two nVidia video cards and have used NVIDIA X Server Settings to configure everything. If I put each monitor on a separate X screen, all but the primary X screen appear white and the cursor is a black X. I can start a terminal on such a screen but cannot drag other windows to it. If I kill the Nautilus process, the white background disappears and I see my desktop background, but the cursor is still an X and I still cannot drag windows there. If I enable TwinView, I can get one screen spanning two monitors to work properly, but the white screen remains on the third monitor. In addition, I don't like using TwinView because my full-screen applications get stretched. My current workaround is to enable Xinerama, use all separate X screens, and revert to Unity 2D. I'd love a solution to this, as Ubuntu 12.10 is not planned to support Unity 2D and I prefer the aesthetics of 3D anyway. I can provide xorg.conf for all configurations and any other suggested diagnostic information. Below is my closest-to-working xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" RightOf "Screen2"
            Screen 1 "Screen1" RightOf "Screen0"
            Screen 2 "Screen2" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "1"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "CRT-1"
            HorizSync 28.0 - 55.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor2"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor3"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 9500 GT"
            BusID "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Device3"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 2
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            Monitor "Monitor2"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-2: 1280x1024_75 +0+0; DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen3"
            Device "Device3"
            Monitor "Monitor3"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

    Other people have had similar issues, but none of the few suggestions submitted have worked for me:
    Can not get Dual Monitors to work on Different GPUs
    http://askubuntu.com/questions/30412/3-monitors-with-2-video-cards-not-working?rq=1
    How to get second display to work alongside primary display?
    http://askubuntu.com/questions/148007/multiple-monitors-only-work-with-unity-2d/201086#201086
    xorg.conf and Unity3D?

    Read the article
