Search Results

Search found 8165 results on 327 pages for '3d graphics'.

Page 40/327 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • aticonfig detects all graphics cards, but amdcccle shows the cards as disabled

    - by DevZer0
    I have been having the strangest problem for a few days now. I have 3 ATI graphics cards, and aticonfig --list-adapter shows the following output: * 0. 01:00.0 AMD Radeon HD 7900 Series 1. 02:00.0 AMD Radeon HD 7900 Series 2. 05:00.0 AMD Radeon HD 7900 Series. When I run aticonfig --adapter=all --initial -f it also generates the correct X config, but after rebooting there is output only on the primary monitor, and when I look in amdcccle it shows the other 2 adapters as disabled. I have tried with both monitors attached and dummy plugs, but the situation doesn't change. Any idea what's causing this? Also, right-clicking and enabling the adapter in amdcccle and saving changes causes the X config to have only the adapter section and no screen section; after a reboot the situation stays the same.

    Read the article

  • xrandr shows VGA1 as disconnected

    - by Felix
    I have a Thinkpad W520 with Nvidia Optimus graphics. I have disabled the Nvidia card in BIOS (by selecting "integrated graphics"), so I'm running only on the integrated Intel graphics. I get full 3D acceleration, which would suggest the drivers are properly installed. However, I'm not able to use an external monitor. With the external monitor connected and turned on, running xrandr always gives: $ xrandr Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192 LVDS1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm 1920x1080 60.0*+ 59.9 50.0 1680x1050 60.0 59.9 1600x1024 60.2 1400x1050 60.0 1280x1024 60.0 1440x900 59.9 1280x960 60.0 1360x768 59.8 60.0 1152x864 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis) What gives? It sees the VGA1 port (to which the external display is connected), but it appears disconnected. I have tried forcing a resolution as per these instructions, but when I do that X becomes unresponsive and I have to Ctrl-Alt-F1 and restart it.

    Read the article

  • Depending on another open source library: copy/paste code or include

    - by user5794
    I'm working on a large class and started implementing new features that need graphics. I started writing the graphics functions myself, but I know that open source libraries exist that can provide this functionality without me having to write it myself. The problem is that I prefer the class to be self-sufficient and not dependent on any other library. If I don't write it myself, I would have to ask the user to make sure a graphics library is already installed (less user-friendly). If I write it myself, I do a lot more work than I have to. I could also copy/paste some of the relevant code into my own class, but I'm not sure about the disadvantages of doing this (it's an open source library whose license matches mine, so I'm not concerned with legality, just with whether there are disadvantages programming-wise). So what should I do: copy/paste code from the external library, write the code myself so it's truly self-sufficient, or ask the user to download and install another library?

    Read the article

  • Bumblebee optirun appears to depend on Intel

    - by user206398
    I have a Lenovo T420 with Intel and Nvidia graphics. On upgrading to Ubuntu Saucy, I had to purge and reinstall bumblebee-nvidia to get beyond optirun failing to find a GPU driver. Now, "optirun glxgears" and "optirun sol" succeed, but optirun fails on two virtual-world viewers that it supported in the past, Cool VL (CoolVLViewer-1.26.8.34-Linux-x86) and Imprudence (Imprudence 1.4.0 beta2). In both cases the error output is huge, but it starts with libGL error: failed to load driver: i965 and libGL error: failed to load driver: swrast. From the little I can discover, i965 is an Intel graphics driver, which should not be invoked at all. I haven't found any information about swrast. I suspect that some of the X configuration associated with Bumblebee has some Intel dependence that is invoked on certain library calls, but not others. I haven't discovered any definite information along these lines. The Cool VL Viewer runs without optirun, but complains about the insufficiency of the Intel graphics.

    Read the article

  • After reinstalling ATI graphics drivers, my keyboard and mouse aren't working anymore

    - by Lifelike27
    On Ubuntu 11.04 I tried to install the open source drivers for my laptop's ATI Mobility Radeon 5470M, but I messed that up a bit and lost the X server. I've managed to solve that problem with this and by downloading the ATI proprietary drivers and installing those manually. Now, when Ubuntu loads up I get to the login screen, but I can't use my mouse or my keyboard (a USB keyboard and mouse don't work either). If I use the recovery console, log in with that and then run 'startx', I can log in fine (though Unity doesn't show; the graphics seem to be working because it shows the fading animation of libnotify), but I can't type or move my mouse.

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: The user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete and can then watch the results. The user is able to increase or decrease the playback speed of the simulation, where in slow motion more frames are used (i.e., you see higher-resolution time steps), and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames per second shown on the screen is constant.
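
    One way to do this (my suggestion, not something from the question) is to read each rendered frame back with glReadPixels and pipe the raw pixels into an external encoder such as ffmpeg. The sketch below is C++ and assumes a POSIX popen, an already-created 1280x720 GL context, ffmpeg on the PATH, and a hypothetical renderSimulationStep() that advances the N-body simulation by one fixed time step; capturing at a fixed simulation time step also gives you the "higher/lower resolution time steps" playback control simply by stepping through or re-encoding the frames at a different rate.

        // Hedged sketch: capture OpenGL frames and pipe raw RGB data to ffmpeg.
        #include <GL/gl.h>
        #include <cstdio>
        #include <vector>

        const int WIDTH = 1280, HEIGHT = 720;   // assumed framebuffer size

        int main() {
            // ... create the GL context and set up the simulation here ...
            FILE* ffmpeg = popen(
                "ffmpeg -y -f rawvideo -pixel_format rgb24 -video_size 1280x720 "
                "-framerate 60 -i - -c:v libx264 -pix_fmt yuv420p nbody.mp4", "w");
            std::vector<unsigned char> pixels(WIDTH * HEIGHT * 3);
            for (int frame = 0; frame < 600; ++frame) {
                // renderSimulationStep(frame);  // hypothetical: advance and draw one time step
                glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
                fwrite(pixels.data(), 1, pixels.size(), ffmpeg); // rows arrive bottom-up; flip if needed
            }
            pclose(ffmpeg);
        }

    Alternatives with the same structure are writing numbered image files per frame and encoding afterwards, or linking an encoding library such as libavcodec directly.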

    Read the article

  • "The system is running in low graphics mode" error before Ubuntu gets installed

    - by hellodear
    I am using a bootable USB to install Ubuntu alongside Windows 8.1. I plug in the pen drive and boot from the USB. When it offers to try Ubuntu or install Ubuntu, I press Try Ubuntu, and then it says "The system is running in low graphics mode". I press OK, it shows 4 options, I click OK again, and then it shows a black screen and nothing happens. I have tried all possible answers provided on Ask Ubuntu. What should I do? Please help. I am using Windows 8.1 with a dedicated graphics card (AMD Radeon HD 8670M), installing 12.04 LTS alongside pre-installed Windows 8.1 on a Dell Inspiron 3537 laptop.

    Read the article

  • lightdm.conf content erased, now stuck in low graphics mode

    - by user79318
    This evening I was attempting to disable the guest account and something went awry. Currently, on boot, Ubuntu enters low graphics mode, with no specific error report. What did I do? Before this error occurred I added a line to lightdm.conf to disable the guest account. I think I may have accidentally erased the contents of lightdm.conf; I'm not entirely sure. I have been troubleshooting for the past hour using various suggestions from other questions, to no avail.

    Read the article

  • Hyper-V Blue Screens with Nvidia GeForce 8400 GS Graphics Card

    - by Mahmoud Saleh
    I am using Windows Server 2008 R2 Enterprise x64. After installing the Hyper-V role and restarting the machine, I get a blue screen error and an immediate reboot. I have Googled the issue and tracked it down to the graphics card, so I uninstalled it, and then Windows loads fine. However, after installing the graphics driver again, the Blue Screen returns. The graphics card is an Nvidia GeForce 8400 GS. Does anyone know how I can resolve this issue?

    Read the article

  • Unity 3D (with Nvidia driver) becomes very slow and laggy

    - by Graham
    How can I prevent my Unity 3D desktop from becoming slow after a while, given that I have an Nvidia Quadro NVS 290 graphics card in TwinView mode? The desktop starts out fast on login, but becomes slow / laggy / hesitant / high-latency after a while, the symptom being spikes in CPU usage by /usr/bin/X whenever I cause any graphical activity with the mouse or keyboard (e.g. typing, changing tabs, dragging windows). The desktop remains slow even with all windows (except htop in Terminal) and extraneous processes killed. Detail: changing tabs in Terminal takes about a second, and X spikes to 76% CPU. As I type into Firefox, X spikes to 95% CPU. Dragging a Terminal window, X goes to 70% CPU. Basically, every graphical action sends the CPU usage of X through the roof. Device: Nvidia Quadro NVS 290 Driver package: binary driver nvidia-current-updates (280.13-0ubuntu5) Dual monitors: pair of DELL UltraSharp 1908FP in TwinView (X screen 2560x1024) OS: fresh install of Ubuntu 11.10 amd64 Desktop with all updates. Hardware: Dell Precision T5400 Workstation Pastebin of Xorg.0.log Pastebin of xorg.conf Pastebin of nvidia-xconfig -t output (easier to read than xorg.conf) Output of /usr/lib/nux/unity_support_test -p: To obtain the following htop screenshot I typed "asdf" several times in this text box, alt-tabbed to Terminal and took a screenshot of the high X CPU usage. This also happens when Firefox is not running. The Quadro NVS 290 has "No" thermal sensor according to sensors-detect: Next adapter: NVIDIA i2c adapter 0 at 2:00.0 (i2c-0) Do you want to scan it? (YES/no/selectively): Client found at address 0x50 Probing for `Analog Devices ADM1033'... No Probing for `Analog Devices ADM1034'... No Probing for `SPD EEPROM'... No Probing for `EDID EEPROM'... Yes (confidence 8, not a hardware monitoring chip) I tried the nouveau driver by disabling nvidia-current-updates under Additional Drivers, but Ubuntu and xrandr -q fail to detect the second monitor. This may be issue 737349. The funniest thing is that the Nouveau wiki says XRandR 1.2 dual-monitor is supported, so it should work with a second monitor.

    Read the article

  • Mixing XNA and Silverlight gives weird graphics

    - by Mech0z
    I'm making a small 3D game built as a Silverlight and XNA app, but when I draw the sprites the graphics become all weird. All my primitive types are rendered correctly, but my 3D models are just weird. My Draw method looks like this when Silverlight is set to draw: private void OnDraw(object sender, GameTimerEventArgs e) { // Render the Silverlight controls using the UIElementRenderer elementRenderer.Render(); // Clear the screen to a solid color SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue); switch (gameState) { case GameState.ChooseStarter: TextBlockStatus.Text = "Find Starting Player"; break; case GameState.PlaceBrick: TextBlockPlayer.Text = (playerTurn == PlayerTurn.PlayerOne) ? "Player One" : "Player Two"; TextBlockState.Text = "Place Brick"; foreach (IGraphicObject obj in _3dObjects) { obj.Draw(cameraPosition, e); } break; case GameState.GiveBrick: TextBlockState.Text = "Give Brick"; break; } spriteBatch.Begin(); // Using the texture from the UIElementRenderer, // draw the Silverlight controls to the screen spriteBatch.Draw(elementRenderer.Texture, cameraProjection, Color.White); spriteBatch.End(); } This gives me this output. If I comment the SpriteBatch lines out I get the correct output, except that the Silverlight text is of course not shown. I am not entirely sure what to look for, except the zero vector I am giving to the SpriteBatch, but if that's the source I have no idea what I am supposed to set it to, especially since it's a 2D vector.

    Read the article

  • Bad 3D Performance in Ubuntu 12.04

    - by Pandem
    I already posted a question before but didn't really get any advice/help. I'll be a bit more brief/general in the hope it'll help. I have an MSI HD 7850 with the Catalyst 12.4 drivers installed. I've found that I'm having bad 3D performance for some reason, but I'm not entirely sure why. I suspect it may just be that the graphics card is new and AMD still needs to work on their drivers, but it would be nice to get advice and narrow the problem down so that I can be sure, rather than wait for driver updates that may not even help. I ran glxgears to give some general idea of how bad the performance is. At default size it is averaging around 2000 FPS. The command glxinfo confirms the renderer is using the AMD Radeon HD 7800 Series with OpenGL version 4.2. Edits below. As asked for by others: lspci -v output is here. fglrxinfo output is here. xvinfo output is here. glxinfo | grep rendering says yes for direct rendering. These confirmed that everything was configured correctly. Within Unity and GNOME Classic: glxgears had an FPS of around 2000; fgl_glxgears had an FPS of around 544. Within LXDE: glxgears had an FPS of around 4600; fgl_glxgears had an FPS of around 1600. In the end it was discovered that Compiz was causing a large performance decrease, and the solution was simply to change window manager for the time being. Thanks to TechZilla for all his help!

    Read the article

  • Using 2D sprites and 3D models together

    - by Sweta Dwivedi
    I have gone through a few posts that talk about changing GraphicsDevice.BlendState and GraphicsDevice.DepthStencilState (SpriteBatch & render states). However, even after changing the states I can't see my 3D model on the screen. I see the model for a second before I draw my video in the background. Here is the code: case GameState.InGame: GraphicsDevice.Clear(Color.AliceBlue); spriteBatch.Begin(); if (player.State != MediaState.Stopped) { videoTexture = player.GetTexture(); } Rectangle screen = new Rectangle(GraphicsDevice.Viewport.X, GraphicsDevice.Viewport.Y, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height); // Draw the video, if we have a texture to draw. if (videoTexture != null) { spriteBatch.Draw(videoTexture, screen, Color.White); if (Selected_underwater == true) { spriteBatch.DrawString(font, "MaxX , MaxY" + maxWidth + "," + maxHeight, new Vector2(400, 10), Color.Red); spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White); spriteBatch.Draw(butterfly, handPosition, Color.White); foreach (AnimatedSprite a in aSprites) { a.Draw(spriteBatch); } } if (Selected_planet == true) { spriteBatch.Draw(kinectRGBVideo, new Rectangle(0, 0, 100, 100), Color.White); spriteBatch.Draw(butterfly, handPosition, Color.White); spriteBatch.Draw(videoTexture, screen, Color.White); GraphicsDevice.BlendState = BlendState.Opaque; GraphicsDevice.DepthStencilState = DepthStencilState.Default; GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap; foreach (_3DModel m in Solar) { m.DrawModel(); } } spriteBatch.End(); break;

    Read the article

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve), using OpenGL ES 2 with C++. Currently I'm ordering elements back to front before drawing them without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment: order all elements of my scene back to front; send the ordered list of elements to the renderer; the renderer looks in its batch manager to see whether a batch exists for the given element with its material; if the batch doesn't exist, create a new one; if a batch exists for this material, add the sprite to the batch; compute a big mesh with all the sprites for each batch (1 material type = 1 batch); when all batches are ready, the batch manager computes draw commands for the renderer; the renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements). Image with my problem here: Explication here. But I've got a problem because objects can be behind other objects that are inside another batch. How can I handle something like that? Thanks!
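
    One common way to resolve this (an assumption on my part, not from the question) is to keep the global back-to-front order authoritative and only merge consecutive sprites that share a material: a batch is broken whenever the material changes in the sorted list, so blending stays correct at the cost of a few extra draw calls. A minimal C++ sketch of that run-based batching:

        #include <vector>

        struct Sprite { float depth; int materialId; /* vertex data, texture, ... */ };
        struct Batch  { int materialId; std::vector<const Sprite*> sprites; };

        // 'sorted' is assumed to already be ordered back to front by the caller.
        std::vector<Batch> buildBatches(const std::vector<Sprite>& sorted) {
            std::vector<Batch> batches;
            for (const Sprite& s : sorted) {
                // Start a new batch whenever the material differs from the current run,
                // which preserves the painter's-algorithm order across materials.
                if (batches.empty() || batches.back().materialId != s.materialId)
                    batches.push_back({s.materialId, {}});
                batches.back().sprites.push_back(&s);
            }
            return batches; // issue one draw call per batch, in this order
        }

    Scenes where sprites of the same material happen to cluster in depth still collapse into few batches, while interleaved materials pay for correctness with more draw calls.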

    Read the article

  • Keep 3d model facing the camera at all angles

    - by Sparky41
    I'm trying to keep a 3D plane facing the camera at all angles, and while I have had some success with this: Vector3 gunToCam = cam.cameraPosition - getWorld.Translation; Vector3 beamRight = Vector3.Cross(torpDirection, gunToCam); beamRight.Normalize(); Vector3 beamUp = Vector3.Cross(beamRight, torpDirection); shipeffect.beamWorld = Matrix.Identity; shipeffect.beamWorld.Forward = (torpDirection) * 1f; shipeffect.beamWorld.Right = beamRight; shipeffect.beamWorld.Up = beamUp; shipeffect.beamWorld.Translation = shipeffect.beamPosition; (Note: logic not written by me, I just found it rather useful.) It seems to only face the camera at certain angles. For example, if I place the camera behind the plane, you can see that it only rolls around the axis, like this: http://i.imgur.com/FOKLx.png (imagine you are looking from behind where you have fired from). Any idea what the problem is? (Angles are not my specialty.) shipeffect is an object that holds these class variables: public Texture2D altBeam; public Model beam; public Matrix beamWorld; public Matrix[] gunTransforms; public Vector3 beamPosition;
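
    The quoted code builds its basis around torpDirection (the projectile's travel direction), so the quad can only roll about that axis; a spherical billboard instead derives all three axes from the object-to-camera vector. Below is a hedged glm/C++ sketch of that idea (not the asker's XNA code; it assumes the camera never sits directly above the quad, so worldUp and forward never become parallel):

        #include <glm/glm.hpp>

        // Build a world matrix whose forward axis always points at the camera.
        glm::mat4 billboard(const glm::vec3& objectPos, const glm::vec3& cameraPos,
                            const glm::vec3& worldUp = glm::vec3(0.0f, 1.0f, 0.0f)) {
            glm::vec3 forward = glm::normalize(cameraPos - objectPos); // face the camera itself
            glm::vec3 right   = glm::normalize(glm::cross(worldUp, forward));
            glm::vec3 up      = glm::cross(forward, right);            // already unit length
            glm::mat4 m(1.0f);
            m[0] = glm::vec4(right, 0.0f);     // column 0: right axis
            m[1] = glm::vec4(up, 0.0f);        // column 1: up axis
            m[2] = glm::vec4(forward, 0.0f);   // column 2: forward axis
            m[3] = glm::vec4(objectPos, 1.0f); // column 3: translation
            return m;
        }

    XNA also ships a Matrix.CreateBillboard helper that does essentially the same construction from object position, camera position and an up vector.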

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object (this object could be another player). The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario: (1) normal packet timing, everything is okay; (2) the next expected packet is late, which is okay, we'll just extrapolate based on previous positions; (3) the late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information? The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all: if we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
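
    A common remedy (my suggestion, not something stated in the question) is to render remote entities a fixed playout delay in the past and interpolate between the two snapshots that bracket that render time: late or out-of-order packets are simply inserted into the buffer by timestamp and are usually consumed before they are needed, so there is nothing to snap back to, and only a gap longer than the delay forces a brief extrapolation. A rough C++ sketch, assuming glm and a 100 ms delay:

        #include <deque>
        #include <glm/glm.hpp>

        struct Snapshot { double time; glm::vec3 pos; };

        class RemoteEntity {
            std::deque<Snapshot> buffer;            // snapshots kept sorted by timestamp
            static constexpr double kDelay = 0.1;   // assumed playout delay in seconds
        public:
            void onPacket(const Snapshot& s) {
                // Insert in timestamp order so a late packet lands in its proper slot
                // instead of being interpolated toward directly.
                auto it = buffer.begin();
                while (it != buffer.end() && it->time < s.time) ++it;
                buffer.insert(it, s);
            }
            glm::vec3 positionAt(double now) const {
                if (buffer.empty()) return glm::vec3(0.0f);
                double renderTime = now - kDelay;
                for (size_t i = 1; i < buffer.size(); ++i) {
                    if (buffer[i].time >= renderTime) {           // found the bracketing pair
                        const Snapshot& a = buffer[i - 1];
                        const Snapshot& b = buffer[i];
                        float t = float((renderTime - a.time) / (b.time - a.time));
                        return glm::mix(a.pos, b.pos, glm::clamp(t, 0.0f, 1.0f));
                    }
                }
                return buffer.back().pos; // no snapshot newer than renderTime: hold (or extrapolate)
            }
        };

    The same bracketing works unchanged for quadratic or cubic interpolation; you just pull one or two extra neighbouring snapshots out of the buffer for the curve.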

    Read the article

  • vector rotations for branches of a 3d tree

    - by freefallr
    I'm attempting to create a 3D tree procedurally. I'm hoping that someone can check my vector rotation maths, as I'm a bit confused. I'm using an L-system (a recursive algorithm for generating branches). The trunk of the tree is the root node; its orientation is aligned to the Y axis. In the next iteration of the tree (e.g. the first branches), I might create a branch that is oriented, say, by +10 degrees in the X axis and a similar amount in the Z axis, relative to the trunk. I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch. My questions then: for the trunk, is the rotation matrix just the identity matrix * initial orientation vector? For the first branch (and subsequent branches), I'll "inherit" the rotation matrix of the parent branch and apply the X and Z rotations to that as well, e.g. using glm::normalize; using glm::rotateX; using glm::vec4; using glm::mat4; using glm::rotate; vec4 vYAxis = vec4(0.0f, 1.0f, 0.0f, 0.0f); vec4 vInitial = normalize( rotateX( vYAxis, 10.0f ) ); mat4 mRotation = mat4(1.0); // trunk rotation matrix = identity * initial orientation vector mRotation *= vInitial; // first branch = parent rotation matrix * this branches rotations mRotation *= rotate( 10.0f, 1.0f, 0.0f, 0.0f ); // x rotation mRotation *= rotate( 10.0f, 0.0f, 0.0f, 1.0f ); // z rotation Are my maths and approach correct, or am I completely wrong? Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of the X rotation and Z rotation important?
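
    As a hedged sketch of the usual composition (not a verified fix for the quoted snippet): the trunk's rotation matrix can simply be the identity, each child stores parentRotation * localRotation, and the branch's growth direction is the Y axis transformed by that matrix; a mat4 is not multiplied by the orientation vec4, rather the orientation is what comes out of transforming the Y axis with the matrix. Because matrix multiplication is not commutative, the X-then-Z order does matter, so pick one convention and keep it for every branch.

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Hedged sketch: each branch stores only its rotation relative to its parent.
        glm::mat4 branchRotation(const glm::mat4& parentRotation,
                                 float xDegrees, float zDegrees) {
            glm::mat4 local(1.0f);
            local = glm::rotate(local, glm::radians(xDegrees), glm::vec3(1, 0, 0)); // tilt about X
            local = glm::rotate(local, glm::radians(zDegrees), glm::vec3(0, 0, 1)); // then about Z
            return parentRotation * local;   // child world rotation
        }

        // Direction the branch grows in: the canonical +Y axis taken into world space.
        glm::vec3 branchDirection(const glm::mat4& worldRotation) {
            return glm::normalize(glm::vec3(worldRotation * glm::vec4(0, 1, 0, 0)));
        }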

    Read the article

  • which flash 3d particle engine generates such an xml file

    - by Huang F. Lei
    I found some particle config files like the one below, but I don't know which Flash 3D particle engine uses them; they are different from away3d's, which uses 'root' as the root element of the XML.
    <effect pos="0 0 0">
      <property cache="1" lifetime="10000"/>
      <mesh blendmode="add">
        <path>
          <frame y="100" durtime="1000" x="0" z="0"/>
        </path>
        <scale>
          <frame y="0.2000000001" durtime="300" x="2.2" z="2.2"/>
          <frame y="0.4" durtime="300" x="2.7" z="2.7"/>
        </scale>
      </mesh>
      <vibrate delayTime="100" amplitude="10" durationTime="750" intension="50"/>
      <quad billboard="false" >
      </quad>
      <particle global="false" pos="">
        <scale>
          <frame y="1" durtime="0" x="1" z="1"/>
          <frame y="1" durtime="2000" x="1.5" z="1.5"/>
        </scale>
      </particle>
    </effect>

    Read the article

  • Getting Unity 3D working on legacy Nvidia card

    - by user69545
    I installed the latest nVIDIA drivers for my FX5500 card. I understand that the X server version does not officially support this driver or card, but I was wondering what I can do to get Compiz running. I have researched this issue for hours but cannot come up with an answer myself. I might be doing all this for nothing, but I wanted to at least try. Here is the output of my test: mike@mike-linux-box:~$ /usr/lib/nux/unity_support_test -p OpenGL vendor string: NVIDIA Corporation OpenGL renderer string: GeForce FX 5500/AGP/SSE2 OpenGL version string: 2.1.2 NVIDIA 173.14.35 Not software rendered: yes Not blacklisted: no GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: no So I was wondering: what is the "Not blacklisted" test? Is this the Nouveau blacklisting? The nVIDIA driver did that automatically. Does this need to be removed? Any help would be appreciated. I just want to run Compiz effects. Thanks.

    Read the article

  • Cursor seems to freeze on the first attempt at typing - Unity 3D, 12.04

    - by Denis
    It happens on the first attempt at typing, no matter whether it is right after startup, 5 minutes later, or later still. The cursor (or maybe it's the system) seems to freeze, no matter which application I use, taking up to 5 seconds for what is typed to appear. After that, everything is normal, even in other applications. @Anwar Shah suggested it could be a daemon waiting to run before the launching of the first application. Turning off Zeitgeist didn't help. It occurs only with Unity 3D; tested with Unity 2D, everything is fine. I tried changing some Compiz settings and nothing worked, although I did not test every single parameter. I also deactivated the ATI proprietary driver, with no effect. My system: AMD E350 1.6 GHz, 2 GB RAM, ATI graphics - Ubuntu 12.04, 64 bits. Update 1: the cursor is blinking normally before I start typing. After the first character (which is not shown), it seems to freeze, taking 5 seconds to get back to normal. Very annoying, especially when you want to access login sites. Update 2: I tested on a different and older machine (Athlon 64 4800 x2, 4 GB RAM), no problems - it takes 2 seconds, which is acceptable. I think it could be related to my specific hardware (Samsung RV415), but I'm not sure about it. Is anyone experiencing something similar? Is this what I should expect, or can it be fixed or improved? Thanks.

    Read the article

  • ATI Radeon 5800 series dual monitor unity not 3D accelerated

    - by Victor S
    When I had a single monitor setup, without Xinerama, with my current setup of Ubuntu 11.10 and an ATI 5800 series card, Unity showed transparencies, shadows, etc. (although the graphics were reported as 'Standard' in the control/settings panel). Having switched to a dual monitor setup, a Dell 24" UltraSharp and a smaller Acer monitor, Unity shows only as 2D, even though I'm not logging in to that display manager. WebGL performance is very sluggish; I'm getting the impression that the processor is doing all the work and the card isn't even accessible, even though the drivers are installed (from the Ubuntu repository, I did not compile custom drivers). Any tips on how to enable full 3D acceleration and video card support? Here is my xorg.conf file:
    Section "Monitor"
      Identifier "0-DFP3"
      Option "VendorName" "ATI Proprietary Driver"
      Option "ModelName" "Generic Autodetecting Monitor"
      Option "DPMS" "true"
      Option "PreferredMode" "1680x1050"
      Option "TargetRefresh" "60"
      Option "Position" "0 0"
      Option "Rotate" "normal"
      Option "Disable" "false"
    EndSection
    Section "Monitor"
      Identifier "0-DFP4"
      Option "VendorName" "ATI Proprietary Driver"
      Option "ModelName" "Generic Autodetecting Monitor"
      Option "DPMS" "true"
      Option "PreferredMode" "1920x1200"
      Option "TargetRefresh" "60"
      Option "Position" "0 0"
      Option "Rotate" "normal"
      Option "Disable" "false"
    EndSection
    Section "Screen"
      Identifier "Default Screen"
      DefaultDepth 24
      SubSection "Display"
      EndSubSection
    EndSection
    Section "Screen"
      Identifier "amdcccle-Screen[1]-0"
      Device "amdcccle-Device[1]-0"
      DefaultDepth 24
      SubSection "Display"
        Viewport 0 0
        Depth 24
      EndSubSection
    EndSection
    Section "Screen"
      Identifier "amdcccle-Screen[1]-1"
      Device "amdcccle-Device[1]-1"
      DefaultDepth 24
      SubSection "Display"
        Viewport 0 0
        Depth 24
      EndSubSection
    EndSection
    Section "Module"
      Load "glx"
    EndSection
    Section "ServerLayout"
      Identifier "amdcccle Layout"
      Screen 0 "amdcccle-Screen[1]-0" 0 0
      Screen "amdcccle-Screen[1]-1" 1920 0
    EndSection
    Section "Device"
      Identifier "amdcccle-Device[1]-0"
      Driver "fglrx"
      Option "Monitor-DFP4" "0-DFP4"
      BusID "PCI:1:0:0"
    EndSection
    Section "Device"
      Identifier "amdcccle-Device[1]-1"
      Driver "fglrx"
      Option "Monitor-DFP3" "0-DFP3"
      BusID "PCI:1:0:0"
      Screen 1
    EndSection
    Section "ServerFlags"
      Option "Xinerama" "on"
    EndSection
    More info: fglrxinfo display: :0 screen: 0 OpenGL vendor string: ATI Technologies Inc. OpenGL renderer string: ATI Radeon HD 5800 Series OpenGL version string: 4.1.11005 Compatibility Profile Context

    Read the article

  • 3d Model Scaling With Camera

    - by spasarto
    I have a very simple 3D maze program that uses a first-person camera to navigate the maze. I'm trying to scale the blocks that make up the maze walls and floor so the corridors seem more roomy to the camera. Every time I scale the model, the camera seems to scale with it, and the corridors always stay the same width. I've tried applying the scale to the model in the content pipeline (setting the scale property of the model in the properties window in VS). I've also tried to apply the scale using Matrix.CreateScale(float) with the Scale-Rotate-Translate order, with the same result. If I leave the camera speed the same, the camera moves slower, so I know it's traversing a larger distance, but the world doesn't look larger; the camera just seems slower. I'm not sure what part of the code to include, since I don't know whether it is an issue with my model, my camera, or something else. Any hints at what I'm doing wrong? Camera: Projection = Matrix.CreatePerspectiveFieldOfView( MathHelper.PiOver4, _device.Viewport.AspectRatio, 1.0f, 1000.0f ); Matrix camRotMatrix = Matrix.CreateRotationX( _cameraPitch ) * Matrix.CreateRotationY( _cameraYaw ); Vector3 transCamRef = Vector3.Transform( _cameraForward, camRotMatrix ); _cameraTarget = transCamRef + CameraPosition; Vector3 camRotUpVector = Vector3.Transform( _cameraUpVector, camRotMatrix ); View = Matrix.CreateLookAt( CameraPosition, _cameraTarget, camRotUpVector ); Model: World = Matrix.CreateTranslation( Position );
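
    One likely cause (a guess on my part, not something established in the question) is that only each block's mesh is being scaled while the blocks are still placed on a one-unit grid, so the walls grow into the corridors and the corridors stay the same width; scaling the grid spacing by the same factor (or, equivalently, applying a single scale to the whole maze) makes the corridors genuinely wider while the camera, its clip planes and its speed stay untouched. A small glm/C++ sketch of the idea:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Hedged sketch: scale both the block geometry and the grid spacing.
        glm::mat4 blockWorldMatrix(const glm::ivec3& gridCell, float mazeScale) {
            glm::vec3 scaledPos = glm::vec3(gridCell) * mazeScale;   // spread the cells apart too
            glm::mat4 world(1.0f);
            world = glm::translate(world, scaledPos);                // translation ...
            world = glm::scale(world, glm::vec3(mazeScale));         // ... then scale (hits the vertices first)
            return world;                                            // world = T * S
        }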

    Read the article

  • How to determine if a 3D voxel-based room is sealed, efficiently

    - by NigelMan1010
    I've been having some issues with efficiently determining whether large rooms are sealed in a voxel-based 3D world. I'm at a point where I have tried my hardest to solve the problem without asking for help, but not hard enough to give up, so I'm asking for help. To clarify, sealed means that there are no holes in the room. There are oxygen sealers, which check whether the room is sealed and seal it depending on the oxygen input level. Right now, this is how I'm doing it: starting at the block above the sealer tile (the vent is on the sealer's top face), recursively loop through all 6 adjacent directions; if the adjacent tile is a full, non-vacuum tile, continue through the loop; if the adjacent tile is not full, or is a vacuum tile, check whether its adjacent blocks are, recursively. Each time a tile is checked, decrement a counter. If the count hits zero and the last block is adjacent to a vacuum tile, return that the area is unsealed. If the count hits zero and the last block is not a vacuum tile, or the recursive loop ends (no vacuum tiles left) before the counter is zero, the area is sealed. If the area is not sealed, run the loop again with some changes: check adjacent blocks for "breathable air" tiles instead of vacuum tiles, and instead of using a decrementing counter, continue until no adjacent "breathable air" tiles are found; once the loop is finished, set each checked block to a vacuum tile. Here's the code I'm using: http://pastebin.com/NimyKncC The problem: I'm running this check every 3 seconds, sometimes a sealer will have to loop through hundreds of blocks, and in a large world with many oxygen sealers these multiple recursive loops every few seconds can be very hard on the CPU. I was wondering if anyone with more optimization experience could give me a hand, or at least point me in the right direction. Thanks a bunch.
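
    As a sketch of one common optimisation (my suggestion, not the pastebin code): replace the recursion with an iterative breadth-first flood fill that stops as soon as it either touches a vacuum tile (unsealed) or exceeds a fixed volume budget (treated as unsealed because the room is open or simply too large), and re-run it only when a block inside or bordering the filled region actually changes rather than on a 3-second timer. The C++ below assumes a hypothetical World interface and coordinates that fit in 20 bits:

        #include <cstdint>
        #include <queue>
        #include <unordered_set>

        struct Vec3i { int x, y, z; };

        // Hypothetical interface for the sketch.
        struct World {
            bool isSolid(const Vec3i& p) const;   // full, non-vacuum block
            bool isVacuum(const Vec3i& p) const;  // exposed to space
        };

        // True if the air region reachable from 'start' is sealed within 'maxVolume' tiles.
        bool isRoomSealed(const World& world, const Vec3i& start, int maxVolume) {
            auto key = [](const Vec3i& p) {       // pack coordinates into one 64-bit key
                return ((int64_t)(p.x & 0xFFFFF) << 40) |
                       ((int64_t)(p.y & 0xFFFFF) << 20) |
                        (int64_t)(p.z & 0xFFFFF);
            };
            std::unordered_set<int64_t> visited{key(start)};
            std::queue<Vec3i> open;
            open.push(start);
            const Vec3i dirs[6] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
            while (!open.empty()) {
                Vec3i p = open.front(); open.pop();
                if (world.isVacuum(p)) return false;                 // reached space: there is a hole
                if ((int)visited.size() > maxVolume) return false;   // too big or open: give up as unsealed
                for (const Vec3i& d : dirs) {
                    Vec3i n{p.x + d.x, p.y + d.y, p.z + d.z};
                    if (world.isSolid(n)) continue;                  // walls bound the fill
                    if (visited.insert(key(n)).second) open.push(n); // enqueue unseen air tiles
                }
            }
            return true;   // the fill closed off without ever touching vacuum
        }

    Caching the filled set per sealer and invalidating it only when a block edit touches that set avoids most of the repeated work between checks.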

    Read the article

  • Algorithm to shoot at a target in a 3d game

    - by Sebastian Bugiu
    For those of you who remember Descent Freespace: it had a nice feature to help you aim at the enemy when shooting non-homing missiles or lasers: it showed a crosshair in front of the ship you chased, telling you where to shoot in order to hit the moving target. I tried using the answer from http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game?lq=1, but it's for 2D, so I tried adapting it. I first decomposed the calculation to solve the intersection point for the XoZ plane and saved the x and z coordinates, then solved the intersection point for the XoY plane and added the y coordinate to a final xyz that I then transformed to clip space, putting a texture at those coordinates. But of course it doesn't work as it should, or else I wouldn't have posted the question. From what I notice, the x found in the XoZ pass and the x found in the XoY pass are not the same, so something must be wrong. float a = ENG_Math.sqr(targetVelocity.x) + ENG_Math.sqr(targetVelocity.y) - ENG_Math.sqr(projectileSpeed); float b = 2.0f * (targetVelocity.x * targetPos.x + targetVelocity.y * targetPos.y); float c = ENG_Math.sqr(targetPos.x) + ENG_Math.sqr(targetPos.y); ENG_Math.solveQuadraticEquation(a, b, c, collisionTime); The first time targetVelocity.y is actually targetVelocity.z (the same for targetPos), and the second time it's actually targetVelocity.y. The final position after XoZ is crossPosition.set(minTime * finalEntityVelocity.x + finalTargetPos4D.x, 0.0f, minTime * finalEntityVelocity.z + finalTargetPos4D.z); and after XoY crossPosition.y = minTime * finalEntityVelocity.y + finalTargetPos4D.y; Is my approach of separating the problem into 2 planes any good, or is there a whole different approach for 3D? (sqr() is square, not sqrt, to avoid confusion.)
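
    For what it's worth, the interception can be solved directly in 3D without splitting into planes: with the target's position P and velocity V relative to the shooter and projectile speed s, solve (V.V - s^2) t^2 + 2 (P.V) t + P.P = 0 for the smallest positive t and aim at P + V t, which is the same quadratic the quoted code sets up but with full 3D dot products. A glm/C++ sketch:

        #include <cmath>
        #include <glm/glm.hpp>

        // Returns true and writes the aim point (relative to the shooter) if an intercept exists.
        bool computeAimPoint(const glm::vec3& relPos, const glm::vec3& relVel,
                             float projectileSpeed, glm::vec3& aimPoint) {
            float a = glm::dot(relVel, relVel) - projectileSpeed * projectileSpeed;
            float b = 2.0f * glm::dot(relPos, relVel);
            float c = glm::dot(relPos, relPos);
            float t;
            if (std::abs(a) < 1e-6f) {                    // projectile speed ~ target speed: linear case
                if (std::abs(b) < 1e-6f) return false;
                t = -c / b;
            } else {
                float disc = b * b - 4.0f * a * c;
                if (disc < 0.0f) return false;            // no real solution: target cannot be intercepted
                float sq = std::sqrt(disc);
                float t0 = (-b - sq) / (2.0f * a);
                float t1 = (-b + sq) / (2.0f * a);
                t = (t0 > 0.0f && (t1 <= 0.0f || t0 < t1)) ? t0 : t1; // smallest positive root
            }
            if (t <= 0.0f) return false;
            aimPoint = relPos + relVel * t;               // add the shooter's position for world space
            return true;
        }

    Drawing the crosshair is then just projecting that world-space point to clip space, exactly as the question already does.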

    Read the article

  • Model a chain with different elements in Unity 3D

    - by Alex
    I have to model, in Unity 3D, a chain that is composed of various elements, some flexible, some rigid. The idea is to realize a human chain where each person is linked to the next by their hands. I've not tried to implement it yet, as I have no idea what a good way to do it would be. In the game I have to manage a lot of chains of people... maybe as many as 100 chains composed of 11-15 people each. The chains will be pretty simple and there won't be much interaction... probably some animation of the people, one at a time for each chain, and some physics reaction (for example, pushing a person in a chain should slightly flex the chain). The main problem is that in the chain each object is composed of flexible parts (the arms) and rigid parts (the body), and the connections should remain firm... just like when people shake hands: the hands stay firm and it is the wrists that move. I can use C4D to model the meshes. I know this number may cause performance problems, but it's also true that I will use low-poly versions of the humans (actually they won't be humans, but very simple toonish characters with arms and legs). So I'm trying to find a way to manage this so that it works; performance is a later problem that I can solve. If there is no quick best-practice solution and you have any link/guide/doc that could help me find a way to realize this, it would be very much appreciated if you posted it anyway. Thanks.

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >