Search Results

Search found 2863 results on 115 pages for '3d kreativ'.

Page 3 of 115

  • 3d point cloud render from x,y,z 2d array with texture

    - by user1733628
    Need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am brand new to OpenGL and trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y, z of each point. I think I am on the right track with:

        glBegin(GL_POLYGON);
        for(int i = 0; i < 1024; i++) {
            for(int j = 0; j < 512; j++) {
                glVertex3f(x[i][j], y[i][j], z[i][j]);
            }
        }
        glEnd();

    Now this loads all the vertices in the buffer (I think), but from here I am not sure how to proceed. Or I am completely wrong here. Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display. I understand that this may be a very basic OpenGL implementation for some, but for me this is a huge learning curve. So any pointers, nudge or kick in the right direction will be appreciated.
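
    A possible starting point, not from the article: for a raw point cloud, GL_POINTS with a per-vertex color is usually simpler than texturing a GL_POLYGON. The sketch below uses OpenTK's C# bindings for illustration; the calls map one-to-one onto the C++ API in the question, and the arrays x, y, z and colors are assumed to be the ones described above.

        // Minimal sketch, assuming OpenTK (C#). Immediate mode, like the
        // question's code - fine for a first test, though a vertex buffer
        // would be the faster long-term approach.
        using OpenTK.Graphics.OpenGL;

        static void DrawPointCloud(float[,] x, float[,] y, float[,] z, byte[,] colors)
        {
            GL.PointSize(2f);
            GL.Begin(PrimitiveType.Points); // GL_POINTS, not GL_POLYGON
            for (int i = 0; i < 1024; i++)
            {
                for (int j = 0; j < 512; j++)
                {
                    byte c = colors[i, j];  // 0-255 intensity from the color array
                    GL.Color3(c, c, c);     // grayscale per-point color
                    GL.Vertex3(x[i, j], y[i, j], z[i, j]);
                }
            }
            GL.End();
        }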

    Read the article

  • 3D Box Collision Data Import

    - by cboe
    I'm trying to implement a collision system using oriented bounding boxes, using a center for the box, its extents as a 3D vector, and a rotation matrix, which is all stuff I picked up online and seems to be somewhat the standard. Detecting the center is no problem, so I'm gonna leave that out here. My problem however is importing the data from a 3D file. Say I've placed a box with 2 units length on each side, aligned to the world axes. The logical result here is extents of (1,1,1), and I use an identity matrix for rotation - easy. However I'm stuck when I rotate the box in the 3D program, say 30 degrees on each axis. How would I parse the box? I only have these 8 vertices as information, and I guess what I would need to do is to find out the rotation of said box, apply it to the vertices so they are aligned to world axes, and then calculate the extents out of that. How do I get the rotation of the box when I only have the vertex information of the box available?
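
    A hedged sketch of one way to recover the rotation, not from the article: in a true box, the three nearest corners to any given corner are its edge-adjacent neighbours (edges are always shorter than the face and space diagonals), so the three edges at a corner give the box axes directly. Illustrative C# using System.Numerics:

        using System.Linq;
        using System.Numerics;

        static void BoxFromCorners(Vector3[] verts,
                                   out Vector3 center, out Vector3 extents,
                                   out Matrix4x4 rotation)
        {
            // Center is just the average of the 8 corners.
            center = verts.Aggregate(Vector3.Zero, (s, v) => s + v) / 8f;

            // The three closest corners to corner 0 are its edge neighbours.
            Vector3 c0 = verts[0];
            var neighbours = verts.Skip(1)
                                  .OrderBy(v => Vector3.DistanceSquared(c0, v))
                                  .Take(3)
                                  .ToArray();

            Vector3 ex = neighbours[0] - c0;   // local x edge
            Vector3 ey = neighbours[1] - c0;   // local y edge
            Vector3 ez = neighbours[2] - c0;   // local z edge

            extents = new Vector3(ex.Length(), ey.Length(), ez.Length()) * 0.5f;

            Vector3 ux = Vector3.Normalize(ex);
            Vector3 uy = Vector3.Normalize(ey);
            Vector3 uz = Vector3.Normalize(ez);

            // The normalized edges are the box axes; stored here as the
            // rows of the rotation matrix.
            rotation = new Matrix4x4(ux.X, ux.Y, ux.Z, 0,
                                     uy.X, uy.Y, uy.Z, 0,
                                     uz.X, uz.Y, uz.Z, 0,
                                     0,    0,    0,    1);
        }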

    Read the article

  • Blender/POV-Ray differences

    - by Magnap
    I am currently following a tutorial for Blender, but came across POV-Ray mentioned as a renderer. After having researched it a bit, I took a look at its scene description (scripting) language, which kind of fascinated me. But, after even more googling of the topic, I am still wondering: what are the main and key differences between working with 3D in Blender versus POV-Ray? PS: I suspect this might not be the best place for a question such as this, but it seems to be the most suitable.

    Read the article

  • Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV

    - by wim.coekaerts
    One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground. It allows me to study the products in great detail and put them to use in ways that individual product teams don't always intend them to be used for :) but that makes it fun. I have a lot of this set up at home because... work is sort of a hobby and I just like to tinker with it.

    Anyway, a few weeks ago I was looking at my Sun Ray rig at home and how well 3D works: Google Earth and some basic OpenGL tests like glxspheres combined with VirtualGL. It resulted in some very cool demos recorded with my little camera (sorry for the crappy quality of the video :-) : OVDC (soft client) on my Mac, and Sun Ray 2FS. Never mind the hiccups during zoom; that's because I was using the scroll wheel on my mouse and I can't scroll uninterrupted :)

    Anyway, this is quite cool! The setup for this was the following: Sun Ray on LAN, latest Sun Ray Server 5 installed on OL5.5 inside a VM running on Oracle VM 2.2 (hardware virt, with a virtual network (vif)), and the VirtualGL rendering happened on another box (wopr5) that runs Linux on a little Atom D520 with an ION2 GPU. So network goes from Sun Ray to Sun Ray Server to wopr5 and back. Given that this is full-screen 3D, it puts a good amount of load on the network, and it's pretty cool that SRS was just a VM :)

    So, separately, I had written a little blog entry about using SR-IOV and Oracle VM a while back: link to sriov blog entry. Last night when I came home I wanted to do some more playing around with SR-IOV and live migrate. To do this, I wanted to set up a VM with 2 network interfaces: one virtual network (vif), and one that's one of the SR-IOV virtual functions from my network card. Inside the guest they show as eth0 and eth1, and then I bond them using a standard Linux bonding device (bond0 here) with active-active links. The goal here is that on live migrate, we would detach the VF (eth1 in the guest in this case), the bond would then just hum along on eth0 (vif), we can live migrate the VM, and then on the other server, after the migrate completes, we re-attach a VF to the VM there and eth1 pops up again and the bond uses both eth0/eth1 to do its work.

    So, to set this up, I figured, why not use my Sun Ray Server VM, because the 3D work generates a nice network load and is very latency/timing sensitive. In the end, I ran glxspheres on my Sun Ray Server (VM), displaying on my Sun Ray 2FS, and while that was running, I did my live migrate test of this VM (unplug PCI VF, migrate, reconnect VF) and guess what, it just kept running :) veryyyyyy cool. Now, it was supposed to, but it's always nice to see it actually work, for real. Here's a diagram of it. No gimmicks - just real technology at work! Enjoy :)

    Read the article

  • I need help with 3d shading/lighting.

    - by Xavier
    How do you guys handle shading in a 3D game? I have a directional light source that shades one side of a tree made of cubes. The remaining 3 sides all get ambient shading only, so the 3D effect is lost when looking at two ambient-shaded sides. Am I missing something? Should I be shading the side furthest from the light source even darker? I tried looking at Fallout 3, and it kinda looks like this is what they do. However, Minecraft appears to shade a grass mound with two opposite sides light and the remaining two opposite sides dark, kinda giving the effect that there are two directional lights for the two light-shaded sides and ambient light for the dark-shaded sides.
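
    One standard answer, sketched here rather than taken from the article: give every face an ambient term plus a diffuse term based on its normal, so each of a cube's six faces gets its own brightness instead of one lit side and three identical ambient sides. The constants are illustrative:

        // Minimal Lambert-style face shading sketch (System.Numerics).
        using System;
        using System.Numerics;

        static float FaceBrightness(Vector3 faceNormal, Vector3 lightDir)
        {
            const float ambient = 0.3f; // light every face receives
            const float diffuse = 0.7f; // extra light for facing the source
            float nDotL = Vector3.Dot(Vector3.Normalize(faceNormal),
                                      -Vector3.Normalize(lightDir));
            return ambient + diffuse * Math.Max(0f, nDotL);
        }

    Faces pointing away from the light fall back to ambient only, while oblique faces get intermediate values, which is what restores the 3D read; Minecraft's fixed light/dark side pairs are effectively a baked version of the same idea.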

    Read the article

  • what is the simplest 3d software for unity?

    - by kdavis8
    I've heard a lot about DAZ Studio, Poser, Maya, K-3D, Anim8or, Blender, and all the rest. My question is: which one is the best choice in terms of simplicity and quality? Price is not an issue, really. I'm programming games in Java for Android mobile devices at the moment, but I will eventually move onto larger platforms. I would like to utilize Unity3D for the game programming itself and use a 3D modeling package just to create the game objects. I just need to know the best one to get started with from scratch, or should I use a combination of multiple ones? Any insight on this would be great, thanks!

    Read the article

  • 3d vertex translated onto 2d viewport

    - by Dan Leidal
    I have a spherical world defined by simple trigonometric functions to create triangles that are relatively similar in size and shape throughout. What I want to be able to do is use mouse input to target a range of vertices in the area around the mouse click, in order to manipulate these vertices in real time. I read a post on this forum regarding translating 3D world coordinates into the 2D viewport. It recommended that you should multiply the world vector coordinates by the view and then the projection, but it didn't include any code examples, and suffice it to say I couldn't get any good results. Further information: I am using a look-at method for the view. Does this cause a problem, and if so, is there a solution? If this isn't the problem, does anyone have a simple code example illustrating translating one vertex in a 3D world into 2D view space? I am using XNA.
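
    A minimal sketch of the usual world-to-screen pipeline, for illustration only (in XNA, Viewport.Project wraps the same math): multiply by the view matrix, then the projection, perspective-divide by w, and map the -1..1 result to pixels. A look-at matrix is not a problem here; it is just one way of building the view matrix. Shown with System.Numerics types:

        using System.Numerics;

        static Vector2 WorldToScreen(Vector3 world, Matrix4x4 view,
                                     Matrix4x4 projection,
                                     float screenWidth, float screenHeight)
        {
            // Row-vector convention: world -> view -> projection.
            Vector4 clip = Vector4.Transform(new Vector4(world, 1f),
                                             view * projection);
            float x = clip.X / clip.W; // normalized device coords, -1..1
            float y = clip.Y / clip.W;
            return new Vector2((x + 1f) * 0.5f * screenWidth,
                               (1f - y) * 0.5f * screenHeight); // screen y is down
        }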

    Read the article

  • iOS : Creating a 3D Compass

    - by Md. Abdul Munim
    Originally posted here: iOS : Creating a 3D Compass. Hi everybody, quite new to this forum. Posted the same question on Stack Overflow, and there some people suggested shifting it here so that I can get quick help from more specialists in this regard. So what's the big matter? Actually, I want to make a 3D metal compass in iOS which will have a movable cover. That is, when you touch it with 3 fingers and try to move your fingers upward, the cover keeps moving with your fingers, and after a certain distance it gets opened. Once you pull it down using 3 fingers again, it gets closed. I cannot attach an image here as I don't have that much reputation, so I request you to check the original question at Stack Overflow that I linked at top. Is it possible using Core Animation and CALayers? Or would I have to use OpenGL ES? Please someone help me out, I am badly in need of it. And I need to complete it ASAP!

    Read the article

  • Resources on expected behaviour when manipulating 3D objects with the mouse

    - by sebf
    Hello, in my animation editor, I have a 3D gizmo that sits on the origin of a bone; the user drags the mesh around to rotate the bone. I've found that translating the 2D movements of the mouse into sensible 3D transforms is not nearly as simple as I'd hoped. For example, what is intuitively 'up' or 'down'? How should the magnitude of rotations change with respect to dX/dY? How to implement this? What happens when the gizmo changes position or orientation with respect to the camera? Etc. So far, with trial and error, I've written something (very) simple that works 70% of the time. I could probably continue to hack at it until I made something that works 99% of the time, but there must be someone who needed the same thing and spent the time coming up with a much more elegant solution. Does anyone know of one?
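
    One well-known elegant solution is the arcball, sketched below under illustrative names: project each mouse position onto a virtual unit sphere centred on the gizmo, then rotate by the arc between the previous and current sphere points. Rotation magnitude then scales naturally with drag distance, and because the sphere lives in view space, the same gesture behaves consistently however the camera or gizmo is oriented.

        using System;
        using System.Numerics;

        // Map a mouse position onto a virtual unit sphere ("arcball").
        static Vector3 MouseToSphere(float mx, float my, float width, float height)
        {
            float x = (2f * mx - width) / width;   // -1..1 across the screen
            float y = (height - 2f * my) / height; // -1..1, y up
            float d2 = x * x + y * y;
            return d2 <= 1f
                ? new Vector3(x, y, (float)Math.Sqrt(1f - d2)) // on the sphere
                : Vector3.Normalize(new Vector3(x, y, 0f));    // outside: clamp to rim
        }

        // Rotation carrying one sphere point onto the next.
        static Quaternion DragRotation(Vector3 from, Vector3 to)
        {
            Vector3 axis = Vector3.Cross(from, to);
            if (axis.LengthSquared() < 1e-9f) return Quaternion.Identity;
            float dot = Vector3.Dot(from, to);
            dot = dot > 1f ? 1f : (dot < -1f ? -1f : dot); // guard Acos domain
            float angle = (float)Math.Acos(dot);
            return Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), angle);
        }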

    Read the article

  • Best choice of GUI platform/framework for 3D development [closed]

    - by Miguel P
    The title pretty much says it all. I'm developing a 3D engine for DirectX 11, and it's going well so far. I started using .NET Forms as a GUI, but then (out of curiosity) I jumped to MFC, and the app looked great; but in my perspective, MFC is badly written and too complicated, meaning that some things just took forever, while they would have taken a few seconds in .NET Forms. But my real question is: if I want to make an editor for a 3D scene, where DirectX renders in the form (this was accomplished in .NET Forms), what would be the best choice of GUI platform/framework? MFC, .NET Forms, Qt, etc....

    Read the article

  • How to make Pokémon White 3D effect?

    - by Pipo
    I just wondered how to create a 3D effect similar to Pokémon White/Black? It seems to be not polygon-based, but created just with sprites. If the perspective changes, the sprites stay sharp and don't get blurred. How can I achieve this? Source: https://www.youtube.com/watch?v=fZEPUPYOnRc&feature=youtube_gdata_player Edit: Wow, two downvotes because I used a video instead of screenshots? Don't get me wrong, I thank you, because you want to help me, but the 3D effect can be better understood in motion. Anyway, here is a screenshot: http://wearearcade.com/wp-content/uploads/2011/03/pokemon-black-white-starter-town.jpg So, if this is a hardware limitation, how can I achieve this on different hardware, e.g. in an HTML5 game? Thank you.

    Read the article

  • 3d trajectory - calculate initial velocity

    - by Skoder
    Hey, I've got a 2D projectile code sample working, but would like to extend it to 3D. How would I calculate the initial velocity of the Z-axis? At the moment, I've got:

        initVel.X = (float)Math.Cos(45.0);
        initVel.Y = (float)Math.Sin(45.0);

    How would I convert this to work in 3D (and add the initial velocity for the Z-axis)? In my example, X is across, Y is up/down and Z is going into the screen. I also normalize the vector and multiply it by the speed. Thanks
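
    For the 3D case, one standard formulation (a sketch, not from the article): treat the launch direction as spherical coordinates, where pitch is the elevation above the ground and yaw spins the shot around the up axis. With yaw = 0 this collapses back to the 2D version above. Note that Math.Cos/Math.Sin expect radians, so 45 degrees should be passed as Math.PI / 4 in both versions. Illustrative C#:

        using System;
        using System.Numerics;

        static Vector3 InitialVelocity(float pitch, float yaw, float speed)
        {
            // pitch, yaw in radians; X across, Y up, Z into the screen.
            var dir = new Vector3(
                (float)(Math.Cos(pitch) * Math.Cos(yaw)),
                (float)Math.Sin(pitch),
                (float)(Math.Cos(pitch) * Math.Sin(yaw)));
            // Already unit length, but normalizing is harmless and matches
            // the question's normalize-then-scale flow.
            return Vector3.Normalize(dir) * speed;
        }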

    Read the article

  • Best choice for 3D Physics Engine for C#/XNA

    - by Nic Foster
    Since 2007 I've been working on the development of an open-source game engine, and have been using JigLibX for 3D physics. However, the developers on the project stopped contributing to it over 2 years ago, and it's lacking features I need, or has major bugs in certain features. What are some good choices for 3D physics engines that are written purely in C#? Are there any that are more complete than JigLibX? EDIT: I just stumbled upon an engine called BEPUphysics. It was supported up until May 2012, which is fairly recent. I may check it out; any information that you guys could give on how complete the engine is would be great.

    Read the article

  • How to program a cutting tool for 3D model in game

    - by Jesse S
    I'm looking for a resource to figure out how to program a function to cut a 3D model in-game. Example: an enemy/NPC is sliced into 2 pieces with a sword. His body is not hollow; you can see a bloody texture where normally a 'polygon hole' would be. The first step is to actually 'cut/slice' the model, then add in polygons to fill the hole in the model. I know this can be done in 3D modelling software, but I'm not sure how to go about doing this in a game, code-wise. I do not wish to use 'pre-cut' models; the code will determine where the cut is. Any pointers in the right direction would be greatly appreciated.
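
    The usual name for this technique is plane clipping or mesh slicing: classify each vertex by its signed distance to the cut plane, split every triangle whose vertices straddle the plane, then triangulate ("cap") the open boundary with the wound material. A hedged sketch of the two core helpers, with illustrative names:

        using System.Numerics;

        // Positive on one side of the cut plane, negative on the other.
        static float SignedDistance(Vector3 p, Vector3 planePoint, Vector3 planeNormal)
            => Vector3.Dot(p - planePoint, Vector3.Normalize(planeNormal));

        // Where edge (a, b) crosses the plane, given the two signed distances:
        // this point becomes a new vertex on both halves and on the cap.
        static Vector3 EdgePlaneIntersection(Vector3 a, Vector3 b, float distA, float distB)
            => Vector3.Lerp(a, b, distA / (distA - distB));

    Each straddling triangle splits into one triangle and one quad (two triangles); collecting the new edge vertices in order gives the loop to cap with the bloody texture.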

    Read the article

  • 3D Location Handling

    - by tgrosinger
    I am thinking about making a simulator-type game that will involve having lots of small objects in a 3D space. What is the typical solution for handling these objects? The first thing that comes to mind is a 3D array, but I can't help but think there is a more efficient solution. Another idea that comes to mind is objects having possession of smaller items. For example, a House possesses a Table, which possesses a Cup and Bowl. The final way I can think of handling this is just having an array of 'objects' that each have an x, y, z value. While this would make storing them easy, I do not understand how you would detect collisions without just looking at every possible object and checking to see if it is in the way. Are there other ways of holding onto these objects that are more efficient?
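
    The usual answer for "lots of small objects" is a spatial partition; the simplest is a uniform grid (spatial hash), sketched below with illustrative names. Objects are bucketed by which cell their position falls in, so a collision query only inspects the few neighbouring cells instead of every object in the world. A containment hierarchy like House-Table-Cup works too, and the two can be combined.

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class SpatialHash<T>
        {
            private readonly float cellSize;
            private readonly Dictionary<(int, int, int), List<T>> cells =
                new Dictionary<(int, int, int), List<T>>();

            public SpatialHash(float cellSize) { this.cellSize = cellSize; }

            private (int, int, int) CellOf(Vector3 p) =>
                ((int)Math.Floor(p.X / cellSize),
                 (int)Math.Floor(p.Y / cellSize),
                 (int)Math.Floor(p.Z / cellSize));

            public void Insert(Vector3 position, T item)
            {
                var key = CellOf(position);
                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<T>();
                list.Add(item);
            }

            // Collision candidates: everything in the same cell and the
            // 26 surrounding cells.
            public IEnumerable<T> Nearby(Vector3 position)
            {
                var (cx, cy, cz) = CellOf(position);
                for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                for (int dz = -1; dz <= 1; dz++)
                    if (cells.TryGetValue((cx + dx, cy + dy, cz + dz), out var list))
                        foreach (var item in list)
                            yield return item;
            }
        }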

    Read the article

  • How to watch 3D on Acer "3D Ready" projector?

    - by glenneroo
    We have here an Acer P1200 DLP projector which is "3D Ready" (using the BrilliantColor™ DarkChip™ 3), and documentation was not included in the packaging. We don't have a Blu-ray player and have no intention of purchasing one in the near future, so I'm looking for a way to view encoded or streamed content. My question is: how is it possible to watch 3D content? What extra hardware/software will I need? EDIT: Found this information in the user manual: "DLP 3D function: Choose while using DLP 3D glasses, quad buffer (NVIDIA/ATI…) graphic card and HQFS format file or DVD with corresponding SW player. 3D Sync L/R: If you see a discrete or overlapping image while wearing DLP 3D glasses, you may need to execute 'Invert' to get the best match of left/right image sequence to get the correct image (for DLP 3D)." ...but I'm still at a loss. What is a quad-buffer graphics card?

    Read the article

  • How to create art assets for a 3d avatar editor

    - by Andrew Garrison
    I am currently prototyping an idea for an iPhone game. I'd like to create an avatar editor inside the game so that the player can create a 3D avatar face and modify certain features (using slider controls), such as nose shape, eye color, mouth size, etc. This has been done in several games, but what I'm looking to do would be fairly cartoonish/caricature-like, similar to the Mii editor on the Nintendo Wii (http://www.myavatareditor.com/). I'd also like the final result to have the ability to use some canned animations, such as simple speech animations, smiling, frowning, etc. I am not an artist, so I would be unable to create these assets, but what kind of effort is required for an artist to create the 3D models necessary for this type of game? Also, what mechanism would be required to tweak the face's characteristics? Would you use bones or morph targets? How would the final result be animated? Would facial animation use bones or morph targets? I've seen several tools that do this sort of thing too, such as FacialStudio. Are there any facial generation tools out there you'd recommend for generating some base content for this game, or should I just hire an artist to do this type of work? Thanks!
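
    For the slider-driven face shapes specifically, morph targets (blend shapes) are the common mechanism, and the runtime side is small enough to sketch with illustrative names: the artist models one neutral head plus one deformed copy per feature, and each slider weights that target's per-vertex offset. Bones tend to suit the canned jaw/speech animations better; many pipelines combine both.

        using System.Numerics;

        // Neutral face + weighted per-vertex offsets, one weight per slider.
        static Vector3[] BlendFace(Vector3[] neutral, Vector3[][] targets, float[] weights)
        {
            var result = (Vector3[])neutral.Clone();
            for (int t = 0; t < targets.Length; t++)       // one target per slider
                for (int v = 0; v < result.Length; v++)
                    result[v] += weights[t] * (targets[t][v] - neutral[v]);
            return result;
        }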

    Read the article

  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points and apply the inverse to the rectangle. This seemed to work, but was too slow as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with Kinect, and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrix. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?
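
    A hedged suggestion rather than a confirmed answer: a general 2D projective warp cannot normally be split into an ordinary view/projection pair, but it can be appended to the projection matrix. Take the 3x3 homography H (the same kind warpPerspective() uses, mapping the rectangle's corners to the pre-warped corners) and embed it in a 4x4 matrix acting on clip-space (x, y, z, w), passing z through:

        M_H = | h11 h12  0  h13 |
              | h21 h22  0  h23 |
              |  0   0   1   0  |
              | h31 h32  0  h33 |

    Rendering with M_H multiplied onto the projection matrix (order per your matrix convention) makes the GPU apply the warp during the perspective divide. Depth values get warped along with w, which is usually acceptable for projector keystone correction.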

    Read the article

  • Strange 3D game engine camera with X,Y,Zoom instead of X,Y,Z

    - by Jenko
    I'm using a 3D game engine that uses a 4x4 matrix to modify the camera projection, in this format:

        r r r x
        r r r y
        r r r z
        - - - zoom

    Strangely though, the camera does not respond to the Z translation parameter, and so you're forced to use X, Y, Zoom to move the camera around. Technically this is plausible for isometric-style games such as Age of Empires III. But this is a 3D engine, so why would they have designed the camera to ignore Z and respond only to zoom? Am I missing something here? I've tried every method of setting the camera and it really seems to ignore Z. So currently I have to resort to moving the main object in the scene graph instead of moving the camera in relation to the objects. My question: do you have any idea why the engine would use such a scheme? Is it common? Why? Or does it seem like I'm missing something, and the SetProjection(Matrix) function is broken and somehow ignores the Z translation in the matrix? (unlikely, but possible) Anyhow, what are the workarounds? Is moving objects around the only way? Edit: I'm sorry, I cannot reveal much about the engine because we're in a binding contract. It's a locally developed engine (Australia) written in managed C#, used for data visualizations. Edit: The default mode of the engine is orthographic, although I've switched it into perspective mode. It's probably more effective to use X, Y, Zoom in orthographic mode, but I need to use perspective mode to render everyday objects as well.

    Read the article

  • Boot ubuntu 12.04 in 3D - nomodeset quiet splash install

    - by rahi
    I would like to enable 3D in Ubuntu 12.04. I recently tried to install Ubuntu on a new computer. When the installation was complete and I rebooted the machine, I could only see a blank screen. After some searching, I followed this tutorial, which instructed me to boot with "nomodeset" enabled. I chose this on the USB I was installing Ubuntu 12.04 from. Fortunately, the Ubuntu installation on the new computer was successful. When I tried to change the size of the Unity launcher icons, I did not see that option (as I see on my other computer running Ubuntu 12.04). I tried installing MyUnity, and it told me that the computer I had just installed 12.04 on was running in 2D. To my knowledge, all the software is up to date (as I ran the Software Updater). In addition, when I look for Additional Drivers, I see a message that says "No proprietary drivers are in use on this system". When I look under System Details > Graphics, I see the driver as "VESA: Intel Sandybridge/Ivybridge Graphics". When I hold the Shift key while my machine boots up and type "e" at the GRUB menu, I see the following towards the end: "nomodeset quiet splash $vt_handoff". Does this have anything to do with the plain 2D Ubuntu 12.04 experience? Again, what I'd like to do now is get the 3D experience on my new machine running 12.04. Please let me know if you need more information.

    Read the article

  • 3D space game development

    - by user1693061
    I want to develop a 3D game (sci-fi type, with spaceships) which can be played in multiplayer mode, and by multiplayer I mean around 10 players for a start, as it will be a personal testing project and mostly educational. I have been searching for some days about the available languages and engines, but I am kinda confused. Since I have been learning Java in my 1st year at an I.T. university and have a pretty good understanding of it, I thought I would go with the Java language and develop the game as an applet so it could be played in a browser. After going through an applet game tutorial, I understood how graphics work in an applet. So... 1st question: Could an applet carry the burden of a 3D game, especially in multiplayer? My thinking: it's a space game, so the graphics should not be such a big problem, since it won't be that crowded with entities apart from ships, planets and some effects. If the Java applet is not the way for my project, I wouldn't mind "developing it on desktop" (I mean, not making it a browser game). 2nd question: Should I use the Unity engine for my purpose (space game)? If not, name another language/engine combo.

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago, but then lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch though is that I want to use a cone and not a ray. Here are the variables I'm using:

        float iReticleSlope = 95f/3000f; // inverse reticle slope (float division)
        float baseReticle = 1;           // radius of the reticle at z = 0
        float maxRange = 3000;           // max range to target
        Quaternion orientation;          // the camera's orientation
        Vector3d position;               // the camera's position

    Then I loop through each object in the world:

        Vector3d transformed; // object position after transformations
        float d, r;           // holder variables
        for(int i = 0; i < objects.length; i++)
        {
            // transform the position relative to the camera
            transformed = position - objects[i].position;
            // orient the object relative to the camera
            orientation.multiply(transformed);
            if(transformed.z < 0)
            {
                d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]);
                r = -transformed[2] * iReticleSlope + objects[i].radius;
                if(d < r && -transformed[2] - objects[i].radius <= maxRange)
                {
                    // the object is under the reticle
                }
                else
                {
                    // the object is not under the reticle
                }
            }
            else
            {
                // the object is not under the reticle
            }
        }

    Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

    Read the article

  • Checking for collisions on a 3D heightmap

    - by Piku
    I have a 3D heightmap drawn using OpenGL (which isn't important). It's represented by a 2D array of height data. To draw this, I go through the array using each point as a vertex. Three vertices are wound together to form a triangle, two triangles to make a quad. To stop the whole mesh being tiny, I scale it by a certain amount called 'gridsize'. This produces a fairly nice and lumpy, angular terrain, kind of similar to something you'd see in old Atari/Amiga or DOS '3D' games (think Virus/Zarch on the Atari ST). I'm now trying to work out how to do collision with the terrain, testing to see if the player is about to collide with a piece of scenery sticking upwards or fall into a hole. At the moment I am simply dividing the player's co-ordinates by the gridsize to find which vertex the player is on top of, and it works well when the player is exactly over the corner of a triangle piece of terrain. However... how can I make it more accurate for the bits between the vertices? I get confused since they don't exist in my heightmap data; they're a product of the GPU drawing a triangle between three points. I can calculate the height of the point closest to the player, but not the space between them. I.e. if the player is hovering over the centre of one of these 'quads', rather than over the corner vertex of one, how do I work out the height of the terrain below them? Later on I may want the player to slide down the slopes in the terrain.
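
    The height between vertices comes from treating each triangle as a plane and interpolating within it, which is cheap enough to sketch. The version below assumes each quad is split along the same diagonal the renderer uses (from the (gx, gz) corner); if your winding differs, swap the two branches. Illustrative C#:

        // Sample the terrain height at an arbitrary world position.
        static float TerrainHeight(float[,] heights, float gridsize,
                                   float worldX, float worldZ)
        {
            float fx = worldX / gridsize;
            float fz = worldZ / gridsize;
            int gx = (int)fx, gz = (int)fz;   // which quad we are over
            float lx = fx - gx, lz = fz - gz; // 0..1 position within the quad

            float h00 = heights[gx, gz];
            float h10 = heights[gx + 1, gz];
            float h01 = heights[gx, gz + 1];
            float h11 = heights[gx + 1, gz + 1];

            if (lx + lz <= 1f) // lower triangle: corners h00, h10, h01
                return h00 + (h10 - h00) * lx + (h01 - h00) * lz;
            else               // upper triangle: corners h11, h10, h01
                return h11 + (h01 - h11) * (1f - lx) + (h10 - h11) * (1f - lz);
        }

    The same two plane equations also give each triangle's normal, which is exactly what's needed later for sliding the player down slopes.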

    Read the article

  • Transform 3D vectors between coordinate systems

    - by Nir Cig
    I've got 6 points in 3D space: A, B, C, D, E, F, that represent 4 vectors. AB is perpendicular to AC, and DE is perpendicular to DF. I need to find a transformation matrix M that transforms AB to DE and AC to DF. In other words: M·AB = DE, M·AC = DF. If no scaling were involved, this could be solved with a simple rotation matrix. But since the ratios |AB|/|DE| and |AC|/|DF| might be different, I'm not sure how to proceed.
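
    One hedged way to pin M down despite the differing scale ratios: complete each orthogonal pair into a full basis with a cross product, then solve for the matrix that maps one basis onto the other. M·AB = DE and M·AC = DF then hold by construction, with the scaling absorbed automatically. Sketch in C# with System.Numerics, whose convention is row vectors (transform is v * M):

        using System.Numerics;

        static Matrix4x4 MapBases(Vector3 ab, Vector3 ac, Vector3 de, Vector3 df)
        {
            Vector3 n1 = Vector3.Cross(ab, ac); // completes the first basis
            Vector3 n2 = Vector3.Cross(de, df); // completes the second basis

            var r1 = new Matrix4x4(ab.X, ab.Y, ab.Z, 0,
                                   ac.X, ac.Y, ac.Z, 0,
                                   n1.X, n1.Y, n1.Z, 0,
                                   0, 0, 0, 1);
            var r2 = new Matrix4x4(de.X, de.Y, de.Z, 0,
                                   df.X, df.Y, df.Z, 0,
                                   n2.X, n2.Y, n2.Z, 0,
                                   0, 0, 0, 1);

            Matrix4x4.Invert(r1, out var r1Inv);
            // Vector3.Transform(ab, r1Inv * r2) == de, and likewise AC -> DF.
            return r1Inv * r2;
        }

    The only free choice is what happens perpendicular to the two planes; using the raw cross products here scales that axis by |DE||DF| / (|AB||AC|), which can be renormalized if it matters.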

    Read the article
