Search Results

Search found 2867 results on 115 pages for '3d modeller'.


  • Matplotlib plotting non uniform data in 3D surface

    - by Raj Tendulkar
    I have some simple code to plot points in 3D with Matplotlib:

        from mpl_toolkits.mplot3d import axes3d
        import matplotlib.pyplot as plt
        import numpy as np
        from numpy import genfromtxt
        import csv

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')

        my_data = genfromtxt('points1.csv', delimiter=',')
        points1X = my_data[:,0]
        points1Y = my_data[:,1]
        points1Z = my_data[:,2]
        # Remove the header of the CSV file.
        points1X = np.delete(points1X, 0)
        points1Y = np.delete(points1Y, 0)
        points1Z = np.delete(points1Z, 0)
        # Convert the arrays to 1D arrays.
        points1X = np.reshape(points1X, points1X.size)
        points1Y = np.reshape(points1Y, points1Y.size)
        points1Z = np.reshape(points1Z, points1Z.size)

        my_data = genfromtxt('points2.csv', delimiter=',')
        points2X = my_data[:,0]
        points2Y = my_data[:,1]
        points2Z = my_data[:,2]
        # Remove the header of the CSV file.
        points2X = np.delete(points2X, 0)
        points2Y = np.delete(points2Y, 0)
        points2Z = np.delete(points2Z, 0)
        # Convert the arrays to 1D arrays.
        points2X = np.reshape(points2X, points2X.size)
        points2Y = np.reshape(points2Y, points2Y.size)
        points2Z = np.reshape(points2Z, points2Z.size)

        ax.plot(points1X, points1Y, points1Z, 'd', markersize=8, markerfacecolor='red', label='points1')
        ax.plot(points2X, points2Y, points2Z, 'd', markersize=8, markerfacecolor='blue', label='points2')
        plt.show()

    My problem is that I have tried to make a decent surface plot out of these data points. I already tried the ax.plot_surface() function: I eliminated some points and recalculated the matrix-style input that the function needs, but the resulting graph was far harder to interpret and understand. So there are two possibilities: either I am not using the function correctly, or the data I am trying to plot is simply not suited to a surface plot. What I was expecting was a 3D graph with an effect similar to a 3D pie chart, where you can see that the extracted slice is part of another piece. I was not expecting it to look exactly like that, but some effect of that kind. What I would like to ask is: do you think it is possible to make such a 3D graph? Is there a better way I could express my data in three dimensions?
    Here are the two files.

    points1.csv (header Dim1,Dim2,Dim3, one x,y,z triple per row):

        3,8,1 3,8,2 3,8,3 3,8,4 3,8,5 3,9,1 3,9,2 3,9,3 3,9,4 3,9,5 3,10,1 3,10,2 3,10,3 3,10,4 3,10,5 3,11,1 3,11,2 3,11,3 3,11,4 3,11,5 3,12,1 3,12,2 3,13,1 3,13,2 3,14,1 3,14,2 3,15,1 3,15,2 3,16,1 3,16,2 3,17,1 3,17,2 3,18,1 3,18,2 4,8,1 4,8,2 4,8,3 4,8,4 4,8,5 4,9,1 4,9,2 4,9,3 4,9,4 4,9,5 4,10,1 4,10,2 4,10,3 4,10,4 4,10,5 4,11,1 4,11,2 4,11,3 4,11,4 4,11,5 4,12,1 4,13,1 4,14,1 4,15,1 4,16,1 4,17,1 4,18,1 5,8,1 5,8,2 5,8,3 5,8,4 5,8,5 5,9,1 5,9,2 5,9,3 5,9,4 5,9,5 5,10,1 5,10,2 5,10,3 5,10,4 5,10,5 5,11,1 5,11,2 5,11,3 5,11,4 5,11,5 5,12,1 5,13,1 5,14,1 5,15,1 5,16,1 5,17,1 5,18,1 6,8,1 6,8,2 6,8,3 6,8,4 6,8,5 6,9,1 6,9,2 6,9,3 6,9,4 6,9,5 6,10,1 6,11,1 6,12,1 6,13,1 6,14,1 6,15,1 6,16,1 6,17,1 6,18,1 7,8,1 7,8,2 7,8,3 7,8,4 7,8,5 7,9,1 7,9,2 7,9,3 7,9,4 7,9,5

    points2.csv (same layout):

        3,12,3 3,12,4 3,12,5 3,13,3 3,13,4 3,13,5 3,14,3 3,14,4 3,14,5 3,15,3 3,15,4 3,15,5 3,16,3 3,16,4 3,16,5 3,17,3 3,17,4 3,17,5 3,18,3 3,18,4 3,18,5 4,12,2 4,12,3 4,12,4 4,12,5 4,13,2 4,13,3 4,13,4 4,13,5 4,14,2 4,14,3 4,14,4 4,14,5 4,15,2 4,15,3 4,15,4 4,15,5 4,16,2 4,16,3 4,16,4 4,16,5 4,17,2 4,17,3 4,17,4 4,17,5 4,18,2 4,18,3 4,18,4 4,18,5 5,12,2 5,12,3 5,12,4 5,12,5 5,13,2 5,13,3 5,13,4 5,13,5 5,14,2 5,14,3 5,14,4 5,14,5 5,15,2 5,15,3 5,15,4 5,15,5 5,16,2 5,16,3 5,16,4 5,16,5 5,17,2 5,17,3 5,17,4 5,17,5 5,18,2 5,18,3 5,18,4 5,18,5 6,10,2 6,10,3 6,10,4 6,10,5 6,11,2 6,11,3 6,11,4 6,11,5 6,12,2 6,12,3 6,12,4 6,12,5 6,13,2 6,13,3 6,13,4 6,13,5 6,14,2 6,14,3 6,14,4 6,14,5 6,15,2 6,15,3 6,15,4 6,15,5 6,16,2 6,16,3 6,16,4 6,16,5 6,17,2 6,17,3 6,17,4 6,17,5 6,18,2 6,18,3 6,18,4 6,18,5 7,10,1 7,10,2 7,10,3 7,10,4 7,10,5 7,11,1 7,11,2 7,11,3 7,11,4 7,11,5 7,12,1 7,12,2 7,12,3 7,12,4 7,12,5 7,13,1 7,13,2 7,13,3 7,13,4 7,13,5 7,14,1 7,14,2 7,14,3 7,14,4 7,14,5 7,15,1 7,15,2 7,15,3 7,15,4 7,15,5 7,16,1 7,16,2 7,16,3 7,16,4 7,16,5 7,17,1 7,17,2 7,17,3 7,17,4 7,17,5 7,18,1 7,18,2 7,18,3 7,18,4 7,18,5
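
    One way to get a surface out of scattered points like these (not from the original post, just a sketch): Axes3D.plot_trisurf triangulates irregular (x, y) samples directly, so no regular grid is needed. The sample data stacks several z values on each (x, y) column, so the sketch below collapses each column to its maximum height first; the file name and the choice of max are assumptions for illustration.

        # Minimal sketch: surface plot from scattered, non-uniform points.
        # Assumes points1.csv has a one-line header and three numeric columns.
        import numpy as np
        import matplotlib.pyplot as plt
        from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

        data = np.genfromtxt('points1.csv', delimiter=',', skip_header=1)
        x, y, z = data[:, 0], data[:, 1], data[:, 2]

        # Collapse the stacked z values to one height per (x, y) column.
        cols = {}
        for xi, yi, zi in zip(x, y, z):
            cols[(xi, yi)] = max(zi, cols.get((xi, yi), zi))
        xs = np.array([k[0] for k in cols])
        ys = np.array([k[1] for k in cols])
        zs = np.array(list(cols.values()))

        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        # plot_trisurf builds a Delaunay triangulation of the (x, y) pairs,
        # so the points do not have to lie on a regular grid.
        ax.plot_trisurf(xs, ys, zs, cmap='viridis', linewidth=0.2)
        plt.show()

    If a smoother result is wanted, scipy.interpolate.griddata can resample the same columns onto a regular grid for ax.plot_surface instead.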

    Read the article

  • 3D Web Sites and Applications

    - by Scott Evernden
    For the last several years I have been struggling to understand why the Internet has so few actually useful 3D web applications. It's 2009 and still everything looks like pages from a Sears catalog. You can turn on your TV and find flying logos every night. After that you can get nostalgic, flip on the ol' N-64 and play some Zelda or Mario Kart. On the PC, Sims 2 is approaching 6 years old already. And then there's WoW. The current generation of users (the Facebook crowd, let's say) has no problem dealing with multi-dimensional environments. And yet nothing really immersive seems to happen on the web. I've been hearing about VRML and X3D for at least 10 years and... pffft... nothing earth-shaking going on there. Java 3D? Cool! But still... waiting and waiting. Do you think it will take a killer web app before people become accustomed to, or seek out, more engaging web experiences? I am not talking about Second Life and other dedicated downloaded applications. I am more focused on apps like Lively or SceneCaster or Hangout or a half dozen others that are delivered 'painlessly' directly into web pages. My own particular interest is in the domain of virtual stores and immersive shopping. It's been a challenge trying to understand why an average user would not want to browse and wander a changing mall-space, like in the real world, entertained by unexpected discovery. Is the 3D web always going to be 5 years in the future?

    Read the article

  • Reverse-projection 2D points into 3D

    - by ehsan baghaki
    Suppose we have a 3D space with a plane in it with an arbitrary equation: ax+by+cz+d=0. Now suppose we pick 3 random points on that plane: (x0,y0,z0), (x1,y1,z1), (x2,y2,z2). Now I have a different point of view (a camera) for this plane, i.e. a different camera that looks at this plane from a different point of view. From that camera's point of view these points have different locations: for example (x0,y0,z0) will be (x0',y0'), (x1,y1,z1) will be (x1',y1') and (x2,y2,z2) will be (x2',y2'). So here is my question, which is a little hard: I want to pick a point, for example (X,Y), in the new camera view and tell where it will be on that plane. All I know is those 3 points: their locations in 3D space and their projected locations in the new camera view. Do you know the coefficients of the plane equation and the camera positions (along with the projection), or do you only have the six points? - Nils I know the location of the first 3 points, therefore we can calculate the coefficients of the plane, so we know exactly where the plane is from the (0,0,0) point of view. Then we have the camera, which can only see the points. So the only thing the camera sees is the 3 points, and it also knows their locations in 3D space (and of course their locations on the 2D camera view plane). After all that, I want to look at the camera view, pick a point (for example (x1,y1)) and tell where that point is on the plane. (Of course this (X,Y,Z) point should satisfy the plane equation.) Also, I know nothing about the camera location.
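
    Not from the original thread, but a minimal numpy sketch of one way to do this under an affine-camera assumption: write the picked 2D point as a combination of the three projected points, then apply the same weights to the three 3D points. This is exact for an affine (weak-perspective) camera and only an approximation under strong perspective; the example coordinates at the bottom are made up.

        import numpy as np

        def reproject_to_plane(p2d, img_pts, world_pts):
            """Map a 2D camera-view point back onto the 3D plane.

            img_pts:   the three points as seen in the camera view (2D).
            world_pts: the same three points in 3D space.
            Exact for an affine camera; approximate under perspective.
            """
            a, b, c = (np.asarray(p, dtype=float) for p in img_pts)
            # Solve p2d = a + s*(b - a) + t*(c - a) for the weights s, t.
            M = np.column_stack((b - a, c - a))          # 2x2 system
            s, t = np.linalg.solve(M, np.asarray(p2d, dtype=float) - a)
            A, B, C = (np.asarray(p, dtype=float) for p in world_pts)
            # The same affine combination lies on the plane through A, B, C.
            return A + s * (B - A) + t * (C - A)

        # Hypothetical values, purely for illustration:
        img_pts = [(10.0, 5.0), (40.0, 8.0), (15.0, 30.0)]
        world_pts = [(0.0, 0.0, 1.0), (3.0, 0.0, 1.0), (0.0, 2.0, 1.0)]
        print(reproject_to_plane((20.0, 12.0), img_pts, world_pts))

    For an exact answer under full perspective, a fourth coplanar correspondence is needed to estimate the plane-to-image homography and invert it.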

    Read the article

  • Which way to go in Linux 3D programming?

    - by Tek
    I'm looking for some answers for a project I'm thinking about. From what I've searched and understood (correct me if I'm wrong), the only way the program I want to make will work is as a 3D application. Let me explain. I plan to make a studio production program, but it's unique in that I want to make it fluid. Imagine Microsoft's Surface, where you're able to touch and drag pictures across the screen; instead of pictures I want them to be sound samples (WAVs, MP3s, etc.). Of course the input will be with the mouse instead, but if I ever do finish the project I would totally add touch-screen input compatibility! Anyway, I'm guessing there's "physics" involved, which is why I'm thinking that even though it'll be a 2D application I'll need to code it in a 3D environment. Assuming I'm right about how to approach my project, where can I start learning about 3D programming? I actually come from PHP programming, which should make C++ easier for me to learn, but I don't even know where to start. If I'm not wrong, OpenGL is the most up-to-date API. Anyway, please give me your insights, guys. I could really use some guidance here, since I could totally be wrong in everything that I wrote :)

    Read the article

  • How can I enable Unity 3d after installing Bumblebee? GLX Problems

    - by ashley
    I'm new to Ubuntu. I'm running 12.04 64-bit on a Dell XPS L207x with an Nvidia GT 555M card. From what I could find online, I needed to install Bumblebee to get the most out of the Optimus system and get better battery life. I can verify that Bumblebee is working by running "optirun glxgears", for example. If I run just "glxgears" I get the following error: "Error: couldn't get an RGB, Double-buffered visual". I'm also unable to run Unity 3D, which I would very much like to use. I'd greatly appreciate any and all help; please be gentle.

    Read the article

  • 3D Display Issue When Using Latest Java Runtime Versions - Patch now available...

    - by [email protected]
    Typically I focus my blog posts on Support process topics, and reserve most of the technical topics for the Support newsletter. This topic, however, warrants a quick mention in the blog since I know it's been affecting many users recently. For customers using the Client/Server Deployment of AutoVue, users who had upgraded their client Java Runtime Environment (JRE) to version 1.6.0_19 or later suddenly noticed that their 3D files were opening blank in AutoVue. This issue was due to a change in JRE version 1.6.0_19, and the AutoVue team now offers a patch to address it in AutoVue version 20.0.0. The patch (number 10268316) is available through the My Oracle Support portal and is described further in KM Note 1104821.1. We'll mention it again in our next Support newsletter, and the AutoVue team will aim to roll the same fix into the next available release of the product.

    Read the article

  • iPhone: CALayer + rotate in 3D + antialias?

    - by Colin
    Hi all. An iPhone SDK question: I'm drawing a UIImageView on the screen. I've rotated it in 3D and provided a bit of perspective, so the image looks like it's pointing into the screen at an angle. That all works fine. The problem is that the edges of the resulting picture don't seem to be antialiased at all. Does anybody know how to fix that? Essentially, I'm implementing my own version of CoverFlow (yeah yeah, design patent blah blah) using Quartz 3D transformations to do everything. It works fine, except that each cover isn't antialiased, and Apple's version is. I've tried messing around with the CALayer's edgeAntialiasingMask, but that didn't help - the default is that every edge should be antialiased... Thanks!

    Read the article

  • Translation/rotation of a HUD against a camera using vectors in Euclidean 3D space

    - by Jakob
    I've got 2 points in 3D space: the camera position and the camera lookAt. The camera movement is restricted akin to typical first-person shooter games: you can move the camera freely, tilt horizontally and up to 90 degrees vertically, but not roll. Now I want to draw a HUD on the screen, on which I can move the mouse freely, with the position of the cursor correctly translating into 3D space. The easy part was to draw something directly in front of the camera:

        V0 = camPos
        V1 = lookAt
        V2 = lookAt - camPos
        normalize V2
        multiply V2 according to the camera frustum
        V3 = V0 + V2
        draw something at V3

    Now the part I don't get: I could use V3 and add to it the rotations of the camera combined with the x/y of the mouse cursor, somehow, right? That's what I want.
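
    A small numpy sketch of the usual approach (not from the thread), assuming a y-up world and a camera that never rolls, as described: build the camera's right and up vectors from the forward vector, then offset the point in front of the camera by the cursor's x/y along those axes. The cursor offsets here are already in world units at the chosen distance; scaling from pixels to the frustum's half-extents is left out.

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        def hud_cursor_to_world(cam_pos, look_at, cursor_x, cursor_y, distance=1.0,
                                world_up=np.array([0.0, 1.0, 0.0])):
            """Place a HUD cursor offset (cursor_x, cursor_y) into 3D space."""
            forward = normalize(look_at - cam_pos)           # V2 in the question
            right = normalize(np.cross(forward, world_up))   # camera x-axis (no roll assumed)
            up = np.cross(right, forward)                    # camera y-axis, already unit length
            center = cam_pos + forward * distance            # V3 in the question
            return center + right * cursor_x + up * cursor_y

        # Hypothetical example: cursor 0.2 right of and 0.1 above the screen centre.
        cam_pos = np.array([0.0, 1.0, 5.0])
        look_at = np.array([0.0, 1.0, 0.0])
        print(hud_cursor_to_world(cam_pos, look_at, 0.2, 0.1))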

    Read the article

  • More complex view matrix calculation required to composite 3d models with 2d video

    - by lzcd
    I'm utilising some 2D/3D tracking data (provided by pfHoe) to help integrate some 3D models into the playback of 2D video. Things are working... okay... but there's still some visible 'slipping' of the models against the video background, and I suspect this may be because the XNA CreatePerspective helper method isn't taking into account some of the additional data supplied by pfHoe, such as independent horizontal/vertical field-of-view angles and the focal length. Could anyone point me towards some examples of constructing view matrices that include such details?
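
    It may help to separate the projection from the view: the per-axis field-of-view angles belong in the projection matrix. Below is a small numpy sketch (an assumption about conventions, not pfHoe's or XNA's own code) of a perspective matrix built from independent horizontal and vertical FOVs instead of a single FOV plus aspect ratio; it uses the OpenGL-style right-handed, column-vector layout.

        import numpy as np

        def perspective_from_fov(h_fov_deg, v_fov_deg, near, far):
            """Perspective matrix from separate horizontal/vertical field-of-view angles."""
            sx = 1.0 / np.tan(np.radians(h_fov_deg) / 2.0)  # horizontal scale
            sy = 1.0 / np.tan(np.radians(v_fov_deg) / 2.0)  # vertical scale
            m = np.zeros((4, 4))
            m[0, 0] = sx
            m[1, 1] = sy
            m[2, 2] = (far + near) / (near - far)
            m[2, 3] = (2.0 * far * near) / (near - far)
            m[3, 2] = -1.0
            return m

        # Hypothetical per-axis FOVs of the kind a match-mover exports:
        print(perspective_from_fov(53.1, 31.4, 0.1, 1000.0))

    In XNA the equivalent is to build the matrix by hand or via Matrix.CreatePerspectiveOffCenter, rather than letting CreatePerspectiveFieldOfView derive the horizontal angle from an aspect ratio.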

    Read the article

  • Level of Detail for 3D terrains/models in Mobile Devices (Android / XNA )

    - by afriza
    I am planning to develop for WP7 and Android. What is the better way to display (and traverse) 3D scenes/models in terms of LoD? The data is planned to be island-wide (Singapore).

    1) Real-time dynamic level-of-detail terrain rendering
    2) Discrete LoD
    3) Others?

    Please also advise on considerations/algorithms/resources/source code; something like the LoD book would also be okay. Side note: I am a beginner in this area but pretty well-versed in C/C++, and I haven't read the LoD book. Related post: Distant 3D object rendering [games]
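
    For what it's worth, a tiny sketch of option 2 (discrete LoD), assuming the terrain is split into tiles and each tile has a few pre-built meshes: pick the mesh whose distance band contains the camera-to-tile distance. The bands are made-up numbers.

        import math

        # Hypothetical distance bands (metres) for LoD levels 0 (finest) to 3 (coarsest).
        LOD_BANDS = [500.0, 2000.0, 8000.0]

        def select_lod(camera_pos, tile_center):
            """Return the discrete LoD level to draw for one terrain tile."""
            dist = math.dist(camera_pos, tile_center)
            for level, limit in enumerate(LOD_BANDS):
                if dist < limit:
                    return level
            return len(LOD_BANDS)  # beyond the last band: coarsest mesh

        # Example: a tile 3 km away falls into level 2.
        print(select_lod((0.0, 0.0, 0.0), (3000.0, 0.0, 0.0)))

    Dynamic (continuous) LoD schemes such as ROAM or geometry clipmaps give smoother transitions but cost more CPU per frame, which matters on mobile GPUs.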

    Read the article

  • Animate 3D model programmatically - where to start?

    - by amile
    I have been given the task of creating a 3D model of a face that can talk like a human, without having any knowledge of 3D modeling. I have no clue where to start. I have searched a lot and found that OpenGL, WebGL, XNA and other similar tools can be helpful. Please guide me on where to start with a step-by-step approach, and on which platform is better, as I have a programming background in Java. Here is an idea of what I need to do: https://docs.google.com/viewer?a=v&pid=gmail&attid=0.2&thid=13f65486a46a1f67&mt=application/pdf&url=https://mail.google.com/mail/?ui%3D2%26ik%3D49f1f393c6%26view%3Datt%26th%3D13f65486a46a1f67%26attid%3D0.2%26disp%3Dsafe%26realattid%3Df_hi6ylzbv2%26zw&sig=AHIEtbQc8KQNHdprmEnL4UXyD3ox8vlKKQ

    Read the article

  • Dynamically generate Triangle Lists for a Complex 3D Mesh

    - by Vulcan Eager
    In my application, I have the shape and dimensions of a complex 3D solid (say a cylinder block) taken from user input, and I need to construct vertex and index buffers for it. Since the dimensions come from user input, I cannot use Blender or 3D Max to manually create the model. What is the textbook method for dynamically generating such a mesh? Edit: I am looking for something that will generate the triangles given the vertices, edges and holes - something like TetGen. As for TetGen itself, I have no way of excluding the triangles which fall on the interior of the solid/mesh.
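
    As a starting point (not the textbook answer, just a sketch), simple solids of revolution such as a cylinder can be meshed directly from their parameters; anything with arbitrary holes is better handed to a constrained-triangulation library. The function below builds vertex and index lists for an open cylinder; the parameter values at the bottom are arbitrary.

        import math

        def cylinder_mesh(radius, height, segments):
            """Vertex and index lists for an open cylinder (no end caps).

            Vertices are (x, y, z) tuples; indices reference vertices three
            at a time, one triangle per triple.
            """
            vertices = []
            for i in range(segments):
                angle = 2.0 * math.pi * i / segments
                x, z = radius * math.cos(angle), radius * math.sin(angle)
                vertices.append((x, 0.0, z))      # bottom ring
                vertices.append((x, height, z))   # top ring
            indices = []
            for i in range(segments):
                b0, t0 = 2 * i, 2 * i + 1
                b1, t1 = 2 * ((i + 1) % segments), 2 * ((i + 1) % segments) + 1
                indices += [b0, t0, b1, t0, t1, b1]   # two triangles per side quad
            return vertices, indices

        verts, idx = cylinder_mesh(radius=1.0, height=2.0, segments=16)
        print(len(verts), "vertices,", len(idx) // 3, "triangles")

    For the interior-triangle problem with a tetrahedralizer such as TetGen, a face shared by two tetrahedra is interior; faces belonging to exactly one tetrahedron form the boundary surface.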

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads, except the window frame. Is there a way to prevent this from happening, or are there common causes of it? Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str() );
        // Make another string
        // string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;

        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(points), points.data(), GL_STATIC_DRAW);

        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);

        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    Read the article

  • How does the new Google Maps make buildings and cityscapes 3D?

    - by Aerovistae
    Anyone who's seen the new Google maps has no doubt taken note of the incredible amount of three-dimensional detail in select American cities such as Boston, New York, Chicago, and San Francisco. They've even modeled the trees, bridges and some of the boats in the harbor! Minor architectural details are present. It's crazy. Looking at it up close, I've found there's a rectangular area around each of those cities, and anything within them is 3Dified, but it cuts off hard and fast at the edge, even if it's in the middle of a building. The edge of the rectangle is where the 3D stops. This leads me to think it's being done algorithmically (which would make sense, given the scale of the project, how many trees and buildings and details there are), and yet I can't imagine how that's possible. How could an algorithm model all these things without extensive data on their shapes and contours? How could it model the individual wires of a bridge, or the statues in a park? It must be done by hand, and yet how could it be for so much detail! Does anyone have any insight on this?

    Read the article

  • 3D transformations in WPF & DirectX/Direct3D or OpenGL

    - by user2723417
    I need your help with 3D transformations. I have a sphere and I want to deform it on a mouse click or a mouse move - to cut a furrow into it or to bite a piece off the sphere, without any breaks in the 3D material. This is possible in WPF, but if the number of 3D points exceeds 25,000 it causes stutters in dynamic mode (the animation breaks up), because the MeshGeometry3D object has to be reconstructed every time to avoid breaks in the 3D material. Please advise me on tools for this task. Maybe it can be done with the help of DirectX/Direct3D or OpenGL? I am a newcomer to these APIs, but I would like to study them. I also need to integrate the transformation process into a WPF application.
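
    Whichever API ends up doing the rendering, the deformation itself is just vertex arithmetic. A minimal numpy sketch (an illustration, not WPF/Direct3D code), assuming the sphere is an N x 3 array of vertex positions: pull every vertex within a radius of the picked point toward the sphere's centre, weighted by a smooth falloff, so the surface stays connected.

        import numpy as np

        def dent_sphere(vertices, center, hit_point, radius=0.3, depth=0.2):
            """Return a copy of vertices with a smooth dent around hit_point."""
            v = np.asarray(vertices, dtype=float).copy()
            dist = np.linalg.norm(v - hit_point, axis=1)
            mask = dist < radius
            # Cosine falloff: 1 at the hit point, 0 at the edge of the dent.
            falloff = 0.5 * (1.0 + np.cos(np.pi * dist[mask] / radius))
            inward = center - v[mask]
            inward /= np.linalg.norm(inward, axis=1, keepdims=True)
            v[mask] += inward * (depth * falloff)[:, None]
            return v

        # Hypothetical use: dent a random unit sphere where a mouse ray hit (0, 0, 1).
        theta = np.random.uniform(0.0, np.pi, 1000)
        phi = np.random.uniform(0.0, 2.0 * np.pi, 1000)
        pts = np.column_stack((np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)))
        dented = dent_sphere(pts, center=np.zeros(3), hit_point=np.array([0.0, 0.0, 1.0]))

    In Direct3D or OpenGL the same update can be written into a dynamic vertex buffer (or done in a vertex shader), which avoids rebuilding the whole mesh object the way MeshGeometry3D requires.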

    Read the article

  • What is the status in 3D technology for TVs [closed]

    - by Eduardo Molteni
    Maybe this is off-topic for Programmers.SE, but I don't know where else to ask, and I trust my fellow programmers :) I've recently been loosely following new 3D technologies, and I thought glasses-free 3D TV was about to take over the scene, since it makes much more sense for TVs (I can't imagine having glasses for the whole family, kids breaking them, etc.). But it seems I'm wrong, since LG, Sony and Samsung keep fighting over glasses (active vs. passive). What is the current and future status of 3D technology for TVs?

    Read the article

  • Calculating 3D camera positions from a video

    - by Geotarget
    I need to calculate the 3D camera position and rotation for each frame in a given video. This is typically used for motion-tracking, and to insert 3D objects into a video. I'm currently using VideoTrace to calculate this for me, and I'm getting the data exported as a 3ds Max MAXScript file. However, when I try to use the 3D camera rotations, I'm getting strange errors in my 3D calculations, as if there is an error with the 3x3 rotation matrices. Can you spot any error with the data itself? Or is it my other calculations that are erroneous?

        frame 1
        rotation = (matrix3 [-0.011938, 0.756018, -0.654442] [-0.382040, -0.608284, -0.695727] [-0.924068, 0.241718, 0.296091] [0, 0, 0]).rotationpart
        position = [-0.767177, 0.308723, -0.232722]
        fov = 57.352135

        frame 2
        rotation = (matrix3 [-0.460922, -0.726580, -0.509541] [-0.200163, 0.644491, -0.737947] [0.864572, -0.238145, -0.442495] [0, 0, 0]).rotationpart
        position = [-0.856630, 0.198654, -0.243853]
        fov = 57.352135
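
    One quick check that can rule the data in or out (not part of the original post; it assumes the matrix3 rows are read row-major): a valid rotation matrix must satisfy R·Rᵀ ≈ I and det(R) ≈ +1. A determinant near -1 would mean a handedness flip, a common source of "strange errors" when mixing coordinate conventions.

        import numpy as np

        def check_rotation(R, tol=1e-3):
            """Return (is_orthonormal, determinant) for a 3x3 matrix."""
            R = np.asarray(R, dtype=float)
            orthonormal = np.allclose(R @ R.T, np.eye(3), atol=tol)
            return orthonormal, np.linalg.det(R)

        # Frame 1 rows exactly as listed above, read row-major.
        R1 = [[-0.011938,  0.756018, -0.654442],
              [-0.382040, -0.608284, -0.695727],
              [-0.924068,  0.241718,  0.296091]]
        print(check_rotation(R1))

    If the matrices pass this test, the more likely culprits are a row-major versus column-major mix-up, or 3ds Max's Z-up coordinate system versus the target engine's Y-up one.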

    Read the article

  • Java Applet or Unity3D for Cross-Platform 3D Surveying App

    - by Jake M
    Do you think a Java applet or a Unity3D application is the better option for making a cross-browser 3D web app? I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites, including contours, water pipeline locations, buildings, etc. The application must work on Windows desktop, Android, iOS and Windows Phone, which is why I am tending towards a web app as opposed to a cross-platform smartphone library (like MoSync or Marmalade). The 3D environments will be navigable (by dragging around) and will contain simple (not detailed) 3D objects like buildings, mountains, pipelines, etc. One thing I know is that WebGL is out, because it doesn't work in IE and has limited support on smartphones (am I correct to completely disregard WebGL?). Will future smartphone browsers continue to support Java applets? Also, is it really true that I can write ONE application/game in Unity3D and simply compile it to run on Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android? Would you suggest the Unity3D application path or the Unity3D Web Player path? Concerning Unity3D, there's one thing I am unsure about: do all Unity3D features work on iOS and Android?

    Read the article
