Search Results

Search found 2992 results on 120 pages for 'blender 3d'.

Page 26/120

  • 3D PDF with WebGL invented by members of Developpez.com, who are founding the company SPACEGOO

    3D PDF with WebGL invented by members of Developpez.com, who create a new navigation mode and found SPACEGOO. The standardization of the WebGL technology in the summer of 2010, and its implementation in recent browsers (Firefox 4, Chrome 7, Opera 11.50, WebKit), made it simple to do quality 3D rendering in the browser. The 3D scene is rendered in an HTML5 canvas, which makes it possible to integrate these 3D elements into a "standard" site, developed in PHP and CSS for example. While the technology is used mainly for games, demos and the visualization of 3D objects, members of Developpez have founded the company SPACEGOO and had the idea of using WebGL to represent...

    Read the article

  • CommonFilter 0.3D now released on CodePlex.

    CommonFilter is a subset of the CommonData project, containing just the functions and unit tests for filtering user input. The functions include filters for: input of upper-case and lower-case alpha characters; input of numeric fields; and input of text containing HTML markup, checking that it only contains permitted markup. The general functions are available both in a form that silently drops non-permitted characters and in a try-parse format.

    Read the article

  • XNA move from start position to target position exactly in 3D

    - by robasaurus
    If I have a list of positions that map out a path a character should follow, what would be the best way to move at a constant speed to each position, making sure the character lands exactly at each position before moving on to the next? For example, the character is at position A, and we then queue up position B and position C. The character cannot move towards position C until it reaches position B exactly. It would be great if the solution worked at slower frame rates/update speeds as well.
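
    One way to guarantee that the character lands exactly on each waypoint is to clamp each frame's movement to the remaining distance and carry any leftover movement into the next segment. A minimal sketch (plain Python with hypothetical names, not XNA code), assuming positions are 3-tuples and dt is the frame time in seconds:

        import math

        def step_along_path(position, path, speed, dt):
            """Advance position toward the next waypoint in path at constant speed.

            A waypoint is removed from path only once it has been reached exactly."""
            remaining = speed * dt
            while path and remaining > 0.0:
                target = path[0]
                to_target = [t - p for t, p in zip(target, position)]
                dist = math.sqrt(sum(c * c for c in to_target))
                if dist <= remaining:
                    # Land exactly on the waypoint, then spend the leftover movement on the next one.
                    position = list(target)
                    path.pop(0)
                    remaining -= dist
                else:
                    position = [p + c / dist * remaining for p, c in zip(position, to_target)]
                    remaining = 0.0
            return position

    Carrying the leftover movement forward is what keeps the speed constant at low frame rates, since a single update may cross one or more waypoints.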

    Read the article

  • 3D open source physics engine suitable for mobile platforms (Android and iOS)

    - by lukeluke
    I have done some research and found that Bullet, ODE, Newton and some others are open source physics engines that should be portable enough (but I have never tried to compile/use any of them on phones). I am writing my games for mobile platforms in C++, so the engine should be C or C++. I need a fast engine, since mobile platforms have limited resources. I need a free engine. A good design would be nice to have too. Which engine is best suited for my task? What I really would like to hear from you is your direct experience. Documentation and support (for example, a forum or an IRC channel) is a really important aspect to take into consideration.

    Read the article

  • Calculate initial velocity of a 3d vector-based projectile

    - by Frotty
    Okay, so I have a projectile with two vectors, position and velocity. I now want to calculate the initial velocity it needs in order to reach a specific point on the map. Or rather, how high does the starting z-velocity have to be (because x and y are already determined by a speed variable) in order for the projectile to hit the marked position? The projectile is influenced by a constant gravity vector. All calculations are done 32 times per second. I want this because I don't want to use a parabola function, so the projectile can still be influenced by other sources that simply add some velocity. I didn't really find anything on this topic and would be glad for any helpful answer. Thanks.
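
    With the horizontal speed fixed and gravity constant, the flight time follows from the horizontal distance, and the required vertical velocity follows from the flight time. A minimal sketch (Python, hypothetical names; a continuous-time formula that closely approximates the 32-updates-per-second integration), assuming the target is not directly above the start:

        import math

        def initial_z_velocity(start, target, horizontal_speed, gravity):
            """Return the starting z-velocity needed to hit target from start.

            gravity is the magnitude of the downward acceleration along z."""
            dx = target[0] - start[0]
            dy = target[1] - start[1]
            dz = target[2] - start[2]
            t = math.hypot(dx, dy) / horizontal_speed   # flight time fixed by the x/y speed
            # dz = vz * t - 0.5 * g * t^2, solved for vz
            return dz / t + 0.5 * gravity * t

    Because the game integrates in discrete steps, the projectile may land slightly off; a small correction for the integrator, or re-aiming mid-flight, covers that while still letting other sources add velocity.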

    Read the article

  • Making video from 3D graphics in OpenGL

    - by MVTC
    What are some of the preferred methods or libraries for creating video from an OpenGL graphics simulation? For example, I want to create a visualization (video) of an N-body gravity simulation by rendering non-real-time OpenGL frames. The simulation is already coded; I just don't know how to convert it to video. EDIT: I am also interested in providing the following functionality: The user can adjust parameters, including the time step between captured frames, and then initiate the simulation. The user waits for the simulation to complete, and then can watch the results. The user is able to increase or decrease the playback speed of the simulation, where in slow motion more frames are used, i.e. you see higher-resolution time steps, and when the speed is increased you see lower-resolution time steps at a higher rate, but the number of frames per second shown on the screen stays constant.
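
    One common approach is to read each rendered frame back with glReadPixels and pipe the raw pixels into an external encoder such as ffmpeg. A minimal sketch (Python with PyOpenGL, hypothetical names), assuming an ffmpeg binary on the PATH and a current GL context of the given size:

        import subprocess
        from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

        def open_encoder(width, height, fps, filename):
            """Start ffmpeg reading raw RGB frames from stdin and writing a video file."""
            cmd = [
                "ffmpeg", "-y",
                "-f", "rawvideo", "-pix_fmt", "rgb24",
                "-s", f"{width}x{height}", "-r", str(fps),
                "-i", "-",            # frames arrive on stdin
                "-vf", "vflip",       # OpenGL rows are bottom-up
                "-pix_fmt", "yuv420p", filename,
            ]
            return subprocess.Popen(cmd, stdin=subprocess.PIPE)

        def capture_frame(encoder, width, height):
            """Read the current framebuffer and feed it to the encoder."""
            pixels = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
            encoder.stdin.write(pixels)

    After the last frame, close encoder.stdin and wait() on the process so ffmpeg can finalize the file. Since the frames are rendered offline, the playback-speed feature reduces to choosing which simulation time steps get captured, independently of the fixed output frame rate.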

    Read the article

  • Unity 3D Display Lag

    - by ec2011
    I find that there is quite a lot of lag when using Unity after upgrading to Precise. It seems that when I first start up the computer, everything is fine, but if I put the computer on standby for a while and then come back to it, the display is extremely slow and there is a significant delay in visual effects that makes the system rather cumbersome to use. I tried to report this as a bug, but the bug reporting tool directed me to ask it as a question here first. Thanks for any ideas.

    Read the article

  • Problems with 3D transformation - (SharpDX)

    - by Morphex
    First of all, I have been trying to get this right for a couple of days already; I have read so much info and still fail miserably to understand this. So I am going to tell you that even though I have done a fair amount of research myself, I failed to implement it. I am trying to create a generic camera class for a game engine of sorts - for research purposes only - and the thing is I have no idea how to go about it. I have read about quaternions and matrices, but when it comes to the actual implementation I suck at it. SharpDX already has matrices and quaternions implemented, so no big deal on the math behind it. How in the world would I go about creating a camera? I have seen so many camera examples and still can't make one that works as expected. I would like to implement different types too (orbital, 6DoF, FPS). So what is needed for a camera? Up, Forward and Right vectors are needed, I read, also a quaternion for rotations, and View and Projection matrices. I understand that an FPS camera, for instance, only rotates around the world Y axis and the camera's Right axis, a 6DoF camera always rotates around its own axes, and the orbital one just translates by a set distance and always looks at a fixed target point. The concepts are there; implementing this is just not trivial for me. Can anyone point out what I am missing, what I got wrong? I would really appreciate a tutorial, some piece of code, or just a plain explanation of the concepts. Thank you for reading, a frustrated coder.
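
    Whatever camera type sits on top, the core piece is a view matrix built from the camera position and an orthonormal Right/Up/Forward basis. A minimal sketch of the math (Python with NumPy, hypothetical names and a column-vector, right-handed convention as assumptions, not SharpDX's API):

        import numpy as np

        def look_at(eye, target, world_up=(0.0, 1.0, 0.0)):
            """Build a right-handed view matrix from an eye position and a target point."""
            eye, target, world_up = map(np.asarray, (eye, target, world_up))
            forward = target - eye
            forward = forward / np.linalg.norm(forward)
            right = np.cross(forward, world_up)
            right = right / np.linalg.norm(right)
            up = np.cross(right, forward)

            view = np.identity(4)
            view[0, :3] = right
            view[1, :3] = up
            view[2, :3] = -forward
            view[:3, 3] = -view[:3, :3] @ eye   # rotate, then move the eye to the origin
            return view

    An FPS camera recomputes the forward vector from yaw and pitch each frame, a 6DoF camera keeps an orientation quaternion and derives the three axes from it, and an orbital camera keeps the target fixed and moves the eye on a sphere around it; all three feed the same kind of view matrix.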

    Read the article

  • 3D navigation soon to be integrated into Safari? Apple patent filings reveal a new feature for the browser

    3D navigation soon to be integrated into Safari? Apple patent filings reveal a new feature for the browser. 3D navigation could soon be possible in the Safari browser. According to several Apple patent applications spotted by the specialist site Patently Apple, the company is said to be working on a new feature for Safari. The patents describe a 3D version of Safari that would let users stack bookmarks, e-mails, documents and applications into 3D shapes. [IMG]http://rdonfack.developpez.com/Safari3D.PNG[/IMG] Several uses of the 3D stacking system that it could provide...

    Read the article

  • How to determine which side of a 3D plane is showing?

    - by Josh Santangelo
    This is a 3d n00b question. I'm working on a WPF control which implements the basics of Silverlight's PerspectiveTransform feature, allowing a 2D plane to be rotated on any of the three axes. It works pretty well. However I'm a little stuck on the math required to determine whether or not the back of the plane is showing. My naive code for figuring that out now is:

        bool isBackShowing = Math.Abs(RotationX) > 90 && Math.Abs(RotationY) < 90;
        if (!isBackShowing)
        {
            isBackShowing = Math.Abs(RotationX) < 90 && Math.Abs(RotationY) > 90;
        }

    However, this fails when the rotation is between +-270 and +-360 on either axis. The underlying transform is using a Quaternion object to do the actual rotation, and that has nice Axis and Angle properties, so I'm guessing I could just use that if I knew how.
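
    A test that keeps working across full turns is to rotate the plane's front normal by the same X/Y rotations and check which way it ends up pointing relative to the viewer. A minimal sketch (Python with NumPy, hypothetical names), assuming the viewer looks down -Z, angles are in degrees, and an X-then-Y rotation order (the order is an assumption and may need to match the control's):

        import numpy as np

        def is_back_showing(rotation_x_deg, rotation_y_deg):
            """True when the rotated plane's front normal points away from the viewer."""
            ax, ay = np.radians([rotation_x_deg, rotation_y_deg])
            rot_x = np.array([[1.0, 0.0, 0.0],
                              [0.0, np.cos(ax), -np.sin(ax)],
                              [0.0, np.sin(ax),  np.cos(ax)]])
            rot_y = np.array([[ np.cos(ay), 0.0, np.sin(ay)],
                              [ 0.0,        1.0, 0.0       ],
                              [-np.sin(ay), 0.0, np.cos(ay)]])
            normal = rot_y @ rot_x @ np.array([0.0, 0.0, 1.0])   # plane faces +Z before rotating
            return normal[2] < 0.0   # pointing into the screen means the back faces the viewer

    The same idea works with the Quaternion directly: rotate (0, 0, 1) by it and test the sign of the resulting z component, which avoids the angle-range special cases entirely.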

    Read the article

  • signed angle between two 3d vectors with same origin within the same plane? recipe?

    - by Advanced Customer
    I was looking through the web for an answer, but it seems like there is no clear recipe for it. What I need is the signed angle of rotation between two vectors Va and Vb lying within the same 3D plane and having the same origin, knowing that: the plane containing both vectors is arbitrary and is not parallel to XY or any other cardinal plane; Vn is the plane normal; both vectors along with the normal have the same origin O = { 0, 0, 0 }; Va is the reference for measuring the left-handed rotation around Vn. The angle should be measured in such a way that, if the plane were the XY plane, Va would stand for its X-axis unit vector. I guess I should perform a kind of coordinate space transformation by using Va as the X-axis and the cross product of Vb and Vn as the Y-axis, and then just use some 2D method like atan2() or something. Any ideas? Formulas?
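
    The usual recipe needs no change of basis: the atan2 of the cross product's component along the plane normal against the dot product gives a signed angle directly. A minimal sketch (Python with NumPy, hypothetical names; it returns a counter-clockwise angle about Vn, so negate the result if a left-handed convention is wanted):

        import numpy as np

        def signed_angle(va, vb, vn):
            """Signed angle in radians (-pi..pi) from va to vb, measured about the plane normal vn."""
            va, vb, vn = map(np.asarray, (va, vb, vn))
            vn = vn / np.linalg.norm(vn)   # only the normal needs normalizing
            return np.arctan2(np.dot(np.cross(va, vb), vn), np.dot(va, vb))

    The lengths of Va and Vb cancel inside atan2, so they do not need to be unit vectors.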

    Read the article

  • How to color a mesh with values at the vertices in WPF 3D?

    - by Christo
    We've got a sphere which we want to display in 3D and color according to a function that depends on spherical coordinates. The sphere was triangulated using a regular grid in (theta, phi), but this produced a lot of small triangles near the poles. In an attempt to reduce the number of triangles at the poles, we've changed our mesh generation to produce more evenly sized triangles over the surface. The first triangulation method had the advantage that we could easily create a texture and drape it over the surface. It seems that in WPF it isn't possible to assign colors to vertices the way one would in OpenGL or Direct3D. With the second triangulation method it isn't apparent how to go about generating the texture and setting the texture coordinates, since the vertices aren't aligned to a grid anymore. Maybe it would be possible to create a linear texture containing a color for each vertex, but then how will that affect the coloring? Will it still render smoothly over the triangle surfaces, as one would expect from per-vertex coloring?
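
    One workaround that does not depend on a grid is to bake the scalar value into the texture coordinate: build a one-dimensional gradient strip and map each vertex's value to a u coordinate into it. The rasterizer then interpolates u across each triangle, which looks like smooth per-vertex coloring as long as the gradient itself is smooth. A minimal sketch of the data preparation (Python with NumPy, hypothetical names; the WPF side is not shown):

        import numpy as np

        def gradient_strip(width=256, cold=(0, 0, 255), hot=(255, 0, 0)):
            """Return a width x 1 RGB strip interpolating from cold to hot."""
            t = np.linspace(0.0, 1.0, width)[:, None]
            return ((1.0 - t) * np.array(cold) + t * np.array(hot)).astype(np.uint8)

        def texture_coords(vertex_values):
            """Map each vertex's scalar value to a (u, v) pair into the strip."""
            v = np.asarray(vertex_values, dtype=float)
            span = v.max() - v.min()
            u = (v - v.min()) / (span if span > 0.0 else 1.0)   # guard against a constant field
            return np.column_stack([u, np.full_like(u, 0.5)])

    In WPF the strip would typically become an ImageBrush on a DiffuseMaterial, with the (u, v) pairs going into MeshGeometry3D.TextureCoordinates; interpolation across a triangle is then linear in u, so sharp color changes need either a finer mesh or a wider gradient.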

    Read the article

  • How to get a 3D picture in an iphone app?

    - by wolverine
    In an application, I saw that they display pictures of vehicles. What was amazing was that when we touch and swipe on the picture, it rotates in a 3D way, left and right. And from the front view we can rotate around to see its back view as well. It is a very good feature and I was trying to replicate it, but couldn't get an idea of how and where to start. My doubts are: What's the actual format of the thing? It surely isn't just a picture. How do they get it to rotate? Could someone give me an idea of where I should start or what I should look at?
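
    Effects like this are often not real-time 3D at all but a prerendered turntable: a sequence of photos of the object taken at fixed angular steps, with the horizontal swipe distance selecting which frame to display. A minimal sketch of that mapping (plain Python, hypothetical names; the touch handling itself would live in the UIKit gesture code):

        def frame_for_swipe(start_frame, swipe_dx, view_width, frame_count):
            """Pick a turntable frame so that dragging across the whole view spins one full turn."""
            frames_per_pixel = frame_count / float(view_width)
            offset = int(round(swipe_dx * frames_per_pixel))
            return (start_frame + offset) % frame_count

    The alternative is a real 3D model rendered with OpenGL ES and rotated by the pan gesture; the image-sequence trick is simpler and is a plausible explanation of what that app does.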

    Read the article

  • From a 3D modeler to an iPhone app - what are best practices?

    - by bonkey
    I am quite new to 3D programming on iPhone and I would like to ask for hints about organizing work between designers and programmers on that platform. Most of all: what kind of tools, libraries or plugins cooperate best on both sides. Although I consider the question as looking for general best-practice advice, I would also like to find a solution for my current situation, which I describe further down. I've already done some research and found the following libraries: SIO2; Khronos OpenGL ES 1.x SDK for PowerVR MBX; Unity3D; Oolong Game Engine. I've also checked modellers or plugins for them giving output formats readable by those tools: obj2opengl (Wavefront OBJ to plain header file converter); Blender with the SIO2 exporter; iphonewavefrontloader; Cheetah3D; PVRGeoPOD for 3DS / Maya. Unfortunately I still have no clear vision of how to combine any of these tools to get a designer's work into an application. I am looking for a way of getting it in as complete a form as possible: models, lights, scenes, textures, maybe some simple animations (but rather no game-like physics), but I still have nothing. And here comes my situation: I would like to find the right way to present a few (but quite complicated) models from a single scene. The designers mostly use 3DS Max 9, sometimes 10 (which partly rules out PVRGeoPOD), and are rather reluctant to switch to something else, but if there's no other choice I suppose it would be possible. The basic rule I've already found in some places, "use Wavefront OBJ", does not always work; I haven't got any acceptable results with production files, actually. The only things that worked fine were some mere examples. Some of my models imported incomplete, sometimes exporters hung or generated enormous files not really useful on an iPhone, and sometimes enabling textures (with GL_TEXTURE_2D) just crashed the app. I know it might be a problem with models that are too complicated, or my own mistakes coming from inexperience, but I am not able to find any guidelines for the process that would streamline cooperation with designers. I am even willing to write some things from scratch in pure OpenGL ES if necessary, but I would like to avoid what can be avoided and get the most from the model files. The best would be the effect I saw in some SIO2 tutorials: export, build & go. But at the moment I've got only "import, wrong", "import, where are the textures?", "import, that almost looks fine, export, hang" and so on... Is it really so frustrating, or have I just missed something obvious? Can anybody share his/her experience in this field and tell me what kind of software they use for "making things happen"?

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D on GNOME 3 to work in my VirtualBox - I have been poring over Internet and forum posts, but to no avail. Here's what I've done so far: VirtualBox 4.1.4r74921 on Windows 7; installed Ubuntu Desktop 11.10 (32-bit); enabled 3D acceleration; allocated 1.5 GB of RAM; allocated 50 MB of video memory (hope this is not the culprit); installed Guest Additions 4.1.4; did apt-get update and apt-get upgrade; booted back into Ubuntu - it falls back to Unity 2D. Shared folders and mouse integration all work, so the Guest Additions are properly installed. I tried the command /usr/lib/nux/unity_support_test -p and below is the output: OpenGL vendor string: Mesa Project; OpenGL renderer string: Software Rasterizer; OpenGL version string: 2.1 Mesa 7.11; Not software rendered: no; Not blacklisted: yes; GLX fbconfig: yes; GLX texture from pixmap: no; GL npot or rect textures: yes; GL vertex program: yes; GL fragment program: yes; GL vertex buffer object: yes; GL framebuffer object: yes; GL version is 1.4+: yes; Unity 3D supported: no. I am trying to find out what the "no" means but cannot find any good answers. Host: Intel Core i5 processor, 4 GB of RAM, display adapter NVIDIA GeForce 8400GS. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?

    Read the article

  • Obtaining a world point from a screen point with an orthographic projection

    - by vargonian
    I assumed this was a straightforward problem but it has been plaguing me for days. I am creating a 2D game with an orthographic camera. I am using a 3D camera rather than just hacking it because I want to support rotating, panning, and zooming. Unfortunately the math overwhelms me when I'm trying to figure out how to determine if a clicked point intersects a bounds (let's say rectangular) in the game. I was under the impression that I could simply transform the screen point (the clicked point) by the inverse of the camera's View * Projection matrix to obtain the world coordinates of the clicked point. Unfortunately this is not the case at all; I get some point that seems to be in some completely different coordinate system. So then as a sanity check I tried taking an arbitrary world point and transforming it by the camera's View*Projection matrices. Surely this should get me the corresponding screen point, but even that didn't work, and it is quickly shattering any illusion I had that I understood 3D coordinate systems and the math involved. So, if I could form this into a question: How would I use my camera's state information (view and projection matrices, for instance) to transform a world point to a screen point, and vice versa? I hope the problem will be simpler since I'm using an orthographic camera and can make several assumptions from that. I very much appreciate any help. If it makes a difference, I'm using XNA Game Studio.
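
    The step that most often goes missing is converting the pixel position into normalized device coordinates (-1..1) before applying the inverse view-projection, and converting back the other way. A minimal sketch (Python with NumPy, hypothetical names and a column-vector convention; the NDC z can be chosen freely here because the camera is orthographic):

        import numpy as np

        def screen_to_world(px, py, viewport_w, viewport_h, view, proj, ndc_z=0.0):
            """Unproject a pixel through the inverse of proj @ view."""
            ndc = np.array([
                2.0 * px / viewport_w - 1.0,
                1.0 - 2.0 * py / viewport_h,   # pixel y grows downward, NDC y grows upward
                ndc_z,
                1.0,
            ])
            world = np.linalg.inv(proj @ view) @ ndc
            return world[:3] / world[3]

        def world_to_screen(point, viewport_w, viewport_h, view, proj):
            """Project a world point back to pixel coordinates (the sanity check)."""
            clip = proj @ view @ np.array([*point, 1.0])
            ndc = clip[:3] / clip[3]
            return ((ndc[0] + 1.0) * 0.5 * viewport_w,
                    (1.0 - ndc[1]) * 0.5 * viewport_h)

    XNA's Viewport.Unproject and Viewport.Project do the same bookkeeping with its row-vector matrices, so comparing against them is a quick way to validate the camera's View and Projection.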

    Read the article

  • Missing z-axis rotation for transforming between two vectors

    - by Steve Baughman
    I'm trying to rotate a cube so that it's facing up, but am getting hung up on the final implementation details. It now reliably will rotate the x,y axis to the correct side, but the z-axis is never rotating (see the before and after photos). When I'm using the code below I always get '0' for my rotationVector.z. What am I missing here?

        // Define lookAt vector
        lookAtVector = GLKVector3Make(0, 0, 1);

        // Define axes vectors
        axes[0] = GLKVector3Make(0, 0, 1);
        axes[1] = GLKVector3Make(-1, 0, 0);
        axes[2] = GLKVector3Make(0, 1, 0);
        axes[3] = GLKVector3Make(1, 0, 0);
        axes[4] = GLKVector3Make(0, -1, 0);
        axes[5] = GLKVector3Make(0, 0, -1);

        CGFloat highest_dot = -1.0;
        GLKVector3 closest_axis;
        for (int i = 0; i < 6; i++) {
            // multiply cube's axes by existing matrix
            GLKVector3 axis = GLKMatrix4MultiplyVector3(matrix, axes[i]);
            CGFloat dot = GLKVector3DotProduct(axis, lookAtVector);
            if (dot > highest_dot) {
                closest_axis = axis;
                highest_dot = dot;
            }
        }
        GLKVector3 rotationVector = GLKVector3CrossProduct(closest_axis, lookAtVector);

        // Get angle between vectors
        CGFloat angle = atan2(GLKVector3Length(rotationVector), GLKVector3DotProduct(closest_axis, lookAtVector));

        // normalize the rotation vector
        rotationVector = GLKVector3Normalize(rotationVector);

        // Create transform
        CATransform3D rotationTransform = CATransform3DMakeRotation(angle, rotationVector.x, rotationVector.y, rotationVector.z);

        // add rotation transform to existing transformation
        baseTransform = CATransform3DConcat(baseTransform, rotationTransform);
        return baseTransform;

    (Images: before and after the 3D rotation.) Implementation based on this post.

    Read the article

  • Why are bugs responsible for big deficiencies in functionality given such low priority?

    - by keepitsimpleengineer
    Well, first of all, change is inevitable and mostly good. Furthermore, attempts at simplifying the user interface, such as GNOME 3 and Unity, to make Linux more inclusive hold much promise, even though they adversely affect my style of working. Additionally, though now retired, I have worked with computers for 47 years, and though I do nothing serious for others now, I still do heavy-duty things. 10.04 LTS is my big workstation, and I had three 10.10 systems for MythTV, one of which is further adapted for video and related work. The MythTV machines were on 10.10 because of a dormant bug affecting installation on 10.04. My work habits consistently use dual monitors and the Compiz cube and 3D windows, with the computing horsepower to support them. Dual monitors with separate X screens have not been functional since 11.04, and cube/3D windows are not functional in Unity and have diminished functionality in GNOME. There is a bug filed ("after upgrade to 12.04 amd64 Gnome Classic not properly draw second screen"). I have mitigated the situation somewhat by switching to Xubuntu and eschewing Unity. The question that comes to mind is why this bug is not given more attention, in that it nearly cuts functionality in half for more capable workstations. (Image: sample workspace.) Please know that I appreciate all the hard work and dedication required to pull off something as big as Ubuntu et al.

    Read the article

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine, I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent their position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1, with a heading of 0, I can apply a translation matrix to get them to walk to 1, 1, 2 fine. However, if I change their heading to 270, they still walk to 1, 1, 3, instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this? Addition: I am using 3D vectors to represent their current position and their heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting its current position at each step. So, assuming it starts at 10, 10, 10 I want it to walk as follows: 10, 10, 10 -> 10, 10, 15; 10, 10, 15 -> 5, 10, 15; 5, 10, 15 -> 5, 10, 10; 5, 10, 10 -> 10, 10, 10. My 1-unit Z translation matrix is as follows:

        [1 0 0 0]
        [0 1 0 0]
        [0 0 1 1]
        [0 0 0 1]

    My rotation matrix is as follows:

        [ 0 0 1 0]
        [ 0 1 0 0]
        [-1 0 0 0]
        [ 0 0 0 1]
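
    The missing step is to rotate the local step vector by the entity's heading before adding it to the world position (object space into world space), instead of always stepping along world Z. A minimal sketch (Python, hypothetical names; yaw about the Y axis with the heading in degrees, and the sign convention is an assumption that may need flipping):

        import math

        def step_forward(position, heading_deg, distance=1.0):
            """Advance position by distance along the entity's own heading."""
            yaw = math.radians(heading_deg)
            # Local forward is +Z; rotate it about the world Y axis by the heading.
            dx = math.sin(yaw) * distance
            dz = math.cos(yaw) * distance
            return (position[0] + dx, position[1], position[2] + dz)

    Walking the square then becomes four legs of five steps each, changing only the heading between legs; expressed with matrices, the same idea is world_step = rotation * local_step applied before the translation, rather than rotation and translation used independently.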

    Read the article

  • How do you dynamically allocate a contiguous 3D array in C?

    - by Derek
    In C, I want to loop through an array in this order:

        for (int z = 0; z < NZ; z++)
            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    arr3D[x][y][z] = 100;

    How do I create this array in such a way that arr3D[0][1][0] comes right before arr3D[0][2][0] in memory? I can get an initialization to work that gives me "z-major" ordering, but I really want a y-major ordering for this 3D array. This is the code I have been trying to use:

        char *space;
        char ***Arr3D;
        int y, z;
        ptrdiff_t diff;

        space = malloc(X_DIM * Y_DIM * Z_DIM * sizeof(char));
        Arr3D = malloc(Z_DIM * sizeof(char **));
        for (z = 0; z < Z_DIM; z++) {
            Arr3D[z] = malloc(Y_DIM * sizeof(char *));
            for (y = 0; y < Y_DIM; y++) {
                Arr3D[z][y] = space + (z * (X_DIM * Y_DIM) + y * X_DIM);
            }
        }

    Read the article

  • 3D Spherical Terrain with an 8-mesh sphere. The edges of the meshes are obviously seen and I'm not sure why

    - by Justin808
    Hi :) I'm working in Unity3D, but my issue is with 3D meshes. I'm hoping someone here can help or point me in the right direction. I have two versions of the code, http://www.pasteit4me.com/695002 (old) and http://www.pasteit4me.com/690003 (new). The old code makes a single-mesh sphere and creates a terrain on it. The new code makes an 8-mesh sphere and creates a terrain on it. In the new version the edges of the meshes are obviously visible and I'm not sure why. It looks like the edges are adjusted too much, almost 2-3 times more than they should have been. GenerateB() in the old code and Generate() in the new code create the sphere. MakeTerrain() in both creates the terrain. If I don't run the MakeTerrain() function, the new sphere looks like a solid mesh. I'm not sure where to start looking in the MakeTerrain() function in the new code to solve the issue :-/ Any ideas? An image of the issue is at http://img28.imageshack.us/img28/3784/screenshot20100611at850.png.

    Read the article
