Search Results

Search found 1000 results on 40 pages for 'scene'.


  • Aero to Accelerate Android Application Development

    The mobile application development arena has seen the debut of a new handset that capitalizes on the rapid strides recently made in the Android application development scene. The Aero is an Android handset unveiled by AT&T and developed by Dell. The smartphone lays to rest the rumours that have circulated around Dell's launch of its first Android device.

    Read the article

  • Exposure Blending with digiKam

    Scribbles and Snaps: "One way to solve this problem is to use exposure blending. This technique involves taking several shots of the same scene or subject with different exposures and then fusing these shots into one perfectly exposed photo."
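
    A minimal sketch of the idea in Python, assuming three aligned exposures of the same scene loaded as floating-point numpy arrays; the file names and the simple well-exposedness weighting are illustrative only, and digiKam's enfuse-based blending is considerably more sophisticated:

        import numpy as np
        import imageio.v2 as imageio

        # Load several aligned shots of the same scene taken at different exposures.
        # File names are hypothetical placeholders.
        exposures = [imageio.imread(f).astype(np.float64) / 255.0
                     for f in ("shot_under.jpg", "shot_normal.jpg", "shot_over.jpg")]

        def well_exposedness(img, sigma=0.2):
            # Weight each pixel by how close it is to mid-grey (0.5):
            # blown-out highlights and crushed shadows get small weights.
            return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).mean(axis=2, keepdims=True)

        weights = [well_exposedness(img) for img in exposures]
        total = np.sum(weights, axis=0) + 1e-12

        # Fuse: per-pixel weighted average of the exposures.
        fused = np.sum([w * img for w, img in zip(weights, exposures)], axis=0) / total
        imageio.imwrite("fused.jpg", (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8))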

    Read the article

  • 4 SEO Trick Schemes to Avoid

    With marketing focus increasingly shifting to the Internet, the usual unsavoury characters have arrived on the scene. Where there is money, there is a scam. SMEs are particularly vulnerable, as most lack the resources, the time or the knowledge to distinguish a genuine SEO company from the fraudsters.

    Read the article

  • What's the best way to start up an OpenGL context in my setup?

    - by NoobScratcher
    Would it be better to create a callback function that contains an OpenGL 3.0+ context (including viewport, matrices, etc.), or to set up OpenGL in a function called GL_StartUp and call that function from both the main loop and the callback? I want my program to show a default OpenGL scene only when the user clicks the New Game menu item in the menu bar, rather than setting one up when the program starts. I'm using Ubuntu 64-bit, GTK 3.0 and GTK OpenGL.

    Read the article

  • HTC unveils the "Windows Phone 8X" and "Windows Phone 8S", its smartphones running Windows Phone 8

    HTC unveils the "Windows Phone 8X" and "Windows Phone 8S", its smartphones running Windows Phone 8. After Samsung with its ATIV S smartphone and Nokia with its Lumia 920 and Lumia 820 handsets, it is another mobile giant's turn to reveal its devices running the Windows Phone 8 mobile operating system. At a press event it hosted in New York, HTC presented its Windows Phone 8X and 8S smartphone models. The company's CEO, Peter Chou, took the stage in person alongside the number one of...

    Read the article

  • Windows Embedded 8 will ship in March 2013; Microsoft publishes the roadmap and a preview of its embedded OS, based on Windows 8

    Windows Embedded Standard 8: the second CTP is available. More than 6,000 developers are already using the embedded OS derived from Windows 8. Update from 07/06/12: Yesterday, at the Computex trade show in Taiwan, Steven Guggenheimer, vice president of Microsoft's OEM division, took the stage to present Microsoft's latest technologies for "intelligent systems" (devices that embed an OS and can connect to specialized terminals or to online services such as remote servers or databases). He took the opportunity to announce the availability of the second community technology preview (CTP) of Windows Embedded Standard 8...

    Read the article

  • Seven new security flaws discovered in OpenSSL, fixes are available

    Seven new security flaws have been discovered in OpenSSL, and fixes are available. OpenSSL, the open-source encryption library widely used on the Web, is back in the spotlight after the Heartbleed flaw, which caused a genuine outcry across the Web. Seven new vulnerabilities have been found in the library, one of which, flagged as critical, makes it possible to eavesdrop on secure TLS/SSL communications. According to the "CVE-2014-0224" used...

    Read the article

  • The need to reduce mesh count

    - by OJW
    In Panda3D, I load a model and place 10,000 references to it in the scene graph. It runs at (say) 2 Hz. I load a 3D model containing 10,000 copies of that exact same object, and it runs at (say) 60 Hz. The same goes for using the flattenStrong() command, which is effectively the same thing but done at runtime. So the question is: is this behaviour a peculiarity of Panda3D, or is it a fundamental law that applies to all game engines?
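
    A minimal Panda3D sketch of the two cases being compared, assuming a placeholder model file; the per-node version issues roughly one draw call per copy, while flattenStrong() merges the static copies into a handful of large batches:

        from direct.showbase.ShowBase import ShowBase
        import random

        class Demo(ShowBase):
            def __init__(self, flatten=True):
                ShowBase.__init__(self)
                holder = self.render.attachNewNode("copies")
                model = self.loader.loadModel("box.egg")  # hypothetical model file
                for _ in range(10000):
                    copy = model.copyTo(holder)  # one scene-graph node (and batch) per copy
                    copy.setPos(random.uniform(-50, 50),
                                random.uniform(-50, 50),
                                random.uniform(-50, 50))
                if flatten:
                    # Collapse the static copies into a few large chunks of geometry,
                    # trading scene-graph flexibility for far fewer draw calls.
                    holder.flattenStrong()

        Demo(flatten=True).run()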

    Read the article

  • Unity 4.5 is available, bringing OpenGL ES 3 support, more natural 2D physics rendering and optimizations

    Unity 4.5 is available: the engine receives a large number of fixes and improvements. OpenGL ES 3.0, improved 2D physics, improved scene loading, a better shader workflow... While we recently got a look at what Unity 5 will be, the development team now brings us version 4.5 of the engine. It fixes close to 450 bugs and also adds new features to the engine: OpenGL ES 3.0 support for iOS (starting with devices...

    Read the article

  • SEO Tips For Yahoo!

    Yahoo! was the most popular search engine until Google entered the scene. Yahoo! said goodbye to its throne in the face of Google's amazingly accurate results.

    Read the article

  • Software to capture 3d geometry?

    - by user712092
    Programs I found: I found these programs for capturing an OpenGL 3D scene: 3D Ripper (OpenGL and D3D geometry capture; there are some solved problems with 3D Ripper), GLIntercept (captures OpenGL function calls), OpenGL Extractor (captures 3D geometry; should work as a plugin for GLIntercept), and another tool to capture OpenGL 3D data. EDIT: There is also HijackGL, which changes how a scene is rendered, so it can probably be used to capture geometry; it is backed by an academic paper. It is just a nice program, not related to what I want, I think (or it might be hard to change it for what I want, because that would require programming). 3D Ripper captures geometry, textures and shaders; OpenGL Extractor captures just geometry. General questions about such programs: What is your experience with these programs? Which of them would you recommend? Do you know other such programs? Were there any problems with them, or are there problems with them in general? Are there programs which work best overall, or is it specific to certain 3D applications? What I need to do: I am looking for a program which can capture 3D geometry for study purposes, and also for a program to capture 3D animation (frames of 3D animation). I have tried only 3D Ripper, because the application I am trying to capture data from uses Direct3D. 3D Ripper works with Direct3D 9 at the earliest, and this application uses Direct3D 6. Are there applications which work with older versions of Direct3D? Thank you very much. :) (I was verbose in the link names because I want them to be indexed better by search engines.)

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.
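
    A compact numpy sketch of the palette composition described at the start of the question, assuming joints are ordered parents-before-children and that matrices multiply column vectors (the same right-to-left convention used above); all names are placeholders:

        import numpy as np

        def build_skinning_palette(parent, inv_bind, current):
            """parent[j]   : index of joint j's parent, or -1 for the root
               inv_bind[j] : Bj^-1, inverse bind transform relative to the parent
               current[j]  : Cj, interpolated current pose relative to the parent
               Returns K_wj = Cj_w * Bj_w^-1 for every joint j."""
            n = len(parent)
            world_inv_bind = [None] * n
            world_current = [None] * n
            palette = [None] * n
            for j in range(n):
                if parent[j] == -1:
                    world_inv_bind[j] = inv_bind[j]                 # Bj_w^-1 = Bj^-1
                    world_current[j] = current[j]                   # Cj_w    = Cj
                else:
                    p = parent[j]
                    world_inv_bind[j] = inv_bind[j] @ world_inv_bind[p]   # Bj^-1 Bj-1^-1 ... B0^-1
                    world_current[j] = world_current[p] @ current[j]      # C0 C1 ... Cj
                palette[j] = world_current[j] @ world_inv_bind[j]         # K_wj
            return np.array(palette)

    A skinned vertex is then the weighted sum of palette[j] applied to it over its influencing joints, which is exactly what the vertex shader in the question does.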

    Read the article

  • Day 6 - Game Menuing Woes and Future Screen Sneak Peeks

    - by dapostolov
    So, after my last post on Day 5 I dabbled with my game class design. I took the approach where each game objects is tightly coupled with a graphic. The good news is I got the menu working but not without some hard knocks and game growing pains. I'll explain later, but for now...here is a class diagram of my first stab at my class structure and some code...   Ok, there are few mistakes, however, I'm going to leave it as is for now... As you can see I created an inital abstract base class called GameSprite. This class when inherited will provide a simple virtual default draw method:        public virtual void DrawSprite(SpriteBatch spriteBatch)         {             spriteBatch.Draw(Sprite, Position, Color.White);         } The benefits of coding it this way allows me to inherit the class and utilise the method in the screen draw method...So regardless of what the graphic object type is it will now have the ability to render a static image on the screen. Example: public class MyStaticTreasureChest : GameSprite {} If you remember the window draw method from Day 3's post, we could use the above code as follows...         protected override void Draw(GameTime gameTime)         {             GraphicsDevice.Clear(Color.CornflowerBlue);             spriteBatch.Begin(SpriteBlendMode.AlphaBlend);             foreach(var gameSprite in ListOfGameObjects)            {                 gameSprite.DrawSprite(spriteBatch);            }             spriteBatch.End();             base.Draw(gameTime);         } I have to admit the GameSprite object is pretty plain as with its DrawSprite method... But ... we now have the ability to render 3 static menu items on the screen ... BORING! I want those menu items to do something exciting, which of course involves animation... So, let's have a peek at AnimatedGameSprite in the above game diagram. The idea with the AnimatedGameSprite is that it has an image to animate...such as ... characters, fireballs, and... menus! So after inheriting from GameSprite class, I added a few more options such as UpdateSprite...         public virtual void UpdateSprite(float elapsed)         {             _totalElapsed += elapsed;             if (_totalElapsed > _timePerFrame)             {                 _frame++;                 _frame = _frame % _framecount;                 _totalElapsed -= _timePerFrame;             }         }  And an overidden DrawSprite...         public override void DrawSprite(SpriteBatch spriteBatch)         {             int FrameWidth = Sprite.Width / _framecount;             Rectangle sourcerect = new Rectangle(FrameWidth * _frame, 0, FrameWidth, Sprite.Height);             spriteBatch.Draw(Sprite, Position, sourcerect, Color.White, _rotation, _origin, _scale, SpriteEffects.None, _depth);         } With these two methods...I can animate and image, all I had to do was add a few more lines to the screens Update Method (From Day 3), like such:             float elapsed = (float) gameTime.ElapsedGameTime.TotalSeconds;             foreach (var item in ListOfAnimatedGameObjects)             {                 item.UpdateSprite(elapsed);             } And voila! My images begin to animate in one spot, on the screen... Hmm, but how do I interact with the menu items using a mouse...well the mouse cursor was easy enough... this.IsMouseVisible = true; But, to have it "interact" with an image was a bit more tricky...I had to perform collision detection!             
mouseStateCurrent = Mouse.GetState();             var uiEnabledSprites = (from s in menuItems                                    where s.IsEnabled                                    select s).ToList();             foreach (var item in uiEnabledSprites)             {                 var r = new Rectangle((int)item.Position.X, (int)item.Position.Y, item.Sprite.Width, item.Sprite.Height);                 item.MenuState = MenuState.Normal;                 if (r.Intersects(new Rectangle(mouseStateCurrent.X, mouseStateCurrent.Y, 0, 0)))                 {                     item.MenuState = MenuState.Hover;                     if (mouseStatePrevious.LeftButton == ButtonState.Pressed                         && mouseStateCurrent.LeftButton == ButtonState.Released)                     {                         item.MenuState = MenuState.Pressed;                     }                 }             }             mouseStatePrevious = mouseStateCurrent; So, basically, what it is doing above is iterating through all my interactive objects and detecting a rectangle collision and the object , plays the state animation (or static image).  Lessons Learned, Time Burned... So, I think I did well to start, but after I hammered out my prototype...well...things got sloppy and I began to realise some design flaws... At the time: I couldn't seem to figure out how to open another window, such as the character creation screen Input was not event based and it was bugging me My menu design relied heavily on mouse input and I couldn't use keyboard. Mouse input, is tightly bound with graphic rendering / positioning, so its logic will have to be in each scene. Menu animations would stop mid frame, then continue when the action occured again. This is bad, because...what if I had a sword sliding onthe screen? Then it would slide a quarter of the way, then stop due to another action, then render again mid-slide... it just looked sloppy. Menu, Solved!? To solve the above problems I did a little research and I found some great code in the XNA forums. The one worth mentioning was the GameStateManagementSample. With this sample, you can create a basic "text based" menu system which allows you to swap screens, popup screens, play the game, and quit....basic game state management... In my next post I'm going to dwelve a bit more into this code and adapt it with my code from this prototype. Text based menus just won't cut it for me, for now...however, I'm still going to stick with my animated menu item idea. A sneak peek using the Game State Management Sample...with no changes made... Cool Things to Mention: At work ... I tend to break out in random conversations every-so-often and I get talking about some of my challenges with this game (or some stupid observation about something... stupid) During one conversation I was discussing how I should animate my images; I explained that I knew I had to use the Update method provided, but I didn't know how (at the time) to render an image at an appropriate "pace" and how many frames to use, etc.. I also got thinking that if a machine rendered my images faster / slower, that was surely going to f-up my animations. To which a friend, Sheldon,  answered, surely the Draw method is like a camera taking a snapshot of a scene in time. Then it clicked...I understood the big picture of the game engine... After some research I discovered that the Draw method attempts to keep a framerate of 60 fps. 
From what I understand, the game engine will even leave out a few calls to the draw method if it begins to slow down. This is why we want to put our sprite updates in the update method. Then using a game timer (provided by the engine), we want to render the scene based on real time passed, not framerate. So even the engine renders at 20 fps, the animations will still animate at the same real time speed! Which brings up another point. Why 60 fps? I'm speculating that Microsoft capped it because LCD's dont' refresh faster than 60 fps? On another note, If the game engine knows its falling behind in rendering...then surely we can harness this to speed up our games. Maybe I can find some flag which tell me if the game is lagging, and what the current framerate is, etc...(instead of coding it like I did last time) Sheldon, suggested maybe I can render like WoW does, in prioritised layers...I think he's onto something, however I don't think I'll have that many graphics to worry about such a problem of graphic latency. We'll see. People to Mention: Well,as you are aware I hadn't posted in a couple days and I was surprised to see a few emails and messenger queries about my game progress (and some concern as to why I stopped). I want to thank everyone for their kind words of support and put everyone at ease by stating that I do intend on completing this project. Granted I only have a few hours each night, but, I'll do it. Thank you to Garth for mailing in my next screen! That was a nice surprise! The Sneek Peek you've been waiting for... Garth has also volunteered to render me some wizard images. He was a bit shocked when I asked for them in 2D animated strips. He said I was going backward (and that I have really bad Game Development Lingo). But, I advised Garth that I will use 3D images later...for now...2D images. Garth also had some great game design ideas to add on. I advised him that I will save his ideas and include them in the future design document (for the 3d version?). Lastly, my best friend Alek, is going to join me in developing this game. This was a project we started eons ago but never completed because of our careers. Now, priorities change and we have some spare time on our hands. Let's see what trouble Alek and I can get into! Tonight I'll be uploading my prototypes and base game to a source control for both of us to work off of. D.

    Read the article

  • DirectX: Render to a screen buffer without using a render target

    - by knight666
    Hello, I'm writing an open source 2D game engine, and I want to support as many devices and platforms as possible. I currently only have Windows Mobile though. I'm rendering using DirectX Mobile, with DirectDraw as a fallback path. However, I've run into a bit of trouble. It seems that while the reference driver supports createRenderTarget, many many many physical devices do not. I need some way to render to the screen without using a render target, because I render sprites using textured quads, but I also need to be able to draw individual pixels. This is how I do it right now: // save old values if (Error::Failed(m_D3DDevice->GetRenderTarget(&m_D3DOldTarget))) { ERROR_EXPLAIN("Could not retrieve backbuffer."); return false; } // clear render surface if (Error::Failed(m_D3DDevice->SetRenderTarget(m_D3DRenderSurface, NULL))) { ERROR_EXPLAIN("Could not set render target to render texture."); return false; } if (Error::Failed (m_D3DDevice->Clear( 0, NULL, // target rectangle D3DMCLEAR_TARGET, D3DMCOLOR_XRGB(0, 0, 0), // clear color 1.0f, 0 ) ) ) { ERROR_EXPLAIN("Failed to clear render texture."); return false; } D3DMLOCKED_RECT render_rect; if (Error::Failed(m_D3DRenderSurface->LockRect(&render_rect, NULL, NULL))) { ERROR_EXPLAIN("Failed to lock render surface pixels."); } else { m_D3DBackSurf->SetBuffer((Pixel*)render_rect.pBits); m_D3DRenderSurface->UnlockRect(); } // begin scene if (Error::Failed(m_D3DDevice->BeginScene())) { ERROR_EXPLAIN("Failed to start rendering."); return false; } // ===================== // example rendering // ===================== // some other stuff, but the most important part of rendering a sprite: device->SetTexture(0, m_Texture)); device->SetStreamSource(0, m_VertexBuffer, sizeof(Vertex)); device->DrawPrimitive(D3DMPT_TRIANGLELIST, 0, 2); // plotting a pixel Surface* target = (Surface*)Device::GetRenderMethod()->GetRenderTarget(); buffer = target->GetBuffer(); buffer[somepixel] = MAKECOLOR(255, 0, 0); // end scene if (Error::Failed(device->EndScene())) { ERROR_EXPLAIN("Failed to end scene."); return false; } // clear screen if (Error::Failed(device->SetRenderTarget(m_D3DOldTarget, NULL))) { ERROR_EXPLAIN("Couldn't set render target to backbuffer."); return false; } if (Error::Failed(device->GetBackBuffer ( 0, D3DMBACKBUFFER_TYPE_MONO, &m_D3DBack ) ) ) { ERROR_EXPLAIN("Couldn't retrieve backbuffer."); return false; } RECT dest = { 0, 0, Device::GetWidth(), Device::GetHeight() }; if (Error::Failed( device->StretchRect ( m_D3DRenderSurface, NULL, m_D3DBack, &dest, D3DMTEXF_NONE ) ) ) { ERROR_EXPLAIN("Failed to stretch render texture to backbuffer."); return false; } if (Error::Failed(device->Present(NULL, NULL, NULL, NULL))) { ERROR_EXPLAIN("Failed to present device."); return false; } I'm looking for a way to do the same thing (render sprites using hardware acceleration and plot pixels on a buffer) without using a render target. Thanks in advance.

    Read the article

  • How should we load the MFMailComposeViewController in cocos2d?

    - by srikanth rongali
    I am writing an app in using cocos2d. This method I have written for the selector goToFirstScreen: . The view is in landscape mode. I need to send an email. So, I need to launch the MFMailComposeViewController. I need it in portrait mode. But, the control is not entering in to viewDidLoad of the mailMe class. The problem is in goToScreen: method. But, I do not get where I am wrong ? -(void)goToFirstScreen:(id)sender { NSLog(@"goToFirstScreen: "); CCScene *Scene = [CCScene node]; CCLayer *Layer = [mailME node]; [Scene addChild:Layer]; [[CCDirector sharedDirector] setAnimationInterval:1.0/60]; [[CCDirector sharedDirector] pushScene: Scene]; } This is my mailMe class to launch mail controller #import <UIKit/UIKit.h> #import <MessageUI/MessageUI.h> #import <MessageUI/MFMailComposeViewController.h> #import "cocos2d.h" @interface mailME : CCLayer <MFMailComposeViewControllerDelegate> { UIViewController *mailComposer; } -(void)displayComposerSheet; -(void)launchMailAppOnDevice; @end #import "mailME.h" @implementation mailME -(void)viewDidLoad { NSLog(@"Enetrd in to mail"); Class mailClass = (NSClassFromString(@"MFMailComposeViewController")); if (mailClass != nil) { if ([mailClass canSendMail]) { [self displayComposerSheet]; } else { [self launchMailAppOnDevice]; } } else { [self launchMailAppOnDevice]; } } -(void)displayComposerSheet { CCDirector *director = [CCDirector sharedDirector]; [director pause]; [director stopAnimation]; [director.openGLView setUserInteractionEnabled:NO]; mailComposer = [[UIViewController alloc] init]; [mailComposer setView:[[CCDirector sharedDirector]openGLView]]; [mailComposer setModalTransitionStyle:UIModalTransitionStyleCoverVertical]; MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init]; picker.mailComposeDelegate = self; [picker setSubject:@"Hello!"]; NSArray *toRecipients = [NSArray arrayWithObject:@"[email protected]"]; [picker setToRecipients:toRecipients]; NSString *emailBody = @"It is not working!"; [picker setMessageBody:emailBody isHTML:YES]; [mailComposer presentModalViewController:picker animated:NO]; [picker release]; } - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { switch (result) { case MFMailComposeResultCancelled: break; case MFMailComposeResultSaved: break; case MFMailComposeResultSent: break; case MFMailComposeResultFailed: break; default: break; } [mailComposer dismissModalViewControllerAnimated:NO]; [[UIApplication sharedApplication] setStatusBarOrientation:CCDeviceOrientationLandscapeLeft animated:NO]; CCDirector *director = [CCDirector sharedDirector]; [director.openGLView setUserInteractionEnabled:YES]; [director startAnimation]; [director resume]; [mailComposer.view.superview removeFromSuperview]; } -(void)launchMailAppOnDevice { NSString *recipients = @"mailto:[email protected]?&subject=Hello!"; NSString *body = @"&body=It is not working"; NSString *email = [NSString stringWithFormat:@"%@%@", recipients, body]; email = [email stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; [[UIApplication sharedApplication] openURL:[NSURL URLWithString:email]]; } - (void)dealloc { [super dealloc]; } @end

    Read the article

  • Memory management in iphone cocos2d

    - by muthu
    i am iphone developer very new to this field....i am developing a ebook app in iphone using cocos2d...i use more than 150 images(i guess) the problem while turning from one page to another images get hanged randomly...... i tried this also [[TextureMgr sharedTextureMgr] removeAllTextures]; but went in vain...i guess the the problem is with the memory.....this my coding for all the pages -(id)init { if( (self=[super init] )) { self.isTouchEnabled = YES; [SimpleAudioEngine sharedEngine]; NSLog(@"b4 cover"); Sprite *bg1 = [Sprite spriteWithFile:@"a.jpg"]; bg1.anchorPoint = CGPointZero; [self addChild:bg1 z:-1]; once = TRUE; soundId = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"]; } return self; } -(void) transitionfront:(id) sender { [[SimpleAudioEngine sharedEngine] stopEffect:soundId]; soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@"page_turn.mp3"]; flip = [[Sprite spriteWithFile:@"a.jpg"] retain]; [self addChild: flip z:1]; [flip setPosition:ccp(160,240)]; Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09]; for( int i=1;i<4;i++) [animation1 addFrameWithFilename: [NSString stringWithFormat:@".jpg", i]]; id action = [Animate actionWithAnimation: animation1]; //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation: animation1]]; [flip runAction:action]; [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(moveforward) userInfo:nil repeats:NO]; } -(void) moveforward { [[SimpleAudioEngine sharedEngine] stopEffect:soundId1]; [[Director sharedDirector] replaceScene: [ [Scene node] addChild: [nextpage node] z:0] ]; } -(void) transitionback:(id) sender { [[SimpleAudioEngine sharedEngine] stopEffect:soundId]; soundId1 = [[SimpleAudioEngine sharedEngine] playEffect:@".mp3"]; flip = [[Sprite spriteWithFile:@".jpg"] retain]; [self addChild: flip z:1]; [flip setPosition:ccp(160,240)]; Animation* animation1 = [Animation animationWithName:@"Page1" delay:0.09]; for( int i=3;i>0;i--) [animation1 addFrameWithFilename: [NSString stringWithFormat:@".jpg", i]]; id action = [Animate actionWithAnimation: animation1]; //id action = [RepeatForever actionWithAction:[Animate actionWithAnimation: animation1]]; [flip runAction:action]; [NSTimer scheduledTimerWithTimeInterval:0.3 target:self selector:@selector(movebackward) userInfo:nil repeats:NO]; } -(void) movebackward{ //[[SimpleAudioEngine sharedEngine]stopEffect:@".mp3"]; [[Director sharedDirector]replaceScene:[[Scene node]addChild:[b4page node] z:0]]; } -(void) glossary :(id) sender { [[SimpleAudioEngine sharedEngine]stopEffect:soundId]; [[Director sharedDirector]replaceScene:[[Scene node]addChild:[ node] z:0]]; } -(BOOL)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; CGPoint cocosTouchPoint = [touch locationInView: [touch view]]; CGPoint point = [[Director sharedDirector] convertToGL:cocosTouchPoint]; NSLog(@"pointx: %f pointy:%f", point.x, point.y); // Was a tab touched, if so, which one... 
if (CGRectContainsPoint(CGRectMake(220, 0, 100, 70), point)) { if(once) { NSLog(@"enterred page1"); [self transitionfront:nil]; once = FALSE; } } if (CGRectContainsPoint(CGRectMake(0,0,60,60), point)) { if(once) { NSLog(@"enterred cover"); [self transitionback:nil]; once = FALSE; } } if (CGRectContainsPoint(CGRectMake(100, 15, 30, 30), point)) { if(once){ [self glossary :nil]; once = FALSE; } } return kEventHandled; } -(void)playEffect:(NSString*)sound{ if(effectPlayer!=nil){ [effectPlayer release]; } NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:sound ofType:@"mp3"]]; effectPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil]; [effectPlayer setDelegate:self]; [effectPlayer play]; } -(void)stopEffect { [effectPlayer stop]; } -(void) dealloc{ [super dealloc]; } do pls help me........ do give me a exact coding this is the err..... *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary setObject:forKey:]: attempt to insert nil value (key: aesop.mp3)' 2010-05-27 10:43:09.834 abc[276:20b] Stack: ( 11674715, 2476006971, 11758651, 11758490, 5126917, 660698, 660881, 661061, 131577, 448857, 120432, 153433, 630890, 23694899, 23603228, 23630005, 47120081, 11459456, 11455560, 47114125, 47114322, 23633923, 9928, 9814 )

    Read the article

  • glsl shader to allow color change of skydome ogre3d

    - by Tim
    I'm still very new to all this but learning a lot. I'm putting together an application using Ogre3d as the rendering engine. So far I've got it running, with a simple scene, a day/night cycle system which is working okay. I'm now moving on to looking at changing the color of the skydome material based on the time of day. What I've done so far is to create a struct to hold the ColourValues for the different aspects of the scene. struct todColors { Ogre::ColourValue sky; Ogre::ColourValue ambient; Ogre::ColourValue sun; }; I created an array to store all the colours todColors sceneColours [4]; I populated the array with the colours I want to use for the various times of the day. For instance DayTime (when the sun is high in the sky) sceneColours[2].sky = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].ambient = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].sun = Ogre::ColourValue(135/255, 206/255, 235/255, 255); I've got code to work out the time of the day using a float currentHours to store the current hour of the day 10.5 = 10:30 am. This updates constantly and updates the sun as required. I am then calculating the appropriate colours for the time of day when relevant using else if( currentHour >= 4 && currentHour < 7) { // Lerp from night to morning Ogre::ColourValue lerp = Ogre::Math::lerp<Ogre::ColourValue, float>(sceneColours[GT_TOD_NIGHT].sky , sceneColours[GT_TOD_MORNING].sky, (currentHour - 4) / (7 - 4)); } My original attempt to get this to work was to dynamically generate a material with the new colour and apply that material to the skydome. This, as you can probably guess... didn't go well. I know it's possible to use shaders where you can pass information such as colour to the shader from the code but I am unsure if there is an existing simple shader to change a colour like this or if I need to create one. What is involved in creating a shader and material definition that would allow me to change the colour of a material without the overheads of dynamically generating materials all the time? EDIT : I've created a glsl vertex and fragment shaders as follows. Vertex uniform vec4 newColor; void main() { gl_FrontColor = newColor; gl_Position = ftransform(); } Fragment void main() { gl_FragColor = gl_Color; } I can pass a colour to it using ShaderDesigner and it seems to work. I now need to investigate how to use it within Ogre as a material. EDIT : I created a material file like this : vertex_program colour_vs_test glsl { source test.vert default_params { param_named newColor float4 0.0 0.0 0.0 1 } } fragment_program colour_fs_glsl glsl { source test.frag } material Test/SkyColor { technique { pass { lighting off fragment_program_ref colour_fs_glsl { } vertex_program_ref colour_vs_test { } } } } In the code I have tried : Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Test/SkyColor"); Ogre::GpuProgramParametersSharedPtr params = material->getTechnique(0)->getPass(0)->getVertexProgramParameters(); params->setNamedConstant("newcolor", Ogre::Vector4(0.7, 0.5, 0.3, 1)); I've set that as the Skydome material which seems to work initially. I am doing the same with the code that is attempting to lerp between colours, but when I include it there, it all goes black. Seems like there is now a problem with my colour lerping.
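
    A small plain-Python sketch of the time-of-day interpolation itself, with hypothetical key colours; the Ogre-specific part is then just pushing the result into the material via setNamedConstant, as in the snippet above:

        # Key sky colours at hours 1 (night), 5.5 (morning), 12 (day), 19 (evening); values are illustrative.
        KEYS = [(1.0, (0.02, 0.02, 0.10)),
                (5.5, (0.80, 0.50, 0.30)),
                (12.0, (0.53, 0.81, 0.92)),
                (19.0, (0.90, 0.40, 0.20))]

        def lerp(a, b, t):
            return tuple(x + (y - x) * t for x, y in zip(a, b))

        def sky_colour(hour):
            # Wrap around midnight by appending the first key 24 hours later.
            keys = KEYS + [(KEYS[0][0] + 24.0, KEYS[0][1])]
            h = hour if hour >= keys[0][0] else hour + 24.0
            for (h0, c0), (h1, c1) in zip(keys, keys[1:]):
                if h0 <= h <= h1:
                    return lerp(c0, c1, (h - h0) / (h1 - h0))
            return KEYS[0][1]

        print(sky_colour(5.0))  # somewhere between the night and morning colours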

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having neither worked with Scala, OpenGL 2 ES or Libgdx before), I have a basic tech demo going where you spin around a single procedurally textured planet: The problem I'm running into is the performance of the texture generation. A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, a n x n (e.g. 256 x 256) texture is applied, which are bundled in one 8n x n texture that is sent to the fragment shader. The last two spaces are not used, they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'. The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128x128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot and since the game's target resolution is 1280x720 that means I'd prefer to use 512x512 textures. Combine that with the fact the actual textures will of course be more complicated than basic noise (There will be a day and night texture, blended in the fragment shader based on sunlight, and a specular mask. I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone. In the final game, there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable since I can calculate the texture in the background while traveling, though obviously the faster the better. Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse: 128x128 : 0.1s 256x256 : 0.4s 512x512 : 1.7s This is after I moved all performance-critical code to Java, since trying to do so in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result: 128x128 : 2s 256x256 : 7s 512x512 : 29s Already far too long, and that's not even factoring in the fact that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet: Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible, because at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone. Use the GPU to render the texture once, store it and use the stored texture from then on. Upside: might be faster than doing it on the CPU since the GPU is supposed to be faster at floating point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java. Calculate a large amount of noise textures and ship them with the application. I'd like to avoid this if at all possible. Lower the resolution. Buys me a 4x performance gain, which isn't really enough plus I lose a lot of quality. 
Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than perlin. Adopt a pixel art style, allowing for lower resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach. I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know. If anyone has any suggestions, tips, workarounds, or other comments regarding this problem I'd love to hear them.
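
    A minimal sketch of the per-texel work being described, assuming a noise3(x, y, z) simplex-noise function is available (hypothetical placeholder for the 2012 implementation mentioned above). Six octaves means six noise calls per channel per texel, which is where the call counts above come from:

        def fbm(noise3, x, y, z, octaves=6, lacunarity=2.0, gain=0.5):
            """Sum `octaves` layers of simplex noise with increasing frequency
            and decreasing amplitude (fractal Brownian motion)."""
            value, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
            for _ in range(octaves):
                value += amplitude * noise3(x * frequency, y * frequency, z * frequency)
                norm += amplitude
                amplitude *= gain
                frequency *= lacunarity
            return value / norm  # roughly in [-1, 1]

        def greyscale_texture(noise3, n):
            # n x n texels, one fbm evaluation each: n * n * octaves noise calls per face.
            return [[fbm(noise3, u / n, v / n, 0.0) for u in range(n)] for v in range(n)]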

    Read the article

  • Create bullet physics rigid body along the vertices of a blender model

    - by Krishnabhadra
    I am working on my first 3D game, for iphone, and I am using Blender to create models, Cocos3D game engine and Bullet for physics simulation. I am trying to learn the use of physics engine. What I have done I have created a small model in blender which contains a Cube (default blender cube) at the origin and a UVSphere hovering exactly on top of this cube (without touching the cube) I saved the file to get MyModel.blend. Then I used File -> Export -> PVRGeoPOD (.pod/.h/.cpp) in Blender to export the model to .pod format to use along with Cocos3D. In the coding side, I added necessary bullet files to my Cocos3D template project in XCode. I am also using a bullet objective C wrapper. -(void) initializeScene { _physicsWorld = [[CC3PhysicsWorld alloc] init]; [_physicsWorld setGravity:0 y:-9.8 z:0]; /*Setup camera, lamp etc.*/ .......... ........... /*Add models created in blender to scene*/ [self addContentFromPODFile: @"MyModel.pod"]; /*Create OpenGL ES buffers*/ [self createGLBuffers]; /*get models*/ CC3MeshNode* cubeNode = (CC3MeshNode*)[self getNodeNamed:@"Cube"]; CC3MeshNode* sphereNode = (CC3MeshNode*)[self getNodeNamed:@"Sphere"]; /*Those boring grey colors..*/ [cubeNode setColor:ccc3(255, 255, 0)]; [sphereNode setColor:ccc3(255, 0, 0)]; float *cVertexData = (float*)((CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertices; int cVertexCount = (CC3VertexArrayMesh*)cubeNode.mesh).vertexLocations.vertexCount; btTriangleMesh* cTriangleMesh = new btTriangleMesh(); // for (int i = 0; i < cVertexCount * 3; i+=3) { // printf("\n%f", cVertexData[i]); // printf("\n%f", cVertexData[i+1]); // printf("\n%f", cVertexData[i+2]); // } /*Trying to create a triangle mesh that curresponds the cube in 3D space.*/ int offset = 0; for (int i = 0; i < (cVertexCount / 3); i++){ unsigned int index1 = offset; unsigned int index2 = offset+6; unsigned int index3 = offset+12; cTriangleMesh->addTriangle( btVector3(cVertexData[index1], cVertexData[index1+1], cVertexData[index1+2] ), btVector3(cVertexData[index2], cVertexData[index2+1], cVertexData[index2+2] ), btVector3(cVertexData[index3], cVertexData[index3+1], cVertexData[index3+2] )); offset += 18; } [self releaseRedundantData]; /*Create a collision shape from triangle mesh*/ btBvhTriangleMeshShape* cTriMeshShape = new btBvhTriangleMeshShape(cTriangleMesh,true); btCollisionShape *sphereShape = new btSphereShape(1); /*Create physics objects*/ gTriMeshObject = [_physicsWorld createPhysicsObjectTrimesh:cubeNode shape:cTriMeshShape mass:0 restitution:1.0 position:cubeNode.location]; sphereObject = [_physicsWorld createPhysicsObject:sphereNode shape:sphereShape mass:1 restitution:0.1 position:sphereNode.location]; sphereObject.rigidBody->setDamping(0.1,0.8); } When I run the sphere and cube shows up fine. I expect the sphere object to fall directly on top of the cube, since I have given it a mass of 1 and the physics world gravity is given as -9.8 in y direction. But What is happening the spere rotates around cube three or times and then just jumps out of the scene. Then I know I have some basic misunderstanding about the whole process. So my question is, how can I create a physics collision shape which corresponds to the shape of a particular mesh model. I may need complex shapes than cube and sphere, but before going into them I want to understand the concepts.
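
    A small sketch of the vertex grouping being wrestled with above, assuming a non-interleaved, non-indexed vertex array that stores plain x, y, z floats (three floats per vertex, three vertices per triangle); if the buffer is interleaved with normals and texture coordinates, the stride has to account for those extra floats, which is what the 6- and 18-float offsets in the snippet suggest:

        def triangles_from_flat_vertices(vertex_data, floats_per_vertex=3):
            """Group a flat float array into (v0, v1, v2) triangles for a
            triangle-mesh collision shape. floats_per_vertex is 3 for plain
            positions; use the real stride for interleaved buffers."""
            stride = floats_per_vertex
            verts = [tuple(vertex_data[i:i + 3])                 # keep only x, y, z
                     for i in range(0, len(vertex_data), stride)]
            return [(verts[i], verts[i + 1], verts[i + 2])
                    for i in range(0, len(verts) - 2, 3)]

        # Each returned triple would then be fed to btTriangleMesh::addTriangle (or its wrapper).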

    Read the article

  • Child transforms problem when loading 3DS models using assimp

    - by MhdSyrwan
    I'm trying to load a textured 3d model into my scene using assimp model loader. The problem is that child meshes are not situated correctly (they don't have the correct transformations). In brief: all the mTansform matrices are identity matrices, why would that be? I'm using this code to render the model: void recursive_render (const struct aiScene *sc, const struct aiNode* nd, float scale) { unsigned int i; unsigned int n=0, t; aiMatrix4x4 m = nd->mTransformation; m.Scaling(aiVector3D(scale, scale, scale), m); // update transform m.Transpose(); glPushMatrix(); glMultMatrixf((float*)&m); // draw all meshes assigned to this node for (; n < nd->mNumMeshes; ++n) { const struct aiMesh* mesh = scene->mMeshes[nd->mMeshes[n]]; apply_material(sc->mMaterials[mesh->mMaterialIndex]); if (mesh->HasBones()){ printf("model has bones"); abort(); } if(mesh->mNormals == NULL) { glDisable(GL_LIGHTING); } else { glEnable(GL_LIGHTING); } if(mesh->mColors[0] != NULL) { glEnable(GL_COLOR_MATERIAL); } else { glDisable(GL_COLOR_MATERIAL); } for (t = 0; t < mesh->mNumFaces; ++t) { const struct aiFace* face = &mesh->mFaces[t]; GLenum face_mode; switch(face->mNumIndices) { case 1: face_mode = GL_POINTS; break; case 2: face_mode = GL_LINES; break; case 3: face_mode = GL_TRIANGLES; break; default: face_mode = GL_POLYGON; break; } glBegin(face_mode); for(i = 0; i < face->mNumIndices; i++)// go through all vertices in face { int vertexIndex = face->mIndices[i];// get group index for current index if(mesh->mColors[0] != NULL) Color4f(&mesh->mColors[0][vertexIndex]); if(mesh->mNormals != NULL) if(mesh->HasTextureCoords(0))//HasTextureCoords(texture_coordinates_set) { glTexCoord2f(mesh->mTextureCoords[0][vertexIndex].x, 1 - mesh->mTextureCoords[0][vertexIndex].y); //mTextureCoords[channel][vertex] } glNormal3fv(&mesh->mNormals[vertexIndex].x); glVertex3fv(&mesh->mVertices[vertexIndex].x); } glEnd(); } } // draw all children for (n = 0; n < nd->mNumChildren; ++n) { recursive_render(sc, nd->mChildren[n], scale); } glPopMatrix(); } What's the problem in my code ? I've added some code to abort the program if there's any bone in the meshes, but the program doesn't abort, this means : no bones, is that normal? if (mesh->HasBones()){ printf("model has bones"); abort(); } Note: I am using openGL & SFML & assimp

    Read the article

  • (CanvasEngine) Collision problem (TypeError: this._polygon[this._frame] is undefined) [on hold]

    - by user2127102
    How can i fix this error TypeError: this._polygon[this._frame] is undefined Heres my code: html: <!DOCTYPE Html> <head> <meta charset="utf-8"> <title>Project</title> <link href="css/style.css" rel="stylesheet"> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script> <script src="js/canvasengine-1.3.0.all.min.js"></script> <script src="js/extends/Input.js"></script> <script src="main.js"></script> </head> <body> <canvas id="window"></canvas> </body> main.js: var canvas = CE.defines("window"). extend(Input). ready(function() { canvas.Scene.call("Game"); }); canvas.Scene.new({ name: "Game", materials: { images: { player: "img/character.png", Wall: "img/TestWall.png" } }, ready: function(stage) { var _canvas = this.getCanvas(); _canvas.setSize("browser", "strech"); this.Player = Class.new("Entity", [stage]); this.Player.el.drawImage("player"); stage.append(this.Player.el); this.Wall = Class.new("Entity", [stage]); this.Wall.el.drawImage("Wall"); this.Wall.position(300, 0); stage.append(this.Wall.el); }, render: function(stage) { //Controls ====== //Control calculations var self = this; this.Mover_A; this.Mover_D; this.Mover_W; this.Mover_S; canvas.Input.keyDown(Input.A, function(e) { self.Mover_A = true; }); canvas.Input.keyDown(Input.D, function(e) { self.Mover_D = true; }); canvas.Input.keyDown(Input.W, function(e) { self.Mover_W = true; }); canvas.Input.keyDown(Input.S, function(e) { self.Mover_S = true; console.log(self.Mover_S); }); canvas.Input.keyUp(Input.A, function(e) { self.Mover_A = false; }); canvas.Input.keyUp(Input.D, function(e) { self.Mover_D = false; }); canvas.Input.keyUp(Input.W, function(e) { self.Mover_W = false; }); canvas.Input.keyUp(Input.S, function(e) { self.Mover_S = false; }); x = 0; y = 0; if(this.Mover_A)x -= 1.5; //A if(this.Mover_D)x += 1.5;//D if(this.Mover_W)y -= 1.5;//W if(this.Mover_S)y += 1.5; //S this.Player.move(x, y); this.Player.hit("over", [this.Wall], function(state, el) { this.Player.move(x * -1, y * -1); }); //End Controls ===== stage.refresh(); } });

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say, that I have read a lot of posts describing an usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve a simple omni-directional (point) light type shading in my WebGL application. I know that there is a lot more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for an educational purpose cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping<<< So for now, this is how I understand the usage of cubemaps in shadow mapping: Setup a framebuffer (in case of cubemaps - 6 framebuffers; 6 instead of 1 because every usage of framebufferTexture2D slows down an execution which is nicely described here <<<) and a texture cubemap. Also in WebGL depth components are not well supported, so I need to render it to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Setup the light and the camera (I'm not sure if should I store all of 6 view matrices and send them to shaders later, or is there a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z) for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection a a current vertex using light's projection and view matrix. Now I don't know If should I calculate 6 of them, because of 6 faces of a cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In a fragment shader calculate the distance between the current vertex and the light position and check if it's deeper then the depth information read from earlier rendered shadow map. I know how to do it with a 2D Texture, but I have no idea how should I use cubemap texture here. I have read that texture lookups into cubemaps are performed by a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex? 
For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) how I should build it?
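
    A plain-Python sketch of the lookup being asked about: the sampling vector is simply the direction from the light to the fragment (it does not need to be normalized), its dominant axis selects which of the six faces is read, and the shadow test compares the stored linear distance with the fragment's own distance to the light. This mirrors the length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant comparison in the snippet above; sample_cube here is a hypothetical stand-in for textureCube plus unpack:

        def cube_face_for_direction(d):
            # d = fragment_position - light_position
            ax, ay, az = abs(d[0]), abs(d[1]), abs(d[2])
            if ax >= ay and ax >= az:
                return "+X" if d[0] > 0 else "-X"
            if ay >= az:
                return "+Y" if d[1] > 0 else "-Y"
            return "+Z" if d[2] > 0 else "-Z"

        def in_shadow(fragment_pos, light_pos, sample_cube, linear_depth_constant, bias=0.005):
            d = [f - l for f, l in zip(fragment_pos, light_pos)]
            stored = sample_cube(d)  # depth written when rendering the face cube_face_for_direction(d)
            current = (sum(c * c for c in d) ** 0.5) * linear_depth_constant
            return current - bias > stored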

    Read the article

  • XNA vs SlimDX for offscreen renderer

    - by Groky
    Hello, I realise there are numerous questions on here asking about choosing between XNA and SlimDX, but these all relate to game programming. A little background: I have an application that renders scenes from XML descriptions. Currently I am using WPF 3D and this mostly works, except that WPF has no way to render scenes offscreen (i.e. on a server, without displaying them in a window), and also rendering to a bitmap causes WPF to fallback to software rendering. So I'm faced with having to write my own renderer. Here are the requirements: Mix of 3D and 2D elements. Relatively few elements per scene (tens of meshes, tens of 2D elements). Large scenes (up to 3000px square for print). Only a single frame will be rendered (i.e. FPS is not an issue). Opacity masks. Pixel shaders. Software fallback (servers may or may not have a decent gfx card). Possibility of being rendered offscreen. As you can see it's pretty simple stuff and WPF can manage it quite nicely except for the not-being-able-to-export-the-scene problem. In particular I don't need many of the things usually needed in game development. So bearing that in mind, would you choose XNA or SlimDX? The non-rendering portion of the code is already written in C#, so want to stick with that.

    Read the article

  • Multiple meshes with one geometry and different textures. Error

    - by user1821834
    I have a loop where I create a multiple Mesh with different geometry, because each mesh has one texture: .... var geoCube = new THREE.CubeGeometry(voxelSize, voxelSize, voxelSize); var geometry = new THREE.Geometry(); for( var i = 0; i < voxels.length; i++ ){ var voxel = voxels[i]; var object; color = voxel.color; texture = almacen.textPlaneTexture(voxel.texto,color,voxelSize); //Return the texture with a color and a text for each face of the geometry material = new THREE.MeshBasicMaterial({ map: texture }); object = new THREE.Mesh(geoCube, material); THREE.GeometryUtils.merge( geometry, object ); } //Add geometry merged at scene mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial() ); mesh.geometry.computeFaceNormals(); mesh.geometry.computeVertexNormals(); mesh.geometry.computeTangents(); scene.add( mesh ); .... But now I have this error in the javascript code Three.js Uncaught TypeError: Cannot read property 'map' of undefined In the funcion: function bufferGuessUVType ( material ) { .... } Update: Finally I have removed the merged solution and I can use an unnique geometry for the all voxels. Altough I think that If I use merge meshes the app would have a better performance...

    Read the article
