Search Results

Search found 1000 results on 40 pages for 'scene'.

Page 32/40

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling. If I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene each frame? Thx
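
    A minimal sketch (not from the question) of one common way to get "hardware blits" out of GL: cache the tile layer in a texture once, then scroll by drawing a single screen-sized quad whose texture coordinates are offset and wrap around. The texture name, offsets and dimensions below are illustrative assumptions; only standard OpenGL 1.x calls are used.

        // Assumes 'backgroundTex' already contains the tile layer and uses GL_REPEAT wrapping.
        // scrollX/scrollY are in texels; texW/texH are the texture dimensions.
        void drawScrolledBackground(GLuint backgroundTex,
                                    float scrollX, float scrollY,
                                    float texW, float texH,
                                    float screenW, float screenH)
        {
            float u = scrollX / texW;   // horizontal scroll as a texture-coordinate offset
            float v = scrollY / texH;   // vertical scroll as a texture-coordinate offset

            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, backgroundTex);
            glBegin(GL_QUADS);
                glTexCoord2f(u,        v);        glVertex2f(0.0f,    0.0f);
                glTexCoord2f(u + 1.0f, v);        glVertex2f(screenW, 0.0f);
                glTexCoord2f(u + 1.0f, v + 1.0f); glVertex2f(screenW, screenH);
                glTexCoord2f(u,        v + 1.0f); glVertex2f(0.0f,    screenH);
            glEnd();
        }

    The GPU samples the cached texture each frame, so no pixels are copied on the CPU; only tiles that scroll into view need to be redrawn into the cached texture.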

    Read the article

  • Samsung TV Emulator font size

    - by jperovic
    Can someone please help me with this issue? I've been banging my head to find the solution but with no luck... The problem is that the Samsung TV Emulator displays everything enlarged (like a font-size of 30-ish pixels) and there doesn't seem to be a way to override it. This only happens within Samsung UI components. To make sure it wasn't something with my project, I downloaded the sample project from Brightcove, but noticed the same behavior with that as well. Here is a screenshot of my "project". It has only one scene with two UI components: http://tinypic.com/r/124evqc/6 Opposed to that, here's what I see in my IDE view: http://tinypic.com/r/ezmn4l/6. As a side note, I had to put height: 20px in both of my UI components' CSS in order for the IDE to show them that way. Can anyone suggest what I am supposed to do?

    Read the article

  • iPhone 3D games using OpenGL ES

    - by Dave
    Hi, I want to make 3D games for the iPhone, and with all this doubt about Unity and Apple's new SDK agreement I'm wondering what the best way forward is. A lot of people recommend OpenGL ES and point me in the direction of the OpenGL ES bible and the like, but the problem is none of these actually talk about setting a game up, i.e. loading a character, scene, AI, etc. Yet a lot of people are using OpenGL ES, so could someone please help me out? I really feel like I'm missing out on something. Are there any good tutorials/books that cover this? Thanks,

    Read the article

  • Unity 2D Instantiating a prefab with its components

    - by TazmanNZL
    I have some block prefabs that I add to an array of game objects: public GameObject[]blocks; Each prefab has a BoxCollider2D, Script & Rigidbody2D components. But when I try to instantiate the prefab in the scene it doesn't appear to have the components attached? Here is how I am instantiating the prefab: for (int i = 0; i < 4; i++) { for (int j = 0; j < gridWidth; j++) { blockClone = Instantiate (blocks [Random.Range (0, blocks.Length)] as GameObject, new Vector3 (j, -i-2, 0f), transform.rotation) as GameObject; } } What am I doing wrong?

    Read the article

  • How to make "action-packed, random data" be echoed in a terminal?

    - by RiMMER
    OK, this isn't really a question to achieve anything practical, but it is still a serious question and I hope it will be taken seriously and the mods won't punish me for this :) I'm sure the majority of you have seen some good action movie where the CIA or FBI or hackers or any other "PC nerds" are "retrieving some information", and when they actually show their screens/monitors/desktops, there is a lot of random data being displayed and it's just so thrilling :D So, we're shooting a movie and I need to reconstruct such a scene. My OS is Ubuntu 10.10. I think I've read somewhere on the internet once that a shell session can actually be recorded and then played back, but I'm not sure how it worked. If there's anyone who could come up with a solution, it would be so cool! Let's make this fun, shall we?

    Read the article

  • Noob question: Draw a quad parallel to the view.

    - by Jack
    Hi all, what I want to do is draw a quad in the scene that lies on a plane parallel to the view, so it appears flat. More specifically, I don't think I fully understand how gluLookAt works in comparison with glTranslate and glRotate: if I position the view "manually" using glTranslate and glRotate, then whenever I draw an object its position is relative to the current view, and I understand that this is due to the transformation matrix on the stack. However, when I use gluLookAt, which should set the view automatically, the coordinates of the object I want to draw must be "absolute" for it to show up properly. Thanks in advance.
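
    A hedged sketch of the usual trick (sizes and distances below are placeholders): since gluLookAt just multiplies a view matrix onto the modelview stack, a quad that should stay parallel to the view can be drawn after temporarily resetting the modelview matrix, so its coordinates are specified directly in eye space.

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();                 // drop the camera transform: coordinates below are eye space
        glTranslatef(0.0f, 0.0f, -5.0f);  // push the quad in front of the near plane
        glBegin(GL_QUADS);                // a 2x2 quad facing the camera
            glVertex3f(-1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f,  1.0f, 0.0f);
            glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();
        glPopMatrix();                    // restore the gluLookAt view for the rest of the scene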

    Read the article

  • How does the ‘Servers’ view work under the hood in Eclipse?

    - by Michael Lu
    ‘Servers’ is a built-in view in Eclipse. We can integrate a Java EE server into Eclipse easily. It can start/stop the server in both normal and debug modes. Moreover, we can even set the timeout and deployment path, things like that. Various types of servers (Tomcat, JBoss, WebSphere) are supported, without being intrusive to the server. I am just curious about how these cool things happen behind the scenes. The complete mechanism is large and complex, so I just want to know the general mechanism; an article would also be fine for me. Thank you!

    Read the article

  • Is PThread a good choice for a multi-platform C/C++ multi-threading program?

    - by RogerV
    Been doing mostly Java and a smattering of .NET for the last five years and haven't written any significant C or C++ during that time, so I have been away from that scene for a while. If I want to write a C or C++ program today that does some multi-threading and is source-code portable across Windows, Mac OS X, and Linux/Unix - is PThread a good choice? The C or C++ code won't be doing any GUI, so I won't need to worry about any of that. For the Windows platform, though, I don't want to bring in a lot of Unix baggage in terms of Unix emulation runtime libraries. I would prefer a PThread API for Windows that is as thin a wrapper as possible over the existing Windows threading APIs. ADDENDUM EDIT: I am leaning toward going with boost::thread - I also want to be able to use C++ try/catch exception handling. And even though my program will be rather minimal and not particularly OOPish, I like to encapsulate using classes and namespaces - as opposed to disembodied C functions.
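
    For reference, a minimal boost::thread sketch (hedged: the worker function and its output are made up for illustration) showing the kind of portable code the question is after; the same shape compiles unchanged on Windows, Mac OS X and Linux without a pthreads emulation layer, as long as the Boost.Thread library is linked.

        #include <boost/thread.hpp>
        #include <iostream>

        namespace demo {
            // Hypothetical worker; any free function or callable object works.
            void worker(int id) {
                std::cout << "worker " << id << " running" << std::endl;
            }
        }

        int main() {
            boost::thread t1(demo::worker, 1);   // spawn two threads
            boost::thread t2(demo::worker, 2);
            t1.join();                           // wait for both to finish
            t2.join();
            return 0;
        }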

    Read the article

  • MovieClip progress bar width is too small relative to parent container

    - by egyedg
    I am new to ActionScript and Flash and I am stuck with the following problem: on the stage I have a movieclip (Container, originally 200px wide) and inside it a progress bar movieclip (originally 700px wide), scaled with the Free Transform Tool to fit the parent container. The width of the container changes at run time when the scene is resized. In ActionScript I have a function which should set the progress bar width according to a calculated percentage value: private function updateProgress(event:TimerEvent):void { var barWidth:int = _container.width; var progress:Number = _stream.time / _stream.duration * barWidth; _progressBar.width = progress; } My problem is that the progress bar, even at full time (100%), only reaches 1/4 of the parent container. I assume that it comes from the symbol's original size. Can I correct this programmatically, or must I redesign it in the "designer"? I hope I made my problem clear; as I said, I'm new to Flash. Thanks in advance.

    Read the article

  • Does XML ignore ',' or '-'? (XML of a Flash object)

    - by nectar
<SCENE> <xml_image>telcos.jpg</xml_image> <xml_bigtext>TELCOS</xml_bigtext> <xml_smalltext>New-Age Properties, Wifi Everywhere</xml_smalltext> <xml_align>right</xml_align> <xml_bigtextcolor>#ffffff</xml_bigtextcolor> <xml_bigtextshadow>#0000000</xml_bigtextshadow> <xml_bigtextcolor>#ffffff</xml_bigtextcolor> <xml_bigtextshadow>#0000000</xml_bigtextshadow> This is the XML file for my Flash, but it ignores the hyphen (-) and the comma (,): "New-Age" comes out as "NewAge", and "Properties," comes out as "PropertiesWifi". Why?

    Read the article

  • AS2 loading SWF instance

    - by user1748474
    I have a .swf which loads an external .swf: this.createEmptyMovieClip("container_mc", this.getNextHighestDepth()); var my_listener:Object = new Object(); my_listener.onLoadComplete = function(target_mc:MovieClip) { target_mc._x = 50; target_mc._y = 50; addChild(my_loader); var blocker = my_loader.content.test blocker._visible = false; } my_listener.onLoadProgress = function(target_mc:MovieClip) { trace(target_mc.getBytesLoaded() + " out of " + target_mc.getBytesTotal()); } var my_loader:MovieClipLoader = new MovieClipLoader(); my_loader.addListener(my_listener); my_loader.loadClip("child_as2.swf", container_mc); I want to access the external SWF and make the movieclip with instance name test invisible (visible = false), but it won't work. I have tried a lot of code and right now it throws this error: Scene=Escena 1, layer=Capa 1, frame=1, Line 9 There is no property with the name 'content'. Any ideas? If you have better code I would be very grateful.

    Read the article

  • How to Quickly Cut a Clip From a Video File with Avidemux

    - by Trevor Bekolay
    Whether you’re cutting out the boring parts of your vacation video or getting a hilarious scene for an animated GIF, Avidemux provides a quick and easy way to cut clips from any video file. It’s overkill to use a full-featured video editing program if you just want to cut a few clips from a video file. Even programs that are designed to be small can have confusing interfaces when dealing with video. We’ve found that a great free program, Avidemux, makes the job of cutting clips extremely simple. Note: While the screenshots in this guide are taken from the Windows version, Avidemux runs on all of the major platforms – Windows, Mac OS X and Linux (GTK). Image by Keith Williamson. Cutting Clips from a Video File Open up Avidemux, and load the video file that you want to work with. If you get a prompt like this one: we recommend clicking Yes to use the safer mode. Find the portion of the video that you’d like to isolate. Get as close as you can to the start of the clip you want to cut. Once you find the start of your clip, look at the “Frame Type” of the current frame. You want it to read I; if it isn’t frame type I, then use the single left and right arrow buttons to go forward or backward one frame until you find an appropriate I frame. Once you’ve found the right starting frame, click the button with the A over a red bar. This will set the start of the clip. Advance to where you want your clip to end. Click on the button with a B when you’ve found the appropriate frame. This frame can be of any type. You can now save the clip, either by going to File –> Save –> Save Video… or by pressing Ctrl+S. Give the file a name, and Avidemux will prepare your clip. And that’s it! You should now have a movie file that contains only the portion of the original file that you want. Download Avidemux free for all platforms

    Read the article

  • cocos2dx - Custom Fragment Shader and CCRenderTexture

    - by saiy2k
    I have a CCRenderTexture that is filled with a sprite when the scene is loaded, as follows, canvas = CCRenderTexture::create(this->getContentSize().width, this->getContentSize().height); canvas->setPosition(data->position); canvas->beginWithClear(0.0, 0.0, 0.0, 0); this->visit(); canvas->end(); The above code is written within a class, which derives from CCSprite (Hence this). Then, in another function applyShader(), I create a sprite named splat, from the texture of CCRenderTexture *canvas. Thus splat will contain the whole texture of canvas. Now I apply a custom fragment shader to the splat by calling the function splat->renderShader(), which will modify some small portion of the whole texture. Then I draw the modified texture back to the CCRenderTexture *canvas. Hence, applyShader() will * take a texture from CCRenderTexture, * create a sprite based on it, * apply a fragment shader to it * and draw the modified texture back to CCRenderTexture. This applyShader() will be called repetitively and its code is as follows: splat = Splat::createWithTexture(art->canvas->getSprite()->getTexture()); splat->renderShader(); art->canvas->begin(); splat->visit(); art->canvas->end(); My shader code is (nothing fancy) precision mediump float; varying vec2 v_texCoord; uniform sampler2D u_texture; uniform sampler2D u_colorRampTexture; uniform float params[5]; void main() { gl_FragColor = texture2D(u_texture, v_texCoord); return; } So, with the above code I expect the original sprite this to get rendered over and over again without any visual changes. But on each call to applyShader(), the texture is getting stretched a little and the stretched image is getting rendered. After some 10 calls, the image gets so distorted. Can someone please tell me where I am going wrong? Thanks :-) PS: All code shown here is partial, not complete code. Edit: Adding Screens Update: The problem has nothing to do with shaders it seems. It happens even when I dont call renderShader(). The actual lines of code is: splat = Splat::createWithTexture(art->canvas->getSprite()->getTexture()); splat->setPosition( ccp( art->getContentSize().width * 0.5, art->getContentSize().height * 0.5 ) ); splat->setFlipY(true); art->canvas->begin(); splat->visit(); art->canvas->end();

    Read the article

  • Java Spotlight Episode 139: Mark Heckler and José Pereda on JES based Energy Monitoring @MkHeck @JPeredaDnr

    - by Roger Brinkley
    Interview with Mark Heckler and José Pereda on using JavaSE Embedded with the Java Embedded Suite on a RaspberryPI along with a JavaFX client to monitor an energy production system and their JavaOne Tutorial- Java Embedded EXTREME MASHUPS: Building self-powering sensor nets for the IoT Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Java Virtual Developer Day Session Videos Available JavaFX Maven Plugin 2.0 Released JavaFX Scene Builder 1.1 build b28 FXForm 2 release 0.2.2 OpenJDK8/Zero cross compile build for Foundation model HSAIL-based GPU offload: the Quest for Java Performance Begins Progress on Moving to Gradle Java EE 7 Launch Keynote Replay Java EE 7 Technical Breakouts Replay Java EE 7 support in NetBeans 7.3.1 Java EE 7 support in Eclipse 4.3 Java Magazine - May/June Events Jul 16-19, Uberconf, Denver, USA Jul 22-24, JavaOne Shanghai, China Jul 29-31, JVM Language Summit, Santa Clara Sep 11-12, JavaZone, Oslo, Norway Sep 19-20, Strange Loop, St. Louis Sep 22-26 JavaOne San Francisco 2013, USA Feature Interview Mark Heckler is an Oracle Corporation Java/Middleware/Core Tech Engineer with development experience in numerous environments. He has worked for and with key players in the manufacturing, emerging markets, retail, medical, telecom, and financial industries to develop and deliver critical capabilities on time and on budget. Currently, he works primarily with large government customers using Java throughout the stack and across the enterprise. He also participates in open-source development at every opportunity, being a JFXtras project committer and developer of DialogFX, MonologFX, and various other projects. When Mark isn't working with Java, he enjoys writing about his experiences at the Java Jungle website (https://blogs.oracle.com/javajungle/) and on Twitter (@MkHeck). José Pereda is a Structural Engineer working in the School of Engineers in the University of Valladolid in Spain for more than 15 years, and his passion is related to applying programming to solve real problems. Being involved with Java since 1999, José shares his time between JavaFX and the Embedded world, developing commercial applications and open source projects (https://github.com/jperedadnr), and blogging (http://jperedadnr.blogspot.com.es/) or tweeting (@JPeredaDnr) of both. What’s Cool AquaFX 0.1 - Mac OS X skin for JavaFX by Claudine Zillmann DromblerFX adds a docking framework Part 2 of Gerrit’s taming the Nashorn for writing JavaFX apps in Javascript Tool from mihosoft called JSelect for quickly switching JDKs Apache Maven Javadoc Plugin 2.9.1 Released Proposal: Java Concurrency Stress tests (jcstress) Slide-free Code-driven session at SV JUG JavaOne approvals/rejects gone out

    Read the article

  • Cocos3d lighting problem

    - by Parasithe
    I'm currently working on a cocos3d project, but I'm having some trouble with lighting and I have no idea how to solve it. I've tried everything and the lighting is always just as bad in the game. The first picture is from 3ds Max (the software we used for 3D) and the second is from my iPhone app. http://prntscr.com/ly378 http://prntscr.com/ly2io As you can see, the lighting is really bad in the app. I manually add my spots and the ambient light. Here is all my lighting code: _spot = [CC3Light lightWithName: @"Spot" withLightIndex: 0]; // Set the ambient scene lighting. ccColor4F ambientColor = { 0.9, 0.9, 0.9, 1 }; self.ambientLight = ambientColor; //Positioning _spot.target = [self getNodeNamed:kCharacterName]; _spot.location = cc3v( 400, 400, -600 ); // Adjust the relative ambient and diffuse lighting of the main light to // improve realism, particularly on shadow effects. _spot.diffuseColor = CCC4FMake(0.8, 0.8, 0.8, 1.0); _spot.specularColor = CCC4FMake(0, 0, 0, 1); [_spot setAttenuationCoefficients:CC3AttenuationCoefficientsMake(0, 0, 1)]; // Another mechanism for adjusting shadow intensities is shadowIntensityFactor. // For better effect, set here to a value less than one to lighten the shadows // cast by the main light. _spot.shadowIntensityFactor = 0.75; [self addChild:_spot]; _spot2 = [CC3Light lightWithName: @"Spot2" withLightIndex: 1]; //Positioning _spot2.target = [self getNodeNamed:kCharacterName]; _spot2.location = cc3v( -550, 400, -800 ); _spot2.diffuseColor = CCC4FMake(0.8, 0.8, 0.8, 1.0); _spot2.specularColor = CCC4FMake(0, 0, 0, 1); [_spot2 setAttenuationCoefficients:CC3AttenuationCoefficientsMake(0, 0, 1)]; _spot2.shadowIntensityFactor = 0.75; [self addChild:_spot2]; I'd really appreciate any tips on how to fix the lighting. Maybe my spots are bad? Maybe it's the material? I really have no idea. Any help would be welcome. I already asked for some help on the cocos2d forums. I got some answers but I need more help.
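
    One hedged observation, not taken from the question: if cocos3d feeds these coefficients into the standard fixed-function attenuation model, attenuation = 1 / (kc + kl*d + kq*d*d), then CC3AttenuationCoefficientsMake(0, 0, 1) gives purely quadratic fall-off, which at spot distances of several hundred units leaves almost no light reaching the model. A quick back-of-the-envelope check:

        #include <cstdio>

        // Assumes the coefficients map onto the standard model 1 / (kc + kl*d + kq*d^2).
        int main() {
            const float kc = 0.0f, kl = 0.0f, kq = 1.0f; // values used in the question
            const float d  = 800.0f;                     // rough spot-to-character distance above
            const float attenuation = 1.0f / (kc + kl * d + kq * d * d);
            std::printf("attenuation at d = %.0f: %g\n", d, attenuation); // ~1.6e-06, i.e. nearly dark
            return 0;
        }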

    Read the article

  • Meet Peter, 80 years old today

    - by AdamRG
    You have to arrive at the office early in the morning to meet Peter. He arrives at 5am and by 8:30am he's gone. Peter has been a cleaner here for several years. He is 80 years old today. Peter was born only a couple of km from our office in Cambridge, England and was for many years an Engineer for Pye Electronics. I'm lucky enough to arrive in the office early enough to catch Peter, dressed smarter than most of us in shirt and tie, and he tells stories of how Cambridge was years ago. He says the site of our office is on land between what would have been a prisoner of war camp (camp 1025), and a few hundred metres North, a camp of American allies. In February 1944, Peter was 13 years old. One night, a Dornier Do 217 heavy bomber heading towards London was hit by anti-aircraft fire and the crew of four parachuted from the plane. The plane however, continued on autopilot for over 50km. Gradually dropping lower and lower, narrowly missing the spires of Cambridge, it eventually came to land, largely intact, in allotment gardens by Peter's house near Milton Road. He told me that he was quick to the scene, along with some other young lads, and grabbed parts of the plane as souvenirs. It's one of many tales that Peter recounts, but I happened to discover a chapter about this particular plane crash in a history book called the War Torn Skies of Great Britain by Julian Evan-Hart. It reads: 'It slid to a halt in the allotment gardens of Milton Road. The cockpit ended up crumpled against a wooden fence and several incendiary bombs that had broken loose from their containers in the ruptured bomb bay were strewn over the ground behind the Dornier.' I smiled when I read the following line: 'Many residents came to see the Dornier in the allotments. Several lads made off with souvenirs' It seems a young Peter has been captured in print! For his birthday, among other things, we gave him a copy of the book. Working for a software company and rushing headlong through the 21st century, it's easy to forget even our recent history, or what feet stood on the same ground before us. That aircraft crashed only 700 metres from where our office now stands. The disused and overgrown railway line that runs down the side of the office closed to passengers 30 years ago. The industrial estate the other side was the site of a farm, Trinity Hall Farm, as recently as 60 years ago. Roman rings and Palaeolithic handaxes have been unearthed nearby. I suppose Peter will be one of the last people I'll ever hear talking first-hand about Cambridge during the war. It's a privilege to know him. Happy birthday Peter.

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: Several questions on this site have to do with selection and comparison of Python IDEs. (The top one currently is What IDE to use for Python). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software. The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs. What does your Python development workbench look like? What workflows have you developed and how do they work? For the first question, please give the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.) the "small tools" -- the programs you reach for when doing different tasks For the second question, please describe what the goal of the workflow is (eg. "set up a new project" or "doing initial code design" or "adding a feature" or "executing tests") what steps are in the workflow and what commands you run for each step (eg. in the shell or in Emacs) Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else? Note: A good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • OpenGL - Cascaded shadow mapping - Texture lookup

    - by Silverlan
    I'm trying to implement cascaded shadow mapping in my engine, but I'm somewhat stuck at the last step. For testing purposes I've made sure all cascades encompass my entire scene. The result is currently this: The different intensity of the cascades is not on purpose, it's actually the problem. This is how I do the texture lookup for the shadow maps inside the fragment shader: layout(std140) uniform CSM { vec4 csmFard; // far distances for each cascade mat4 csmVP[4]; // View-Projection Matrix int numCascades; // Number of cascades to use. In this example it's 4. }; uniform sampler2DArrayShadow csmTextureArray; // The 4 shadow maps in vec4 csmPos[4]; // Vertex position in shadow MVP space float GetShadowCoefficient() { int index = numCascades -1; vec4 shadowCoord; for(int i=0;i<numCascades;i++) { if(gl_FragCoord.z < csmFard[i]) { shadowCoord = csmPos[i]; index = i; break; } } shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; return shadow2DArray(csmTextureArray,shadowCoord).x; } I then use the return value and simply multiply it with the diffuse color. That explains the different intensity of the cascades, since I'm grabbing the depth value directly from the texture. I've tried to do a depth comparison instead, but with limited success: [...] // Same code as above shadowCoord.w = shadowCoord.z; shadowCoord.z = float(index); shadowCoord.x = shadowCoord.x *0.5f +0.5f; shadowCoord.y = shadowCoord.y *0.5f +0.5f; float z = shadow2DArray(csmTextureArray,shadowCoord).x; if(z < shadowCoord.w) return 0.25f; return 1.f; } While this does give me the same shadow value everywhere, it only works for the first cascade, all others are blank: (I colored the cascades because otherwise the transitions wouldn't be visible in this case) What am I missing here?

    Read the article

  • Slow Firefox Javascript Canvas Performance?

    - by jujumbura
    As a followup from a previous post, I have been trying to track down some slowdown I am having when drawing a scene using Javascript and the canvas element. I decided to narrow down my focus to a REALLY barebones animation that only clears the canvas and draws a single image, once per-frame. This of course runs silky smooth in Chrome, but it still stutters in Firefox. I added a simple FPS calculator, and indeed it appears that my page is typically getting an FPS in the 50's when running Firefox. This doesn't seem right to me, I must be doing something wrong here. Can anybody see anything I might be doing that is causing this drop in FPS? <!DOCTYPE HTML> <html> <head> </head> <body bgcolor=silver> <canvas id="myCanvas" width="600" height="400"></canvas> <img id="myHexagon" src="Images/Hexagon.png" style="display: none;"> <script> window.requestAnimFrame = (function(callback) { return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.oRequestAnimationFrame || window.msRequestAnimationFrame || function(callback) { window.setTimeout(callback, 1000 / 60); }; })(); var animX = 0; var frameCounter = 0; var fps = 0; var time = new Date(); function animate() { var canvas = document.getElementById("myCanvas"); var context = canvas.getContext("2d"); context.clearRect(0, 0, canvas.width, canvas.height); animX += 1; if (animX == canvas.width) { animX = 0; } var image = document.getElementById("myHexagon"); context.drawImage(image, animX, 128); context.lineWidth=1; context.fillStyle="#000000"; context.lineStyle="#ffffff"; context.font="18px sans-serif"; context.fillText("fps: " + fps, 20, 20); ++frameCounter; var currentTime = new Date(); var elapsedTimeMS = currentTime - time; if (elapsedTimeMS >= 1000) { fps = frameCounter; frameCounter = 0; time = currentTime; } // request new frame requestAnimFrame(function() { animate(); }); } window.onload = function() { animate(); }; </script> </body> </html>

    Read the article

  • How to impale and stack targets correctly according to the collider and its coordinate?

    - by David Dimalanta
    I'm making another simple game, a catch game, where a spawned target game object must be captured by impaling it on a skewer. Here's how: At the start, the falling object (in red) falls in a vertical direction (in blue). When aimed properly, the target falls down along the line of the skewer (in blue). Then another target is spawned and falls vertically (in red). When aimed successfully again in a streak, the second target falls along the skewer and is stacked. The same process repeats whenever another target is spawned. However, when I test-run it in the Scene tab in Unity and impale several targets, instead of flowing smoothly and stacking they end up overlapping rather than stacking up like pancakes. Here's what it looks like: Around the halfway point of my progress I noticed this, and I tried to figure out how to deal with collider bodies without them sticking to each other, so that the targets actually stack like in the example image at no. 3. Here's the script code I added to the target game object: using UnityEngine; using System.Collections; public class ImpaleStateTest : MonoBehaviour { public GameObject target; public GameObject skewer; public bool drag = false; private float stack; private float impaleCount; void Start () { stack = 0; impaleCount = 0; } void Update () { if(drag) { target.transform.position = new Vector3 (DragTest.dir.transform.position.x, DragTest.dir.transform.position.y - 0.35f, 0); target.transform.rotation = DragTest.degrees; target.rigidbody2D.fixedAngle = true; target.rigidbody2D.isKinematic = true; target.rigidbody2D.gravityScale = 0; if(Input.GetMouseButton(0)) { Debug.Log ("Skewer: " + DragTest.dir.transform.position.x); Debug.Log ("Target: " + target.transform.position.x); } } } void OnTriggerEnter2D(Collider2D collider) { impaleCount++; Debug.Log ("Impaled " + impaleCount + " time(s)!"); drag = true; audio.Play (); } } Aside from that, I'm not sure if it's right, but the only way to keep the impaled targets stuck while dragging the skewer left or right is to take only the X coordinate from the skewer. Is there anything else you would recommend to make this behavior as realistic as possible? Please help.

    Read the article

  • OpenGL - Calculating camera view matrix

    - by Karle
    Problem I am calculating the model, view and projection matrices independently to be used in my shader as follows: gl_Position = projection * view * model * vec4(in_Position, 1.0); When I try to calculate my camera's view matrix the Z axis is flipped and my camera seems like it is looking backwards. My program is written in C# using the OpenTK library. Translation (Working) I've created a test scene as follows: From my understanding of the OpenGL coordinate system they are positioned correctly. The model matrix is created using: Matrix4 translation = Matrix4.CreateTranslation(modelPosition); Matrix4 model = translation; The view matrix is created using: Matrix4 translation = Matrix4.CreateTranslation(-cameraPosition); Matrix4 view = translation; Rotation (Not-Working) I now want to create the camera's rotation matrix. To do this I use the camera's right, up and forward vectors: // Hard coded example orientation: // Normally calculated from up and forward // Similar to look-at camera. Vector3 r = Vector.UnitX; Vector3 u = Vector3.UnitY; Vector3 f = -Vector3.UnitZ; Matrix4 rot = new Matrix4( r.X, r.Y, r.Z, 0, u.X, u.Y, u.Z, 0, f.X, f.Y, f.Z, 0, 0.0f, 0.0f, 0.0f, 1.0f); This results in the following matrix being created: I know that multiplying by the identity matrix would produce no rotation. This is clearly not the identity matrix and therefore will apply some rotation. I thought that because this is aligned with the OpenGL coordinate system is should produce no rotation. Is this the wrong way to calculate the rotation matrix? I then create my view matrix as: // OpenTK is row-major so the order of operations is reversed: Matrix4 view = translation * rot; Rotation almost works now but the -Z/+Z axis has been flipped, with the green cube now appearing closer to the camera. It seems like the camera is looking backwards, especially if I move it around. My goal is to store the position and orientation of all objects (including the camera) as: Vector3 position; Vector3 up; Vector3 forward; Apologies for writing such a long question and thank you in advance. I've tried following tutorials/guides from many sites but I keep ending up with something wrong. Edit: Projection Matrix Set-up Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView( (float)(0.5 * Math.PI), (float)display.Width / display.Height, 0.1f, 1000.0f);
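
    For comparison, a hedged sketch (plain C++ rather than OpenTK; the Vec3 type is assumed) of building a world-to-camera view matrix directly from the camera position and its right/up/forward vectors, in the column-major layout OpenGL expects. The step that is easy to get backwards is that the camera's forward axis goes in negated, because the camera looks down its own -Z:

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // Writes a column-major 4x4 view matrix into m (index = col*4 + row).
        // right/up/forward must be orthonormal; eye is the camera position.
        void buildViewMatrix(const Vec3& eye, const Vec3& right,
                             const Vec3& up, const Vec3& forward, float m[16])
        {
            m[0] =  right.x;   m[4] =  right.y;   m[8]  =  right.z;   m[12] = -dot(right, eye);
            m[1] =  up.x;      m[5] =  up.y;      m[9]  =  up.z;      m[13] = -dot(up, eye);
            m[2] = -forward.x; m[6] = -forward.y; m[10] = -forward.z; m[14] =  dot(forward, eye);
            m[3] =  0.0f;      m[7] =  0.0f;      m[11] =  0.0f;      m[15] =  1.0f;
        }

    Fed the hard-coded axes from the question (right = +X, up = +Y, forward = -Z), this produces the identity, so a camera at the origin applies no rotation; writing forward itself into the third row instead of -forward is exactly what flips the Z axis.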

    Read the article

  • Physics/Graphics Components

    - by Brett Powell
    I have spent the last 48 hours reading up on Object Component systems, and feel I am ready enough to start implementing it. I got the base Object and Component classes created, but now that I need to start creating the actual components I am a bit confused. When I think of them in terms of HealthComponent or something that would basically just be a property, it makes perfect sense. When it is something more general as a Physics/Graphics component, I get a bit confused. My Object class looks like this so far (If you notice any changes I should make please let me know, still new to this)... typedef unsigned int ID; class GameObject { public: GameObject(ID id, Ogre::String name = ""); ~GameObject(); ID &getID(); Ogre::String &getName(); virtual void update() = 0; // Component Functions void addComponent(Component *component); void removeComponent(Ogre::String familyName); template<typename T> T* getComponent(Ogre::String familyName) { return dynamic_cast<T*>(m_components[familyName]); } protected: // Properties ID m_ID; Ogre::String m_Name; float m_flVelocity; Ogre::Vector3 m_vecPosition; // Components std::map<std::string,Component*> m_components; std::map<std::string,Component*>::iterator m_componentItr; }; Now the problem I am running into is what would the general population put into Components such as Physics/Graphics? For Ogre (my rendering engine) the visible Objects will consist of multiple Ogre::SceneNode (possibly multiple) to attach it to the scene, Ogre::Entity (possibly multiple) to show the visible meshes, and so on. Would it be best to just add multiple GraphicComponent's to the Object and let each GraphicComponent handle one SceneNode/Entity or is the idea to have one of each Component needed? For Physics I am even more confused. I suppose maybe creating a RigidBody and keeping track of mass/interia/etc. would make sense. But I am having trouble thinking of how to actually putting specifics into a Component. Once I get a couple of these "Required" components done, I think it will make a lot more sense. As of right now though I am still a bit stumped.
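
    As a hedged illustration of one way to cut the "general" components (class names and Ogre calls below are assumptions, not a recommendation from the post): a graphics component can simply own however many scene nodes and entities one visible object needs, so "one component per SceneNode" versus "one component per object" becomes an internal detail of that component rather than a design decision of the GameObject.

        #include <Ogre.h>
        #include <vector>

        class Component {
        public:
            virtual ~Component() {}
            virtual Ogre::String familyName() const = 0;
            virtual void update(float dt) = 0;
        };

        // Hypothetical sketch: one graphics component per game object, owning as many
        // nodes/entities as that object happens to need.
        class GraphicsComponent : public Component {
        public:
            explicit GraphicsComponent(Ogre::SceneManager* sceneMgr) : m_sceneMgr(sceneMgr) {}

            Ogre::String familyName() const { return "Graphics"; }

            void addMesh(const Ogre::String& name, const Ogre::String& meshFile) {
                Ogre::Entity*    entity = m_sceneMgr->createEntity(name, meshFile);
                Ogre::SceneNode* node   = m_sceneMgr->getRootSceneNode()->createChildSceneNode();
                node->attachObject(entity);
                m_nodes.push_back(node);
            }

            void update(float dt) {
                // Sync the visual nodes from the owning GameObject's position here.
            }

        private:
            Ogre::SceneManager* m_sceneMgr;
            std::vector<Ogre::SceneNode*> m_nodes;
        };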

    Read the article

  • Compute directional light frustum from view furstum points and light direction

    - by Fabian
    I'm working on a friend's engine project and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it's way too big and includes way too many casters (shadows) which can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional info about the scene. I have read some of the related questions here but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly I can't use DX or OpenGL functions directly because this engine has a dedicated math library. From what I've read so far the steps are: transform the view frustum points into light space and find the min/max x and y values (or sometimes the minima and maxima of all three axes) and create an AABB using the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far: CVector3 Points[8], MinLight = CVector3(FLT_MAX), MaxLight = CVector3(FLT_MAX); for(int i = 0; i<8;++i){ Points[i] = Points[i] * WorldToShadowMapMatrix; MinLight = Math::Min(Points[i],MinLight); MaxLight = Math::Max(Points[i],MaxLight); } AABox box(MinLight,MaxLight); I don't think this is the right way to do it. The near plane probably has to extend towards the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx which also includes some sample code. But they seem to use the scene's AABB to determine the near and far plane, which I can't, since I can't access this information from the function I'm working in. Could you guys please link some example code which shows the calculation of such a frustum? Thanks in advance! Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
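
    A hedged sketch of the usual crop-frustum construction (CMatrix4, LookAt and Ortho are placeholders for whatever the engine's math library provides; only the steps matter): build a view matrix for the light, transform the eight view-frustum corners into that light space, take the min/max, and use those bounds directly as an orthographic projection. Nothing has to be transformed back to world space, because the light view matrix together with this ortho box is the new frustum; if world-space corners are really needed, multiplying the eight corners of the light-space box by the inverse of the light view matrix recovers them.

        // 'lightDir' is the normalized light direction, 'corners' are the 8 world-space
        // view-frustum points; 'backupDistance' and 'casterMargin' are tuning values.
        CVector3 center = CVector3(0, 0, 0);
        for (int i = 0; i < 8; ++i) center = center + corners[i];
        center = center * (1.0f / 8.0f);

        // Light view: look along lightDir towards the frustum centre.
        CMatrix4 lightView = LookAt(center - lightDir * backupDistance, center, CVector3(0, 1, 0));

        CVector3 minL(FLT_MAX), maxL(-FLT_MAX);
        for (int i = 0; i < 8; ++i) {
            CVector3 p = corners[i] * lightView;   // world -> light space (row-vector convention)
            minL = Math::Min(p, minL);
            maxL = Math::Max(p, maxL);
        }

        // Near/far come from the light-space z range; pull the near plane back towards the
        // light by a margin so casters outside the view frustum still reach the shadow map.
        // Exact signs depend on the library's z convention (often near = -maxL.z - margin).
        CMatrix4 lightProj = Ortho(minL.x, maxL.x, minL.y, maxL.y,
                                   -maxL.z - casterMargin, -minL.z);

        CMatrix4 worldToShadow = lightView * lightProj;   // or lightProj * lightView, per convention

    One detail worth checking in the question's code: MinLight and MaxLight are both initialised to CVector3(FLT_MAX), so the running maximum never changes; MaxLight should start at -FLT_MAX (or at the first transformed point).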

    Read the article

  • Torchlight Black Screen and doesn't show up

    - by Lelouch Reyiz
    When I open it in full screen I get a black screen that covers whole screen,in windowed mode middle of screen.Here is a video: https://copy.com/fvrGw7QIJ8Z0 Terminal Output: alperen@alperen-Inspiron-N5010 /usr/local/games/Torchlight $ ./Torchlight.bin.x86_64 Creating resource group General Creating resource group Internal Creating resource group Autodetect SceneManagerFactory for type 'DefaultSceneManager' registered. Registering ResourceManager for type Material Registering ResourceManager for type Mesh Registering ResourceManager for type Skeleton MovableObjectFactory for type 'ParticleSystem' registered. OverlayElementFactory for type Panel registered. OverlayElementFactory for type BorderPanel registered. OverlayElementFactory for type TextArea registered. Registering ResourceManager for type Font ArchiveFactory for archive type FileSystem registered. ArchiveFactory for archive type Zip registered. FreeImage version: 3.13.1 This program uses FreeImage, a free, open source image library supporting all common bitmap formats. See http://freeimage.sourceforge.net for details Supported formats: bmp,ico,jpg,jif,jpeg,jpe,jng,koa,iff,lbm,mng,pbm,pbm,pcd,pcx,pgm,pgm,png,ppm,ppm,ras,tga,targa,tif,tiff,wap,wbmp,wbm,psd,cut,xbm,xpm,gif,hdr,g3,sgi,exr,j2k,j2c,jp2,pfm,pct,pict,pic,bay,bmq,cr2,crw,cs1,dc2,dcr,dng,erf,fff,hdr,k25,kdc,mdc,mos,mrw,nef,orf,pef,pxn,raf,raw,rdc,sr2,srf,arw,3fr,cine,ia,kc2,mef,nrw,qtk,rw2,sti,drf,dsc,ptx,cap,iiq,rwz DDS codec registering Registering ResourceManager for type HighLevelGpuProgram Registering ResourceManager for type Compositor MovableObjectFactory for type 'Entity' registered. MovableObjectFactory for type 'Light' registered. MovableObjectFactory for type 'BillboardSet' registered. MovableObjectFactory for type 'ManualObject' registered. MovableObjectFactory for type 'BillboardChain' registered. MovableObjectFactory for type 'RibbonTrail' registered. Loading library lib64/OGRE/RenderSystem_GL Installing plugin: GL RenderSystem OpenGL Rendering Subsystem created. 
Plugin successfully installed Loading library lib64/OGRE/Plugin_ParticleFX Installing plugin: ParticleFX Particle Emitter Type 'Point' registered Particle Emitter Type 'Box' registered Particle Emitter Type 'Ellipsoid' registered Particle Emitter Type 'Cylinder' registered Particle Emitter Type 'Ring' registered Particle Emitter Type 'HollowEllipsoid' registered Particle Affector Type 'LinearForce' registered Particle Affector Type 'ColourFader' registered Particle Affector Type 'ColourFader2' registered Particle Affector Type 'ColourImage' registered Particle Affector Type 'ColourInterpolator' registered Particle Affector Type 'Scaler' registered Particle Affector Type 'Rotator' registered Particle Affector Type 'DirectionRandomiser' registered Particle Affector Type 'DeflectorPlane' registered Plugin successfully installed Loading library lib64/OGRE/Plugin_OctreeSceneManager Installing plugin: Octree & Terrain Scene Manager Plugin successfully installed *-*-* OGRE Initialising *-*-* Version 1.6.5 (Shoggoth) terminate called after throwing an instance of 'std::out_of_range' what(): basic_string::substr Error: signal: 6 ./Torchlight.bin.x86_64(_ZN10LinuxUtils13crash_handlerEi+0x25)[0x17eb6f5] /lib/x86_64-linux-gnu/libc.so.6(+0x37000)[0x7fc647877000] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39)[0x7fc647876f89] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7fc64787a398] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x155)[0x7fc6481826b5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e836)[0x7fc648180836] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e863)[0x7fc648180863] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5eaa2)[0x7fc648180aa2] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt20__throw_out_of_rangePKc+0x67)[0x7fc6481d25d7] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xbe3d3)[0x7fc6481e03d3] ./Torchlight.bin.x86_64(_ZN11CFileSystem21buildMassiveDataGroupEv+0x453)[0x1617805] ./Torchlight.bin.x86_64(_ZN11CFileSystemC1Eb+0x14be)[0x16145ae] ./Torchlight.bin.x86_64(_ZN22CMasterResourceManagerC1EP9CSettings+0x41a)[0xfe1d0a] ./Torchlight.bin.x86_64(_ZN5CGame5setupEb+0x79a)[0x73ceaa] ./Torchlight.bin.x86_64(_ZN5CGame5beginEPv+0x28d)[0x73b839] ./Torchlight.bin.x86_64(main+0x649)[0x146dbe4] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7fc647861ec5] ./Torchlight.bin.x86_64[0x739ca9]

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code // Turn off unnecessary operations glDisable(GL_DEPTH_TEST); glDisable(GL_LIGHTING); glDisable(GL_CULL_FACE); glDisable(GL_STENCIL_TEST); glDisable(GL_DITHER); glEnable(GL_BLEND); glEnable(GL_TEXTURE_2D); // activate pointer to vertex & texture array glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); My drawing code is called by an NSTimer every 1/60 s. Here is the drawing code of my world object: - (void) draw:(NSRect)rect withTimedDelta:(double)d { GLint *t; glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); for (int x=0; x<[_map getWidth] ; x++) { for (int y=0; y<[_map getHeight] ; y++) { GLint v[] = { 16*x ,16*y, 16*x+16,16*y, 16*x+16,16*y+16, 16*x ,16*y+16 }; t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } } } (_textureManager is a singleton, only loading each texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls: - (void) drawWithTimedDelta:(double)d { GLint *t; GLint v[] = { 16*xpos ,16*ypos, 16*xpos+16,16*ypos, 16*xpos+16,16*ypos+16, 16*xpos ,16*ypos+16 }; glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]); t=[_textureManager getBlockWithNumber:12]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is drawn), but the following call for all the objects ONLY draws the objects; the rest of the scene turns black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks PS: Here is the github link to the project. It may not be in sync with my post here, but it may help with more in-depth analysis.
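
    One hedged thing to check, based only on the initialization code quoted above: GL_BLEND is enabled but no blend function is ever set, and the initial blend function is (GL_ONE, GL_ZERO), which effectively disables blending, so any transparent texels drawn in the second pass are written over the map as opaque colour. A typical straight-alpha setup would be:

        glEnable(GL_BLEND);
        // Initial blend func is (GL_ONE, GL_ZERO), i.e. source replaces destination outright;
        // switch to standard (non-premultiplied) alpha blending:
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);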

    Read the article
