Search Results

Search found 1000 results on 40 pages for 'scene'.


  • Day 5 - Tada! My Game Menu Screen Graphics

    - by dapostolov
    So, tonight I took some time to mash up some graphics for my game menu screen. My artistic talent sucks... but here goes nothing... voila, my menu screen!! The Menu Screen The screen above is displaying 4 sprites, even though it looks like maybe 7... I guess one of the first things for me to test in the future is... is it more memory efficient (and better for frame rate) to draw one big background image OR to paint the screen black and place each sprite in set locations? To display the 4 sprites above, I borrowed my code from yesterday... I know, tacky, but... I wanted to see it, feel it. Do you feel it? FEEL IT! (homer voice & shakes fist) Note: the menu items won't scale properly as the code stands; well, pretty much they do nothing except look pretty... Paint.Net & Google Fun So how did I create that image above? Well, to create the background and 3 menu items I used Paint.Net. Basically, I scoured Google Images for: a stone doorway, a stone pillar, an old book, a wizard's hat, and... that's pretty much it! I'll let you type in those searches and see if you can locate the images I used. I know, bad developer... but I figured since I modified the images considerably it doesn't count... well, for a personal project it shouldn't count... *shrug* Anyhow, I extracted each key asset I wanted from each image and applied lots of matting, blurring, color changes, glow effects and such. Then, using my vivid imagination, I placed / composed each of the layered assets into the mashed-up "scene" above. Pretty cool, eh? Hey, did you know the cool mist effect is actually a fire rendition in Paint.Net? I set it to black & white with the opacity set next to nothing. I'm also very proud of the yellow "light" in the stone doorway. I drew that in and then applied Gaussian blur to give it the effect of light creeping out around the door and into the room... heheh. So did I achieve the dark, mysterious ritual feel I described in my design doc? I think I had a great stab at it! Maybe down the road I can get a real artist to crank out some quality graphics for the game... =) So, What's Next? Well, I don't have that animated brazier yet... however, I thought it would be even cooler if I could get that door pulsing that yellow light, and it would be extremely cool to have the smoke / mist moving across the screen! Make the creative ideas stop!! (clutches head) haha! I'm having great fun working on this project =) I recommend others give something like this a try; it's really fulfilling. OK. Tomorrow... I think I'm going to start creating some game / menu objects as per the design doc, maybe even get a custom mouse cursor up on the screen and handle a couple of mouse events, and lastly, maybe a feature to toggle a framerate display... D.

    Read the article

  • Come see us at JavaU at JavaOne!

    - by tmcginn
    In just a little under a month, JavaOne will be in full swing (no pun intended) and thousands of Java developers will gather to hear the latest Java news, immerse themselves in Java technology and learn some new things. This year, I am fortunate enough to be able to attend, along with my Java curriculum development colleagues Matt Heimer and Mike Williams. We start our week at JavaOne teaching a one-day session at JavaU on Sunday morning. If you have never attended a training session through JavaU, you should check it out. There are some terrific sessions this year, and it might help to justify your trip to JavaOne if you can say it was for training! This year I am teaching a one-day session on Java SE 7 New Features - a great session for anyone interested in the specific details of what is new in Java SE 7. Matt is teaching a one-day session on Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence 2.0 API, and Mike is doing a one-day session on developing Rich Client applications with Java SE 7 using JavaFX 2. I asked Matt and Mike to tell me what developers can expect from their sessions. Matt: "My session will get you up to speed on everything you need to know to create portable Java EE 6 applications using EJB 3.1 and JPA 2. I am going to cover why everyone can benefit from using EJBs (and why developers should relearn them if they haven't looked at them for years). Students who attend my session will see JPA examples showcasing how to use relational databases in enterprise applications without programming to JDBC and without writing SQL statements. EJB and JPA benefit from being paired together, so I will also show how transaction management is easier in a container. I encourage students to bring a laptop and code as they learn!" Mike: "My session covers how to develop a rich client application using JavaFX 2. Starting with the basic concepts of JavaFX, students will see how a JavaFX application is built from its layout, to its controls, to its data structures. In addition, more advanced controls like charts, smart tables, and transitions will be added to the application. Finally, a quick review of JavaFX concurrency and data binding is included. Blended with the core concepts, the session will include some of the latest JavaFX technology. This includes using Scene Builder to create a JavaFX UI and connecting your XML UI definition to Java code. In addition, packaging of the JavaFX application will be covered with some examples of the new native packaging features." As I mentioned, my session covers the changes in Java SE 7, including the language changes that were voted into Java SE 7 from Project Coin. I will also look at how you can take advantage of the new I/O library (NIO.2) for writing applications that work with files, directories and file systems. We will also look at the changes in asynchronous I/O that are a part of NIO.2. We will spend some time looking at the changes to the Java Virtual Machine as well, including support for dynamically typed languages (JSR-292). We will spend some time looking at the Java concurrency enhancements (JSR-166), including the new Fork/Join framework. And we'll round out the day with a look at changes in Swing, XML and a number of smaller changes in the APIs. And, if these topics aren't grabbing your interest, take a look at the other 10 sessions that range from topics on architecture to how to pass the Oracle Certified Programmer I and II exams. 
See you soon!

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when interacting with Hadoop – Sqoop and Zookeeper. What is Sqoop? Most businesses store their data in an RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing and to return it back to the RDBMS from Hadoop. The data movement can happen in real time or at various intervals in bulk. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL-to-Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and loads it into HDFS. Essentially it is an ETL tool: it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it can also extract data from Hadoop and load it back into relational (RDBMS) data stores. Essentially, Sqoop is a command line tool which does SQL to Hadoop and Hadoop to SQL. It is a command line interpreter. It creates MapReduce jobs behind the scenes to import data from an external database into HDFS. It is a very effective and easy-to-learn tool for non-programmers. What is Zookeeper? ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words – in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails for any reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for. Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster. When any process in the Hadoop cluster needs certain configuration to complete its task, Zookeeper makes sure that node gets the necessary configuration consistently. If the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected. There are many other tasks Zookeeper performs when it comes to Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not possible to design a new fault-tolerant distributed application. Tomorrow In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Big Data Analytics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
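    For a concrete feel of what this looks like in practice, here is a minimal, hypothetical pair of Sqoop commands for moving a table both ways; the connection string, database, table and directory names are placeholders for illustration, not from the article:

        # Import a table from MySQL into HDFS (names are placeholders)
        sqoop import --connect jdbc:mysql://dbhost/sales --username etl_user -P \
          --table orders --target-dir /user/hadoop/orders -m 4

        # Export processed results from HDFS back to the RDBMS
        sqoop export --connect jdbc:mysql://dbhost/sales --username etl_user -P \
          --table orders_summary --export-dir /user/hadoop/orders_summary

    Behind each command Sqoop generates the MapReduce job described above; the -m flag controls how many parallel map tasks do the transfer.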

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to: refactoring of the code changed by the work item refactoring code neighboring the code changed by the item re-architecting the larger code area around the ticket. For example if an item has you changing a single function, you realize the entire class now could be redone to better accommodate this change. improving the UI on a form you just modified When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included: We plan for "padding" at the end of the sprint to account for this sort of thing. Always leave the code in better shape than you found it. Don't check in half-assed work. If we leave refactoring for later, it's hard to schedule and may never get done. You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later. We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include: Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible. The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog. There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week. It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless. In some cases, the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide some cut off, like if the additional stuff is less than 2 FP, it goes in the same ticket, if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact. 1. Location In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores. 2. Computer Vision In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their RedLaser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture. 3. Augmented Reality Once the mobile phone had GPS, a video camera, and compass functionality it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog. 4. Geo-Fencing So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of The North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal. 5. Digital Wallet Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS. 6. Voice Response Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened up we could see lots more innovation in this area.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar. Aspect ratio is very important to home video. What is aspect ratio? The ratio of the width to the height of the image. 2.35:1 means the image is 2.35 times as wide as it is high; Pixar uses this for half of its movies, and it is called a widescreen image. When a widescreen film is modified to fit your television screen, they cut it to fit the box of your screen, and when you compare the two versions, huge chunks of the picture are missing; it is harder to tell what is going on when those pieces are missing. The whole is greater than the pieces themselves: if you are missing pieces, you are missing the movie. The soul and the mood are in the film's shots, and by cutting it to fit a screen you are losing 30% of the movie.
    Why different aspect ratios? Film before the 1950s was 1.33:1, the Academy Standard, though early on there was no standard at all; Thomas Edison developed projecting images onto a wall/screen but didn't patent it, as he saw no value in it. Then 1.37:1 came about to add a strip for sound; this is the same size as 35mm film. Around 1952 TV came along, and NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat it, along with early 3D and widescreen (CinemaScope). Studios at the time made movies bigger and bigger; there was a Napoleon movie that was actually 4:1... really wide. Today there are two common formats, 1.85:1 (Academy Flat) and 2.35:1 (Anamorphic Scope, aka Panavision/CinemaScope); almost all movies are made in these two aspect ratios, and Pixar has done half in one and half in the other. Why choose one over the other? Artist choice: it is part of the story the director wants to tell.
    Can we preserve the story outside of the theater? TVs before 1998 were very square; now TVs are very wide. Historical options: Toy Story was released as-is and was cut in a way the studio didn't like. Pan and scan is another option: cut, then scan left or right depending on where the action is. Frame height: Pixar can go back and animate more picture to account for the top/bottom bars, but you end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended. Re-staging: for animated movies you can move the characters around and restage the scene; it is a completely new, different version of the film. This was the best option Pixar came up with, but they have pretty much stopped doing it because the demand has dropped off.
    Why not 1.33 today? There has been an evolution of taste and demands. VHS was linear; the focus was portability rather than quality, and most releases were pan and scan with quality so bad that people didn't notice. DVD was introduced in 1996; a disc could hold more content, so you could have two versions of the film, the widescreen version and the 1.33 version, and people realized they were seeing more of the movie with the widescreen one. High-definition televisions (16x9 monitors) arrived in 2005 and Blu-ray Disc in 2006, all widescreen. You cannot find a square TV anymore; TVs are roughly 1.85:1, and demand has changed: users are used to black bars and to widescreen; users are educated now. What's next for in-flight entertainment? High-def IFE, personal electronic devices, and 3D in flight.
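    As a rough check on the "losing 30% of the movie" figure (my own arithmetic, not from the talk): cropping a 2.35:1 frame to fill a 16x9 (about 1.78:1) television keeps only 1.78 / 2.35 ≈ 76% of the picture width, so roughly a quarter of each frame is discarded; cropping the same frame to an old 4x3 (1.33:1) set keeps 1.33 / 2.35 ≈ 57%, throwing away more than 40% of the image, which is why pan-and-scan transfers feel so much tighter than the theatrical composition.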

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s developing platform. Much effort is being done at language level today than previous versions of .NET. In these post series I’ll review some major features contained in this release: Asynchronous functions TPL Dataflow Task based asynchronous Pattern Part I: Asynchronous Functions This is a mean of expressing asynchronous operations. This kind of functions must return void or Task/Task<> (functions returning void let us implement Fire & Forget asynchronous operations). The two new keywords introduced are async and await. async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operations. This keyword on its own does not make a function asynchronous thought, its nature depends on its implementation. await: allows us to define operations inside a function that will be awaited for continuation (more on this later). Async function sample: Async/Await Sample async void ShowDateTimeAsync() {     while (true)     {         var client = new ServiceReference1.Service1Client();         var dt = await client.GetDateTimeTaskAsync();         Console.WriteLine("Current DateTime is: {0}", dt);         await TaskEx.Delay(1000);     } } The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh user’s UI. Note the async and await functions working together. The ShowDateTimeAsync method indicate its asynchronous nature to the caller using the keyword async (that it may complete after returning control to its caller). The await keyword indicates the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scene and it is done by C#/VB compilers. Note how we didn’t use any of the regular existing async patterns and we’ve defined the method very much like a synchronous one. Now, compare the following code snippet  in contrast to the previuous async/await: Traditional UI Async void ComplicatedShowDateTime() {     var client = new ServiceReference1.Service1Client();     client.GetDateTimeCompleted += (s, e) =>     {         Console.WriteLine("Current DateTime is: {0}", e.Result);         client.GetDateTimeAsync();     };     client.GetDateTimeAsync(); } The previous implementation is somehow similar to the first shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application, this is just an example).  How it works? Using an state workflow (or jump table actually), the compiler expands our code and create the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new Async/Await pattern is to let us think and code as we normally do when designing and algorithm. 
It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
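    To round out the description of the allowed return types, here is a small sketch of my own (not from the CTP samples) of an async function that returns Task<string> instead of void, so the caller can await its result; the method names are made up for illustration:

        async Task<string> GetGreetingAsync(string name)
        {
            // Simulate an asynchronous call (TaskEx.Delay in the Async CTP,
            // Task.Delay in later versions of .NET).
            await TaskEx.Delay(500);
            return "Hello, " + name;
        }

        async void ShowGreetingAsync()
        {
            // The await unwraps the Task<string> into a plain string.
            string greeting = await GetGreetingAsync("world");
            Console.WriteLine(greeting);
        }

    Returning Task or Task<T> rather than void lets callers compose and await the operation, whereas void-returning async functions are limited to the fire-and-forget scenario described above.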

    Read the article

  • Take Snapshots of Your Favorite Movie Scenes in VLC

    - by DigitalGeekery
    Have you ever wanted to grab a screenshot of your favorite TV or movie scene? Today we'll show you how to do so with VLC Media Player. If you don't have it already, download and install the latest version of VLC (link below). Start playing your movie, and to grab a snapshot, select Video from the menu and click Snapshot. When you take a snapshot, by default a preview is displayed at the top left and the folder where the file is saved is briefly displayed on the screen. If you enable the Advanced Controls, you can take a snapshot with a click of a button, and advance the video frame by frame to get a more accurate shot. To enable the Advanced Controls, select View and Advanced Controls. You'll see the Advanced Controls buttons appear below the slider. Now just click on the Snapshot button to grab an image. You can more easily control the frame you wish to grab by pressing the Frame by Frame button. You can pause the movie when it is near the perfect spot for your snapshot, and then press the Frame by Frame button to advance a single frame at a time. By default, the snapshots are saved as PNG files in your My Pictures folder in Windows. You can change those settings in the Preferences. First, you'll need to select All under Show settings at the bottom. Then click on Video on the left. Scroll down a bit and you'll see the Snapshot section. Here you can change the format from PNG to JPG, change the directory to which the snapshots are stored, turn on and off the preview, and change the filename prefix. Click Save when finished. Now you have nice screenshots of your favorite movie to display as you wish... such as a desktop background! VLC is an excellent media player that runs on Windows, Mac, and Linux. In addition to playing almost any media file, it also makes grabbing screenshots of your videos a breeze. Want to know more about VLC? Check out some of our previous articles like how to rip DVDs and how to set a video as your desktop wallpaper. Download the latest version of VLC.

    Read the article

  • Viewport / Camera Calculation in 2D Game

    - by Dave
    we have a 2D game with some sprites and tiles and some kind of camera/viewport, that "moves" around the scene. so far so good, if we wouldn't had some special behaviour for your camera/viewport translation. normally you could stick the camera to your player figure and center it, resulting in a very cheap, undergraduate, translation equation, like : vec_translation -/+= speed (depending in what keys are pressed. WASD as default.) buuuuuuuuuut, we want our player figure be able to actually reach the bounds, when the viewport/camera has reached a maximum translation. we came up with the following solution (only keys a and d are the shown here, the rest is just adaption of calculation or maybe YOUR super-cool and elegant solution :) ): if(keys[A]) { playerX -= speed; if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } else if(playerScreenX <= width / 2 && (tx) >= 0) { playerScreenX -= speed; tx = 0; if(playerScreenX < 0) playerScreenX = 0; } else if(playerScreenX >= width / 2 && (tx) < 0) { playerScreenX -= speed; } } if(keys[D]) { playerX += speed; if(playerScreenX >= width / 2 && (-tx + width) < sceneWidth) { playerScreenX = width / 2; tx -= speed; } if(playerScreenX >= width / 2 && (-tx + width) >= sceneWidth) { playerScreenX += speed; tx = -(sceneWidth - width); if(playerScreenX >= width - player.width) playerScreenX = width - player.width; } if(playerScreenX <= width / 2 && (-tx + width) < sceneWidth) { playerScreenX += speed; } } i think the code is rather self explaining: keys is a flag container for currently active keys, playerX/-Y is the position of the player according to world origin, tx/ty are the translation components vital to background / npc / item offset calculation, playerOnScreenX/-Y is the actual position of the player figure (sprite) on screen and width/height are the dimensions of the camera/viewport. this all looks quite nice and works well, but there is a very small and nasty calculation error, which in turn sums up to some visible effect. let's consider following piece of code: if(playerScreenX <= width / 2 && tx < 0) { playerScreenX = width / 2; tx += speed; } it can be translated into plain english as : if the x position of your player figure on screen is less or equal the half of your display / camera / viewport size AND there is enough space left LEFT of your viewport/camera then set players x position on screen to width half, increase translation (because we subtract the translation from something we want to move). easy, right?! doing this will create a small delta between playerX and playerScreenX. after so much talking, my question appears now here at the bottom of this document: how do I stick the calculation of my player-on-screen to the actual position of the player AND having a viewport that is not always centered aroung the players figure? here is a small test-case in processing: http://pastebin.com/bFaTauaa thank you for reading until now and thank you in advance for probably answering my question.
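    For comparison, a common way to avoid the per-key special cases (and the small drift between playerX and playerScreenX) is to move the player only in world coordinates, clamp the camera to the scene bounds, and derive the on-screen position and the translation from that. A minimal C#-style sketch using the question's variable names (cameraX is mine, not from the original code):

        // Move the player purely in world coordinates and clamp to the scene.
        if (keys[A]) playerX -= speed;
        if (keys[D]) playerX += speed;
        playerX = Math.Max(0, Math.Min(playerX, sceneWidth - player.width));

        // Center the camera on the player, then clamp the camera to the scene.
        float cameraX = playerX + player.width / 2f - width / 2f;
        cameraX = Math.Max(0, Math.Min(cameraX, sceneWidth - width));

        // Everything else is derived, so nothing can drift out of sync.
        tx = -cameraX;                        // translation for background / NPCs / items
        playerScreenX = playerX - cameraX;    // where the player sprite is drawn on screen

    Because playerScreenX and tx are recomputed from playerX and cameraX every frame, the accumulated delta described in the question cannot appear; the same pattern extends directly to the Y axis.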

    Read the article

  • Help understand GLSL directional light on iOS (left handed coord system)

    - by Robse
    I now have changed from GLKBaseEffect to a own shader implementation. I have a shader management, which compiles and applies a shader to the right time and does some shader setup like lights. Please have a look at my vertex shader code. Now, light direction should be provided in eye space, but I think there is something I don't get right. After I setup my view with camera I save a lightMatrix to transform the light from global space to eye space. My modelview and projection setup: - (void)setupViewWithWidth:(int)width height:(int)height camera:(N3DCamera *)aCamera { aCamera.aspect = (float)width / (float)height; float aspect = aCamera.aspect; float far = aCamera.far; float near = aCamera.near; float vFOV = aCamera.fieldOfView; float top = near * tanf(M_PI * vFOV / 360.0f); float bottom = -top; float right = aspect * top; float left = -right; // projection GLKMatrixStackLoadMatrix4(projectionStack, GLKMatrix4MakeFrustum(left, right, bottom, top, near, far)); // identity modelview GLKMatrixStackLoadMatrix4(modelviewStack, GLKMatrix4Identity); // switch to left handed coord system (forward = z+) GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeScale(1, 1, -1)); // transform camera GLKMatrixStackMultiplyMatrix4(modelviewStack, GLKMatrix4MakeWithMatrix3(GLKMatrix3Transpose(aCamera.orientation))); GLKMatrixStackTranslate(modelviewStack, -aCamera.position.x, -aCamera.position.y, -aCamera.position.z); } - (GLKMatrix4)modelviewMatrix { return GLKMatrixStackGetMatrix4(modelviewStack); } - (GLKMatrix4)projectionMatrix { return GLKMatrixStackGetMatrix4(projectionStack); } - (GLKMatrix4)modelviewProjectionMatrix { return GLKMatrix4Multiply([self projectionMatrix], [self modelviewMatrix]); } - (GLKMatrix3)normalMatrix { return GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3([self modelviewProjectionMatrix]), NULL); } After that, I save the lightMatrix like this: [self.renderer setupViewWithWidth:view.drawableWidth height:view.drawableHeight camera:self.camera]; self.lightMatrix = [self.renderer modelviewProjectionMatrix]; And just before I render a 3d entity of the scene graph, I setup the light config for its shader with the lightMatrix like this: - (N3DLight)transformedLight:(N3DLight)light transformation:(GLKMatrix4)matrix { N3DLight transformedLight = N3DLightMakeDisabled(); if (N3DLightIsDirectional(light)) { GLKVector3 direction = GLKVector3MakeWithArray(GLKMatrix4MultiplyVector4(matrix, light.position).v); direction = GLKVector3Negate(direction); // HACK -> TODO: get lightMatrix right! transformedLight = N3DLightMakeDirectional(direction, light.diffuse, light.specular); } else { ... } return transformedLight; } You see the line, where I negate the direction!? I can't explain why I need to do that, but if I do, the lights are correct as far as I can tell. Please help me, to get rid of the hack. I'am scared that this has something to do, with my switch to left handed coord system. My vertex shader looks like this: attribute highp vec4 inPosition; attribute lowp vec4 inNormal; ... uniform highp mat4 MVP; uniform highp mat4 MV; uniform lowp mat3 N; uniform lowp vec4 constantColor; uniform lowp vec4 ambient; uniform lowp vec4 light0Position; uniform lowp vec4 light0Diffuse; uniform lowp vec4 light0Specular; varying lowp vec4 vColor; varying lowp vec3 vTexCoord0; vec4 calcDirectional(vec3 dir, vec4 diffuse, vec4 specular, vec3 normal) { float NdotL = max(dot(normal, dir), 0.0); return NdotL * diffuse; } ... 
vec4 calcLight(vec4 pos, vec4 diffuse, vec4 specular, vec3 normal) { if (pos.w == 0.0) { // Directional Light return calcDirectional(normalize(pos.xyz), diffuse, specular, normal); } else { ... } } void main(void) { // position highp vec4 position = MVP * inPosition; gl_Position = position; // normal lowp vec3 normal = inNormal.xyz / inNormal.w; normal = N * normal; normal = normalize(normal); // colors vColor = constantColor * ambient; // add lights vColor += calcLight(light0Position, light0Diffuse, light0Specular, normal); ... }

    Read the article

  • Why RenderTarget2D overwrites other objects when trying to put some text in a model?

    - by cad
    I am trying to draw an object composited by two cubes (A & B) (one on top of the other, but for now I have them a little bit more open). I am able to do it and this is the result. (Cube A is the blue and Cube B is the one with brown text that comes from a png texture) But I want to have any text as parameter in the cube B. I have tried what @alecnash suggested in his question, but for some reason when I try to draw cube B, cube A dissapears and everything turns purple. This is my draw code: public void Draw(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, Matrix viewMatrix, Matrix projectionMatrix) { graphicsDevice.BlendState = BlendState.Opaque; graphicsDevice.DepthStencilState = DepthStencilState.Default; graphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; graphicsDevice.SamplerStates[0] = SamplerState.LinearClamp; // CUBE A basicEffect.View = viewMatrix; basicEffect.Projection = projectionMatrix; basicEffect.World = Matrix.CreateTranslation(ModelPosition); basicEffect.VertexColorEnabled = true; foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes) { pass.Apply(); drawCUBE_TOP(graphicsDevice); drawCUBE_Floor(graphicsDevice); DrawFullSquareStripesFront(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage); DrawFullSquareStripesLeft(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage); DrawFullSquareStripesRight(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage); DrawFullSquareStripesBack(graphicsDevice, _numStrips, Color.Red, Color.Blue, _levelPercentage); } // CUBE B // Set the World matrix which defines the position of the cube texturedCubeEffect.World = Matrix.CreateTranslation(ModelPosition); // Set the View matrix which defines the camera and what it's looking at texturedCubeEffect.View = viewMatrix; // Set the Projection matrix which defines how we see the scene (Field of view) texturedCubeEffect.Projection = projectionMatrix; // Enable textures on the Cube Effect. this is necessary to texture the model texturedCubeEffect.TextureEnabled = true; Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite); texturedCubeEffect.Texture = a; //texturedCubeEffect.Texture = cubeTexture; // Enable some pretty lights texturedCubeEffect.EnableDefaultLighting(); // apply the effect and render the cube foreach (EffectPass pass in texturedCubeEffect.CurrentTechnique.Passes) { pass.Apply(); cubeToDraw.RenderToDevice(graphicsDevice); } } private Texture2D SpriteFontTextToTexture(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch, SpriteFont font, string text, Color backgroundColor, Color textColor) { Vector2 Size = font.MeasureString(text); RenderTarget2D renderTarget = new RenderTarget2D(graphicsDevice, (int)Size.X, (int)Size.Y); graphicsDevice.SetRenderTarget(renderTarget); graphicsDevice.Clear(Color.Transparent); spriteBatch.Begin(); //have to redo the ColorTexture //spriteBatch.Draw(ColorTexture.Create(graphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White); spriteBatch.DrawString(font, text, Vector2.Zero, textColor); spriteBatch.End(); graphicsDevice.SetRenderTarget(null); return renderTarget; } The way I generate texture with dynamic text is: Texture2D a = SpriteFontTextToTexture(graphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite); After commenting several parts to see what caused the problem, it seems to be located in this line graphicsDevice.SetRenderTarget(renderTarget);
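    One likely explanation (an assumption based on XNA 4.0's default render-target behaviour, not something stated in the question): render targets and the back buffer default to discard-contents semantics, so switching to the text render target in the middle of Draw and then back to the back buffer throws away what was already drawn there, which would account for cube A disappearing and the purple clear colour. A minimal sketch of the usual workaround is to build the text texture before the scene is drawn, for example once up front; the method and field names below are illustrative and follow the question's code rather than any official API surface:

        Texture2D cachedTextTexture;

        protected override void LoadContent()
        {
            // Do the render-target pass up front, so no target switch
            // happens while the main scene is being drawn.
            cachedTextTexture = SpriteFontTextToTexture(
                GraphicsDevice, spriteBatch, arialFont, "TEST ", Color.Black, Color.GhostWhite);
        }

        // In Draw(), reuse the cached texture instead of re-rendering it:
        // texturedCubeEffect.Texture = cachedTextTexture;

    If the text genuinely has to change every frame, the render-to-texture step can instead be moved to the very top of Draw, before anything is drawn to the back buffer.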

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile based) 2D platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. There are 2 forms of collision between player entities as I can tell, either the two players (human controlled) are hitting each other side-to-side (i. e. pushing against one another), or one player has jumped on the head of the other player (naturally, if I wanted to expand this to player vs enemy, the effects would be different, but the types of collisions would be identical, just the reaction should be a little different). In my code I believe I've got the side-to-side code working: If two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is, if the two players are currently pushing up against each other, and one player jumps, then at one point as they're jumping, the height-difference threshold that counts as a "land on head" is passed and then it registers as a jump. As a life-long player of 2D Mario Bros style games, this feels incorrect to me, but I can't quite figure out how to solve it. My code: (it's really Objective-C but I've put it in pseudo C-style code just to be simpler for non ObjC readers) void checkCollisions() { // For each entity in the scene, compare it with all other entities (but not with one it's already compared against) for (int i = 0; i < _allGameObjects.count(); i++) { // GameObject is an Entity GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i); // Don't check against yourself or any previous entity for (int j = i+1; j < _allGameObjects.count(); j++) { GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j); // Get the collision bounds for both entities, then see if they intersect // CGRect is a C-struct with an origin Point (x, y) and a Size (w, h) CGRect firstRect = firstGameObject.collisionBounds(); CGRect secondRect = secondGameObject.collisionBounds(); // Collision of any sort if (CGRectIntersectsRect(firstRect, secondRect)) { //////////////////////////////// // // // Check for jumping first (???) // // //////////////////////////////// if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) { // the top entity could be pretty far down/in to the bottom entity.... firstGameObject.didLandOnEntity(secondGameObject); } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) { // second entity was actually on top.... 
secondGameObject.didLandOnEntity.(firstGameObject); } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) { // Hit from the RIGHT CGRect intersection = CGRectIntersection(firstRect, secondRect); // The NUDGE just offsets either object back to the left or right // After the nudging, they are exactly pressing against each other with no intersection firstGameObject.nudgeToRightOfIntersection(intersection); secondGameObject.nudgeToLeftOfIntersection(intersection); } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) { // hit from the LEFT CGRect intersection = CGRectIntersection(firstRect, secondRect); secondGameObject.nudgeToRightOfIntersection(intersection); firstGameObject.nudgeToLeftOfIntersection(intersection); } } } } } I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could happen from an angle (instead of if the falling player had been at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new for me). Thanks in advance!
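    For what it's worth, a common alternative to the fixed 70%-height threshold is to classify each contact by its axis of least penetration: compute how deeply the two rectangles overlap on X and on Y, and treat the contact as a landing or head-bump only when the vertical overlap is the smaller one. A rough C#-style sketch, assuming y grows upward and with property and method names that are illustrative rather than taken from the code above:

        float overlapX = Math.Min(a.Right, b.Right) - Math.Max(a.Left, b.Left);
        float overlapY = Math.Min(a.Top, b.Top) - Math.Max(a.Bottom, b.Bottom);

        if (overlapY <= overlapX)
        {
            // Shallow vertical overlap: someone landed on (or bumped) someone.
            if (a.CenterY > b.CenterY) entityA.DidLandOnEntity(entityB);
            else                       entityB.DidLandOnEntity(entityA);
        }
        else
        {
            // Shallow horizontal overlap: a side-to-side push; separate along X.
            if (a.CenterX < b.CenterX) { entityA.NudgeLeft(overlapX / 2); entityB.NudgeRight(overlapX / 2); }
            else                       { entityA.NudgeRight(overlapX / 2); entityB.NudgeLeft(overlapX / 2); }
        }

    Combining this with a check that the upper entity is actually moving downward keeps a player who jumps while pressed against a neighbour from registering a "landing" on the way up.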

    Read the article

  • Make the Future Java – a JavaOne 2012 interview (Japanese-language original)

    - by ???02
    A Japanese-language interview conducted around JavaOne 2012 (September 30 to October 4, 2012); only fragments of the original text survive. Recoverable topics include: the "Make the Future Java" theme of the conference; a conversation with an Oracle Fusion Middleware executive about the three Java platforms (Java SE, Java EE, Java ME); Java SE 8 and Java SE 9 plans, including the Nashorn JavaScript engine (built on Java SE 7's InvokeDynamic and replacing Rhino) and Project Jigsaw; JavaFX as the UI toolkit following AWT and Swing, JavaFX Scene Builder (including a build for Linux), and JavaFX for ARM; the Java EE 7 / Java EE 8 roadmap with HTML5 support, a JMS update, and CDI; NetBeans 7.3 and Project Easel (HTML5/CSS3/JavaScript tooling); Project Sumatra (GPU offload for the JVM, with AMD and OpenJDK); Project Avatar (JavaScript services alongside Java EE, discussed in relation to Node.js); and Liquid Robotics' Java-powered ocean-going robots.

    Read the article

  • Using WPF and SlimDx (DirectX 10/11)

    - by slurmomatic
    I am using SlimDX with WinForms for a while now, but want to make the switch to WPF now. However, I can't figure out how to get DX10/11 working with WPF. The February release of SlimDX provides a WPF example, which only works with DX 9 though. I found the following solution: http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/437/Direct3D-10-11-Direct2D-in-WPF.aspx but can't get it to work with SlimDX. My main problem is the shared resource handle as I don't know how to retrieve the shared handle from a SlimDX texture. I can't find any information to this topic. In C++ the code looks like this: HRESULT D3DImageEx::GetSharedHandle(IUnknown *pUnknown, HANDLE * pHandle) { HRESULT hr = S_OK; *pHandle = NULL; IDXGIResource* pSurface; if (FAILED(hr = pUnknown->QueryInterface(__uuidof(IDXGIResource), (void**)&pSurface))) return hr; hr = pSurface->GetSharedHandle(pHandle); pSurface->Release(); return hr; } Basically, what I want to do (because I think that this is the solution), is to share a texture between a Direct3d9DeviceEx (for rendering the WPF D3DImage) and a Direct3d10Device (a texture render target for my scene). Any pointers in the right direction are greatly appreciated.

    Read the article

  • iPod/iPhone OpenGL ES UIView flashes when updating

    - by Dave Viner
    I have a simple iPhone application which uses OpenGL ES (v1) to draw a line based on the touches of the user. In the XCode Simulator, the code works perfectly. However, when I install the app onto an iPod or iPhone, the OpenGL ES view "flashes" when drawing the line. If I disable the line drawing, the flash disappears. By "flash", I mean that the background image (which is an OpenGL texture) disappears momentarily, and then reappears. It appears as if the entire scene is completely erased and redrawn. The code which handles the line drawing is the following: renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end { static GLfloat* vertexBuffer = NULL; static NSUInteger vertexMax = 64; NSUInteger vertexCount = 0, count, i; //Allocate vertex array buffer if(vertexBuffer == NULL) vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat)); //Add points to the buffer so there are drawing points every X pixels count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1); for(i = 0; i < count; ++i) { if(vertexCount == vertexMax) { vertexMax = 2 * vertexMax; vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat)); } vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count); vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count); vertexCount += 1; } //Render the vertex array glVertexPointer(2, GL_FLOAT, 0, vertexBuffer); glDrawArrays(GL_POINTS, 0, vertexCount); //Display the buffer [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } (This function is based on the function of the same name from the GLPaint sample application.) For the life of me, I can not figure out why this causes the screen to flash. The line is drawn properly (both in the Simulator and in the iPod). But, the flash makes it unusable. Anyone have ideas on how to prevent the "flash"?

    Read the article

  • Java (Tomcat): how to configure a cookieless subdomain to serve static content

    - by Webinator
    One of the tips given by both Google and Yahoo! to speed up webpage loading is to configure a cookieless subdomain to serve static content. How do you configure a "cookieless subdomain" using Tomcat in standalone mode (this question is not about how to use Apache to serve static content in a cookieless way, but about how to do it with Tomcat standalone)? Note that I don't care about filters supporting If-Modified-Since nor care about filters supporting gzipping: the static content I'm serving is forever cacheable (or its name will change) and it is already compressed data (so gzip would only slow down the transfer). Do I need two different Tomcat webapps? (one "cookiefull" and one "cookieless") Do I need two different servlets? (as of now I've got only one dispatcher/controller servlet). Why would a "regular" link to, say, a static image be called in a cookiefull way when it is on the same domain as the main webapp and then be called in a "cookieless" way when it is on a subdomain? I don't understand exactly what is going on: is it the browser that decides whether or not to append cookies to the request? If so, why would it not append the cookies to a static request on a "cookieless" subdomain? Any example as to what is going on behind the scenes is most welcome :)
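    To make the question more concrete, here is one way this is often set up in a standalone Tomcat, sketched from memory rather than taken from any particular deployment (hostnames and paths are placeholders): add a second <Host> in server.xml for the static subdomain and give its context cookies="false" so Tomcat never issues a JSESSIONID cookie for it. Cookies are scoped by the browser to the domain (and path) they were set for, so as long as the main application's cookies are not set on the parent domain, the browser will not send them with requests to the static subdomain.

        <!-- inside <Engine> in conf/server.xml; names are placeholders -->
        <Host name="static.example.com" appBase="static-webapps"
              unpackWARs="true" autoDeploy="true">
          <!-- cookies="false" stops Tomcat from creating session cookies here -->
          <Context path="" docBase="static" cookies="false" />
        </Host>

    With this layout the same standalone Tomcat serves both hosts; the static webapp contains only files (no servlets needed), and the main dispatcher/controller servlet stays untouched on the primary host.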

    Read the article

  • WPF 3D extrude "a bitmap"

    - by Bgnt44
    Hi, I'm looking for a way to simulate a projector in wpf 3D : i've these "in" parameters : beam shape : a black and white bitmap file beam size ( ex : 30 °) beam color beam intensity ( dimmer ) projector position (x,y,z) beam position (pan(x),tilt(y) relative to projector) First i was thinking of using light object but it seem that wpf can't do that So, now i think that i can make for each projector a polygon from my bitmap... First i need to convert the black and white bitmap to vector. Only Simple shape ( bubble, line,dot,cross ...) Is there any WPF way to do that ? Or maybe a external program file (freeware); then i need to build the polygon, with the shape of the converted bitmap , color , size , orientation in parameter. i don't know how can i defined the lenght of the beam , and if it can be infiny ... To show the beam result, i think of making a room ( floor , wall ...) and beam will end to these wall... i don't care of real light render ( dispersion ...) but the scene render has to be real time and at least 15 times / second (with probably from one to 100 projectors at the same time), information about position, angle,shape,color will be sent for each render... Well so, i need sample for that, i guess that all of these things could be useful for other people If you have sample code : Convert Bitmap to vector Extrude vectors from one point with a angle parameter until collision of a wall Set x,y position of the beam depend of the projector position Set Alpha intensity of the beam, color Maybe i'm totally wrong and WPF is not ready for that , so advise me about other way ( xna,d3D ) with sample of course ;-) Thanks you

    Read the article

  • [SOLVED] How to create a FBO with stencil buffer in OpenGL ES 2.0?

    - by Alphones
    I need stencil buffer on 3GS to render planar shadow, and polygon offset won't work prefect, still has z-fighting problem. So I use stencil buffer to make the shadow correct, it works on win32 gles2 emulator, but not on iPhone. After I added a post effect to the whole scene. The stencil buffer won't work even on win32 gles2 emulator. And I tried to attach a stencil buffer to FBO, buf the screen turns to black. Here's my code, glGenRenderbuffers(1, &dbo); // depth buffer glBindRenderbuffer(GL_RENDERBUFFER, dbo); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, widthGL, heightGL); glGenRenderbuffers(1, &sbo); // stencil buffer glBindRenderbuffer(GL_RENDERBUFFER, sbo); glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, widthGL, heightGL); glGenFramebuffers(1, &fbo); glBindFramebuffer(GL_FRAMEBUFFER, fbo); glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dbo); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sbo); // this make the whole screen black. The eglContext is created with STENCIL_SIZE=8, it works without a RTT. I tried to change the RenderbufferStorage for both depth buffer and stencil buffer, but none of them works. Is there anything I have missed? Does the stencil buffer pack with depth buffer? (I cannot find things like GL_DEPTH24_STENCIL8 ...)

    Read the article

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test winforms / C# / .NET 3.5 application for the system we're developing, and we have found ourselves needing to switch between .config files at runtime, but this is turning out to be a nightmare. Here's the scene: the Winforms application is aimed at testing a WebApp, divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files at runtime, but the problem is: I can programmatically edit the application's .config file any number of times, but the changes only take effect once. I've been searching for a long time for a way to address this problem but still haven't been successful. I know the problem definition may be a bit confusing but I would really appreciate it if someone helped me. Thanks in advance! --- UPDATE 01-06-10 --- There's something I didn't mention before. Originally, our system is a Web Application with WCF calls between each subsystem. For performance testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure the performance of a remote application. --- End Update --- Here's what I'm doing: public void UpdateAppSettings(string key, string value) { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile); foreach (XmlElement item in xmlDoc.DocumentElement) { foreach (XmlNode node in item.ChildNodes) { if (node.Name == key) { node.Attributes[0].Value = value; break; } } } xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile); System.Configuration.ConfigurationManager.RefreshSection("section/subSection"); }
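    As a point of comparison, one approach that avoids rewriting the running application's own .config file at all is to open each subsystem's file explicitly through the Configuration API, so "switching" is just opening a different mapped configuration. A minimal sketch, under the assumption that the subsystem settings can be read through this API rather than the static ConfigurationManager.AppSettings property (the file path and key are placeholders; a reference to System.Configuration.dll is required):

        using System.Configuration;

        // Open an arbitrary .config file by path.
        var map = new ExeConfigurationFileMap { ExeConfigFilename = @"Configs\SubsystemA.config" };
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);

        // Read a value from that file's appSettings section.
        string endpoint = config.AppSettings.Settings["ServiceEndpoint"].Value;

    Each of the 5 files can be opened this way on demand, which sidesteps the caching that makes edits to the live app.config take effect only once.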

    Read the article

  • Extremely slow MFMailComposeViewControllerDelegate

    - by Jeff B
    I have a bit of a strange problem. I am trying to send in-app email, and I am also using Cocos2D. It works, in so far as I get the mail window and I can send mail, but it is extremely slow. It seems to only accept touches every second or so. I checked the CPU usage, and it is quite low. I paused my director, so nothing else should be happening. Any ideas? I am pulling my hair out. I looked at some examples and did the following. Made my scene the mail delegate:

        @interface MyLayer : CCLayer <MFMailComposeViewControllerDelegate> { ... }

    And implemented the following method in the scene:

        -(void) showEmailWindow: (id) sender
        {
            [[CCDirector sharedDirector] pause];
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;
            [picker setSubject: @"My subject here"];
            NSString *emailBody = @"<h1>Here is my email!</h1>";
            [picker setMessageBody:emailBody isHTML:YES];
            [myMail presentModalViewController:picker animated:NO];
            [picker release];
        }

    I also implemented the mailComposeController callback to handle the result.
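
    For context, this kind of sluggish modal on top of a Cocos2D scene is often attributed to the director still driving its draw loop underneath the mail sheet; pausing stops scheduled updates but not the animation timer. The Objective-C below is a commonly suggested variation, sketched here as an assumption rather than a verified fix for this post, and it keeps the question's myMail reference as-is (whatever view controller that stands for).

        // Sketch: stop the director's draw loop while the mail sheet is visible,
        // then restart it when the compose view is dismissed.
        -(void) showEmailWindow:(id)sender
        {
            [[CCDirector sharedDirector] stopAnimation];   // instead of (or in addition to) pause
            MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
            picker.mailComposeDelegate = self;
            [myMail presentModalViewController:picker animated:NO];
            [picker release];
        }

        -(void) mailComposeController:(MFMailComposeViewController *)controller
                  didFinishWithResult:(MFMailComposeResult)result
                                error:(NSError *)error
        {
            [myMail dismissModalViewControllerAnimated:YES];
            [[CCDirector sharedDirector] startAnimation];  // resume rendering
        }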

    Read the article

  • Which CEP product to start with?

    - by Andreas
    Hi, I want to learn more about how to build CEP-based applications, so I looked around and found several products (overview here: http://rulecore.com/CEPblog/?page_id=47). But as there are quite a few at the moment, I don't know which is the best one to start with, and really I would only consider the ones available for free; the rest are a bit too expensive for purely private use ;) Esper is free, but without Esper Studio it seems quite tedious to develop a CEP app. StreamBase offers a free trial, but I couldn't find out how long you can use it (if it's only for a month, that's not very helpful for longer research). The Oracle CEP suite seems quite complete, but in the CEP scene, as far as I can see, it is the least recognized compared to Esper or StreamBase. So do you have any hints on the best way to start with CEP development? Is it worth spending time working through the Oracle documentation, or is it better to start with Esper or StreamBase? Cheers, Andreas

    Read the article

  • How to scale a sprite image without losing color key information?

    - by Michael P
    Hello everyone, I'm currently developing a simple application that displays a map and draws some markers on it. I'm developing for Windows Mobile, so I decided to use the DirectDraw and Imaging interfaces to make the application fast and pretty. The map moves when the user moves a finger on the touchscreen, so the whole map moving/scrolling animation has to be fast, but it is not. On every map update I have to draw a portion of the map, the control buttons, and the markers; buttons and markers are preloaded on a DirectDraw surface as a mipmap, so the only thing I do is BitBlt from the mipmap to a back buffer, and from the back buffer to the primary surface (I can't use page flipping because my application runs in windowed mode). Previously I used a premultiplied-alpha surface with a 32-bit ARGB pixel format for the images mipmap; everything looked good, but drawing the entire "scene" was horribly slow and I could forget about smooth map scrolling. Now I'm using a mipmap with the native (RGB565) pixel format and a fuchsia (0xFF00FF) color key, and drawing is much faster. The mipmap surface is generated when the program loads: images are loaded from files, scaled (with filtering) and drawn onto the mipmap. The problem is that the scaling process blends pixel colors, and the pixels on the border of a sprite region get blended with the surrounding fuchsia pixels, resulting in semi-fuchsia colors that are not treated as the color key. When I blit with the color key option, sprites have small fuchsia-like borders, and it looks really bad. How can I solve this problem? I could use alpha blitting, but it is too slow, even in ARGB 1555 format.
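
    One workaround, shown here purely as an illustration (the helper name, the RGB565 packing and the thresholds are assumptions, not anything from the question), is to keep the filtered scaling but add a post-pass that snaps any pixel contaminated by the key color back to the exact key value, so the color-keyed blit drops it again:

        #include <stdint.h>

        /* RGB565 helpers, assuming the usual RRRRRGGGGGGBBBBB layout. */
        #define RGB565_R(p) (((p) >> 11) & 0x1F)
        #define RGB565_G(p) (((p) >>  5) & 0x3F)
        #define RGB565_B(p) ( (p)        & 0x1F)

        /* Fuchsia 0xFF00FF expressed in RGB565. */
        #define COLOR_KEY_565 0xF81F

        /* Post-pass sketch: pixels that became "mostly fuchsia" during filtered
           scaling are snapped back to the exact key so the keyed blit skips them.
           The thresholds are arbitrary and would need tuning. */
        static void snap_to_color_key(uint16_t *pixels, int count)
        {
            for (int i = 0; i < count; i++) {
                uint16_t p = pixels[i];
                if (RGB565_R(p) > 24 && RGB565_B(p) > 24 && RGB565_G(p) < 16)
                    pixels[i] = COLOR_KEY_565;
            }
        }

    A cleaner alternative is to scale from a source that carries real transparency (alpha) and write the key color only where the scaled alpha falls below one half, so legitimate bright-magenta sprite pixels are not eaten by the threshold; either way the cost is paid once at load time, not per frame.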

    Read the article

  • Loading a javascript library in javax.script?

    - by Shane
    I want to run Protovis JavaScript from Java and get the evaluated SVG code. I am using javax.script.* to run the JavaScript:

        public static void EvalScript() throws Exception {
            ScriptEngineManager factory = new ScriptEngineManager();
            ScriptEngine engine = factory.getEngineByName("JavaScript");
            Object result = engine.eval(
                "var vis = new pv.Panel().width(300).height(300)" +
                "    .add(pv.Line).data([1, 0.5, 0.1, 0.01, 0.001, 0.3, 0.2, 0.1, 1])" +
                "    .left(function () { return this.index * 30; })" +
                "    .bottom(function (d) { return d * 250; });" +
                "vis.root.render();" +
                "vis.scene[0].canvas.innerHTML;");
            System.out.println(result);
        }

    This would complain because I never loaded Protovis itself, as would ordinarily be done with <script type="text/javascript" src="../protovis-r3.1.0.js"></script>. Is there a good way, short of sourcing the full JavaScript into the eval() call, of loading a library when running JavaScript through javax.script? (Incidentally, I know of examples that use Rhino to do this from the Google discussion group.)
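
    For reference, javax.script can evaluate a script straight from a Reader, so the library file can be loaded into the engine once before the snippet that uses it runs. A small sketch follows; the file path is an assumption. Note also that Protovis expects browser globals such as document, so a DOM emulation layer (env.js or similar) may be needed on top of this, which is outside the sketch.

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import java.io.FileReader;
        import java.io.Reader;

        public class ProtovisLoader {
            public static void main(String[] args) throws Exception {
                ScriptEngineManager factory = new ScriptEngineManager();
                ScriptEngine engine = factory.getEngineByName("JavaScript");

                // Load the library into the engine's global scope first...
                try (Reader protovis = new FileReader("protovis-r3.1.0.js")) {
                    engine.eval(protovis);
                }

                // ...then later eval() calls on the same engine can see pv.*.
                Object check = engine.eval("typeof pv;");
                System.out.println(check); // prints "object" once the library has loaded
            }
        }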

    Read the article

  • Using GKPeerPickerController within a selector from a Cocos2D CCMenuItem

    - by Mark Hazlett
    Hey everyone, so I'm trying to use GameKit along with Cocos2D so that when a user taps the multiplayer menu item it displays the GKPeerPickerController. However, I'm running into some snags: it doesn't want to compile, yet it doesn't give me an error inside the code in my selector. Anyway, here's the code...

        @implementation GameOverLayer
        - (id) init {
            self = [super init];
            if (self != nil) {
                [CCMenuItemFont setFontSize:20];
                [CCMenuItemFont setFontName:@"Helvetica"];
                CCMenuItem *start = [CCMenuItemFont itemFromString:@"Play Again!" target:self selector:@selector(startGame:)];
                CCMenuItem *connect = [CCMenuItemFont itemFromString:@"Multiplayer" target:self selector:@selector(connect:)];
                CCMenu *menu = [CCMenu menuWithItems:start, connect, nil];
                [menu alignItemsVertically];
                [self addChild:menu];
            }
            return self;
        }

        -(void)startGame: (id)sender {
            [[CCDirector sharedDirector] replaceScene: [HelloWorld scene]];
        }

        -(void)connect: (id)sender {
            GKPeerPickerController *peerPicker;
            peerPicker = [[GKPeerPickerController alloc] init];
            peerPicker.delegate = self;
            peerPicker.connectionTypesMask = GKPeerPickerConnectionTypeOnline | GKPeerPickerConnectionTypeNearby;
            [peerPicker show];
        }
        @end

    The error message I'm getting is a linker error:

        ".objc_class_name_GKPeerPickerController", referenced from:
        literal-pointer@__OBJC@__cls_refs@GKPeerPickerController in GameOverScene.o
        symbol(s) not found
        collect2: ld returned 1 exit status

    Any ideas?

    Read the article

  • Javascript/PHP and timezones

    - by James
    Hi, I'd like to be able to guess the user's timezone offset and whether or not daylight saving time is being applied. Currently, the most definitive code I've found for this is here: http://www.michaelapproved.com/articles/daylight-saving-time-dst-detect/ . This gives me the offset along with the DST indicator. Now I want to use these in my PHP scripts to output the local date/time for the user... but what's the best way to do this? I figure I have two options: a) pick any timezone that has the same offset and DST setting from the output of timezone_abbreviations_list(), then call date_timezone_set() with it so the time gets the correct treatment; or b) continue treating the date as UTC but simply add the appropriate number of hours to the timestamp. My feeling is that option b is the better way. The reason is that with a, I could be using a timezone which, although correct in terms of offset/DST, may have some obscure rules behind the scenes that give surprising results (I don't know of any, but I don't think I can rule it out). I'd then re-check the timezone with JavaScript at the start of each session, to capture the cases where either the user's timezone changes (very unlikely) or they cross into the DST period. Sorry for the brain dump; I'm really just after some reassurance that the approaches above are valid. Thanks, James.
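
    To make option b concrete, a minimal PHP sketch (assuming the JavaScript detection has already delivered the offset in minutes; the field name is invented) is just arithmetic on a UTC timestamp:

        <?php
        // Sketch only: $offsetMinutes would come from the JavaScript detection,
        // e.g. posted in a form or stored in a cookie; the field name is an assumption.
        // If the detected offset already includes DST, no separate DST handling is needed here.
        $offsetMinutes = (int) $_POST['tz_offset_minutes'];

        // Option b: store and compute everything in UTC, shift only for display.
        $utcTimestamp   = time();
        $localTimestamp = $utcTimestamp + $offsetMinutes * 60;

        // gmdate() ignores the server's default timezone, so the added offset is the only adjustment.
        echo gmdate('Y-m-d H:i:s', $localTimestamp);

    One detail to watch: JavaScript's getTimezoneOffset() reports minutes behind UTC, so depending on how the offset was captured its sign may need flipping before the addition.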

    Read the article
