Search Results

Search found 12082 results on 484 pages for 'game footage'.


  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx

    This one is going to be a little long because the keynote was jam-packed, so bear with me.

    The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates; his quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while, and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV.

    Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE!

    The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming, then the missing-app argument goes away.

    While many Negative Nancys are describing Windows 8.1 as "Windows 180", Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community.

    Steve Ballmer was followed by Julie Larson-Green, who from my point of view spent her time on stage selling us on Windows 8 all over again. That is something I would not have thought was needed until I listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell.

    I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant.

    I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler; with battery life being a key factor in consumer devices, this is a welcome addition.

    He didn't limit his presentation to VS2013 features, though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing. Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS!

    How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, 3rd-party developers could do something dynamite, and he showed us some of the tools they had produced. These included natural-user-interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities, and the future looks to be full of opportunities.

    Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this, they are going to be able to learn some of what Dad does in a much more enjoyable way.

    At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right, we are in for a treat. See you there.

    del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
    One of the purposes of gamifying anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or getting invoices processed more quickly. Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation." Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works.

    So I thought I would present my own little experiment from the last few months. This spring, I upgraded to a Samsung Galaxy S4. It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built-in app called S Health. S Health is an app that you can use to track calories, weight, and exercise, and it has a built-in pedometer. I looked at it when I got the phone, but assumed you had to turn it on to use it, so I didn't look at it much. But sometime in July, I realized that it in fact just ran in the background and was quietly tracking my steps, with a goal of 10,000 per day. 10,000 steps per day is the magic number recommended by the Surgeon General and the American Heart Association. Dr. Oz pushes it as the goal for daily exercise. It's about 5 miles of walking.

    I'm generally not the kind of person who always has my phone with me. I leave it in my purse and pull it out when I need it. But then I realized that meant I wasn't getting a good measure of my steps. I decided to do a little experiment and carry it with me as much as possible for a week. That's when I discovered the gamification that changed my life over the last 3 months. When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge. I was hooked.

    I started carrying my phone. I started making sure I had shoes I could walk in with me. I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving. I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!" I started looking at parking lots differently. Can't get a space up close? No worries, just that many more steps toward my 10,000. I even tried to see if there was a second power-user level at 15,000 or 20,000 (sadly, no). If I was close at the end of the day, I have done laps around my house until I got my badge. I have walked around the block one more time to get my badge. I have mentally chastised myself when I forgot to put my phone in my pocket because I don't know how many steps I got. The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up.

    There are a bunch of tools out on the market now that have similar ideas for helping you track your exercise and make it social. There are apps (my favorite is still Zombies, Run!). You could buy a Fitbit or UP by Jawbone. Interactive Fitness makes the Expresso stationary bike with built-in video games. All are designed to help you be more aware of your activity and keep you engaged and motivated, and the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points. When the machine broke down, he didn't ride a different bike because it just wasn't that interesting. But for me, just the simple jingle and badge have been all I needed. I admit, I still giggle gleefully when I hear the tune sing out from my pocket.

    After a few weeks, I noticed I had dropped a few pounds. Not a lot, just 2-3. But then I was really hooked. I started making a point both to eat a little less and to hit 10,000 steps as much as I could. I bemoaned that during the floods in Boulder I wasn't hitting my 10,000 steps. And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day.

    So yes, simple gamification can increase motivation and engagement. And that can lead to changes in behavior. Now the job is to apply that to the enterprise space in a meaningful and engaging way.

    Read the article

  • What Counts for a DBA: Humility

    - by drsql
    In football (the American sort, naturally) there is a select group of players who really hope to never have their names called during the game. They are the members of the offensive line, and their job is to protect other players so those players can deliver the ball to the goal to score points. When you do hear an offensive lineman's name called, it is usually because he made a mistake, and the player he was supposed to protect ended up flat on his back admiring the clouds in the sky instead of advancing towards the goal. Even on the rare occasion his name is called for a good reason, it is usually because he was covering for a teammate who had made a mistake.

    The role of offensive lineman is a very good analogy for the role of the admin DBA. As a DBA, you are called on to be barely visible and rarely heard, protecting the company's data assets tenaciously, even though the enemies of our craft surround us on all sides:

    Developers: Cries of "foul!" often ensue when the DBA says that data integrity must be stringently enforced and that documentation is needed so systems can be supported, mostly because every error occurrence in the enterprise will initially be blamed on the database and fall to the DBA to troubleshoot. Insisting too loudly may bring cries of "foul!" that somewhat remind you of when your 2-year-old daughter didn't want to go to bed. The result of this petulance is that the next "enemy" gets involved.

    Managers: The concerns that motivate DBAs to argue will not excite the kind of manager who gets his technical knowledge from a glossy magazine filled with buzzwords, charts, and pretty pictures. However, the other programmers in the organization will constantly tickle the buzzword void with a stream of new-sounding ideas and technologies, along with warnings that if we did care about data integrity and document things, the budget would explode! In contrast, the arguments for integrity of data and supportability tend to be about as exciting as watching grass grow, and far too many manager types seem to prefer to smoke it than watch it.

    Packaged Applications: The DBA is rarely given a chance to review a new application that is being demonstrated for the enterprise, and rarer still is the DBA who gets a veto over an application because its database has clearly been created by an architect who won't read a data modeling book because he is already married. More often than not this leads to hours of work for the DBA trying to performance-tune a database under a menagerie of rules that must be followed to stay within the application support agreement, such as not changing indexes on a third-party schema even though there are now 10 billion rows instead of the 10 thousand there were when the system was last optimized.

    Hardware Failures: Physical disks, networking devices, memory, and backup devices all come with a measure known as "mean time before failure", and it is never listed in centuries or eons. More like years, and the term "mean" indicates that half of the devices are expected to fail before that, which by my calendar means it can fail any hour of any day it wants to.

    But the DBA sucks it up and does the task at hand with a humility that makes them nearly invisible to all but the most observant person in the organization. The best DBAs I know are so proactive in their relentless pursuit of perfection that they detect many of the bugs (which they seldom caused) in the system well before they become a problem.

    In the end, the DBA gets noticed for one of the same two reasons as the offensive lineman: because they made a mistake, like dropping a critical production database that had never been backed up, or because a system crashed for some reason and they were on the spot with troubleshooting and system restoration plans that had been well thought out, tested, and tested again. Not because there is any glory in it, but because it is what they do.

    Note: The characteristics of the professions referred to in this blog are meant to be overstated stereotypes for humorous effect, and even some DBAs aren't quite this perfect. If you are reading this far and haven't handwritten a 10-page flaming comment about how you are a _______ and you aren't like this, that is awesome. Not every situation applies to everyone, but if you have never worked with a bad packaged app, a magazine-trained manager, programmers who aren't team players, or hardware that occasionally failed, relax and go have a unicorn sandwich before you wake up.

    Read the article

  • BYOD – The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: the tablet. It's hard to believe that in only 32 months, the iPad ushered in an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly "on the go" lifestyle, and their popularity isn't expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016.

    Tablets have been utilized for every function imaginable in today's world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It's no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away.

    Tablets have become a critical part of a user's personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hope of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets.

    Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change, and in many cases companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year, and Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It's not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be transcribed later into an online system.

    The response: the Bring Your Own Device (BYOD) phenomenon. According to Gartner, BYOD is "an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data." Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to.

    Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap. This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester and IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, InformationWeek released a list of "9 Powerful Business Uses for Tablet Computers," ranging from "enhancing the customer experience" to "improving data accuracy" to "eco-friendly motivations". Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three things businesses need to do to embrace BYOD:
    1. Make customer-facing websites tablet-friendly for consistent user experiences.
    2. Develop tablet applications to continue to enhance the customer experience.
    3. Embrace and use the technology that comes with tablets.

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • Windows Phone 8 Announcement

    - by Tim Murphy
    As if the Surface announcement on Monday wasn't exciting enough, today Microsoft announced that Windows Phone 8 will be coming this fall. That itself is great news, but the features coming were like confetti flying in all different directions. Given that speed I couldn't capture every feature they covered. A summary of what I did capture is listed below, starting with their eight main features.

    Common Core
    The first thing that they covered is that Windows Phone 8 will share a core OS with Windows 8. It will also run natively on multiple cores; they mentioned that they have run it on up to 64 cores to this point. The phones, as you might expect, will at least start as dual core. If you remember, there were metrics saying that Windows Phone 7 performed operations faster on a single core than other platforms did with dual cores. The metrics they showed here indicate that Windows Phone 8 runs faster on comparable dual-core hardware than other platforms.

    New Screen Resolutions
    Screen resolution has never been an issue for me, but it has been a criticism of Windows Phone 7 in the media. Windows Phone 8 will support three screen resolutions: WVGA (800 x 480), WXGA (1280 x 768), and 720p (1280 x 720). Hopefully this makes pixel counters a little happier.

    MicroSD Support
    This was one of my pet peeves when I got my Samsung Focus. With Windows Phone 8 the operating system will support adding MicroSD cards after initial setup. Of course this is dependent on the hardware company implementing it, but I think we have seen that even feature-phone manufacturers have not had a problem supporting this in the past.

    NFC
    NFC has been an anticipated feature for some time. What Microsoft showed today included the fact that they didn't just want it to be for the phone: there is cross-platform NFC functionality between Windows Phone 8 and Windows 8. The demos, while possibly a bit fanciful, showed what could be achieved even in a retail environment. We are getting closer and closer to a Minority Report world with these technologies.

    Wallet
    Windows Phone 8 isn't the first platform to have a wallet concept. What they have done to differentiate themselves is to make it so that it is not dependent on a SIM-type chip like other platforms. They have also expanded the concept beyond just banks to other types of credits, such as airline miles.

    Nokia Mapping
    People have been envious of the Lumia phones having the Nokia mapping software. Now all Windows Phone 8 devices will use NAVTEQ data and will have the capability to run in an offline fashion. This is a major step forward from the Bing "touch for the next turn" maps.

    IT Administration
    The lack of features for enterprise administration and deployment was a complaint even before Windows Phone 7 was released. With the Windows Phone 8 release, features such as BitLocker and Secure Boot will be baked into the OS. We will also have the ability to privately sign and distribute applications.

    Changing Start Screen
    Joe Belfiore made a big deal about this aspect of the new release. Users will have more color themes available to them, and the live tiles will be highly customizable. You will have the ability to resize and organize the tiles in a more dynamic way. This allows less important tiles, or ones with less information, to be made smaller.

    And There Is More
    So what other tidbits came out of the presentation? Later this summer the API for WP8 will be available, and there will be developer events coming to a city near you. Another announcement of interest to developers is the ability to write applications at a native-code level. This is a boon for game developers and those who need highly efficient applications. As a topper on the cake, there was mention of in-app payment.

    On the consumer side we also found out that all updates will be available over the air. Along with this came the fact that Microsoft will support all devices with updates for at least 18 months, and you will be able to subscribe for early updates. An update to WP7.8 is coming for Windows Phone 7.5 customers; the main enhancement will be the new live tile features. The big bonus is that the update will bypass the carriers. I would assume, though, that you will be brought up to date with all previous patches that your carrier may not have released.

    There is so much more, but that is enough for one post. Needless to say, EXCITING!

    del.icio.us Tags: Windows Phone 8,WP8,Windows Phone 7,WP7,Announcements,Microsoft

    Read the article

  • JavaFX problem... please help someone... urgent [closed]

    - by innovative_aj
    I have made a word-guessing game. When I click myButton to check whether the guessed word is right or wrong, ball1 is moved into the "container" if it is right. I want that when I click the button again and the typed word is right, the 2nd ball should move into the container too – one ball per correct answer. Please help me and provide code that I can implement; it's quite urgent.

    Controller class:

    /*
     * To change this template, choose Tools | Templates
     * and open the template in the editor.
     */
    package project3;

    import java.net.URL;
    import java.util.ResourceBundle;
    import javafx.event.ActionEvent;
    import javafx.event.EventHandler;
    import javafx.fxml.FXML;
    import javafx.fxml.Initializable;
    import javafx.scene.control.Button;
    import javafx.scene.control.Label;
    import javafx.scene.control.TextField;
    import javafx.scene.layout.StackPane;
    import javafx.scene.shape.Circle;

    /**
     * FXML Controller class
     *
     * @xxx
     */
    public class MyFxmlController implements Initializable {

        @FXML // fx:id="ball1"
        private Circle ball1; // Value injected by FXMLLoader
        @FXML // fx:id="ball2"
        private Circle ball2; // Value injected by FXMLLoader
        @FXML // fx:id="ball3"
        private Circle ball3; // Value injected by FXMLLoader
        @FXML // fx:id="ball4"
        private Circle ball4; // Value injected by FXMLLoader
        @FXML // fx:id="container"
        private Circle container; // Value injected by FXMLLoader
        @FXML // fx:id="myButton"
        private Button myButton; // Value injected by FXMLLoader
        @FXML // fx:id="myLabel1"
        private Label myLabel1; // Value injected by FXMLLoader
        @FXML // fx:id="myLabel2"
        private Label myLabel2; // Value injected by FXMLLoader
        @FXML // fx:id="pane"
        private StackPane pane; // Value injected by FXMLLoader
        @FXML // fx:id="txt"
        private TextField txt; // Value injected by FXMLLoader

        @Override // This method is called by the FXMLLoader when initialization is complete
        public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
            assert ball1 != null : "fx:id=\"ball1\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert ball2 != null : "fx:id=\"ball2\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert ball3 != null : "fx:id=\"ball3\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert ball4 != null : "fx:id=\"ball4\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert container != null : "fx:id=\"container\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert myLabel1 != null : "fx:id=\"myLabel1\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert myLabel2 != null : "fx:id=\"myLabel2\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert pane != null : "fx:id=\"pane\" was not injected: check your FXML file 'MyFxml.fxml'.";
            assert txt != null : "fx:id=\"txt\" was not injected: check your FXML file 'MyFxml.fxml'.";

            // initialize your logic here: all @FXML variables will have been injected
            myButton.setOnAction(new EventHandler<ActionEvent>() {
                @Override
                public void handle(ActionEvent event) {
                    String guessed = txt.getText();
                    boolean result = MyCode.check(guessed);
                    if (result) {
                        // only ball1 ever moves; see the sketch after this code
                        ball1.setTranslateX(600);
                        ball1.setTranslateY(250 - container.getRadius());
                        // ball2.setTranslateX(600);
                        // ball2.setTranslateY(250 - container.getRadius());
                    } else {
                        System.out.println("wrong");
                    }
                }
            });
        }
    }

    Word-guessing logic:

    public class MyCode {

        static String x = "Netbeans";
        static String y[] = { "net", "beans", "neat", "beat", "bet" };

        // static int counter;
        // public MyCode() {
        //     counter++;
        // }

        static boolean check(String guessed) {
            boolean result = false;
            // counter++;
            // System.out.println("turns" + counter);
            for (int count = 0; count < 5; count++) {
                if (guessed.equals(y[count])) {
                    result = true;
                    break;
                }
            }
            if (result)
                System.out.println("Right");
            else
                System.out.println("Wrong");
            return result;
        }
    }
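
    A minimal sketch of one way to do what the question asks – move the next ball into the container on each correct answer – is below. It reuses the field and method names from the question (ball1–ball4, container, myButton, txt, MyCode.check) and the question's own translate offsets; the rest is illustrative rather than a drop-in answer. The idea is to keep the balls in an array and advance an index only when a guess is correct:

    package project3;

    import java.net.URL;
    import java.util.ResourceBundle;
    import javafx.event.ActionEvent;
    import javafx.event.EventHandler;
    import javafx.fxml.FXML;
    import javafx.fxml.Initializable;
    import javafx.scene.control.Button;
    import javafx.scene.control.TextField;
    import javafx.scene.shape.Circle;

    public class MyFxmlController implements Initializable {

        @FXML private Circle ball1;
        @FXML private Circle ball2;
        @FXML private Circle ball3;
        @FXML private Circle ball4;
        @FXML private Circle container;
        @FXML private Button myButton;
        @FXML private TextField txt;

        private Circle[] balls;           // the balls, in the order they should enter the container
        private int ballsInContainer = 0; // how many correct answers so far

        @Override
        public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
            balls = new Circle[] { ball1, ball2, ball3, ball4 };

            myButton.setOnAction(new EventHandler<ActionEvent>() {
                @Override
                public void handle(ActionEvent event) {
                    // one ball per correct answer: advance the index only on a correct guess,
                    // and stop once every ball is already in the container
                    if (MyCode.check(txt.getText()) && ballsInContainer < balls.length) {
                        Circle next = balls[ballsInContainer++];
                        next.setTranslateX(600);
                        next.setTranslateY(250 - container.getRadius());
                    }
                }
            });
        }
    }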

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest post by Brenna Johnson, Oracle Commerce Product

    A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there's a hidden issue in this story that most of the American people don't understand: delivering a great commerce customer experience (CX) is hard. It shouldn't be, but it is. The reality of the government's issues getting the healthcare site up and running smoothly is something we in the online commerce community know all too well. If there's one thing the botched launch of the site has taught us, it's that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don't go as planned. It may even give you a moment of solace – we have the same issues!

    But why? Organizations engage too many separate vendors with different technologies, running sections or pieces of a site, just to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it's the reality of running a site with legacy technology constraints in today's demanding, customer-centric market. This approach also means there are a lot of cooks in lots of different kitchens. You've got development and IT, the business and the marketing team, an external systems integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd-party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structures and processes, this is all accepted as the normal state of the union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn't. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it has forced organizations to think from the outside in. Consumers – whether they're shopping for shoes or a new healthcare plan – don't care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever those are. This doesn't sound like a tall order to the American consumer, especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer's mind is making excuses for why they can't buy a product – just get it to work.

    So what is the government doing? My guess is working day and night to get the site performing – and having to throw big money at the problem. In the meantime they're sending frustrated online users to the call center, or even to a location where a trained "navigator" can help them complete their selection in person. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).

    One thing we've learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there's too much at stake to ignore the sea change in how organizations are thinking about their customers. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you're in B2B or B2C, it's guaranteed that your competitors are making CX a priority, and those who made it a priority early have already begun to outpace their competition.

    So as you're planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer's journey, and think from the outside in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It's time to give the people what they want!

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    SEO is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies that video is one of the most engaging forms of content, so why not make sure that, in addition to being cool, your videos are helping you win the SEO game?

    Keywords:
    - Decide what search phrases make the most sense for your video. Don't dare use phrases that have nothing to do with the content. You'll make people mad.
    - Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for them, but not as many links showing up for them.
    - Search your potential keywords and phrases to see what comes up. It's amazing how many people forget to do that.

    Video Title:
    - Try to start and/or end with your keyword.
    - When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words.

    Video Description:
    - Lead with a link to your site (include http://).
    - Don't stuff this with your keyword. It leads to bad writing and it won't work anyway. This is where you convince people to watch, so write for humans. Use some showmanship.
    - At the end, make a call to action (subscribe, see the whole playlist, visit our social channels, etc.).

    Video Tags:
    - Don't over-tag. 5-10 tags per video is plenty.
    - If you're compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords.

    Find Linking Pals:
    - 45% of videos are discovered on video sites, but 44% are found through links on blogs and sites.
    - Write a blog post about your video's content, then link to the video in it.
    - A good site for finding places to guest blog is myblogguest.com.
    - Once you find good linking partners, they'll link to your future videos (as long as they're good and you're returning the favor).

    Tap the Power of Similar Videos:
    - Use Video Reply to associate your video with other topic-related videos. That's when you make a video responding to or referencing a video made by someone else.

    Content:
    - Again, build up a portfolio of videos, not just one that goes after 30 keywords.
    - Create shorter, sequential videos that pull viewers deeper into the content and closer to a desired final action.
    - Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same way you would a video.

    Meta Data:
    - Too much importance is placed on it. It accounts for only 15% of search success.
    - YouTube reads captions or transcripts to determine what a video is about. If you're not using them, you're missing out.
    - You get the SEO benefit of captions and transcripts whether the viewer has them toggled on or not.

    Promotion:
    - This accounts for 25% of search success.
    - Promote the daylights out of your videos using your social channels and digital assets. Don't assume they're going to magically get discovered.
    - You can pay to promote your video. This could surface it on the YouTube home page, in YouTube search results, in YouTube related videos, and across the Google content network.

    Community:
    - Accounts for 10% of search success.
    - Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured.
    - Participate in discussions so users will see you're present. The volume of ratings/comments is as important as the number of views when it comes to where you surface in search.

    Video Sitemaps:
    - As with a web site, a video sitemap helps Google quickly index your video.
    - Google wants to know the title, description, play page URL, the URL of the thumbnail image you want, and the raw video file location. A sample entry is sketched below.
    - Sitemaps are XML files you host or dynamically generate on your site. Once you've made your sitemap, sign in and submit it using Google Webmaster Tools.

    Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it's there so the largest audience possible can see it. Here's hoping you get great ratings. @mikestiles
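
    To make the sitemap bullets concrete, here is a minimal sketch of a single video sitemap entry using Google's video sitemap schema. The URLs and text are placeholders, and Google's current documentation should be checked for the full set of required and optional tags:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
      <url>
        <!-- play page URL -->
        <loc>http://www.example.com/videos/some-video.html</loc>
        <video:video>
          <!-- thumbnail image you want Google to show -->
          <video:thumbnail_loc>http://www.example.com/thumbs/some-video.jpg</video:thumbnail_loc>
          <video:title>Video title, ideally containing your keyword</video:title>
          <video:description>Description written for humans, as advised above.</video:description>
          <!-- raw video file location -->
          <video:content_loc>http://www.example.com/media/some-video.mp4</video:content_loc>
        </video:video>
      </url>
    </urlset>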

    Read the article

  • fatal error C1014: too many include files : depth = 1024

    - by numerical25
    I have no idea what this means, but here is the code that it is supposedly happening in.

    //=======================================================================================
    // d3dApp.cpp by Frank Luna (C) 2008 All Rights Reserved.
    //=======================================================================================

    #include "d3dApp.h"
    #include <sstream> // <stream> is not a standard header; std::wostringstream (used below) lives in <sstream>

    LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        static D3DApp* app = 0;

        switch( msg )
        {
            case WM_CREATE:
            {
                // Get the 'this' pointer we passed to CreateWindow via the lpParam parameter.
                CREATESTRUCT* cs = (CREATESTRUCT*)lParam;
                app = (D3DApp*)cs->lpCreateParams;
                return 0;
            }
        }

        // Don't start processing messages until after WM_CREATE.
        if( app )
            return app->msgProc(msg, wParam, lParam);
        else
            return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    D3DApp::D3DApp(HINSTANCE hInstance)
    {
        mhAppInst   = hInstance;
        mhMainWnd   = 0;
        mAppPaused  = false;
        mMinimized  = false;
        mMaximized  = false;
        mResizing   = false;
        mFrameStats = L"";

        md3dDevice          = 0;
        mSwapChain          = 0;
        mDepthStencilBuffer = 0;
        mRenderTargetView   = 0;
        mDepthStencilView   = 0;
        mFont               = 0;

        mMainWndCaption = L"D3D10 Application";
        md3dDriverType  = D3D10_DRIVER_TYPE_HARDWARE;
        mClearColor     = D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f);
        mClientWidth    = 800;
        mClientHeight   = 600;
    }

    D3DApp::~D3DApp()
    {
        ReleaseCOM(mRenderTargetView);
        ReleaseCOM(mDepthStencilView);
        ReleaseCOM(mSwapChain);
        ReleaseCOM(mDepthStencilBuffer);
        ReleaseCOM(md3dDevice);
        ReleaseCOM(mFont);
    }

    HINSTANCE D3DApp::getAppInst() { return mhAppInst; }

    HWND D3DApp::getMainWnd() { return mhMainWnd; }

    int D3DApp::run()
    {
        MSG msg = {0};
        mTimer.reset();

        while(msg.message != WM_QUIT)
        {
            // If there are Window messages then process them.
            if(PeekMessage( &msg, 0, 0, 0, PM_REMOVE ))
            {
                TranslateMessage( &msg );
                DispatchMessage( &msg );
            }
            // Otherwise, do animation/game stuff.
            else
            {
                mTimer.tick();
                if( !mAppPaused )
                    updateScene(mTimer.getDeltaTime());
                else
                    Sleep(50);
                drawScene();
            }
        }
        return (int)msg.wParam;
    }

    void D3DApp::initApp()
    {
        initMainWindow();
        initDirect3D();

        D3DX10_FONT_DESC fontDesc;
        fontDesc.Height          = 24;
        fontDesc.Width           = 0;
        fontDesc.Weight          = 0;
        fontDesc.MipLevels       = 1;
        fontDesc.Italic          = false;
        fontDesc.CharSet         = DEFAULT_CHARSET;
        fontDesc.OutputPrecision = OUT_DEFAULT_PRECIS;
        fontDesc.Quality         = DEFAULT_QUALITY;
        fontDesc.PitchAndFamily  = DEFAULT_PITCH | FF_DONTCARE;
        wcscpy(fontDesc.FaceName, L"Times New Roman");

        D3DX10CreateFontIndirect(md3dDevice, &fontDesc, &mFont);
    }

    void D3DApp::onResize()
    {
        // Release the old views, as they hold references to the buffers we
        // will be destroying. Also release the old depth/stencil buffer.
        ReleaseCOM(mRenderTargetView);
        ReleaseCOM(mDepthStencilView);
        ReleaseCOM(mDepthStencilBuffer);

        // Resize the swap chain and recreate the render target view.
        HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
        ID3D10Texture2D* backBuffer;
        HR(mSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), reinterpret_cast<void**>(&backBuffer)));
        HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
        ReleaseCOM(backBuffer);

        // Create the depth/stencil buffer and view.
        D3D10_TEXTURE2D_DESC depthStencilDesc;
        depthStencilDesc.Width              = mClientWidth;
        depthStencilDesc.Height             = mClientHeight;
        depthStencilDesc.MipLevels          = 1;
        depthStencilDesc.ArraySize          = 1;
        depthStencilDesc.Format             = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthStencilDesc.SampleDesc.Count   = 1; // multisampling must match
        depthStencilDesc.SampleDesc.Quality = 0; // swap chain values.
        depthStencilDesc.Usage              = D3D10_USAGE_DEFAULT;
        depthStencilDesc.BindFlags          = D3D10_BIND_DEPTH_STENCIL;
        depthStencilDesc.CPUAccessFlags     = 0;
        depthStencilDesc.MiscFlags          = 0;

        HR(md3dDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer));
        HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView));

        // Bind the render target view and depth/stencil view to the pipeline.
        md3dDevice->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView);

        // Set the viewport transform.
        D3D10_VIEWPORT vp;
        vp.TopLeftX = 0;
        vp.TopLeftY = 0;
        vp.Width    = mClientWidth;
        vp.Height   = mClientHeight;
        vp.MinDepth = 0.0f;
        vp.MaxDepth = 1.0f;
        md3dDevice->RSSetViewports(1, &vp);
    }

    void D3DApp::updateScene(float dt)
    {
        // Code computes the average frames per second, and also the
        // average time it takes to render one frame.
        static int frameCnt = 0;
        static float t_base = 0.0f;

        frameCnt++;

        // Compute averages over one second period.
        if( (mTimer.getGameTime() - t_base) >= 1.0f )
        {
            float fps  = (float)frameCnt; // fps = frameCnt / 1
            float mspf = 1000.0f / fps;

            std::wostringstream outs;
            outs.precision(6);
            outs << L"FPS: " << fps << L"\n" << "Milliseconds: Per Frame: " << mspf;
            mFrameStats = outs.str();

            // Reset for next average.
            frameCnt = 0;
            t_base  += 1.0f;
        }
    }

    void D3DApp::drawScene()
    {
        md3dDevice->ClearRenderTargetView(mRenderTargetView, mClearColor);
        md3dDevice->ClearDepthStencilView(mDepthStencilView, D3D10_CLEAR_DEPTH|D3D10_CLEAR_STENCIL, 1.0f, 0);
    }

    LRESULT D3DApp::msgProc(UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch( msg )
        {
        // WM_ACTIVATE is sent when the window is activated or deactivated.
        // We pause the game when the window is deactivated and unpause it
        // when it becomes active.
        case WM_ACTIVATE:
            if( LOWORD(wParam) == WA_INACTIVE )
            {
                mAppPaused = true;
                mTimer.stop();
            }
            else
            {
                mAppPaused = false;
                mTimer.start();
            }
            return 0;

        // WM_SIZE is sent when the user resizes the window.
        case WM_SIZE:
            // Save the new client area dimensions.
            mClientWidth  = LOWORD(lParam);
            mClientHeight = HIWORD(lParam);
            if( md3dDevice )
            {
                if( wParam == SIZE_MINIMIZED )
                {
                    mAppPaused = true;
                    mMinimized = true;
                    mMaximized = false;
                }
                else if( wParam == SIZE_MAXIMIZED )
                {
                    mAppPaused = false;
                    mMinimized = false;
                    mMaximized = true;
                    onResize();
                }
                else if( wParam == SIZE_RESTORED )
                {
                    // Restoring from minimized state?
                    if( mMinimized )
                    {
                        mAppPaused = false;
                        mMinimized = false;
                        onResize();
                    }
                    // Restoring from maximized state?
                    else if( mMaximized )
                    {
                        mAppPaused = false;
                        mMaximized = false;
                        onResize();
                    }
                    else if( mResizing )
                    {
                        // If user is dragging the resize bars, we do not resize
                        // the buffers here because as the user continuously
                        // drags the resize bars, a stream of WM_SIZE messages are
                        // sent to the window, and it would be pointless (and slow)
                        // to resize for each WM_SIZE message received from dragging
                        // the resize bars. So instead, we reset after the user is
                        // done resizing the window and releases the resize bars, which
                        // sends a WM_EXITSIZEMOVE message.
                    }
                    else // API call such as SetWindowPos or mSwapChain->SetFullscreenState.
                    {
                        onResize();
                    }
                }
            }
            return 0;

        // WM_ENTERSIZEMOVE is sent when the user grabs the resize bars.
        case WM_ENTERSIZEMOVE:
            mAppPaused = true;
            mResizing  = true;
            mTimer.stop();
            return 0;

        // WM_EXITSIZEMOVE is sent when the user releases the resize bars.
        // Here we reset everything based on the new window dimensions.
        case WM_EXITSIZEMOVE:
            mAppPaused = false;
            mResizing  = false;
            mTimer.start();
            onResize();
            return 0;

        // WM_DESTROY is sent when the window is being destroyed.
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;

        // The WM_MENUCHAR message is sent when a menu is active and the user presses
        // a key that does not correspond to any mnemonic or accelerator key.
        case WM_MENUCHAR:
            // Don't beep when we alt-enter.
            return MAKELRESULT(0, MNC_CLOSE);

        // Catch this message to prevent the window from becoming too small.
        case WM_GETMINMAXINFO:
            ((MINMAXINFO*)lParam)->ptMinTrackSize.x = 200;
            ((MINMAXINFO*)lParam)->ptMinTrackSize.y = 200;
            return 0;
        }

        return DefWindowProc(mhMainWnd, msg, wParam, lParam);
    }

    void D3DApp::initMainWindow()
    {
        WNDCLASS wc;
        wc.style         = CS_HREDRAW | CS_VREDRAW;
        wc.lpfnWndProc   = MainWndProc;
        wc.cbClsExtra    = 0;
        wc.cbWndExtra    = 0;
        wc.hInstance     = mhAppInst;
        wc.hIcon         = LoadIcon(0, IDI_APPLICATION);
        wc.hCursor       = LoadCursor(0, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH);
        wc.lpszMenuName  = 0;
        wc.lpszClassName = L"D3DWndClassName";

        if( !RegisterClass(&wc) )
        {
            MessageBox(0, L"RegisterClass FAILED", 0, 0);
            PostQuitMessage(0);
        }

        // Compute window rectangle dimensions based on requested client area dimensions.
        RECT R = { 0, 0, mClientWidth, mClientHeight };
        AdjustWindowRect(&R, WS_OVERLAPPEDWINDOW, false);
        int width  = R.right - R.left;
        int height = R.bottom - R.top;

        mhMainWnd = CreateWindow(L"D3DWndClassName", mMainWndCaption.c_str(),
            WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height,
            0, 0, mhAppInst, this);

        if( !mhMainWnd )
        {
            MessageBox(0, L"CreateWindow FAILED", 0, 0);
            PostQuitMessage(0);
        }

        ShowWindow(mhMainWnd, SW_SHOW);
        UpdateWindow(mhMainWnd);
    }

    void D3DApp::initDirect3D()
    {
        // Fill out a DXGI_SWAP_CHAIN_DESC to describe our swap chain.
        DXGI_SWAP_CHAIN_DESC sd;
        sd.BufferDesc.Width  = mClientWidth;
        sd.BufferDesc.Height = mClientHeight;
        sd.BufferDesc.RefreshRate.Numerator   = 60;
        sd.BufferDesc.RefreshRate.Denominator = 1;
        sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
        sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;

        // No multisampling.
        sd.SampleDesc.Count   = 1;
        sd.SampleDesc.Quality = 0;

        sd.BufferUsage  = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        sd.BufferCount  = 1;
        sd.OutputWindow = mhMainWnd;
        sd.Windowed     = true;
        sd.SwapEffect   = DXGI_SWAP_EFFECT_DISCARD;
        sd.Flags        = 0;

        // Create the device.
        UINT createDeviceFlags = 0;
    #if defined(DEBUG) || defined(_DEBUG)
        createDeviceFlags |= D3D10_CREATE_DEVICE_DEBUG;
    #endif

        HR( D3D10CreateDeviceAndSwapChain(
                0,                 // default adapter
                md3dDriverType,
                0,                 // no software device
                createDeviceFlags,
                D3D10_SDK_VERSION,
                &sd,
                &mSwapChain,
                &md3dDevice) );

        // The remaining steps that need to be carried out for d3d creation
        // also need to be executed every time the window is resized. So
        // just call the onResize method here to avoid code duplication.
        onResize();
    }
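
    // On the actual error: C1014 ("too many include files: depth = 1024") almost always
    // means headers are being included recursively -- typically a header that includes
    // itself, or two headers that include each other, without include guards. A likely
    // fix (the guard name D3DAPP_H is illustrative) is to guard d3dApp.h like this:
    //
    //     #ifndef D3DAPP_H
    //     #define D3DAPP_H
    //     // ... contents of d3dApp.h ...
    //     #endif // D3DAPP_H
    //
    // or to put #pragma once at the top of the header when using Visual C++.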

    Read the article

  • How to use onSensorChanged sensor data in combination with OpenGL

    - by Sponge
    I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution to help people who will have the same problems as me. Here is the code; I think you will understand it after reading it. Feel free to change it. The main idea was to implement several methods to send the orientation angles to the OpenGL view or any other target that needs them. Methods 1 to 4 are working: they directly send the rotationMatrix to the OpenGL view. All other methods are not working or are buggy, and I hope someone knows how to get them working. I think the best method would be method 5 if it worked, because it would be the easiest to understand, but I'm not sure how efficient it is. The complete code isn't optimized, so I recommend not using it as-is in your project. Here it is:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    import javax.microedition.khronos.egl.EGL10;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import static javax.microedition.khronos.opengles.GL10.*;

    import android.app.Activity;
    import android.content.Context;
    import android.content.pm.ActivityInfo;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.opengl.GLSurfaceView;
    import android.opengl.GLSurfaceView.Renderer;
    import android.os.Bundle;
    import android.util.Log;
    import android.view.WindowManager;

    /**
     * This class provides a basic demonstration of how to use the
     * {@link android.hardware.SensorManager SensorManager} API to draw a 3D
     * compass.
     */
    public class SensorToOpenGlTests extends Activity implements Renderer, SensorEventListener {

        private static final boolean TRY_TRANSPOSED_VERSION = false;

        /*
         * MODUS overview:
         *
         * 1 - unbuffered data directly transferred from the rotation matrix to
         * the modelview matrix
         *
         * 2 - buffered version of 1 where both acceleration and magnetometer
         * are buffered
         *
         * 3 - buffered version of 1 where only the magnetometer is buffered
         *
         * 4 - buffered version of 1 where only the acceleration is buffered
         *
         * 5 - uses the orientation sensor and sets the angles for rotating the
         * camera with glRotatef()
         *
         * 6 - uses the rotation matrix to calculate the angles
         *
         * 7 to 12 - every possibility of how the rotationMatrix could be
         * constructed in SensorManager.getRotationMatrix (see
         * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all
         * possibilities)
         */
        private static int MODUS = 2;

        private GLSurfaceView openglView;
        private FloatBuffer vertexBuffer;
        private ByteBuffer indexBuffer;
        private FloatBuffer colorBuffer;

        private SensorManager mSensorManager;

        private float[] rotationMatrix = new float[16];
        private float[] accelGData = new float[3];
        private float[] bufferedAccelGData = new float[3];
        private float[] magnetData = new float[3];
        private float[] bufferedMagnetData = new float[3];
        private float[] orientationData = new float[3];

        // private float[] mI = new float[16];

        private float[] resultingAngles = new float[3];

        private int mCount;

        final static float rad2deg = (float) (180.0f / Math.PI);

        private boolean mirrorOnBlueAxis = false;
        private boolean landscape;

        public SensorToOpenGlTests() {
        }

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
            openglView = new GLSurfaceView(this);
            openglView.setRenderer(this);
            setContentView(openglView);
        }

        @Override
        protected void onResume() {
            // Ideally a game should implement onResume() and onPause()
            // to take appropriate action when the activity loses focus.
            super.onResume();
            openglView.onResume();

            if (((WindowManager) getSystemService(WINDOW_SERVICE))
                    .getDefaultDisplay().getOrientation() == 1) {
                landscape = true;
            } else {
                landscape = false;
            }

            mSensorManager.registerListener(this, mSensorManager
                    .getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_GAME);
            mSensorManager.registerListener(this, mSensorManager
                    .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_GAME);
            mSensorManager.registerListener(this, mSensorManager
                    .getDefaultSensor(Sensor.TYPE_ORIENTATION),
                    SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        protected void onPause() {
            super.onPause();
            openglView.onPause();
            mSensorManager.unregisterListener(this);
        }

        public int[] getConfigSpec() {
            // We want a depth buffer; don't care about the
            // details of the color buffer.
            int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
            return configSpec;
        }

        public void onDrawFrame(GL10 gl) {
            // clear screen and color buffer:
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            // set target matrix to modelview matrix:
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            // init modelview matrix:
            gl.glLoadIdentity();

            // move camera away a little bit:
            if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) {
                if (landscape) {
                    // in landscape mode first remap the rotationMatrix before
                    // using it with glMultMatrixf:
                    float[] result = new float[16];
                    SensorManager.remapCoordinateSystem(rotationMatrix,
                            SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
                            result);
                    gl.glMultMatrixf(result, 0);
                } else {
                    gl.glMultMatrixf(rotationMatrix, 0);
                }
            } else {
                // in all other modes do the rotation by hand:
                gl.glRotatef(resultingAngles[1], 1, 0, 0);
                gl.glRotatef(resultingAngles[2], 0, 1, 0);
                gl.glRotatef(resultingAngles[0], 0, 0, 1);
                if (mirrorOnBlueAxis) {
                    // this is needed for mode 6 to work
                    gl.glScalef(1, 1, -1);
                }
            }

            // move the axes to simulate augmented behaviour:
            gl.glTranslatef(0, 2, 0);

            // draw the 3 axes on the screen:
            gl.glVertexPointer(3, GL_FLOAT, 0, vertexBuffer);
            gl.glColorPointer(4, GL_FLOAT, 0, colorBuffer);
            gl.glDrawElements(GL_LINES, 6, GL_UNSIGNED_BYTE, indexBuffer);
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
            float r = (float) width / height;
            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glFrustumf(-r, r, -1, 1, 1, 10);
        }

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glDisable(GL10.GL_DITHER);
            gl.glClearColor(1, 1, 1, 1);
            gl.glEnable(GL10.GL_CULL_FACE);
            gl.glShadeModel(GL10.GL_SMOOTH);
            gl.glEnable(GL10.GL_DEPTH_TEST);
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_COLOR_ARRAY);

            // load the 3 axes and their colors:
            float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 };
            float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 };
            byte indices[] = { 0, 1, 0, 2, 0, 3 };

            ByteBuffer vbb;
            vbb = ByteBuffer.allocateDirect(vertices.length * 4);
            vbb.order(ByteOrder.nativeOrder());
            vertexBuffer = vbb.asFloatBuffer();
            vertexBuffer.put(vertices);
            vertexBuffer.position(0);

            vbb = ByteBuffer.allocateDirect(colors.length * 4);
            vbb.order(ByteOrder.nativeOrder());
            colorBuffer = vbb.asFloatBuffer();
            colorBuffer.put(colors);
            colorBuffer.position(0);

            indexBuffer = ByteBuffer.allocateDirect(indices.length);
            indexBuffer.put(indices);
            indexBuffer.position(0);
        }

        public void onAccuracyChanged(Sensor sensor, int accuracy) {
        }

        public void onSensorChanged(SensorEvent event) {
            // load the new values:
            loadNewSensorData(event);

            if (MODUS == 1) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
            }

            if (MODUS == 2) {
                rootMeanSquareBuffer(bufferedAccelGData, accelGData);
                rootMeanSquareBuffer(bufferedMagnetData, magnetData);
                SensorManager.getRotationMatrix(rotationMatrix, null,
                        bufferedAccelGData, bufferedMagnetData);
            }

            if (MODUS == 3) {
                rootMeanSquareBuffer(bufferedMagnetData, magnetData);
                SensorManager.getRotationMatrix(rotationMatrix, null,
                        accelGData, bufferedMagnetData);
            }

            if (MODUS == 4) {
                rootMeanSquareBuffer(bufferedAccelGData, accelGData);
                SensorManager.getRotationMatrix(rotationMatrix, null,
                        bufferedAccelGData, magnetData);
            }

            if (MODUS == 5) {
                // this mode uses the sensor data received from the orientation sensor
                resultingAngles = orientationData.clone();
                if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) {
                    resultingAngles[1] = orientationData[0];
                    resultingAngles[2] = orientationData[1];
                    resultingAngles[0] = orientationData[2];
                }
            }

            if (MODUS == 6) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                final float[] anglesInRadians = new float[3];
                SensorManager.getOrientation(rotationMatrix, anglesInRadians);
                if ((-90 < anglesInRadians[2] * rad2deg)
                        && (anglesInRadians[2] * rad2deg < 90)) {
                    // device camera is looking at the floor;
                    // this hemisphere is working fine
                    mirrorOnBlueAxis = false;
                    resultingAngles[0] = anglesInRadians[0] * rad2deg;
                    resultingAngles[1] = anglesInRadians[1] * rad2deg;
                    resultingAngles[2] = anglesInRadians[2] * -rad2deg;
                } else {
                    mirrorOnBlueAxis = true;
                    // device camera is looking at the sky;
                    // this hemisphere is mirrored at the blue axis
                    resultingAngles[0] = (anglesInRadians[0] * rad2deg);
                    resultingAngles[1] = (anglesInRadians[1] * rad2deg);
                    resultingAngles[2] = (anglesInRadians[2] * rad2deg);
                }
            }

            if (MODUS == 7) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * x y z order: Rx*Ry*Rz
                 */
                resultingAngles[2] = (float) (Math.asin(rotationMatrix[2]));
                final float cosB = (float) Math.cos(resultingAngles[2]);
                resultingAngles[2] = resultingAngles[2] * rad2deg;
                resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB)) * rad2deg;
                resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB)) * rad2deg;
            }

            if (MODUS == 8) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * z y x order
                 */
                resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8]));
                final float cosB = (float) Math.cos(resultingAngles[2]);
                resultingAngles[2] = resultingAngles[2] * rad2deg;
                resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB)) * rad2deg;
                resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB)) * rad2deg;
            }

            if (MODUS == 9) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * z x y order
                 *
                 * note: the z axis looks good in this one
                 */
                resultingAngles[1] = (float) (Math.asin(rotationMatrix[9]));
                final float minusCosA = -(float) Math.cos(resultingAngles[1]);
                resultingAngles[1] = resultingAngles[1] * rad2deg;
                resultingAngles[2] = (float) (Math.asin(rotationMatrix[8] / minusCosA)) * rad2deg;
                resultingAngles[0] = (float) (Math.asin(rotationMatrix[1] / minusCosA)) * rad2deg;
            }

            if (MODUS == 10) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * y x z order
                 */
                resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6]));
                final float cosA = (float) Math.cos(resultingAngles[1]);
                resultingAngles[1] = resultingAngles[1] * rad2deg;
                resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA)) * rad2deg;
                resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA)) * rad2deg;
            }

            if (MODUS == 11) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * y z x order
                 */
                resultingAngles[0] = (float) (Math.asin(rotationMatrix[4]));
                final float cosC = (float) Math.cos(resultingAngles[0]);
                resultingAngles[0] = resultingAngles[0] * rad2deg;
                resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg;
                resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg;
            }

            if (MODUS == 12) {
                SensorManager.getRotationMatrix(rotationMatrix, null, accelGData, magnetData);
                rotationMatrix = transpose(rotationMatrix);
                /*
                 * this assumes that the rotation matrices are multiplied in
                 * x z y order
                 */
                resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1]));
                final float cosC = (float) Math.cos(resultingAngles[0]);
                resultingAngles[0] = resultingAngles[0] * rad2deg;
                resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC)) * rad2deg;
                resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC)) * rad2deg;
            }

            logOutput();
        }

        /**
         * Transposes the matrix (for a pure rotation matrix the transpose
         * equals the inverse) so it can be used with OpenGL.
         */
        private float[] transpose(float[] source) {
            final float[] result = source.clone();
            if (TRY_TRANSPOSED_VERSION) {
                result[1] = source[4];
                result[2] = source[8];
                result[4] = source[1];
                result[6] = source[9];
                result[8] = source[2];
                result[9] = source[6];
            }
            // the other values in the matrix are not relevant for rotations
            return result;
        }

        private void rootMeanSquareBuffer(float[] target, float[] values) {
            final float amplification = 200.0f;
            float buffer = 20.0f;

            target[0] += amplification;
            target[1] += amplification;
            target[2] += amplification;
            values[0] += amplification;
            values[1] += amplification;
            values[2] += amplification;

            target[0] = (float) (Math.sqrt((target[0] * target[0] * buffer + values[0] * values[0]) / (1 + buffer)));
            target[1] = (float) (Math.sqrt((target[1] * target[1] * buffer + values[1] * values[1]) / (1 + buffer)));
            target[2] = (float) (Math.sqrt((target[2] * target[2] * buffer + values[2] * values[2]) / (1 + buffer)));

            target[0] -= amplification;
            target[1] -= amplification;
            target[2] -= amplification;
            values[0] -= amplification;
            values[1] -= amplification;
            values[2] -= amplification;
        }

        private void loadNewSensorData(SensorEvent event) {
            final int type = event.sensor.getType();
            if (type == Sensor.TYPE_ACCELEROMETER) {
                accelGData = event.values.clone();
            }
            if (type == Sensor.TYPE_MAGNETIC_FIELD) {
                magnetData = event.values.clone();
            }
            if (type == Sensor.TYPE_ORIENTATION) {
                orientationData = event.values.clone();
            }
        }

        private void logOutput() {
            if (mCount++ > 30) {
                mCount = 0;
                Log.d("Compass", "yaw0: " + (int) (resultingAngles[0])
                        + " pitch1: " + (int) (resultingAngles[1])
                        + " roll2: " + (int) (resultingAngles[2]));
            }
        }
    }

    Read the article

  • Null reading in stream images? Unable to start activity ComponentInfo

    - by lasmith
I have reviewed a lot of similar questions about not being able to launch an activity, but they don't quite match my problem. I am working on a simple blackjack game, but it's force-quitting. I suspect there is a problem with loading the card PNG images I have. Stepping through in the debugger, it crashes inside the resetGame() function. I'm sure I am doing something dumb. My Logcat: 10-15 20:21:43.309: E/AndroidRuntime(2863): FATAL EXCEPTION: main 10-15 20:21:43.309: E/AndroidRuntime(2863): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.smith.blackjack/com.smith.blackjack.Main}: java.lang.NullPointerException 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.access$600(ActivityThread.java:130) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Handler.dispatchMessage(Handler.java:99) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Looper.loop(Looper.java:137) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.main(ActivityThread.java:4745) 10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invokeNative(Native Method) 10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invoke(Method.java:511) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553) 10-15 20:21:43.309: E/AndroidRuntime(2863): at dalvik.system.NativeStart.main(Native Method) 10-15 20:21:43.309: E/AndroidRuntime(2863): Caused by: java.lang.NullPointerException 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.DeckOfCards.<init>(DeckOfCards.java:17) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.resetGame(Main.java:98) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.onCreate(Main.java:67) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Activity.performCreate(Activity.java:5008) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023) 10-15 20:21:43.309: E/AndroidRuntime(2863): ... 
11 more My androidmanifest: <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.smith.blackjack" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="11" android:targetSdkVersion="15" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".Main" android:label="@string/title_activity_main" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> Here is my Main.java package com.smith.blackjack; import android.os.Bundle; import android.app.Activity; import android.content.res.AssetManager; import android.graphics.drawable.Drawable; import java.io.IOException; import java.io.InputStream; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.ImageView; public class Main extends Activity { private ImageView dealerCard0; private ImageView dealerCard1; private ImageView dealerCard2; private ImageView dealerCard3; private ImageView playerCard0; private ImageView playerCard1; private ImageView playerCard2; private ImageView playerCard3; private ImageView imgResult; private Button btnDeal; private Button btnDraw; private Button btnHold; private DeckOfCards deckOfCards; private int[] dealerValues; private int dealerSum; private int dealerCardNumber; private int[] playerValues; private int playerSum; private int playerCardNumber; private InputStream dealerHiddenCard; private Card dealerCard; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); dealerCard0 = (ImageView) findViewById(R.id.dealerCard0); dealerCard1 = (ImageView) findViewById(R.id.dealerCard1); dealerCard2 = (ImageView) findViewById(R.id.dealerCard2); dealerCard3 = (ImageView) findViewById(R.id.dealerCard3); playerCard0 = (ImageView) findViewById(R.id.playerCard0); playerCard1 = (ImageView) findViewById(R.id.playerCard1); playerCard2 = (ImageView) findViewById(R.id.playerCard2); playerCard3 = (ImageView) findViewById(R.id.playerCard3); imgResult = (ImageView) findViewById(R.id.imgResult); btnDeal = (Button) findViewById(R.id.deal); btnDraw = (Button) findViewById(R.id.draw); btnHold = (Button) findViewById(R.id.hold); btnDeal.setOnClickListener(btnDealListener); btnDraw.setOnClickListener(btnDrawListener); btnHold.setOnClickListener(btnHoldListener); resetGame(); } private void resetGame(){ AssetManager assets = getAssets(); dealerValues = new int[4]; playerValues = new int[4]; dealerSum = 0; playerSum = 0; dealerCardNumber = 0; playerCardNumber = 0; for (int i = 0; i < 4; i++) { dealerValues[i] = 0; playerValues[i] = 0; } try { InputStream stream = assets.open("cardback.png"); // stream = assets.open("cardback.png"); Drawable cardImage = Drawable.createFromStream(stream, null); dealerCard0.setImageDrawable(cardImage); dealerCard1.setImageDrawable(cardImage); dealerCard2.setImageDrawable(cardImage); dealerCard3.setImageDrawable(cardImage); playerCard0.setImageDrawable(cardImage); playerCard1.setImageDrawable(cardImage); playerCard2.setImageDrawable(cardImage); playerCard3.setImageDrawable(cardImage); imgResult.setImageDrawable(cardImage); deckOfCards = new DeckOfCards(); deckOfCards.shuffle(); assets.close(); } catch (IOException e){ Log.e("Reset Game", "Error Loading", e); } } public OnClickListener 
btnDealListener = new OnClickListener() { // @Override public void onClick(View v) { try { AssetManager assets = getAssets(); InputStream stream; // first player card Card newCard; newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); Drawable cardImage = Drawable.createFromStream(stream, newCard.File); playerCard0.setImageDrawable(cardImage); assets.close(); // second player card newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); playerCard1.setImageDrawable(cardImage); assets.close(); // first dealer card hidden newCard = deckOfCards.dealCard(); dealerCard = newCard; dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; dealerHiddenCard = assets.open(newCard.File); stream = assets.open("cardback.png"); cardImage = Drawable.createFromStream(stream, "cardback"); dealerCard0.setImageDrawable(cardImage); assets.close(); // second dealer card open newCard = deckOfCards.dealCard(); dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); dealerCard1.setImageDrawable(cardImage); assets.close(); } catch (IOException e){ Log.e("Deal", "Error Loading", e); } }; }; public OnClickListener btnDrawListener = new OnClickListener() { // @Override public void onClick(View v) { try { AssetManager assets = getAssets(); InputStream stream; // get next player card Card newCard; newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); Drawable cardImage = Drawable.createFromStream(stream, newCard.File); switch (playerCardNumber){ case 3: playerCard2.setImageDrawable(cardImage); case 4: playerCard3.setImageDrawable(cardImage); } assets.close(); } catch (IOException e){ Log.e("Draw", "Error Loading", e); } }; }; public OnClickListener btnHoldListener = new OnClickListener() { // @Override public void onClick(View v) { Drawable cardImage; // evaluate player hand playerSum = evaluate(playerValues); if (playerSum > 21){ // player loses } // flip over the dealer hidden card cardImage = Drawable.createFromStream(dealerHiddenCard, dealerCard.File); Card newCard; InputStream stream; AssetManager assets = getAssets(); for (int i=2; i<4; i++){ dealerSum = evaluate(dealerValues); if (dealerSum < 16 ) { newCard = deckOfCards.dealCard(); dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; try { stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); switch (dealerCardNumber){ case 3: dealerCard2.setImageDrawable(cardImage); case 4: dealerCard3.setImageDrawable(cardImage); } assets.close(); } catch (IOException e){ Log.e("Draw", "Error Loading", e); } if (dealerSum < playerSum) { // player wins } if (dealerSum > playerSum){ // dealer wins } if (dealerSum == playerSum){ // it is a draw } } } }; }; public int evaluate (int[]values) { int sumCards = 0; for (int i = 0; i < 4; i++){ sumCards += values[i]; } if (sumCards > 21) { for (int i = 0; i < 4; i++){ if (values[i] == 11) { values[i] = 1; sumCards -= 10; continue; } } } return sumCards; } } My DeckOfCards class: package com.smith.blackjack; import java.util.Random; public class DeckOfCards { private Card [] deck; private int currentCard; private static final int 
NUMBER_OF_CARDS = 52; private static final Random randomNumbers = new Random(); public DeckOfCards () { deck = new Card[NUMBER_OF_CARDS]; currentCard = 0 ; for(int count = 0; count < deck.length; count++) { deck[count].faceValue = count + 1; } } public void shuffle () { currentCard = 0; for (int first = 0; first < deck.length; first ++){ int second = randomNumbers.nextInt(NUMBER_OF_CARDS); int temp = deck[first].faceValue; deck[first].faceValue=deck[second].faceValue; deck[second].faceValue = temp; } } public Card dealCard(){ Card temp = new Card(); temp.faceValue = 0; temp.File = ""; if(currentCard < deck.length) { temp.faceValue = deck[currentCard].faceValue / 4; int suit = deck[currentCard].faceValue % 4; String suitString = ""; switch (suit){ case 0: suitString = "c"; case 1: suitString = "d"; case 2: suitString = "h"; case 3: suitString = "s"; } Integer face = temp.faceValue / 4 ; String faceString = face.toString(); temp.File = faceString + suitString + ".png"; switch (temp.faceValue){ case 11: temp.faceValue = 10; case 12: temp.faceValue = 10; case 13: temp.faceValue = 10; } return temp; } else return temp; } }
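A note on the trace above, hedged as a reading of the posted code rather than a tested fix: DeckOfCards.java line 17 is almost certainly the constructor loop, and deck = new Card[NUMBER_OF_CARDS] only allocates 52 null references, so deck[count].faceValue = count + 1 dereferences null on the first pass. Filling each slot first (deck[count] = new Card(); before assigning faceValue) should clear the NullPointerException. Separately, the switch statements here have no break statements, so in dealCard() every suit falls through to "s", and in the draw/hold listeners case 3 falls through into case 4 and sets both ImageViews; that won't crash, but it will misbehave once the activity starts.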

    Read the article

  • Using Optical Flow in EmguCV

    - by Meko
Hi. I am trying to create a simple touch game using EmguCV. Should I use optical flow to detect interaction between my hand and the images on screen? The idea is that if more than 100 points change where an image sits, my hand is over that image. But how can I track these new points? I can draw the previous points and the new points on screen, but it shows more points on my head than on my hand, and I cannot track my hand's movements. void Optical_Flow_Worker(object sender, EventArgs e) { { Input_Capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES, ActualFrameNumber); ActualFrame = Input_Capture.QueryFrame(); ActualGrayFrame = ActualFrame.Convert<Gray, Byte>(); NextFrame = Input_Capture.QueryFrame(); NextGrayFrame = NextFrame.Convert<Gray, Byte>(); ActualFeature = ActualGrayFrame.GoodFeaturesToTrack(500, 0.01d, 0.01, 5); ActualGrayFrame.FindCornerSubPix(ActualFeature, new System.Drawing.Size(10, 10), new System.Drawing.Size(-1, -1), new MCvTermCriteria(20, 0.3d)); OpticalFlow.PyrLK(ActualGrayFrame, NextGrayFrame, ActualFeature[0], new System.Drawing.Size(10, 10), 3, new MCvTermCriteria(20, 0.03d), out NextFeature, out Status, out TrackError); OpticalFlowFrame = new Image<Bgr, Byte>(ActualFrame.Width, ActualFrame.Height); OpticalFlowFrame = NextFrame.Copy(); for (int i = 0; i < ActualFeature[0].Length; i++) DrawFlowVectors(i); ActualFrameNumber++; pictureBox1.Image = ActualFrame.Resize(320, 400).ToBitmap() ; pictureBox3.Image = OpticalFlowFrame.Resize(320, 400).ToBitmap(); } } private void DrawFlowVectors(int i) { System.Drawing.Point p = new Point(); System.Drawing.Point q = new Point(); p.X = (int)ActualFeature[0][i].X; p.Y = (int)ActualFeature[0][i].Y; q.X = (int)NextFeature[i].X; q.Y = (int)NextFeature[i].Y; p.X = (int)(q.X + 6 * Math.Cos(angle + Math.PI / 4)); p.Y = (int)(q.Y + 6 * Math.Sin(angle + Math.PI / 4)); p.X = (int)(q.X + 6 * Math.Cos(angle - Math.PI / 4)); p.Y = (int)(q.Y + 6 * Math.Sin(angle - Math.PI / 4)); OpticalFlowFrame.Draw(new Rectangle(q.X,q.Y,1,1), new Bgr(Color.Red), 1); OpticalFlowFrame.Draw(new Rectangle(p.X, p.Y, 1, 1), new Bgr(Color.Blue), 1); }
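The usual pattern with pyramidal LK is to keep only the points whose status flag comes back 1, then count how many surviving points actually moved inside each image's rectangle, re-seeding features every few frames. A minimal sketch of that idea, written against the OpenCV Python binding (cv2) rather than EmguCV; the hotspot rectangle and thresholds are assumptions:

import cv2
import numpy as np

def touched(prev_gray, next_gray, hotspot, min_hits=100):
    # hotspot is an assumed (x, y, w, h) rectangle of the on-screen image
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=5)
    if p0 is None:
        return False
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None,
        winSize=(10, 10), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
    ok = status.ravel() == 1          # drop points the tracker lost
    good_new = p1[ok].reshape(-1, 2)
    good_old = p0[ok].reshape(-1, 2)
    x, y, w, h = hotspot
    moved = np.linalg.norm(good_new - good_old, axis=1) > 2.0  # assumed motion threshold
    inside = ((good_new[:, 0] >= x) & (good_new[:, 0] <= x + w) &
              (good_new[:, 1] >= y) & (good_new[:, 1] <= y + h))
    return int(np.count_nonzero(moved & inside)) >= min_hits

For isolating the hand specifically, background subtraction or a skin-color mask applied before feature extraction usually works better than raw corners, since GoodFeaturesToTrack clusters on any high-texture region, faces included.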

    Read the article

  • Silverlight animation not smooth

    - by Andrej
Hi, when trying to animate objects time/frame-based in Silverlight (in contrast to using something like DoubleAnimation or Storyboard, which is not suitable e.g. for fast-paced games), for example moving a spaceship in a particular direction every frame, the movement is jumpy and not really smooth. The screen even seems to tear. There seems to be no difference between CompositionTarget and DispatcherTimer. I use the following approach (in pseudocode): Register Handler to Tick-Event of a DispatcherTimer In each Tick: Compute the elapsed time from the last frame in milliseconds Object.X += movementSpeed * ellapsedMilliseconds This should result in a smooth movement, right? But it doesn't. Here is an example (Controls: WASD and Mouse): Silverlight Game. Although the effect I described is not too prevalent in this sample, I can assure you that even moving a single rectangle over a canvas produces a jumpy animation. Does someone have an idea how to minimize this? Are there other approaches to frame-based animation, except using Storyboards/DoubleAnimations, which could solve this? Edit: Here is a quick and dirty approach, animating a rectangle with minimum code (Controls: A and D) Animation Sample Xaml: <Grid x:Name="LayoutRoot" Background="Black"> <Canvas Width="1000" Height="400" Background="Blue"> <Rectangle x:Name="rect" Width="48" Height="48" Fill="White" Canvas.Top="200" Canvas.Left="0"/> </Canvas> </Grid> C#: private bool isLeft = false; private bool isRight = false; private DispatcherTimer timer = new DispatcherTimer(); private double lastUpdate; public Page() { InitializeComponent(); timer.Interval = TimeSpan.FromMilliseconds(1); timer.Tick += OnTick; lastUpdate = Environment.TickCount; timer.Start(); } private void OnTick(object sender, EventArgs e) { double diff = Environment.TickCount - lastUpdate; double x = Canvas.GetLeft(rect); if (isRight) x += 1 * diff; else if (isLeft) x -= 1 * diff; Canvas.SetLeft(rect, x); lastUpdate = Environment.TickCount; } private void UserControl_KeyDown(object sender, KeyEventArgs e) { if (e.Key == Key.D) isRight = true; if (e.Key == Key.A) isLeft = true; } private void UserControl_KeyUp(object sender, KeyEventArgs e) { if (e.Key == Key.D) isRight = false; if (e.Key == Key.A) isLeft = false; } Thanks! Andrej

    Read the article

  • Facebook connect displaying invite friends dialog and closing on completion

    - by Dougnukem
I'm trying to create a Facebook Connect application that displays a friend invite dialog within the page using Facebook's Javascript API (through a FBMLPopupDialog). The trouble is that to display a friend invite dialog you use a multi-friend form, which requires an action="url" attribute that represents the URL to redirect your page to when the user completes or skips the form. The problem is that I want to just close the FBMLPopupDialog (the same behavior as if the user just hit the 'X' button on the popup dialog). The best I can do is redirect the user back to the page they were on (basically a reload), but they lose all AJAX/Flash application state. I'm wondering if any Facebook Connect developers have run into this issue and have a good way to simply display a friend invite "lightbox" dialog within their website, without a "refresh" or "redirect" when the user finishes. The facebook connect JS API provides a FB.Connect.inviteConnectUsers, which provides a nice dialog but only connects existing users of your application who also have a Facebook account and haven't connected. http://bugs.developers.facebook.com/show%5Fbug.cgi?id=4916 function fb_inviteFriends() { //Invite users log("Inviting users..."); FB.Connect.requireSession( function() { //Connect success var uid = FB.Facebook.apiClient.get_session().uid; log('FB CONNECT SUCCESS: ' + uid); //Invite users log("Inviting users..."); //Update server with connected account updateAccountFacebookUID(); var fbml = fb_getInviteFBML() ; var dialog = new FB.UI. FBMLPopupDialog("Weblings Invite", fbml) ; //dialog.setFBMLContent(fbml); dialog.setContentWidth(650); dialog.setContentHeight(450); dialog.show(); }, //Connect cancelled function() { //User cancelled the connect log("FB Connect cancelled:"); } ); } function fb_getInviteFBML() { var uid = FB.Facebook.apiClient.get_session().uid; var fbml = ""; fbml = '<fb:fbml>\n' + '<fb:request-form\n'+ //Redirect back to this page ' action="'+ document.location +'"\n'+ ' method="POST"\n'+ ' invite="true"\n'+ ' type="Weblings Invite"\n' + ' content="I need your help to discover all the Weblings and save the Internet! WebWars: Weblings is a cool new game where we can collect fantastic creatures while surfing our favorite websites. Come find the missing Weblings with me!'+ //Callback the server with the appropriate Webwars Account URL ' <fb:req-choice url=\''+ WebwarsFB.WebwarsAccountServer +'/SplashPage.aspx?action=ref&reftype=Facebook' label=\'Check out WebWars: Weblings\' />"\n'+ '>\n'+ ' <fb:multi-friend-selector\n'+ ' rows="2"\n'+ ' cols="4"\n'+ ' bypass="Cancel"\n'+ ' showborder="false"\n'+ ' actiontext="Use this form to invite your friends to connect with WebWars: Weblings."/>\n'+ ' </fb:request-form>'+ ' </fb:fbml>'; return fbml; }

    Read the article

  • Eventlet or gevent or Stackless + Twisted, Pylons, Django and SQL Alchemy

    - by Khorkrak
We're using Twisted extensively for apps requiring a great deal of asynchronous io. There are some cases where stuff is cpu bound instead, and for that we spawn a pool of processes to do the work and have a system for managing these across multiple servers as well - all done in Twisted. Works great. The problem is that it's hard to bring new team members up to speed. Writing asynchronous code in Twisted requires a near vertical learning curve. It's as if humans just don't think that way naturally. We're considering a mixed approach perhaps. Maybe do the xmlrpc server part and process management in Twisted still, but the other stuff in code that at least looks synchronous while not being as such. Then again I like explicit over implicit, so hmmm. Anyway, on to greenlets - how well does that stuff work? So there's Stackless, and as you can see from my Gallentean avatar I'm well aware of the tremendous success of its use in CCP's flagship EVE Online game firsthand. What about Eventlet or gevent? Well, for now only Eventlet works with Twisted. However gevent claims to be faster, since it's not a pure Python implementation; it instead uses libevent. It also supposedly has fewer idiosyncrasies and defects. The documentation there is minimal in comparison to Eventlet, and it's maintained by 1 guy as far as I can tell. This makes me leery, but all great projects start this way so... Then there's PyPy - I haven't even finished reading about that one yet - just saw it in this thread: Drawbacks of Stackless. So confusing - I'm wondering what the heck to do - sounds like Eventlet is probably the best bet, but is it really stable enough? Anyone out there have any experience with it? Should we go with Stackless instead, as it's been around and is proven technology - just like Twisted is as well - and they do work together nicely? But still I hate having to have a separate version of Python to do this. What to do.... This somewhat obnoxious blog entry hit the nail on the head for me though: Asynchronous IO for Grownups. We're stuck using MySQL as well - I never knew how great PostgreSQL was until having had to work on a production OLTP system in MySQL instead - but that's another story. But if that monkey patch thing really works then wow. Just wow.
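For reference, the "monkey patch thing" really is the whole pitch: after patching, ordinary blocking-looking code yields to gevent's event loop whenever it waits on I/O. A minimal sketch (the URLs are placeholders):

from gevent import monkey
monkey.patch_all()  # make socket, ssl, time.sleep, etc. cooperative

import gevent
from urllib.request import urlopen

def fetch(url):
    # reads like plain synchronous code, but yields to other
    # greenlets while blocked on the network
    return url, len(urlopen(url).read())

jobs = [gevent.spawn(fetch, u)
        for u in ('http://example.com/a', 'http://example.com/b')]
gevent.joinall(jobs, timeout=10)
print([job.value for job in jobs])

The trade-off versus Twisted is exactly the implicitness complained about above: the yields are invisible, so new team members write it easily but can't see where context switches happen.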

    Read the article

  • 2d trajectory planning of a spaceship with physics.

    - by egarcia
I'm implementing a 2D game with ships in space. In order to do it, I'm using LÖVE, which wraps Box2D with Lua. But I believe that my question can be answered by anyone with a greater understanding of physics than myself - so pseudo code is accepted as a response. My problem is that I don't know how to move my spaceships properly on a 2D physics-enabled world. More concretely: A ship of mass m is located at an initial position {x, y}. It has an initial velocity vector of {vx, vy} (can be {0,0}). The objective is a point in {xo,yo}. The ship has to reach the objective having a velocity of {vxo, vyo} (or near it), following the shortest trajectory. There's a function called update(dt) that is called frequently (e.g. 30 times per second). In this function, the ship can modify its position and trajectory by applying "impulses" to itself. The magnitude of the impulses is binary: you can either apply it in a given direction, or not apply it at all. In code, it looks like this: def Ship:update(dt) m = self:getMass() x,y = self:getPosition() vx,vy = self.getLinearVelocity() xo,yo = self:getTargetPosition() vxo,vyo = self:getTargetVelocity() thrust = self:getThrust() if(???) angle = ??? self:applyImpulse(math.sin(angle)*thrust, math.cos(angle)*thrust)) end end The first ??? is there to indicate that in some occasions (I guess) it would be better to "not impulse" and leave the ship to "drift". The second ??? part consists of how to calculate the impulse angle on a given dt. We are in space, so we can ignore things like air friction. Although it would be very nice, I'm not looking for someone to code this for me; I put the code there so my problem is clearly understood. What I need is a strategy - a way of attacking this. I know some basic physics, but I'm no expert. For example, does this problem have a name? That sort of thing. Thanks a lot.
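Since pseudo code is acceptable: one workable (though not time-optimal) strategy is to steer the velocity vector rather than the position. Each tick, compute the velocity you would like to have (pointing at the objective, blending toward {vxo, vyo} as you get close) and fire along the difference between that and your current velocity, coasting when the difference is small. A sketch in Python, with the cruise speed, blend radius and deadband as assumed tuning constants:

import math

def thrust_angle(x, y, vx, vy, xo, yo, vxo, vyo,
                 cruise_speed=40.0, blend_radius=100.0, deadband=1.0):
    # desired velocity: head for the objective at cruise speed...
    dx, dy = xo - x, yo - y
    dist = math.hypot(dx, dy)
    if dist > 1e-6:
        want_vx = dx / dist * cruise_speed
        want_vy = dy / dist * cruise_speed
    else:
        want_vx, want_vy = vxo, vyo
    # ...blending toward the target velocity as we approach
    k = min(dist / blend_radius, 1.0)
    want_vx = k * want_vx + (1 - k) * vxo
    want_vy = k * want_vy + (1 - k) * vyo
    # fire along the velocity error, or coast if it is already small
    ex, ey = want_vx - vx, want_vy - vy
    if math.hypot(ex, ey) < deadband:
        return None  # drift this tick
    # matches the sin/cos convention of applyImpulse in the snippet above
    return math.atan2(ex, ey)

The genuinely shortest-time version of this is a bang-bang optimal control problem (constant-magnitude thrust, free direction), which is where to look if the heuristic's converging spiral isn't good enough.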

    Read the article

  • Shawn Wildermuth violating MVVM in MSDN article?

    - by rasx
This may be old news, but back in March 2009 Shawn Wildermuth's article, “Model-View-ViewModel In Silverlight 2 Apps,” included a code sample with DataServiceEntityBase: // COPIED FROM SILVERLIGHTCONTRIB Project for simplicity /// <summary> /// Base class for DataService Data Contract classes to implement /// base functionality that is needed like INotifyPropertyChanged. /// Add the base class in the partial class to add the implementation. /// </summary> public abstract class DataServiceEntityBase : INotifyPropertyChanged { /// <summary> /// The handler for the registrants of the interface's event /// </summary> PropertyChangedEventHandler _propertyChangedHandler; /// <summary> /// Allow inheritors to fire the event more simply. /// </summary> /// <param name="propertyName"></param> protected void FirePropertyChanged(string propertyName) { if (_propertyChangedHandler != null) { _propertyChangedHandler(this, new PropertyChangedEventArgs(propertyName)); } } #region INotifyPropertyChanged Members /// <summary> /// The interface used to notify changes on the entity. /// </summary> event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged { add { _propertyChangedHandler += value; } remove { _propertyChangedHandler -= value; } } #endregion What this class implies is that the developer intends to bind visuals directly to data (yes, a ViewModel is used but it defines an ObservableCollection of data objects). Is this design diverging too far from the guidance of MVVM? Now I can see some of the reasons why Shawn would go this way: what Shawn can do with DataServiceEntityBase is this sort of thing (which is intimate with the Entity Framework): // Partial Method to support the INotifyPropertyChanged interface public partial class Game : DataServiceEntityBase { #region Partial Method INotifyPropertyChanged Implementation // Override the Changed partial methods to implement the // INotifyPropertyChanged interface // This helps with the Model implementation to be a mostly // DataBound implementation partial void OnDeveloperChanged() { base.FirePropertyChanged("Developer"); } partial void OnGenreChanged() { base.FirePropertyChanged("Genre"); } partial void OnListPriceChanged() { base.FirePropertyChanged("ListPrice"); } partial void OnListPriceCurrencyChanged() { base.FirePropertyChanged("ListPriceCurrency"); } partial void OnPlayerInfoChanged() { base.FirePropertyChanged("PlayerInfo"); } partial void OnProductDescriptionChanged() { base.FirePropertyChanged("ProductDescription"); } partial void OnProductIDChanged() { base.FirePropertyChanged("ProductID"); } partial void OnProductImageUrlChanged() { base.FirePropertyChanged("ProductImageUrl"); } partial void OnProductNameChanged() { base.FirePropertyChanged("ProductName"); } partial void OnProductTypeIDChanged() { base.FirePropertyChanged("ProductTypeID"); } partial void OnPublisherChanged() { base.FirePropertyChanged("Publisher"); } partial void OnRatingChanged() { base.FirePropertyChanged("Rating"); } partial void OnRatingUrlChanged() { base.FirePropertyChanged("RatingUrl"); } partial void OnReleaseDateChanged() { base.FirePropertyChanged("ReleaseDate"); } partial void OnSystemNameChanged() { base.FirePropertyChanged("SystemName"); } #endregion } Of course MSDN code can be seen as “toy code” for educational purposes, but is anyone doing anything like this in the real world of Silverlight development?

    Read the article

  • Android GridView with ads below

    - by ktambascio
Hi, I'm trying to integrate ads (admob) into my Android app. It's mostly working, except for one issue with the layout. My layout consists of: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" xmlns:app="http://schemas.android.com/apk/res/com.example.photos"> <LinearLayout android:orientation="horizontal" android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/status_layout"> <ImageView android:id="@+id/cardStatus" android:layout_width="wrap_content" android:layout_height="wrap_content" /> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" android:id="@+id/cardStatusText" /> </LinearLayout> <GridView android:id="@+id/imageGridView" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="10dp" android:verticalSpacing="10dp" android:horizontalSpacing="10dp" android:numColumns="auto_fit" android:columnWidth="100dp" android:stretchMode="columnWidth" android:gravity="center" android:layout_below="@id/status_layout" /> <!-- Place an AdMob ad at the bottom of the screen. --> <!-- It has white text on a black background. --> <!-- The description of the surrounding context is 'Android game'. --> <com.admob.android.ads.AdView android:id="@+id/ad" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_alignParentBottom="true" app:backgroundColor="#000000" app:primaryTextColor="#FFFFFF" app:secondaryTextColor="#CCCCCC" app:keywords="Android Photo" /> </RelativeLayout> The ads are shown at the bottom of the screen, just as I want. However, they seem to be overlayed or drawn on top of the bottom portion of the grid view. I would like the gridview to be shorter, so that the ad can fill the bottom portion of the screen and not hide part of the gridview. The problem is most annoying when you scroll all the way to the bottom of the gridview, and you still cannot fully see the last row items in the grid due to the ad. I'm not sure if this is the standard way that AdMob ads work. If this is the case, adding some padding to the bottom of the grid (if that's possible) would do the trick. That way the user can scroll a bit further, and see the last row in addition to the ad. I just switched from using LinearLayout to RelativeLayout after reading some similar issues with ListViews. Now my ad is along the bottom instead of above the grid, so I'm getting closer. Thoughts? -Kevin
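For the layout itself, the usual RelativeLayout fix is to anchor the grid above the ad rather than letting the two overlap: keep android:layout_alignParentBottom="true" on the AdView and add android:layout_above="@+id/ad" to the GridView (the "+" because the AdView is declared later in the file). The padding route suggested in the question also works: give the GridView an android:paddingBottom the height of the banner plus android:clipToPadding="false", so the last row scrolls clear of the ad while the grid still draws behind it.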

    Read the article

  • Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

    - by Erez
Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work... I'm developing an MFC project with a static picture as the canvas for an OpenGL class that draws the graphics for my game. On mouse down, I need to retrieve a shape coordinate from the OpenGL class. I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried both directions, but neither is working): GLdouble modelMatrix[16]; glGetDoublev(GL_MODELVIEW_MATRIX,modelMatrix); GLdouble projMatrix[16]; glGetDoublev(GL_PROJECTION_MATRIX,projMatrix); int viewport[4]; glGetIntegerv(GL_VIEWPORT,viewport); POINT mouse; // Stores The X And Y Coords For The Current Mouse Position GetCursorPos(&mouse); // Gets The Current Cursor Coordinates (Mouse Coordinates) ScreenToClient(hWnd, &mouse); GLdouble winX, winY, winZ; // Holds Our X, Y and Z Coordinates winX; = (float)point.x; // Holds The Mouse X Coordinate winY; = (float)point.y; // Holds The Mouse Y Coordinate winY = (float)viewport[3] - winY; glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); GLdouble posX=s1->getPosX(), posY=s1->getPosY(), posZ=s1->getPosZ(); // Hold The Final Values gluUnProject( winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ); gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ); This is just part of the code I've tried. Of course gluProject and gluUnProject aren't meant to be used together; I just have them both here to show, and I know there is a lot of junk in there from some of my tries... P.S. I've tried many more combinations and examples from the web, and nothing seems to work in my case... Can anyone show me the right way to do the transformation? Thanks

    Read the article

  • Strategy pattern and "action" classes explosion

    - by devoured elysium
Is it bad policy to have lots of "work" classes (such as Strategy classes) that only do one thing? Let's assume I want to make a Monster class. Instead of just defining everything I want about the monster in one class, I will try to identify what its main features are, so I can define them in interfaces. That will allow me to: Seal the class if I want. Later, other users can just create a new class and still have polymorphism by means of the interfaces I've defined. I don't have to worry how people (or myself) might want to change/add features to the base class in the future. All classes inherit from Object and they implement inheritance through interfaces, not from mother classes. Reuse the strategies I'm using with this monster for other members of my game world. Con: This model is rigid. Sometimes we would like to define something that is not easily achieved by just trying to put together these "building blocks". public class AlienMonster : IWalk, IRun, ISwim, IGrowl { IWalkStrategy _walkStrategy; IRunStrategy _runStrategy; ISwimStrategy _swimStrategy; IGrowlStrategy _growlStrategy; public Monster() { _walkStrategy = new FourFootWalkStrategy(); ...etc } public void Walk() { _walkStrategy.Walk(); } ...etc } My idea would be next to make a series of different Strategies that could be used by different monsters. On the other side, some of them could also be used for totally different purposes (i.e., I could have a tank that also "swims"). The only problem I see with this approach is that it could lead to an explosion of pure "method" classes, i.e., Strategy classes whose only purpose is to perform this or that action. On the other hand, this kind of "modularity" would allow for high reuse of strategies, sometimes even in totally different contexts. What is your opinion on this matter? Is this a valid reasoning? Is this over-engineering? Also, assuming we'd make the proper adjustments to the example I gave above, would it be better to define IWalk as: interface IWalk { void Walk(); } or interface IWalk { IWalkStrategy WalkStrategy { get; set; } //or something that resembles this } since by doing this I wouldn't need to define the methods on Monster itself, I'd just have public getters for IWalkStrategy (this seems to go against the idea that you should encapsulate everything as much as you can!) Why? Thanks

    Read the article

  • Tiling rectangles seamlessly in WPF

    - by Joe White
    I want to seamlessly tile a bunch of different-colored Rectangles in WPF. That is, I want to put a bunch of rectangles edge-to-edge, and not have gaps between them. If everything is aligned to pixels, this works fine. But I also want to support arbitrary zoom, and ideally, I don't want to use SnapsToDevicePixels (because it would compromise quality when the image is zoomed way out). But that means my Rectangles sometimes render with gaps. For example: <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Background="Black"> <Canvas SnapsToDevicePixels="False"> <Canvas.RenderTransform> <ScaleTransform ScaleX="0.5" ScaleY="0.5"/> </Canvas.RenderTransform> <Rectangle Canvas.Left="25" Width="100" Height="100" Fill="#CFC"/> <Rectangle Canvas.Left="125" Width="100" Height="100" Fill="#CCF"/> </Canvas> </Page> If the ScaleTransform's ScaleX is 1, then the Rectangles fit together seamlessly. When it's 0.5, there's a dark gray streak between them. I understand why -- the combined semi-transparent edge pixels don't combine to be 100% opaque. But I would like a way to fix it. I could always just make the Rectangles overlap, but I won't always know in advance what patterns they'll be in (this is for a game that will eventually support a map editor). Besides, this would cause artifacts around the overlap area when things were zoomed way in (unless I did bevel-cut angles on the underlapping portion, which is an awful lot of work, and still causes problems at corners). Is there some way I can combine these Rectangles into a single combined "shape" that does render without internal gaps? I've played around with GeometryDrawing, which does exactly that, but then I don't see a way to paint each RectangleGeometry with a different-colored brush. Are there any other ways to get shapes to tile seamlessly under an arbitrary transform, without resorting to SnapsToDevicePixels?

    Read the article

  • Help finding longest non-repeating path through connected nodes - Python

    - by Jordan Magnuson
I've been working on this for a couple of days now without success. Basically, I have a bunch of nodes arranged in a 2D matrix. Every node has four neighbors, except for the nodes on the sides and corners of the matrix, which have 3 and 2 neighbors, respectively. Imagine a bunch of square cards laid out side by side in a rectangular area--the project is actually simulating a sort of card/board game. Each node may or may not be connected to the nodes around it. Each node has a function (get_connections()) that returns the nodes immediately around it that it is connected to (so anywhere from 0 to 4 nodes are returned). Each node also has an "index" property, that contains its position on the board matrix (e.g. '1, 4' - row 1, col 4). What I am trying to do is find the longest non-repeating path of connected nodes given a particular "start" node. I've uploaded a couple of images that should give a good idea of what I'm trying to do: In both images, the highlighted red cards are supposedly the longest path of connected cards containing the most upper-left card. However, you can see in both images that a couple of cards that should be in the path have been left out (Romania and Moldova in the first image, Greece and Turkey in the second) Here's the recursive function that I am using currently to find the longest path, given a starting node/card: def get_longest_trip(self, board, processed_connections = list(), processed_countries = list()): #Append this country to the processed countries list, #so we don't re-double over it processed_countries.append(self) possible_trips = dict() if self.get_connections(board): for i, card in enumerate(self.get_connections(board)): if card not in processed_countries: processed_connections.append((self, card)) possible_trips[i] = card.get_longest_trip(board, processed_connections, processed_countries) if possible_trips: longest_trip = [] for i, trip in possible_trips.iteritems(): trip_length = len(trip) if trip_length > len(longest_trip): longest_trip = trip longest_trip.append(self) return longest_trip else: print card_list = [] card_list.append(self) return card_list else: #If no connections from start_card, just return the start card #as the longest trip card_list = [] card_list.append(board.start_card) return card_list The problem here has to do with the processed_countries list: if you look at my first screenshot, you can see that what has happened is that when Ukraine came around, it looked at its two possible choices for longest path (Moldova-Romania, or Turkey, Bulgaria), saw that they were both equal, and chose one indiscriminately. Now when Hungary comes around, it can't attempt to make a path through Romania (where the longest path would actually be), because Romania has been added to the processed_countries list by Ukraine. Any help on this is EXTREMELY appreciated. If you can find me a solution to this, recursive or not, I'd be happy to donate some $$ to you. I've uploaded my full source code (Python 2.6, Pygame 1.9 required) to: http://www.necessarygames.com/junk/planes_trains.zip The relevant code is in src/main.py, which is all set to run.
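The bug is visible in the signature: processed_countries (and processed_connections) default to a single shared, mutable list, and every branch appends to it, so one branch "uses up" countries that a sibling branch should still be allowed to visit; that is exactly the Ukraine/Hungary interaction described above. Giving each branch its own visited set fixes it. A compact sketch of the same search, assuming the node interface from the question (it returns the trip start-first, unlike the original, which builds the list reversed):

def get_longest_trip(self, board, visited=frozenset()):
    # an immutable set per call: sibling branches can no
    # longer poison one another's candidate paths
    visited = visited | {self}
    best = [self]
    for card in self.get_connections(board):
        if card not in visited:
            trip = [self] + card.get_longest_trip(board, visited)
            if len(trip) > len(best):
                best = trip
    return best

Longest simple path is NP-hard in general, so this is exponential in the worst case, but on a board-sized grid with sparse connections it should be fine; the same idea can also be written with an append-then-pop backtracking list if the set copies bother you.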

    Read the article

  • Automatically generate table of function pointers in C.

    - by jeremytrimble
    I'm looking for a way to automatically (as part of the compilation/build process) generate a "table" of function pointers in C. Specifically, I want to generate an array of structures something like: typedef struct { void (*p_func)(void); char * funcName; } funcRecord; /* Automatically generate the lines below: */ extern void func1(void); extern void func2(void); /* ... */ funcRecord funcTable[] = { { .p_func = &func1, .funcName = "func1" }, { .p_func = &func2, .funcName = "func2" } /* ... */ }; /* End automatically-generated code. */ ...where func1 and func2 are defined in other source files. So, given a set of source files, each of which which contain a single function that takes no arguments and returns void, how would one automatically (as part of the build process) generate an array like the one above that contains each of the functions from the files? I'd like to be able to add new files and have them automatically inserted into the table when I re-compile. I realize that this probably isn't achievable using the C language or preprocessor alone, so consider any common *nix-style tools fair game (e.g. make, perl, shell scripts (if you have to)). But Why? You're probably wondering why anyone would want to do this. I'm creating a small test framework for a library of common mathematical routines. Under this framework, there will be many small "test cases," each of which has only a few lines of code that will exercise each math function. I'd like each test case to live in its own source file as a short function. All of the test cases will get built into a single executable, and the test case(s) to be run can be specified on the command line when invoking the executable. The main() function will search through the table and, if it finds a match, jump to the test case function. Automating the process of building up the "catalog" of test cases ensures that test cases don't get left out (for instance, because someone forgets to add it to the table) and makes it very simple for maintainers to add new test cases in the future (just create a new source file in the correct directory, for instance). Hopefully someone out there has done something like this before. Thanks, StackOverflow community!
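One low-tech route that fits the "any *nix tool" constraint: a small script run from make that scans the test sources for the void f(void) definitions and writes the table as a generated .c file. A sketch in Python; the function-name pattern and file layout are assumptions:

#!/usr/bin/env python
# gen_functable.py: scan test sources, emit extern decls + funcRecord table
import re
import sys

# matches "void name(void) {" at the start of a line (brace may follow a newline)
DEF = re.compile(r'^\s*void\s+(\w+)\s*\(\s*void\s*\)\s*\{', re.MULTILINE)

names = []
for path in sys.argv[1:]:
    with open(path) as src:
        names.extend(DEF.findall(src.read()))

out = ['/* Generated by gen_functable.py -- do not edit. */']
out += ['extern void %s(void);' % n for n in names]
out.append('funcRecord funcTable[] = {')
out.append(',\n'.join('    { .p_func = &%s, .funcName = "%s" }' % (n, n)
                      for n in names))
out.append('};')
print('\n'.join(out))

Wired into make as something like "functable.c: tests/*.c" with the recipe "python gen_functable.py tests/*.c > functable.c", adding a new test file regenerates the table on the next build, which covers the maintainability goal. An alternative with no parsing at all is to compile first and harvest the symbols from nm, at the cost of a two-pass build.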

    Read the article

  • Making an XSL stylesheet work with paged XML

    - by fudgey
First off, here is the situation. I'm using a guild hosting site that allows you to input the URL to an XSL file and another input for the XML. All well and good when all of the XML you want is contained in one file. My problem is this Game Roster XML, which is paginated... look near the bottom of the file and you will find a <page_links> section that contains a pager written in HTML with links to /xml?page=2 etc. Since the guild hosting site is set up to only process one XML page, I can't get to the other XML pages. So, I can only think of two solutions, but I have no idea how to get started Set up a php page that combines all XML pages into one, then output that file. I can then use this URL in the guild hosting site XSL processor. Somehow combine all the XML files within the XSL stylesheet. I found this question on SO (I don't really understand it, because I don't know what the document($pXml1) is doing), but I don't think it will work since the number of pages will be variable. I think this might be possible by loading the next page until the <members_to> value equals the <members_total>. Any other ideas? I don't know XSL or php that well so any help with code examples would be greatly appreciated. Update: I'm trying method 2 above and here is a snippet of XSLT with which I am having trouble. The first page of the code displays without problems, but I am having trouble with this xsl:if, or maybe it's the document() statement. Update #2: changed the document to use the string & concat functions, but it's still not working. <xsl:template name="morepages"> <xsl:param name="page">1</xsl:param> <xsl:param name="url"> <xsl:value-of select="concat(SuperGroup/profule_url,'/xml?page=')"/> </xsl:param> <xsl:if test="document(string(concat($url,$page)))/SuperGroup/members_to &lt; document(string(concat($url,$page)))/SuperGroup/members_total"> <xsl:for-each select="document(string(concat($url,$page + 1)))/SuperGroup/members/members_node"> <xsl:call-template name="addrow" /> </xsl:for-each> <!-- Increment page index--> <xsl:call-template name="morepages"> <xsl:with-param name="page" select="$page + 1"/> </xsl:call-template> </xsl:if> </xsl:template>
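Option 1 is probably the less painful of the two, since document() has to re-fetch every page anyway and recursive templates are awkward to debug. A sketch of the merging proxy written in Python rather than php; the element names are taken from the roster XML described in the question (assuming <SuperGroup> is the document element), and the URL is a placeholder:

import xml.etree.ElementTree as ET
from urllib.request import urlopen

BASE = 'http://example.com/roster/xml?page=%d'  # placeholder URL

def fetch(page):
    with urlopen(BASE % page) as resp:
        return ET.fromstring(resp.read())

root = fetch(1)
members = root.find('members')
page = 1
while int(root.findtext('members_to')) < int(root.findtext('members_total')):
    page += 1
    nxt = fetch(page)
    # splice this page's roster rows into the first page's <members>
    members.extend(nxt.find('members').findall('members_node'))
    # carry the counter forward so the loop terminates
    root.find('members_to').text = nxt.findtext('members_to')

print(ET.tostring(root, encoding='unicode'))

Served from a small CGI/WSGI endpoint and entered as the XML URL on the guild host, this lets the existing stylesheet work unchanged, no morepages template needed.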

    Read the article

  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered, then for a collision of (Laser, Player) both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naive method): public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>() where T1 : Entity where T2 : Entity { Type t1 = typeof(T1); Type t2 = typeof(T2); Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>(); _onCollisionInternal += delegate(Entity obj1, Entity obj2) { if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2)); else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1)); }; return obs; } However, this method is quite slow (measurably; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a couple of cycles/allocations off this. I thought about (as in, spent an hour implementing then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type with a boolean indicating whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert
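One standard trick, sketched only in outline since the right shape depends on the engine: keep the dictionary idea, but key it by the runtime types instead of the registered types. On the first collision of a given (obj1.GetType(), obj2.GetType()) pair, walk the registered handlers once with IsAssignableFrom and cache the matching delegates, each with an order-swap flag, in a Dictionary keyed by that concrete pair; every later collision of those exact types is then a single lookup plus direct delegate calls, and the derived-type problem disappears because base-type handlers are baked into each concrete pair's cached list. The allocation per event (the Collision<T1, T2> object) is the other cost worth profiling.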

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >