Search Results

Search found 3105 results on 125 pages for 'stupid mistake'.

Page 74/125 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth on Scrum, but you can find out more about it by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements

    - Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    - Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    - Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    - Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs. The work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

    You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100% done, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you?

    Sadly, this is a very common scenario and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost-benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

    We tried the "add environment" back in South Africa and while it was "phenomenal", especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    You can read SSW's Rules to Better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

    - Merge Paranoia: avoiding merging at all cost, usually because of a fear of the consequences.
    - Merge Mania: spending too much time merging software assets instead of developing them.
    - Big Bang Merge: deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    - Never-Ending Merge: continuous merging activity because there is always more to merge.
    - Wrong-Way Merge: merging a software asset version with an earlier version.
    - Branch Mania: creating many branches for no apparent reason.
    - Cascading Branches: branching but never merging back to the main line.
    - Mysterious Branches: branching for no apparent reason.
    - Temporary Branches: branching for changing reasons, so the branch becomes a permanent temporary workspace.
    - Volatile Branches: branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
    - Development Freeze: stopping all development activities while branching, merging, and building new base lines.
    - Berlin Wall: using branches to divide the development team members, instead of dividing the work they are performing.

    - Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

    In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia

    Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
    Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line, this is where your stable code lives, where any build has known entities and always passes, and has a happy test that passes as well.

    Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

    - you and your team are working on a new set of features and the customer wants a change to his current version?
    - you are working on two features and the customer decides to abandon one of them?
    - you have two teams working on different feature sets and their changes start interfering with each other?
    - I just use labels instead of branches?

    That's a lot of "what if's", but there is a simple way of preventing this: Branching.

    In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It's standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions set so that contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or check-outs on the "Main" line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint.

    Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would only just be getting artifacts into version control and would have no need for isolation.

    Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

    The process of completing your sprint development (a command-line sketch follows the list):

    1. The Team completes their work according to their definition of done.
    2. Merge from "Main" into "Sprint1" (Forward Integration).
    3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
    4. Merge from "Sprint1" into "Main" to commit your changes (Reverse Integration).
    5. Check in.
    6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.
    7. Check in.
    8. Done.
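    To make the Forward-then-Reverse Integration sequence above concrete, here is a rough command-line sketch using tf.exe. This is only an illustration with a hypothetical team project path ($/Product); check the Branching Guidance for the options appropriate to your server:

        REM Forward Integration: pull Main into the sprint branch first
        tf merge $/Product/Main $/Product/Sprint1 /recursive
        tf resolve /recursive
        tf checkin /comment:"FI: Main into Sprint1"

        REM Reverse Integration: push the stabilized sprint back into Main
        tf merge $/Product/Sprint1 $/Product/Main /recursive
        tf checkin /comment:"RI: Sprint1 into Main"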
    But you are not yet done with the Sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every Sprint, and we do not have that yet; we only have finished code.

    Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

    In 99% of all projects I have been involved in or watched, a "shippable product" only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: If you find a deviation from the expected result, you fix it on the Release branch.

    If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the Release branch. If you can't fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

    Even once you have stabilized and released, you should not delete the Release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

    Note: Don't get forced by the business into adding features into a Release branch; that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product, you can create an RTM branch.

    Once you have fixed all the bugs you can, and added any you can't to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an audit trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a real branch, if a dispute is raised by the customer you can produce a verifiable version of the source code for an independent party to check.
    Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main.

    The same rules apply when merging any fixes in the Release branch back into Main: you should do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment.

    Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance Tests, acceptance test tracking, Unit Tests, Load Tests, Web Tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

    How do we handle a bug (or bugs) in production that can't wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the Sprint to finish. But how do you handle that?

    Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FI's from Main to Sprint 2, I guess. Your "Figure: Merging Sprint 2 back into Main." covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2.

    The Product Owner and the Team have four choices (in order of disruption/cost):

    1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
    2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
    3. Make Time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
    4. Cancel Sprint: Cancel the sprint and concentrate all resources on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe.

    Figure: Real world issue where a bug needs to be fixed in the current release.
    If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please in our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: After you have fixed the bug you need to ship again.

    You then need to again create an RTM branch to hold, in escrow, the version of the code you released.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch: are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes... and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue, so they do not get further out of sync with Main. Note: Avoid the "Big Bang Merge" at all costs.

    Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

    Figure: The logical conclusion. This then allows the creation of the next release.

    By now you should be getting the big picture, and hopefully you have learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid "Anti-Patterns" where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010

    Read the article

  • CDN CNAMEs not resolving to customer origin

    - by Donald Jenkins
    I have set up an Edgecast CDN to mirror all my static content. Because I use the root of my domain (donaldjenkins.com) to host my main site—using Google Analytics which sets cookies—I've stored the corresponding static files in a separate cookieless domain (donaldjenkins.info) which is used only for this purpose. I've set it up (using this guide for general guidance), with the following structure, based on a combination of customer origin and CDN origin to make the most of the chosen short domain name and provide meaningful URLs: http://donaldjenkins.info:80 is set as the customer origin for the content stored in the CDN at directory http://wac.62E0.edgecastcdn.net/8062E0/donaldjenkins.info; I've then set up various subdomains of a separate domain, the conveniently-named cdn.dj, as CDN-origin Edge CNAMEs for each of the corresponding static content types: js.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js; css.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/css; images.cdn.dj points to the origin directory http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/images and so on. This results in some pretty nice, short, clear URLs. The DNS zone file for cdn.dj (yes, it's a real domain name registered in Djibouti) is set properly: cdn.dj 43200 IN A 205.186.157.162 css.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. images.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. js.cdn.dj 43200 IN CNAME wac.62E0.edgecastcdn.net. The DNS resolves to the Edgecast URL: $ host js.cdn.dj js.cdn.dj is an alias for wac.62E0.edgecastcdn.net. wac.62E0.edgecastcdn.net is an alias for gs1.wac.edgecastcdn.net. gs1.wac.edgecastcdn.net has address 93.184.220.20 But whenever I try to fetch a file in any of the directories to which the CNAME assets map, I get a 404: $ curl http://js.cdn.dj/combined.js <?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>404 - Not Found</title> </head> <body> <h1>404 - Not Found</h1> </body> </html> despite the fact that the corresponding customer origin file exists: $ curl http://donaldjenkins.info/js/combined.js fetches the content of the combined.js file. Yet it's been more than enough time for the DNS to propagate since I set up the CDN. There's obviously some glaring mistake in the above-described setup, and I'm a bit of a novice with CDNs—but any suggestions would be gratefully received.
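    For what it's worth, a quick way to separate a DNS problem from an edge-configuration problem is to request the asset through the edge hostname directly. These are hypothetical checks assembled from the paths described above:

        # Does the edge serve the file when given the full CDN directory path?
        curl -v http://wac.62E0.edgecastcdn.net/0062E0/donaldjenkins.info/js/combined.js

        # Does the edge apply the js.cdn.dj Edge CNAME configuration when asked by name?
        curl -v -H "Host: js.cdn.dj" http://wac.62E0.edgecastcdn.net/combined.js

    If the first succeeds and the second returns a 404, the problem would be in the Edge CNAME mapping rather than in DNS.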

    Read the article

  • HTTP Headers: Max-Age vs Expires – Which One To Choose?

    - by Gopinath
    Caching static content like images, scripts and styles on the client browser reduces load on the web servers and also improves end users' browsing experience by loading web pages quickly. We can use the HTTP headers Expires or Cache-Control: max-age to cache content on the client browser and set an expiry time for it. The Expires header is part of the HTTP/1.0 standard, and Cache-Control: max-age was introduced in the HTTP/1.1 specification to solve the issues and limitations of the Expires header. Consider the following headers:

        Cache-Control: max-age=24560
        Expires: Tue, 15 May 2012 06:17:00 GMT

    The first header instructs web browsers to cache the content for 24560 seconds relative to the time the content is downloaded, and to expire it after that time period elapses. The second header instructs web browsers to expire the content after 15 May 2012 06:17 GMT.

    Out of these two options, which one should we use: max-age or Expires? I prefer the max-age header, for the following reasons:

    - max-age is a relative value, and in most cases it makes sense to set a relative expiry date rather than an absolute one.
    - Expires header values are complex to set: the time format must be correct and the time zone appropriate. Even a small mistake in setting these values results in unexpected behaviour.
    - As Expires header values are absolute, we need to keep changing them at regular intervals. Let's say we set 2011 June 1 as the expiry date for all the image files of this blog; on 2011 June 2 we would have to modify the expiry date to something like 2012 Jan 1. This adds the burden of managing the Expires headers.

    This article, titled "HTTP Headers: Max-Age vs Expires – Which One To Choose?", was originally published at Tech Dreams.
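    As an illustration, a minimal sketch of setting a relative max-age with Apache's mod_headers might look like the following (assuming mod_headers is enabled; the file-match pattern and the one-month lifetime are placeholders to adapt):

        <FilesMatch "\.(js|css|png|jpg|gif)$">
            Header set Cache-Control "max-age=2592000, public"
        </FilesMatch>

    Because the value is relative, the same directive keeps working next month, whereas an equivalent Expires rule would need its absolute date refreshed.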

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we use internally to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/

    Once you have a tool like this one, you can start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but also that the logical integrity of your business rules is enforced. For example, let's say the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that this in theory is true, but a lot of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices, and on the ERP implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will be solved in the future using Master Data Management but, meanwhile, how can you give your customers an idea of the data quality? How can you check that the logical integrity of the analytical data you produce is exactly what you expect?

    Well, unit testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time.

    I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx

    I'll enjoy discussing all this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place :-))
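    To make the product-line example concrete, a test query in such a suite might simply assert that no offending rows exist. This is only a sketch against a hypothetical star schema (FactInvoice, DimProduct and DimDate are invented names):

        -- Expect zero: the dismissed product line must have no invoices dated 2010
        SELECT COUNT(*) AS OffendingInvoices
        FROM FactInvoice f
        INNER JOIN DimProduct p ON p.ProductKey = f.ProductKey
        INNER JOIN DimDate d ON d.DateKey = f.InvoiceDateKey
        WHERE p.ProductLine = 'DismissedLine'
          AND d.CalendarYear = 2010;

    A QueryUnit-style test would compare the result against an expected value of 0 and fail the suite otherwise.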

    Read the article

  • Silverlight Death Trolls Dancing on XAML’s Grave

    - by D'Arcy Lussier
    I'm starting to see a whole bunch of tweets and blog posts on how Silverlight/WPF is dead, or how the XAML team has been disbanded at Microsoft, or how someone predicted Silverlight would die, blah blah blah. They all have a similar ring to them though: "Told ya so!" "They were stupid ideas anyway!" "Serves Microsoft right, boy are they dumb!" Let me tell you something: all those that are gleefully raving about Silverlight/WPF's demise are nothing more than death trolls.

    Let's assume that everything out there is true. Microsoft is obviously moving towards HTML 5 in a huge way (TechCrunch pointed out that SkyDrive has replaced its Silverlight-based version with an HTML 5 one), and not just on the web, as we've seen with recent announcements about how HTML 5 apps will be natively supported on Windows 8. WPF never caught on in the marketplace, regardless of its superior technology offering compared to WinForms. And Silverlight…well, it gave Flash a good run for its money, but plug-in based web applications are becoming passé in light of HTML 5. (It's interesting that at a developer conference I put on just a few weeks ago, only 1 out of 60+ sessions included Silverlight. 5 focussed on HTML 5.)

    So what does this *death* of Silverlight/WPF/XAML mean then in the grand scheme of things (again, assuming that they truly *are* dying/dead)? Well, nothing really…at least nothing bad. Silverlight has given us some fantastic applications and experiences (Vancouver Olympics anyone?), and WP7 couldn't have launched without Silverlight as its development platform. And WPF, although it had putrid adoption, has had some great success stories. A Canadian company that I talked to recently showed me how they re-wrote their point-of-sale application entirely in WPF, and the product is a huge success, providing features their competitors aren't. Arguably (and I say that only because I know I'm going to get WTF comments for this), VS.NET 2010 is a great example of what a WPF app can provide over previous C++ based applications.

    Technologies evolve. In a decade we've had 5 versions of the .NET framework, seen languages like J# come and go, seen F# appear, seen communications layers change with WCF, seen EF go through multiple evolutions and traditional ADO.NET DataSets go extinct (from actual use anyway), and seen ASP.NET WebForms be replaced with ASP.NET MVC as a preferred web platform. Are Silverlight and WPF done? Maybe…probably?…thing is, it doesn't really affect me personally in any way, or you…so why would we care if they get replaced with something better and more robust that we can build better solutions with? Just remember the golden rule: don't feed the trolls.

    Read the article

  • Up in the Air: Team Oracle Play-by-Play

    - by Aaron Lazenby
    Yesterday, I had the amazing opportunity to fly along with Sean D. Tucker and Team Oracle. Leaving from the San Carlos airport, we did a 30-minute flight over the Pacific just south of the coastal town of Half Moon Bay. In that half hour, I rode through a massive 4G loop, survived a crushing hammerhead, and took control of the plane to perform a basic wingover (you can learn what the heck I'm talking about by visiting this website). I have lots of great video, but it's going to take me some time to make sense of it. For now, here's my Twitter-based play-by-play of yesterday's events. Many thanks to Sean D. Tucker and the whole crew (Ben and Ian, especially) for this great opportunity to fly with Team Oracle.

    Live tweets from @OracleProfit:

    - I will be spending the afternoon in a stunt plane, upside down above the San Francisco bay. http://bit.ly/cwkrkI
    - At the San Carlos airport. More than slightly freaked out. Shaking hands diminish texting ability. Slightly reassuring. http://yfrog.com/1qt61nj
    - There go the doors to the photo plane... #teamoracle http://yfrog.com/58ywlj
    - Sean D Tucker assures me: "The sky is a great place to be." Helpful, but I'm still nervous. #teamoracle
    - "You get a parachute. He gets a harness." How was this decision made? #teamoracle
    - The plane with @radu43 has returned. I'm up next...
    - Couldn't help myself...drank a soda before flying. Mistake? We'll see... #teamoracle
    - Advice of the day: "If you pull with two hands, you improve the chances of the chute deploying on the first try." Lovely. #teamoracle
    - I feel so strange. But I flew a high performance airplane. And did an aerobatics move. Wild. #teamoracle
    - "Flying ten feet off the ground, upside-down at 250 miles per hour isn't exciting to me." Sean D. Tucker #teamoracle
    - "What is exciting to me is flying that perfect pattern, just like I imagined it in my head." Sean D. Tucker #teamoracle
    - "You're going to sleep well tonight. You just carried four times your body weight." #teamoracle #gforce
    - Just watched the #teamoracle plane take off for its flight home. I'm waiting for Caltrain. #undignifiedanticlimax
    - Enough with the #teamoracle. Check http://blogs.oracle.com/profit for the video. Coming soon!

    Read the article

  • Back home :-)

    - by Mike Dietrich
    Wrote this entry last night in the ICE from Stuttgart to Munich, but the connection broke: a 28.5-hour journey, and close by now. Actually I would have been even closer if our TGV hadn't had brake problems as soon as we entered German territory. And you don't want a train which goes up to a speed of 200 mph having issues with its brakes, right? So we missed the connection in Stuttgart, but I caught the last train this night towards Munich. Distance: approx 1900 km altogether. Usually it takes 2.5 hours with a direct Aer Lingus flight from Munich, or a bit more when you go through Zurich or Frankfurt. But at least you meet more people and see a bit more of the landscapes passing by :-)

    Except for the brake problem, everything worked out well so far (I'm now there, finally!). I had 4 hours to change in Paris from Gare du Nord to Gare de l'Est, and one thing I really have to point out: the people working for SNCF, the French National Railways, were so organized and helpful, purely amazing. I asked the man at the counter where I had to pick up my prepaid tickets for directions to Gare de l'Est, and after we had a chat about Marlene Dietrich he just grabbed his iPhone, started Google Earth and showed me the way to walk. I'm pretty sure it's a stupid stereotype that people in Paris or France are unfriendly to foreigners who don't speak French. In my past 3 stays or travels to Paris over the past 2 years, I have had only great experiences.

    And another thing I really enjoy when being in France: the food!!! The sandwich I had at the train station was packed with yummy goat cheese. And there's always Paul. You might ask yourself: who the heck is Paul? That's Paul - or actually their website. And at Paul's they usually serve excellent fruit tartes - and this time a nice Gateau au Chocolat. And very good Café Crème as well :-)

    That's actually the positive part of traveling this way: the food you'll get is much better than the airline food - if your airline still serves something called food ...

    Read the article

  • How to Customize Your How-To Geek RSS Feeds (We’re Changing Things)

    - by The Geek
    If you're an RSS subscriber, you'll soon notice that we're making a few changes. Why? It's time to simplify our system, while providing you a little more control over which articles you want to see. The point, of course, is that people like different things, and that's OK. What's not so great is getting complaints: Linux users are always whining about Windows posts, and Windows users are whining when we write Linux posts. It's also worth pointing out that if you aren't interested in a post, you don't have to click on it to read it. This is probably fairly obvious to reasonable people.

    The New Feeds

    Here's the new set of feeds you can subscribe to. We'll probably add more fine-grained feeds in the future, as we get some more things straightened out.

    - Everything we publish (news, how-tos, features)
    - Just the Feature Articles (the absolute best stuff)
    - Just News (ETC) Posts
    - Just Windows Articles
    - Just Linux Articles
    - Just Apple Articles
    - Just Desktop Fun Articles

    You can obviously subscribe to one or many of them if you feel like it.

    The Once Daily Summary Feed!

    If you'd rather get all your How-To Geek in a single dose each day, you can subscribe to the summary feed, which is pretty much the same as our daily email newsletter. You can subscribe to this summary feed by clicking here.

    Note: we're working on a lot of backend changes to hopefully make things a little better for you, the reader. One of the things we've consistently had feedback on is the comment system, which we'll tackle a little later. Also, if you suddenly saw a barrage of posts earlier… oops! Our mistake.

    Read the article

  • Indian government departments have more insecure websites than others

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/26/indian-government-department-have-more-unsecure-website-then-others.aspx

    One of my friends shared his college experience with me. He is not related to computer science. One day he told me that Ankit Fadia came to their college. In front of many students he showed how to hack the BSNL website with tricks; he broke down the flow of how the BSNL site works. I told them BSNL's is one of the most insecure websites in India.

    If you log in to the website, maybe it runs in a few seconds, but sometimes it takes 58 minutes. OK, this is not a grammar mistake: 58 minutes is less than 1 hour. This means you open the link in a tab and it will open hours later. If you are using IE8, Chrome or Firefox you will be forced to use IE7 or downgrade; I simply use IE7 mode in IE to make it work. This happens because they use something called DynaTrace.

    This site is most insecure. Now guess how! Suppose my username is xyz and my password is abc. How can I reset the password? I simply go to the website, and on the reset page the portal password does not work there; it is the broadband credentials that are used. Remember that the broadband username and password are different from the portal ones. So if I want to reset your password, I simply need to know your broadband username and I can reset it myself: I just log in with my own username, and when I open the password reset page I can fill in your broadband username and it will work. I have not tried this. Broadband usernames can easily be guessed; it depends on the common way people's broadband usernames are made. Is this safe? Nope.

    There are many things on the site which make me feel it is a website from the 1900s. They still live in the popup era. These sites are nothing but crap: they don't work most of the time, and when they do work they run too slowly.

    Read the article

  • SQLAuthority News – Presented at Bangalore DevCon August 4, 2012

    - by pinaldave
    Bangalore DevCon 2012 was great fun. Earlier this month I was fortunate to be invited to present at the DevCon. The event was very well planned and had an excellent response: there were more than 140 attendees in the sessions at any time. There were two tracks, and both ran parallel to each other in the Microsoft Bangalore building. The venue is fantastic and the enthusiasm of the community is impeccable.

    We had a total of 12 sessions during the day. I had decided to attend each session if I could. We had so many fantastic speakers and I did not want to miss any of the sessions. As the sessions were running in parallel, I attended every session for 30 minutes and then switched to another session. I had fun doing this experiment, as it gave me a good idea about every session.

    I personally presented the session SQL Tips and Tricks for Web Developers. DBA is a very common word, and every time we say SQL Server lots of people think of a DBA; however, SQL Server is used by many developers as well. In this session I tried to cover a few of the simple concepts where developers must pay special attention while writing T-SQL code. Sometimes a very small mistake can be fatal for performance in the future. Here are a few photos of the event.

    Btw, the two sessions which clearly stood out were Vinod Kumar's session on Leadership and Lohith's session on Visual Studio Tips and Tricks.

    Additional Read: Following are the blog posts by the community on the Bangalore DevCon experience. I encourage you to read them all and leave a comment on which one you liked the most.

    http://abhishekbhat.wordpress.com/2012/08/07/devcon-2012-experience/
    http://praveenprajapati.wordpress.com/2012/08/07/devcon-2012-part-2/
    http://tomsblogsspot.blogspot.in/2012/08/devcon-2012.html?view=classic
    https://manasdash.wordpress.com/2012/08/06/devcon-2012-by-bdotnet-4th-august-2012/
    http://www.jagan-bhathri.com/2012/08/bangalore-developer-conference-2012-by.html

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Pixel Perfect Collision Detection in Cocos2dx

    - by Happybirthday
    I am trying to port pixel perfect collision detection to Cocos2d-x. The original version was made for Cocos2d and can be found here: http://www.cocos2d-iphone.org/forums/topic/pixel-perfect-collision-detection-using-color-blending/ Here is my code for the Cocos2d-x version:

        bool CollisionDetection::areTheSpritesColliding(cocos2d::CCSprite *spr1, cocos2d::CCSprite *spr2, bool pp, CCRenderTexture* _rt)
        {
            bool isColliding = false;
            CCRect intersection;
            CCRect r1 = spr1->boundingBox();
            CCRect r2 = spr2->boundingBox();

            intersection = CCRectMake(fmax(r1.getMinX(), r2.getMinX()), fmax(r1.getMinY(), r2.getMinY()), 0, 0);
            intersection.size.width = fmin(r1.getMaxX(), r2.getMaxX()) - intersection.getMinX();
            intersection.size.height = fmin(r1.getMaxY(), r2.getMaxY()) - intersection.getMinY();

            // Look for simple bounding box collision
            if ((intersection.size.width > 0) && (intersection.size.height > 0))
            {
                // If we're not checking for pixel perfect collisions, return true
                if (!pp)
                {
                    return true;
                }

                unsigned int x = intersection.origin.x;
                unsigned int y = intersection.origin.y;
                unsigned int w = intersection.size.width;
                unsigned int h = intersection.size.height;
                unsigned int numPixels = w * h;

                //CCLog("Intersection X and Y %d, %d", x, y);
                //CCLog("Number of pixels %d", numPixels);

                // Draw into the RenderTexture
                _rt->beginWithClear(0, 0, 0, 0);

                // Render both sprites: first one in RED and second one in GREEN
                glColorMask(1, 0, 0, 1);
                spr1->visit();
                glColorMask(0, 1, 0, 1);
                spr2->visit();
                glColorMask(1, 1, 1, 1);

                // Get color values of intersection area
                ccColor4B *buffer = (ccColor4B *)malloc(sizeof(ccColor4B) * numPixels);
                glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

                _rt->end();

                // Read buffer: a pixel with both red and green set means both sprites overlap there
                unsigned int step = 1;
                for (unsigned int i = 0; i < numPixels; i += step)
                {
                    ccColor4B color = buffer[i];
                    if (color.r > 0 && color.g > 0)
                    {
                        isColliding = true;
                        break;
                    }
                }

                // Free buffer memory
                free(buffer);
            }

            return isColliding;
        }

    My code is working perfectly if I send the "pp" parameter as false, that is, if I do only a bounding box collision, but I am not able to get it working correctly for the case when I need pixel perfect collision. I think the OpenGL masking code is not working as I intended. Here is the code for "_rt":

        _rt = CCRenderTexture::create(visibleSize.width, visibleSize.height);
        _rt->setPosition(ccp(origin.x + visibleSize.width * 0.5f, origin.y + visibleSize.height * 0.5f));
        this->addChild(_rt, 1000000);
        _rt->setVisible(true); //For testing

    I think I am making a mistake with the implementation of this CCRenderTexture. Can anyone guide me with what I am doing wrong? Thank you for your time :)
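    For reference, a hypothetical call site for the helper above (the sprite pointers are placeholders, not names from the question):

        // In the scene's update step: pp = true requests the pixel-perfect pass
        bool hit = CollisionDetection::areTheSpritesColliding(playerSprite, enemySprite, true, _rt);
        if (hit) { /* react to the collision */ }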

    Read the article

  • Make the Firefox Awesome Bar Semi-Transparent Like Google Chrome

    - by Matthew Guay
    Would you like to make the Firefox Awesome Bar drop-down menu semi-transparent like in Google Chrome? Here's a quick trick that can make your Firefox Awesome Bar a bit more awesome.

    When you type an address or search query into the address bar in Google Chrome, the drop-down list of history and search suggestions that appears is slightly transparent. Nothing extreme, but it adds a nice touch. Firefox's Awesome Bar, on the other hand, is fully opaque by default.

    We can change that with a simple tweak. Exit Firefox, then open your Firefox profile folder by entering the following in the address bar in Explorer or in the Run command:

        %appdata%\Mozilla\Firefox\Profiles\

    Open the default folder, and then open the Chrome folder in it. Now, open the userChrome.css file in an editor such as Notepad. If you do not have a userChrome.css file, open the userChrome-example.css file instead. Now, add the following to the end of the file:

        #PopupAutoCompleteRichResult[type="autocomplete-richlistbox"] {
            opacity: 0.9 !important;
        }

    You can change the opacity value, but 0.9 seemed the closest to Chrome's transparency while keeping the text readable. Save the file as userChrome.css in that same folder. If you're editing with Notepad, make sure to select All Files when saving so the file won't be saved with a .txt extension.

    Open Firefox, and now your Awesome Bar's drop-down list will be transparent. Actually, it may look even more awesome than Google Chrome's address bar!

    Conclusion

    With this simple trick, you can make your Firefox Awesome Bar a bit more awesome. With tweaks like this, it's no wonder Firefox is still so popular. Special thanks to Daniel Spiewak for the tip!

    Read the article

  • Using MVP, how to create a view from another view, linked with the same model object

    - by Dinaiz
    Background

    We use the Model-View-Presenter design pattern along with the abstract factory pattern and the "signal/slot" pattern in our application, to fulfill two main requirements:

    - Enhance testability (very lightweight GUI, every action can be simulated in unit tests)
    - Make the "view" totally independent from the rest, so we can change the actual view implementation without changing anything else

    In order to do so, our code is divided into four layers:

    - Core: which holds the model
    - Presenter: which manages interactions between the view interfaces (see below) and the core
    - View interfaces: they define the signals and slots for a view, but not the implementation
    - Views: the actual implementation of the views

    When the presenter creates or deals with views, it uses an abstract factory and only knows about the view interfaces. It does the signal/slot binding between view interfaces. It doesn't care about the actual implementation. In the "views" layer, we have a concrete factory which deals with implementations. The signal/slot mechanism is implemented using a custom framework built upon boost::function. Really, what we have is something like this: http://martinfowler.com/eaaDev/PassiveScreen.html

    Everything works fine.

    The problem

    However, there's a problem I don't know how to solve. Let's take a very simple drag-and-drop example. I have two ContainerViews (ContainerView1, ContainerView2). ContainerView1 has an ItemView1. I drag ItemView1 from ContainerView1 to ContainerView2. ContainerView2 must create an ItemView2, of a different type, but which "points" to the same model object as ItemView1. So ContainerView2 gets a callback called for the drop action with ItemView1 as a parameter. It calls ContainerPresenter2, passing it ItemView1. In this case we are only dealing with views. In MVP-PV, views aren't supposed to know anything about the presenter nor the model, right? How can I create ItemView2 from ItemView1, not knowing which model object ItemView1 is representing?

    I thought about adding an "itemId" to every view, this id being the id of the core object the view represents. So in pseudo code, ContainerPresenter2 would do something like:

        itemView2 = abstractWidgetFactory.createItemView2();
        this.add(itemView2, itemView1.getCoreObjectId());

    I don't get too much into details; that just works. The problem I have here is that those itemIds are just like pointers, and pointers can be dangling. Imagine that by mistake I delete itemView1, and this deletes coreObject1. ItemView2 will then have a coreObjectId which represents an invalid coreObject.

    Isn't there a more elegant and "bulletproof" solution? Even though I never did Objective-C or Mac OS X programming, I couldn't help but notice that our framework is very similar to the Cocoa framework. How do they deal with this kind of problem? I couldn't find more in-depth information about that on Google. If someone could shed some light on this... I hope this question isn't too confusing.
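    For what it's worth, here is a minimal C++ sketch of the kind of signal/slot binding described above, using boost::function and boost::bind; the names (IItemView, droppedOn, ContainerPresenter2::onDropped) are invented for illustration and are not the actual framework's API:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>

        // A view interface exposes "signals" as boost::function slots the
        // presenter can bind to; the view never sees the presenter's type.
        struct IItemView {
            virtual ~IItemView() {}
            boost::function<void(int)> droppedOn; // fired with a core object id
        };

        class ContainerPresenter2 {
        public:
            explicit ContainerPresenter2(IItemView& view) {
                view.droppedOn = boost::bind(&ContainerPresenter2::onDropped, this, _1);
            }
        private:
            void onDropped(int coreObjectId) {
                // Look the core object up by id; if the id is stale because the
                // object was deleted, refuse to create ItemView2 instead of crashing.
            }
        };

    A defensive lookup like this does not remove the dangling-id risk, it only contains it, which is essentially the trade-off the question is about.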

    Read the article

  • A* navigational mesh path finding

    - by theguywholikeslinux
    So I've been making this top down 2D Java game in this framework called Greenfoot [1] and I've been working on the AI for the guys you are gonna fight. I want them to be able to move around the world realistically, so I soon realized, amongst a couple of other things, I would need some kind of pathfinding.

    I have made two A* prototypes. One is grid based, and then I made one that works with waypoints, so now I need to work out a way to get from a 2D "map" of the obstacles/buildings to a graph of nodes that I can make a path from. The actual pathfinding seems fine; just my open and closed lists could use a more efficient data structure, but I'll get to that if and when I need to.

    I intend to use a navigational mesh for all the reasons outlined in this post on ai-blog.net [2]. However, the problem I have faced is that what A* thinks is the shortest path from the polygon centres/edges is not necessarily the shortest path if you travel through any part of the node. To get a better idea you can see the question I asked on Stack Overflow [3]. I got a good answer concerning a visibility graph. I have since purchased the book (Computational Geometry: Algorithms and Applications [4]) and read further into the topic; however I am still in favour of a navigational mesh (see "Managing Complexity" [5] from Amit's Notes about Path-Finding [6]). (As a side note, maybe I could possibly use Theta* to convert multiple waypoints into one straight line if the first and last are not obscured. Or each time I move back, check to the waypoint before last to see if I can go straight from that to this.)

    So basically what I want is a navigational mesh where, once I have put it through a funnel algorithm (e.g. this one from Digesting Duck [7]), I will get the true shortest path, rather than one that is the shortest path following node to node only, but not the actual shortest given that you can go through some polygons and skip nodes/edges.

    Oh, and I also want to know how you suggest storing the information concerning the polygons. For the waypoint prototype example I made, I just had each node as an object and stored a list of all the other nodes you could travel to from that node; I'm guessing that won't work with polygons? And how do I tell if a polygon is open/traversable or if it is a solid object? How do I store which nodes make up the polygon?

    Finally, for the record: I do want to programme this by myself from scratch even though there are already other solutions available, and I don't intend to be (re)using this code in anything other than this game, so it does not matter that it will inevitably be poor quality.

    http://greenfoot.org
    http://www.ai-blog.net/archives/000152.html
    http://stackoverflow.com/q/7585515/
    http://www.cs.uu.nl/geobook/
    http://theory.stanford.edu/~amitp/GameProgramming/MapRepresentations.html
    http://theory.stanford.edu/~amitp/GameProgramming/
    http://digestingduck.blogspot.com/2010/03/simple-stupid-funnel-algorithm.html
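    Regarding the open and closed lists mentioned above, a binary-heap priority queue is the usual upgrade for the open list, with a hash set for the closed list. Here is a minimal Java sketch (the class and field names are invented for illustration, not taken from the question's code):

        import java.util.Comparator;
        import java.util.HashSet;
        import java.util.PriorityQueue;
        import java.util.Set;

        class AStarLists {
            static class Node {
                double g;                        // cost from the start node
                double h;                        // heuristic estimate to the goal
                double f() { return g + h; }     // A* priority
            }

            // Open list as a binary heap ordered by lowest f = g + h
            PriorityQueue<Node> open = new PriorityQueue<Node>(64, new Comparator<Node>() {
                public int compare(Node a, Node b) {
                    return Double.compare(a.f(), b.f());
                }
            });

            // Closed list as a set with O(1) membership tests
            Set<Node> closed = new HashSet<Node>();
        }

    One caveat: java.util.PriorityQueue has no decrease-key operation, so the common workaround is to re-insert a node with its improved cost and skip any stale entry popped later.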

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by J Swaroop
    Re-posting Bruce Tierney's original post - albeit a day late: I reckon this is Oracle's most interesting launch this year. Enjoy!

    The World's First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps
    April 1, 2012

    Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, "I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!"

    Thousands of meek and mild individuals eagerly lined up outside Oracle Corporation's Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for the individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still spineless Exaskeleton hopefuls.

    Supporting Quotes

    "Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle's traditional enterprise focus into new markets", said Oracle representative Sabrina Twich. "Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies."

    "With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong," said the Director of Oracle Exaskeleton. "Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn't get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for… nevermind, I don't want to be in this stupid press release anyway."

    Releases

    An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs), since technology vendors have used up almost all of the 17,576 TLA permutations (TLAPs).

    About Oracle

    Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this is true in some secret Oracle lab, but I doubt it.

    Trademarks

    Really…you're still reading this? Cool!

    Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • Wireless Drivers for Broadcom BCM 4321 (14e4:4329) will not stay connected to a wireless network

    - by Eugene
    So, I'm not necessarily new to Linux, I just never took the time to learn it, so please bear with me. I just swapped one of my wireless cards from one computer to another. The wireless card in question is a "Broadcom BCM4321 (14e4:4329)", or actually a "Netgear WN311B RangeMax Next 270 Mbps Wireless PCI Adapter", but that's not important. I've tried (but probably screwed up in the process) installing the "wl", "b43" and "brcmsmac" drivers, or at least I think I did. Currently I have only the following drivers loaded:

        eugene@EugeneS-PCu:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    The main issue is that with most of the available drivers that I've installed, they will find my wireless network, but they will only stay connected for about a minute with abnormally slow speed and then all of a sudden disconnect. Currently, the computer is hooked into another one to share its connection, so that I can install drivers from the internet instead of loading them onto a flash drive and doing it offline. If anyone has any insight into the problem, that would be awesome. If not, I'll probably just look up how to install the Windows closed-source driver.

    Edit 1: Even when I try the method here, as suggested when this was marked as a duplicate, I still can't stay connected to a wireless network.

    Edit 2: After discussing my issue with @Luis, he opened my question back up and told me to include the tests/procedures in the comments. Basically I did this:

    1. Read the first answer of the link above from when this question was marked as a duplicate, which involved removing bcmwl-kernel-source and instead installing firmware-b43-installer and b43-fwcutter. No change of result, so I contacted Luis in the comments, who then told me to try the second answer, which involved removing my previous mistake and installing bcmwl-kernel-source.
    2. Now the Network Manager (this has happened before, but usually I fixed it by using a different driver) doesn't even recognize that WiFi exists (both non-literal and literal). Luis then suggested sudo rfkill unblock all.
    3. rfkill unblock all didn't return anything, so I decided to try sudo rfkill list all. That returns nothing as well (no wonder rfkill unblock all did nothing).
    4. I enter lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" and that returns nothing.
    5. Try loading the driver by entering sudo modprobe b43 and try lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl" again. Returns this:

        eugene@Eugenes-uPC:~$ sudo modprobe b43
        eugene@Eugenes-uPC:~$ lsmod | grep "brcmsmac\|b43\|ssb\|bcma\|wl"
        b43                   387371  0
        bcma                   52096  1 b43
        mac80211              630653  1 b43
        cfg80211              484040  2 b43,mac80211
        ssb_hcd                12869  0
        ssb                    62379  2 b43,ssb_hcd

    So to recap: currently Network Manager doesn't recognize that wireless exists, the b43 drivers are loaded, and I've currently hardwired a connection from my laptop to the computer that's causing this.
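    As an aside, the usual way to stop competing Broadcom drivers from grabbing the same card is a modprobe blacklist. This is only a sketch of that approach (the file name is arbitrary, and it assumes settling on the proprietary wl driver, so every in-kernel driver is blocked):

        # /etc/modprobe.d/blacklist-broadcom.conf (hypothetical file name)
        blacklist b43
        blacklist brcmsmac
        blacklist ssb
        blacklist bcma

    After editing, sudo update-initramfs -u and a reboot make the blacklist take effect. To keep b43 instead, blacklist wl and brcmsmac but leave ssb and bcma alone, since b43 depends on them.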

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up on my post about Sending Outlook Invites, here is a shorter one on how to respond. Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude.

    If you knew about the meeting, you are happy investing your time in it, the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you, then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made; if you don’t receive a response to your email, send a DECLINE when you've waited long enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative?" and you can have a discussion about it. When you deem it appropriate, you can also use the counter-propose feature of Outlook instead of the options above, but IMO that feature has a questionable interaction model and UI (for both sender and recipient), so many people get confused by it.

    BTW, two of my Outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don’t have a rule for DECLINE and TENTATIVE because typically I follow up with folks that send those. The second rule ensures that all invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when sending a response to an invite, the item disappears from the folder, and hence it is empty and not nagging me.

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Interpolation using a sprite's previous frame and current frame

    - by user22241
    Overview I'm currently using a method which, as has been pointed out to me, is extrapolation rather than interpolation. As a result, I'm now also looking into the possibility of using another method which is based on a sprite's position at its last (rendered) frame and its current one. Assuming an interpolation value of 0.5, this is (visually) how I understand it should affect my sprite's position....

    This is how I'm obtaining an interpolation value:

        public void onDrawFrame(GL10 gl) {
            // Set/re-set loop back to 0 to start counting again
            loops = 0;
            while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                nextGameTick += skipTicks;
                timeCorrection += (1000d / ticksPerSecond) % 1;
                nextGameTick += timeCorrection;
                timeCorrection %= 1;
                loops++;
                tics++;
            }
            interpolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;
            render(interpolation);
        }

    I am then applying it like so (in my rendering call):

        render(float interpolation) {
            spriteScreenX = (spriteScreenX - spritePreviousX) * interpolation + spritePreviousX;
            spritePreviousX = spriteScreenX; // update and store this for next time
        }

    Results This unfortunately does nothing to smooth the movement of my sprite. It's pretty much the same as without the interpolation code. I can't get my head around how this is supposed to work, and I honestly can't find any decent resources which explain it in any detail. My understanding of extrapolation is that when we arrive at the rendering call, we calculate the time between the last update call and the render call, and then adjust the sprite's position to reflect this time (moving the sprite forward) - and yet this (interpolation) is moving the sprite back, so how can it produce smooth results? Any advice on this would be very much appreciated.

    Edit I've implemented the code from OriginalDaemon's answer like so:

        @Override
        public void onDrawFrame(GL10 gl) {
            newTime = System.currentTimeMillis() * 0.001;
            frameTime = newTime - currentTime;
            if (frameTime > (dt * 25))
                frameTime = (dt * 25);
            currentTime = newTime;
            accumulator += frameTime;
            while (accumulator >= dt) {
                SceneManager.getInstance().getCurrentScene().updateLogic();
                previousState = currentState;
                t += dt;
                accumulator -= dt;
            }
            interpolation = (float) (accumulator / dt);
            render();
        }

    Interpolation values are now being produced between 0 and 1 as expected (similar to how they were in my original loop) - however, the results are the same as with my original loop (my original loop allowed frames to skip if they took too long to draw, which I think this loop is also doing). I appear to have made a mistake in my previous logging; it is logging as I would expect (the interpolated position does appear to be in between the previous and current positions) - however, the sprites are most definitely choppy when the render() skipping happens.
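    A likely culprit, for what it's worth, is the render call itself: it overwrites the sprite's logic position (spriteScreenX) with a blended draw position and then stores that blended value as the "previous" state, so subsequent frames interpolate between values that are no longer real logic states. A minimal sketch of the usual structure (the Sprite class and its field names here are hypothetical, not taken from the question's code) captures the previous position once per logic tick and treats the interpolated value as a throwaway draw position:

        public class Sprite {
            float currentX;   // position after the most recent logic update
            float previousX;  // position from the logic update before that

            // Called once per fixed-timestep logic tick. The snapshot of the
            // old position happens BEFORE the sprite moves, so previousX always
            // holds a real (never blended) logic state.
            void update(float dt, float velocityX) {
                previousX = currentX;
                currentX += velocityX * dt;
            }

            // Called from the render loop with alpha in [0, 1].
            // Deliberately does NOT write back into currentX or previousX.
            float drawX(float alpha) {
                return previousX + (currentX - previousX) * alpha;
            }
        }

    Rendering a fraction of a tick behind the latest logic state like this is exactly what makes it interpolation rather than extrapolation, and it stays smooth precisely because both endpoints are genuine logic states rather than previously blended draw positions.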

    Read the article

  • Oracle Announces Availability of Oracle Exaskeleton with Extreme Scale

    - by Bruce Tierney
    The World’s First Human Scale Body Surface (HSBS) Designed to Toughen Spineless Wimps
    April 1, 2012

    Building on the success of Oracle Exalogic, Oracle Exadata, and Oracle Exalytics, Oracle today announced the general availability of Oracle Exaskeleton, toughening up spineless wimps across the globe through the introduction of extreme scalability over the human body, leveraging a revolutionary new technology called Human Scale Body Surface (HSBS). First Customer Ship (FCS) was received by the little-known and mostly unsuccessful superhero Awkwardman. After applying Oracle Exaskeleton with extreme scale, he has since rebranded himself as Aquaman. Said Aquaman, “I used to feel so helpless in my skin…now I feel like…well…a highly scaled Engineered System thanks to Oracle!” Thousands of meek and mild individuals eagerly lined up outside Oracle Corporation’s Redwood Shores office to purchase the new Oracle Exaskeleton, with the hope of finally gaining the spine they never had. Unfortunately for those individuals, a bully was spotted allegedly kicking the sand covering the beaches of Redwood Shores into the still-spineless Exaskeleton hopefuls.

    Supporting Quotes
    “Industry analysts are inquiring if Oracle Exaskeleton is a radical departure from Oracle’s traditional enterprise focus into new markets,” said Oracle representative Sabrina Twich. “Oracle has extensive expertise in unified backbone solutions for application infrastructures…this is simply a new port to the human body, combining our Business Intelligence (BI) and RDBC (Remote Direct Brain Cell) technologies.”
    “With this release of Oracle Exaskeleton, Oracle has redefined scalability. Software and hardware vendors had it all wrong,” said the Director of Oracle Exaskeleton. “Scalability for hardware is like…um…you know…so scale-ful. No, wait…can I say that again? I didn’t get that right…Scalability is hardware-on-demand with public and private…hybrid clouds, no…<long pause>…Scalability for…nevermind, I don’t want to be in this stupid press release anyway.”

    Releases
    An upcoming Oracle Exaskeleton service pack release will include a new datasheet with an extensive library of three-letter acronyms (TLAs) as well as the introduction of more four-letter acronyms (FLAs), since technology vendors have used up almost all of the 17,576 TLA permutations (TLAPs).

    About Oracle
    Oracle engineers hardware and software to work together in the cloud and in your data center. It would be an amazing coincidence if any of this were true in some secret Oracle lab, but I doubt it.

    Trademarks
    Really…you’re still reading this? Cool!

    Aquaman - First Customer Ship (FCS) - Oracle Exaskeleton

    Read the article

  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click help, you don’t want to wait for your computer to bring it to you. Here’s how you can speed up the help dialog in Windows and Office.

    If you have a slow internet connection, chances are you’ve been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open it. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting. Here’s how you can turn off the automatic online help.

    Use Local Help in Windows
    Windows 7 and Vista’s help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you’re seeing the above screen a lot, you may want to switch to offline help. Click the “Online Help” button at the bottom, and select “Get offline Help”. Now your computer will just load the pre-installed help files. And don’t worry; if there’s a major update to your help files, Windows will download and install it through Windows Update.

    Stupid Geek Tip: An easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard.

    Use Local Help in Office
    This same trick works in Office 2007 and 2010. We’ve actually had more problems with Office’s help being tardy. Solve this the same way as with Windows help. Click on the “Connected to Office.com” or “Connected to Office Online” button, depending on your version of Office, and select “Show content only from this computer”. This will automatically change the settings for Help in all of your Office applications.

    While this may not be a major trick, it can be helpful, especially if you have a slow internet connection and want to get things done quickly.

    Read the article

  • Graphics card fan is loud (additional graphics card drivers cause problems)

    - by tk4muffin
    Okay, this explanation is a bit long... but I'll start at the beginning: I had been using Windows 7 for a very long time, and shortly after the release of v12.10 I installed Ubuntu via the Windows installer. Everything worked fine except the fan of the graphics card. After a bit of research I found out that I just had to select a different driver (nvidia-current (proprietary, tested) worked pretty well). This also fixed some graphical bugs that appeared when I logged into my account.

    Through my university I got an MSDNAA account (which allows me to download every Windows OS for free). I downloaded and installed Windows 8. After configuration I installed Ubuntu via the Windows installer once again, and the first couple of launches of Ubuntu went well. Suddenly Ubuntu didn't launch anymore... caused by some hard-disk errors, and I had no clue what to do. So I kept working on Windows 8 - unfortunately. After playing around with the new Windows, I put my PC into sleep mode. I couldn't wake my PC up and it wasn't responding to anything (neither mouse movement, clicks, or keystrokes, nor the power button and the reset button worked), so I pulled the plug. Turns out this was a huge mistake. Somehow the BIOS broke, and after restarting a couple of times the BIOS repaired itself. Neither Windows 8 nor Ubuntu was bootable.

    Now I had to install Ubuntu several times, because after rebooting Unity was hidden and I didn't know what the problem was or how to fix it. I finally realized that this problem was caused by the graphics card driver, which I had changed to nvidia-current (this driver worked fine before my PC "crashed"). So I installed Windows 8 again, and after a bit of usage I installed Ubuntu once again (via DVD). The booting of Ubuntu and Windows works fine - so far. But I'm still not able to change the graphics card driver without Unity hiding away after restarting the OS. The noisy fan is really disturbing my work...

    PC Specs:
    Processor: Intel Core 2 Duo CPU E8400 @ 3GHz x2
    Memory: 7.8 GB
    OS type: 64-bit
    Graphics Card: GeForce 9600 GT
    Motherboard: Asus P5Q

    I hope the information given is enough.

    Read the article

  • Best development architecture for a small team of programmers (WAMP Stack)

    - by Tio
    Hi all. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position - I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things.

    I've used various architectures in my previous work experiences. In one, I was developing on a server on the network (no source control, of course); in another, I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and show clients anything.

    For me, this last solution (the one I have at home) is the best: a network server with a WAMP stack; the server has a public IP so we can access it by domain name, with subdomains for each project; and everybody works directly on the network server. I think the problem arises when two or more people want to work on the same project. In that case the only way to do it is by using source control and local repositories, which is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check the changes in to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done - to me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help.

    Maybe the best way to do this is simply: a network server with a WAMP stack (a test server, so to speak), publicly accessible through the web, plus a LAMP stack on every developer's computer (minus the database). We develop locally, test, then check the changes into the test server, and presto.

    What do you think? Maybe I should start doing this at home. Thanks and best regards...

    Edit: I'm sorry, I made a mistake and switched WAMP with LAMP; sorry about that.

    Read the article

  • How to Disable the New Geolocation Feature in Google Chrome

    - by Asian Angel
    The latest release of Google Chrome has geolocation enabled by default, and if you are worried about privacy or just don’t want websites to prompt you for your location, we’ve got the quick details on how to turn it off.

    Readers should note that the new geolocation feature doesn’t give out your details by default, so don’t panic. It’s also only active, at the time of this writing, in the Dev channel builds of Chrome - so if you are using the regular stable build, this feature won’t arrive for a while anyway. Note: If you’re a Firefox user, be sure to check out our guide to disabling geolocation in Firefox 3.x.

    What’s this Geolocation Feature About?
    Geolocation is a way for your browser to tell a website about your physical location, so you can get results tailored to where you actually are - for example, if you visited Google Maps, it can ask you for your location to give you an accurate picture of where you are. To use this feature in Google Maps, you would click on the small white icon to activate it. As soon as you have clicked on the small white icon, a thin green toolbar will appear at the top of the webpage, asking you to Allow or Deny.

    How to Turn Chrome’s Geolocation Off
    If you want to turn geolocation off, you will need to open the “Chrome Options Window”, navigate to the third tab, and click on the “Content settings…” button. When the “Content Settings Window” opens, go to the “Location Tab” and select “Do not allow any site to track my physical location”. Once that is done, close the “Content Settings” and “Chrome Options” windows. When you go back to Google Maps and try using the small white icon again, this is the message that you will see at the top of the page. Now that is much better!

    If you are unhappy with geolocation being enabled by default in the latest Dev Channel release, then this will help get the problem sorted out nicely.

    Read the article

  • Achieving Zero Downtime Deployment

    - by MattW
    I am trying to achieve zero-downtime deployments so I can deploy less during off hours and more during "slower" hours - or anytime, in theory.

    My current setup, somewhat simplified: Web Server A (.NET App), Web Server B (.NET App), Database Server (SQL Server).

    My current deployment process: "Stop" the sites on both Web Server A and B. Upgrade the database schema for the version of the app being deployed. Update Web Server A. Update Web Server B. Bring everything back online.

    Current Problem This leads to a small amount of downtime each month - about 30 minutes. I do this during off hours, so it isn't a huge problem - but it is something I'd like to get away from. Also, there is no way to really go 'back'. I don't generally make rollback DB scripts - only upgrade scripts.

    Leveraging the Load Balancer I'd love to be able to upgrade one web server at a time: take Web Server A out of the load balancer, upgrade it, put it back online, then repeat for Web Server B. The problem is the database. Each version of my software will need to execute against a different version of the database - so I am sort of "stuck".

    Possible Solution A current solution I am considering is adopting the following rules: Never delete a database table. Never delete a database column. Never rename a database column. Never reorder a column. Every stored procedure must be versioned. Meaning, 'spFindAllThings' will become 'spFindAllThings_2' when it is edited, then 'spFindAllThings_3' when edited again. The same rule applies to views.

    While this seems a bit extreme, I think it solves the problem. Each version of the application will be hitting the DB in a non-breaking way. The code expects certain results from the views/stored procedures, and this keeps that 'contract' valid. The problem is - it just seems sloppy. I know I can clean up old stored procedures after the app has been deployed for a while, but it just feels dirty. Also, it depends on all of the developers following these rules, which will mostly happen, but I imagine someone will make a mistake.

    Finally - My Question Is this sloppy or hacky? Is anybody else doing it this way? How are other people solving this problem?
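    To make the versioned-procedure idea concrete, here is a minimal sketch of that 'contract' from the application side (the class, the procedure name, and the JDBC plumbing are illustrative assumptions - the poster's stack is .NET, and this is written in Java purely for illustration). Each build of the app pins the procedure version it was written against, so an old and a new web server can run side by side against the same upgraded database during a rolling deploy:

        import java.sql.*;

        public class ThingRepository {
            // This build was written and tested against version 2 of the procedure.
            // A later build would pin "_3" while "_2" stays in the database untouched,
            // so both builds can sit behind the load balancer at the same time.
            private static final String FIND_ALL_THINGS = "{call spFindAllThings_2}";

            private final Connection connection;

            public ThingRepository(Connection connection) {
                this.connection = connection;
            }

            public void printAllThings() throws SQLException {
                try (CallableStatement stmt = connection.prepareCall(FIND_ALL_THINGS);
                     ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }

    The additive-only schema rules serve the same purpose: because tables and columns are never dropped, renamed, or reordered, version N's queries keep working while version N+1 is being rolled into the load balancer.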

    Read the article

  • Away3D & Directional Light w/ Rotating Meshes

    - by seethru
    This is likely a stupid error, but I can't seem to find what I've done wrong. I've got a simple scene with 10 cylinders rotating at a default speed. If I grab one of these cylinders, I can rotate it in the opposite direction or at a greater speed. I have a single directional light in the scene. It would appear that the directional light is only calculated at initialization and not on subsequent frames. The shadow created by the light rotates with the cylinder, giving the impression that the light is rotating when it isn't.

    Camera & Light Initialization:

        _view = new View3D();
        addChild(_view);
        _view.antiAlias = 4;
        _view.backgroundColor = 0xFFFFFF;
        _view.camera.z = -850;
        _view.camera.y = 0;
        _view.camera.x = 0;
        _view.camera.lookAt(new Vector3D());
        _view.camera.lens = new PerspectiveLens(15);
        _view.mousePicker = PickingType.RAYCAST_BEST_HIT;

        _light = new DirectionalLight();
        _light.z = -850;
        _light.direction = new Vector3D(1, 1, 1);
        _light.color = 0xFFFFFF;
        _light.ambient = 0.1;
        _light.diffuse = 0.7;
        _view.scene.addChild(_light);

    Mesh and Material Creation:

        var material:TextureMaterial = new TextureMaterial(createPow2Texture(sprite, _colors[i]), true, false, true);
        material.animateUVs = true;
        material.lightPicker = _lightPicker;
        cylinder = new Mesh(new CylinderGeometry(radius, radius, 13, 70, 1, true, true), material);
        cylinder.subMeshes[0].scaleU = spriteWidth / sprite.width;
        cylinder.y = y;
        cylinder.mouseEnabled = true;
        cylinder.pickingCollider = PickingColliderType.AS3_BEST_HIT;
        cylinder.addEventListener(MouseEvent3D.MOUSE_OVER, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_MOVE, onMouseOverMesh);
        cylinder.addEventListener(MouseEvent3D.MOUSE_OUT, onMouseOutMesh);
        _cylinders.push(cylinder);

    Frame:

        private function onEnterFrame(event:Event):void
        {
            for each (var mesh:Mesh in _cylinders)
            {
                if (mesh == _mouseOverMesh)
                    continue;
                mesh.rotationY += 0.25;
            }
            _view.render();
        }

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >