Search Results

Search found 1216 results on 49 pages for 'jon wilson'.


  • Refreshing Your PC Won’t Help: Why Bloatware is Still a Problem on Windows 8

    - by Chris Hoffman
    Bloatware is still a big problem on new Windows 8 and 8.1 PCs. Some websites will tell you that you can easily get rid of manufacturer-installed bloatware with Windows 8′s Reset feature, but they’re generally wrong. This junk software often turns the process of powering on your new PC from what could be a delightful experience into a tedious slog, forcing you to spend hours cleaning up your new PC before you can enjoy it. Why Refreshing Your PC (Probably) Won’t Help Manufacturers install software along with Windows on their new PCs. In addition to hardware drivers that allow the PC’s hardware to work properly, they install more questionable things like trial antivirus software and other nagware. Much of this software runs at boot, cluttering the system tray and slowing down boot times, often dramatically. Software companies pay computer manufacturers to include this stuff. It’s installed to make the PC manufacturer money at the cost of making the Windows computer worse for actual users. Windows 8 includes “Refresh Your PC” and “Reset Your PC” features that allow Windows users to quickly get their computers back to a fresh state. It’s essentially a quick, streamlined way of reinstalling Windows.  If you install Windows 8 or 8.1 yourself, the Refresh operation will give your PC a clean Windows system without any additional third-party software. However, Microsoft allows computer manufacturers to customize their Refresh images. In other words, most computer manufacturers will build their drivers, bloatware, and other system customizations into the Refresh image. When you Refresh your computer, you’ll just get back to the factory-provided system complete with bloatware. It’s possible that some computer manufacturers aren’t building bloatware into their refresh images in this way. It’s also possible that, when Windows 8 came out, some computer manufacturer didn’t realize they could do this and that refreshing a new PC would strip the bloatware. However, on most Windows 8 and 8.1 PCs, you’ll probably see bloatware come back when you refresh your PC. It’s easy to understand how PC manufacturers do this. You can create your own Refresh images on Windows 8 and 8.1 with just a simple command, replacing Microsoft’s image with a customized one. Manufacturers can install their own refresh images in the same way. Microsoft doesn’t lock down the Refresh feature. Desktop Bloatware is Still Around, Even on Tablets! Not only is typical Windows desktop bloatware not gone, it has tagged along with Windows as it moves to new form factors. Every Windows tablet currently on the market — aside from Microsoft’s own Surface and Surface 2 tablets — runs on a standard Intel x86 chip. This means that every Windows 8 and 8.1 tablet you see in stores has a full desktop with the capability to run desktop software. Even if that tablet doesn’t come with a keyboard, it’s likely that the manufacturer has preinstalled bloatware on the tablet’s desktop. Yes, that means that your Windows tablet will be slower to boot and have less memory because junk and nagging software will be on its desktop and in its system tray. Microsoft considers tablets to be PCs, and PC manufacturers love installing their bloatware. If you pick up a Windows tablet, don’t be surprised if you have to deal with desktop bloatware on it. Microsoft Surfaces and Signature PCs Microsoft is now selling their own Surface PCs that they built themselves — they’re now a “devices and services” company after all, not a software company. 
One of the nice things about Microsoft’s Surface PCs is that they’re free of the typical bloatware. Microsoft won’t take money from Norton to include nagging software that worsens the experience. If you pick up a Surface device that provides Windows 8.1 and 8 as Microsoft intended it — or install a fresh Windows 8.1 or 8 system — you won’t see any bloatware. Microsoft is also continuing their Signature program. New PCs purchased from Microsoft’s official stores are considered “Signature PCs” and don’t have the typical bloatware. For example, the same laptop could be full of bloatware in a traditional computer store and clean, without the nasty bloatware when purchased from a Microsoft Store. Microsoft will also continue to charge you $99 if you want them to remove your computer’s bloatware for you — that’s the more questionable part of the Signature program. Windows 8 App Bloatware is an Improvement There’s a new type of bloatware on new Windows 8 systems, which is thankfully less harmful. This is bloatware in the form of included “Windows 8-style”, “Store-style”, or “Modern” apps in the new, tiled interface. For example, Amazon may pay a computer manufacturer to include the Amazon Kindle app from the Windows Store. (The manufacturer may also just receive a cut of book sales for including it. We’re not sure how the revenue sharing works — but it’s clear PC manufacturers are getting money from Amazon.) The manufacturer will then install the Amazon Kindle app from the Windows Store by default. This included software is technically some amount of clutter, but it doesn’t cause the problems older types of bloatware does. It won’t automatically load and delay your computer’s startup process, clutter your system tray, or take up memory while you’re using your computer. For this reason, a shift to including new-style apps as bloatware is a definite improvement over older styles of bloatware. Unfortunately, this type of bloatware has not replaced traditional desktop bloatware, and new Windows PCs will generally have both. Windows RT is Immune to Typical Bloatware, But… Microsoft’s Windows RT can’t run Microsoft desktop software, so it’s immune to traditional bloatware. Just as you can’t install your own desktop programs on it, the Windows RT device’s manufacturer can’t install their own desktop bloatware. While Windows RT could be an antidote to bloatware, this advantage comes at the cost of being able to install any type of desktop software at all. Windows RT has also seemingly failed — while a variety of manufacturers came out with their own Windows RT devices when Windows 8 was first released, they’ve all since been withdrawn from the market. Manufacturers who created Windows RT devices have criticized it in the media and stated they have no plans to produce any future Windows RT devices. The only Windows RT devices still on the market are Microsoft’s Surface (originally named Surface RT) and Surface 2. Nokia is also coming out with their own Windows RT tablet, but they’re in the process of being purchased by Microsoft. In other words, Windows RT just isn’t a factor when it comes to bloatware — you wouldn’t get a Windows RT device unless you purchased a Surface, but those wouldn’t come with bloatware anyway. Removing Bloatware or Reinstalling Windows 8.1 While bloatware is still a problem on new Windows systems and the Refresh option probably won’t help you, you can still eliminate bloatware in the traditional way. 
Bloatware can be uninstalled from the Windows Control Panel or with a dedicated removal tool like PC Decrapifier, which tries to automatically uninstall the junk for you. You can also do what Windows geeks have always tended to do with new computers — reinstall Windows 8 or 8.1 from scratch with installation media from Microsoft. You’ll get a clean Windows system and you can install only the hardware drivers and other software you need. Unfortunately, bloatware is still a big problem for Windows PCs. Windows 8 tries to do some things to address bloatware, but it ultimately comes up short. Most Windows PCs sold in most stores to most people will still have the typical bloatware slowing down the boot process, wasting memory, and adding clutter. Image Credit: LG on Flickr, Intel Free Press on Flickr, Wilson Hui on Flickr, Intel Free Press on Flickr, Vernon Chan on Flickr     

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job. First steps at Red Gate Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have. What I’ve done If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes them easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts. How I did it In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text. 
The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged to the project. Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t that easy to implement and find. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be outputted to the screen. Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things. What’s coming next Sadly, my internship comes to an end this week, so there will be less development on MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with difference indication in the top part of comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I’ll see you next summer!
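    To make the formatting behaviour described above concrete, here is a minimal, hypothetical C# sketch (not the actual SSAS Compare code) of the kind of normalisation the post describes: keywords are upper-cased and meaningless whitespace is collapsed before two scripts are compared, so "select" versus "SELECT" no longer registers as a difference.

        using System;
        using System.Text.RegularExpressions;

        static class MdxComparison
        {
            // Hypothetical keyword list; the real grammar covers far more of MDX.
            static readonly string[] Keywords = { "select", "from", "where", "on", "non", "empty" };

            static string Normalise(string mdx)
            {
                // Collapse runs of whitespace that carry no meaning.
                string result = Regex.Replace(mdx, @"\s+", " ").Trim();
                // Upper-case keywords so case differences don't register as changes.
                foreach (string kw in Keywords)
                    result = Regex.Replace(result, @"\b" + kw + @"\b", kw.ToUpperInvariant(), RegexOptions.IgnoreCase);
                return result;
            }

            static void Main()
            {
                string left  = "select   [Measures].[Sales] on 0\nfrom [Cube]";
                string right = "SELECT [Measures].[Sales] ON 0 FROM [Cube]";
                Console.WriteLine(Normalise(left) == Normalise(right)); // True
            }
        }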

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience. If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing. “Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs,” said Jeremy Ashley, Vice President, Oracle Applications User Experience. You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below). (Photo: Inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011.) Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time. For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page.
    APPS UX OPENWORLD SESSIONS
    Oracle’s Roadmap to a Simple, Modern User Experience. Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRel. Session ID: CON9467. Date: Wednesday, Oct. 3. Time: 3:30 - 4:30 p.m. Location: Moscone West - 3002/3004
    Oracle Fusion Applications: Transforming Insight into Action. Presenters: Killian Evers and Kristin Desmond, Oracle. Session ID: CON8718. Date: Thursday, Oct. 4. Time: 11:15 a.m. - 12:15 p.m. Location: Moscone West - 2008
    “FRIENDS OF UX” OPENWORLD SESSIONS
    Sessions by the Oracle Usability Advisory Board (OUAB) members:
    Advances in Oracle Enterprise Governance, Risk, and Compliance Manager. Presenters: Koen Delaure, KPMG Advisory NV, and Oracle Usability Advisory Board member; Russell Stohr, Oracle. Session ID: CON9389. Date: Tuesday, Oct. 2. Time: 1:15 - 2:15 p.m. Location: Palace Hotel - Concert
    Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps. Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, Oracle. Session ID: CON9401. Date: Monday, Oct. 1. Time: 12:15 - 1:15 p.m. Location: Intercontinental - Sutter
    Showcase of JD Edwards EnterpriseOne Mobility. Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle. Session ID: CON9123. Date: Tuesday, Oct. 2. Time: 1:15 - 2:15 p.m. Location: InterContinental - Grand Ballroom B
    Sessions by the Fusion User Experience Advocates (FXA):
    Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware. Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training Resources. Session ID: UGF10371. Date: Sunday, Sept. 30. Time: 11 a.m. - 11:45 a.m. Location: Moscone West – 2010
    Ten Things to Love About Oracle Fusion Project Portfolio Management. Presenter: Floyd Teter, EiS Technologies. Session ID: CON6021. Date: Tuesday, Oct. 2. Time: 10:15 - 11:15 a.m. Location: Moscone West – 2003

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn’t much information in the world that doesn’t have some concept of “when” so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar etc… where they can be parsed and have the semantic data (when, where and who) extracted from them. Importing of iCalendar format data is really only half the trick though; in my opinion the real value of iCalendar-formatted calendar is the ability to subscribe to them. Subscribing has a simple benefit over importing but that single benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain – a single update to an iCalendar means that all subscribers are automatically made aware of the change and there is zero effort on the part of the subscriber; as my former colleague Howard van Rooijen is fond of saying, “work smarter not harder” – nowhere is this edict more ably demonstrated than subscribing versus importing of calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology or better still go and seek out Jon Udell who speaks very authoritatively on the issue of iCalendar. With this subject of iCalendar on my mind I was interested to discover (via Steve Clayton’s blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess this was a file that was made available so that it could be imported into your calendar of choice. It had one obvious downside though, right now nobody knows who is going to be playing in the knock-out stages so the calendar looks like this: with no teams being named after 25th June. How much more useful would this calendar have been if the BBC had made it possible to subscribe to the calendar instead, thus the calendar could be updated with the teams for the knock out stages when they are known and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well or perhaps even post a match report from the BBC sport pages; when calendars are made subscribable a sea of opportunity opens up for distribution of information. So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs: HTML http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html iCalendar webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics The link you’re really interested in is the second one - click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place. 
    Some people have told me they’re having trouble with the iCalendar link in which case hit the HTML link and then click “View ICS” at the resultant web page: I shall endeavour to keep the calendar updated throughout the World Cup and even if I don’t you’re no worse off than if you had imported the BBC’s .ics file so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you’ll be carrying match reports around with you without you having to do a single thing. Surely that’s worth a quick click isn’t it? If you have any thoughts let me have them in the comments below. Thanks for reading. @Jamiet
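    If you want to see how little is needed to publish something subscribable yourself, here is a rough C# sketch that writes a single fixture as an iCalendar file. The match details are purely illustrative, and in practice the file would be served from a URL you control so that subscribers pick up updates automatically.

        using System.IO;

        class IcsWriter
        {
            static void Main()
            {
                // A minimal VCALENDAR with one VEVENT; the times are UTC (trailing Z).
                string[] lines =
                {
                    "BEGIN:VCALENDAR",
                    "VERSION:2.0",
                    "PRODID:-//Example//World Cup 2010//EN",
                    "BEGIN:VEVENT",
                    "UID:match-001@example.com",
                    "DTSTART:20100611T140000Z",
                    "DTEND:20100611T154500Z",
                    "SUMMARY:South Africa v Mexico",
                    "LOCATION:Johannesburg",
                    "END:VEVENT",
                    "END:VCALENDAR"
                };
                File.WriteAllLines("worldcup2010.ics", lines);
            }
        }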

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011—“Redefining Leadership”—held from Nov 2nd to Nov 4th in San Francisco, with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gathering to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference: 1.      The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The “BSR GlobeScan State of Sustainable Business Poll 2011” - a survey of nearly 500 business leaders from 300 member companies - shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years but consider integrating sustainability into their core business functions the key challenge. It is still difficult for many companies that are committed to the sustainability agenda to find investors that understand the long-term implications and as Al Gore said “Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market.” 2.      Companies are required to address increasing compliance requirements and transparency in their supply chain, especially in relation with conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor upstream the sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue companies need to collaborate and partner with peer companies in their industry as well as in other industries to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs’ (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public has the opportunity to place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance. 3.      A new standard for reporting on supply chain greenhouse gas emissions is available. The New “Scope 3” Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company’s products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn’t just for leading companies but a necessity for any business. 4.      Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. 
    investors are increasingly including ESG data in their analyses. This trend will likely increase as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment. 5.      Software companies are offering an increasing variety of solutions to help drive changes and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle was a panelist in the “Trends in Sustainability Software” session and commented that, “How we think about our business decisions really comes down to how we think about cost. And as long as we don’t assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines ‘Is this product profitable?’ we would then have a much better decision.” For more information on BSR visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference. By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • How to Save Hundreds or Thousands of Dollars on Cell Phone Service

    - by Chris Hoffman
    Cell phone contracts are bad. You get a seemingly cheap phone up front, but you more than pay for the cost of the phone over two years. Prepaid phone plans are surging in North America for a reason. Prepaid phone plans will be cheaper and more flexible than traditional contracts with big carriers for many people. However much you use your phone, there’s a good chance you can save money with a prepaid service. No More Contracts Here’s how cell phone service typically works in North America: You get a subsidized phone for “free”, $99, or $199. You sign up for a two-year contract and more than pay back the cost of that phone over the length of the contract. This is similar to leasing something or purchasing it on a credit card and paying it back over two years — you spend less up front, but you’re paying more in the long run. But this isn’t the only option. You could opt for a cheaper prepaid service that doesn’t lock you into a contract. If you don’t use your phone much, you could just pay for what you use and avoid the hefty cell phone bills. If you use your phone a lot, you could get a cheaper plan, too. Now, this certainly isn’t for everyone. If you want the latest iPhone or Galaxy smartphone every two years and require a 4G data connection, prepaid services may not be for you. On the other hand, if you don’t need the latest phone, you can save money here. You can also save a huge amount of money if you don’t use your phone much. Phone Options When you choose your prepaid or contract-free service, you’ll often be able to purchase a phone from them. You’ll generally be able to find dirt-cheap dumbphones and the cheapest, slowest Android phones for not very much money. If you are able to buy a top-of-the-line smartphone, you’ll have to pay the full, unsubsidized price. That’s $649 for either an iPhone 5S or Samsung Galaxy S4. Whatever phones the service provider offers, you could always buy a phone elsewhere — for example, you could buy an unsubsidized iPhone direct from Apple and then take it to your cell phone service of choice. Most services will allow you to get a SIM card and pop it into your existing phone rather than purchasing a phone. If you can get a hand-me-down smartphone, you can often save quite a bit of money. For example, you may have a family member upgrading from an iPhone 4S to an iPhone 5S. You could take their phone to a prepaid carrier and have a nicer phone on a cheap cell phone plan. If you brought an old smartphone to a big carrier like AT&T or Verizon, they wouldn’t give you a discount on your monthly plan. You’d have to pay the same amount of money every month as if you had gotten a subsidized phone. Google’s Nexus phones are also great options for people looking to buy smartphones and pay up-front. Google’s Nexus 4 offered a modern, almost top-of-the-line Android smartphone experience at $299 or $349 when it came out last year. Google will soon be releasing the Nexus 5 and it’s expected to be priced at $349. That’s certainly a lot more than a cheap phone, but it’s a fairly high-end smartphone at almost half the price of an iPhone 5S or Galaxy S4. Nexus phones can be purchased online from Google’s Play Store. Service Options When choosing a service, you need to consider what you actually use. If you’re someone who only uses your phone rarely, you can get plans that will allow you to pay as little as a few dollars per month. If you’re someone who’s usually in range of Wi-Fi, you may not need much data at all. 
If you want a plan with unlimited talk, texting, and data usage, you can get it for much cheaper than you’d pay on a major carrier like AT&T. The options here range from pay-as-you-go plans, like the ones offered by T-Mobile, which allow you to put a certain amount of money in and only drain that balance when you actually use minutes, texts, or data. If you only make a few calls and send a few texts per month, you’d only pay a few bucks. On the other end, Walmart’s Straight Talk service is a popular option that offers unlimited talk, texting, and data at $45 per month. Which service is right for you depends on a lot of things, including your usage and what each network’s coverage is like in your area. You’ll want to do some research of your own before choosing a service. Prepaid services also offer you even more flexibility after you choose one. If you’re not happy or a better deal comes along, you can switch — you’re not locked into your service for two years and you won’t pay an early termination fee. Image Credit: Intel Free Press on Flickr, Jon Fingas on Flickr, John Karakatsanis on Flickr, kendalkinggroup on Flickr     

    Read the article

  • UppercuT v1.0 and 1.1–Linux (Mono), Multi-targeting, SemVer, Nitriq and Obfuscation, oh my!

    - by Robz / Fervent Coder
    Recently UppercuT (UC) quietly released version 1 (in August). I’m pretty happy with where we are, although I think it’s a few months later than I originally planned. I’m glad I held it back, it gave me some more time to think about some things a little more and also the opportunity to receive a patch for running builds with UC on Linux. We also released v1.1 very recently (December). UppercuT v1 Builds On Linux Perhaps the most significant change to UC going v1 is that it now supports builds on Linux using Mono! This is thanks mostly to Svein Ackenhausen for the patches and working with me on getting it all working while not breaking the windows builds!  This means you can use mono on Windows or Linux. Notice the shell files to execute with Linux that come as part of UC now. Multi-Targeting Perhaps one of the hardest things to do that requires an automated build is multi-targeting. At v1 this is early, and possibly prone to some issues, but available.  We believe in making everything stupid simple, so it’s as simple as adding a comma to the microsoft.framework property. i.e. “net-3.5, net-4.0” to suddenly produce both framework builds. When you build, this is what you get (if you meet each framework’s requirements): At this time you have to let UC override the build location (as it does by default) or this will not work.  Semantic Versioning By now many of you have been using UppercuT for a while and have watched how we have done versioning. Many of you who use git already know we put the revision hash in the informational/product version as the last octet. At v1, UppercuT has adopted the semantic versioning scheme. What does that mean? This is a short read, but a good one: http://SemVer.org SemVer (Semantic Versioning) is really using versioning as it was meant to be used. You have three octets. Major.Minor.Patch as in 1.1.0.  UC will use three different versioning concepts, one for the assembly version, one for the file version, and one for the product version. All versions - The first three octets of the version are owned by SemVer. Major.Minor.Patch i.e.: 1.1.0 Assembly Version - The assembly version would much closer follow SemVer. Last digit is always 0. Major.Minor.Patch.0 i.e: 1.1.0.0 File Version - The file version occupies the build number as the last digit. Major.Minor.Patch.Build i.e.: 1.1.0.2650 Product/Informational Version - The last octet of your product/informational version is the source control revision/hash. Major.Minor.Patch.RevisionOrHash i.e. (TFS/SVN): 1.1.0.235 i.e. (Git/HG): 1.1.0.a45ace4346adef0 SemVer is not on by default, the passive versioning scheme is still in effect. Notice that version.use_semanticversioning has been added to the UppercuT.config file (and version.patch in support of the third octet): Gems Support Gems support was added at v1. This will probably be deprecated at some point once there is an announced sunset for Nu v1. Application gems may keep it around since there is no alternative for that yet though (CoApp would be a possible replacement). Nitriq Support Nitriq is a code analysis tool like NDepend. It’s built by Mr. Jon von Gillern. It uses the LINQ query language, so you can use a familiar idiom when analyzing your code base. It’s a pretty awesome tool that has a free version for those looking to do code analysis! To use Nitriq with UC, you are going to need the console edition.  To take advantage of Nitriq, you just need to update the location of Nitriq in the config: Then add the nitriq project files at the root of your source.
Please refer to the Nitriq documentation on how these are created. UppercuT v1.1 Obfuscation One thing I started looking into was an easy way to obfuscate my code. I came across EazFuscator, which is both free and awesome. Plus the GUI for it is super simple to use. How do you make obfuscation even easier? Make it a convention and a configurable property in the UC config file! And the code gets obfuscated! Closing Definitely get out and look at the new release. It contains lots of chocolaty (sp?) goodness. And remember, the upgrade path is almost as simple as drag and drop!
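    In .NET terms, the three versions described above map onto the standard assembly attributes. Here is a sketch using the example numbers from the post (how UC itself emits these may differ):

        using System.Reflection;

        // Assembly version: the SemVer octets with the last digit fixed at 0.
        [assembly: AssemblyVersion("1.1.0.0")]
        // File version: the build number occupies the last octet.
        [assembly: AssemblyFileVersion("1.1.0.2650")]
        // Product/informational version: the source control revision or hash.
        [assembly: AssemblyInformationalVersion("1.1.0.a45ace4346adef0")]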

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: We are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (new version of ADAM with Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer. Under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). In each user are one or more certificates in the UserCertificate attribute. A combination of in-house written application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code) and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it in LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP compliant directory to host these certificates.) We have fully tested the application using the data in LDS and everything works fine - here is my dilemma and question (finally, phew!) There was no process put in place for removing revoked or expired certificates, consequently the vast majority of the data is completely useless, the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory I would like to start with a clean directory. Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling this may be able to be used, but I am not sure exactly how. I would prefer to write a script where I could specify an expiration date (say, any certs that expired prior to 2010) and then run it against the LDS partition. This way I can reuse the script periodically to clean up the directory (as mentioned above - I have no way to adjust the applications that are writing the certs, this is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice MUCH appreciated. Cheers Jon
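    Since the question is open-ended, here is a hedged starting point in C# rather than CAPICOM: System.DirectoryServices can walk the partition and System.Security.Cryptography.X509Certificates can read each certificate's NotAfter date, so expired values can be stripped from userCertificate. The host, port and cutoff date below are assumptions; test against a copy of the partition first.

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;
        using System.Security.Cryptography.X509Certificates;

        class ExpiredCertCleanup
        {
            static void Main()
            {
                DateTime cutoff = new DateTime(2010, 1, 1);  // certs expiring before this are removed
                // Hypothetical LDS host/port and the partition named in the question.
                using (var root = new DirectoryEntry("LDAP://localhost:389/o=myorg,c=AU"))
                using (var searcher = new DirectorySearcher(root, "(userCertificate=*)", new[] { "distinguishedName" }))
                {
                    searcher.PageSize = 1000;  // page through all 60,000 users
                    foreach (SearchResult result in searcher.FindAll())
                    {
                        using (DirectoryEntry user = result.GetDirectoryEntry())
                        {
                            var values = user.Properties["userCertificate"];
                            var expired = new List<object>();
                            foreach (object raw in values)
                            {
                                var cert = new X509Certificate2((byte[])raw);
                                if (cert.NotAfter < cutoff)
                                    expired.Add(raw);
                            }
                            foreach (object raw in expired)
                                values.Remove(raw);
                            if (expired.Count > 0)
                                user.CommitChanges();
                        }
                    }
                }
            }
        }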

    Read the article

  • Reflection - SetValue of array within class?

    - by Jaymz87
    OK, I've been working on something for a while now, using reflection to accomplish a lot of what I need to do, but I've hit a bit of a stumbling block... I'm trying to use reflection to populate the properties of an array of a child property... not sure that's clear, so it's probably best explained in code: Parent Class: Public Class parent Private _child As childObject() Public Property child As childObject() Get Return _child End Get Set(ByVal value As child()) _child = value End Set End Property End Class Child Class: Public Class childObject Private _name As String Public Property name As String Get Return _name End Get Set(ByVal value As String) _name = value End Set End Property Private _descr As String Public Property descr As String Get Return _descr End Get Set(ByVal value As String) _descr = value End Set End Property End Class So, using reflection, I'm trying to set the values of the array of child objects through the parent object... I've tried several methods... the following is pretty much what I've got at the moment (I've added sample data just to keep things simple): Dim Results(1) As String Results(0) = "1,2" Results(1) = "2,3" Dim parent As New parent Dim child As childObject() = New childObject() {} Dim PropInfo As PropertyInfo() = child.GetType().GetProperties() Dim i As Integer = 0 For Each res As String In Results Dim ResultSet As String() = res.Split(",") ReDim child(i) Dim j As Integer = 0 For Each PropItem As PropertyInfo In PropInfo PropItem.SetValue(child, ResultSet(j), Nothing) j += 1 Next i += 1 Next parent.child = child This fails miserably on PropItem.SetValue with ArgumentException: Property set method not found. Anyone have any ideas? @Jon :- Thanks, I think I've gotten a little further, by creating individual child objects, and then assigning them to an array... The issue is now trying to get that array assigned to the parent object (using reflection). It shouldn't be difficult, but I think the problem comes because I don't necessarily know the parent/child types. I'm using reflection to determine which parent/child is being passed in. The parent always has only one property, which is an array of the child object. When I try assigning the child array to the parent object, I get an invalid cast exception saying it can't convert Object[] to childObject[]. EDIT: Basically, what I have now is: Dim PropChildInfo As PropertyInfo() = ResponseObject.GetType().GetProperties() For Each PropItem As PropertyInfo In PropChildInfo PropItem.SetValue(ResponseObject, ResponseChildren, Nothing) Next ResponseObject is an instance of the parent Class, and ResponseChildren is an array of the childObject Class. This fails with: Object of type 'System.Object[]' cannot be converted to type 'childObject[]'.
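    The last error above is typically solved by creating the array with the property's real element type instead of Object[]. Here is a rough sketch (in C# for brevity, with hypothetical names) of that approach using Array.CreateInstance, which also works when the parent/child types are only known at runtime:

        using System;
        using System.Reflection;

        static class ArrayPropertyFiller
        {
            // Builds a childObject[] (or whatever the property's element type is)
            // and assigns it to the parent's single array property via reflection.
            public static void FillArrayProperty(object parent, object[] children)
            {
                PropertyInfo arrayProp = parent.GetType().GetProperties()[0];   // e.g. parent.child
                Type elementType = arrayProp.PropertyType.GetElementType();     // e.g. childObject

                Array typedArray = Array.CreateInstance(elementType, children.Length);
                for (int i = 0; i < children.Length; i++)
                    typedArray.SetValue(children[i], i);   // each child must already be an instance of elementType

                arrayProp.SetValue(parent, typedArray, null);   // no Object[] -> childObject[] cast failure
            }
        }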

    Read the article

  • How do I prevent the concurrent execution of a javascript function?

    - by RyanV
    I am making a ticker similar to the "From the AP" one at The Huffington Post, using jQuery. The ticker rotates through a ul, either by user command (clicking an arrow) or by an auto-scroll. Each list-item is display:none by default. It is revealed by the addition of a "showHeadline" class which is display:list-item. HTML for the UL Looks like this: <ul class="news" id="news"> <li class="tickerTitle showHeadline">Test Entry</li> <li class="tickerTitle">Test Entry2</li> <li class="tickerTitle">Test Entry3</li> </ul> When the user clicks the right arrow, or the auto-scroll setTimeout goes off, it runs a tickForward() function: function tickForward(){ var $active = $('#news li.showHeadline'); var $next = $active.next(); if($next.length==0) $next = $('#news li:first'); $active.stop(true, true); $active.fadeOut('slow', function() {$active.removeClass('showHeadline');}); setTimeout(function(){$next.fadeIn('slow', function(){$next.addClass('showHeadline');})}, 1000); if(isPaused == true){ } else{ startScroll() } }; This is heavily inspired by Jon Raasch's A Simple jQuery Slideshow. Basically, find what's visible, what should be visible next, make the visible thing fade and remove the class that marks it as visible, then fade in the next thing and add the class that makes it visible. Now, everything is hunky-dory if the auto-scroll is running, kicking off tickForward() once every three seconds. But if the user clicks the arrow button repeatedly, it creates two negative conditions: Rather than advance quickly through the list for just the number of clicks made, it continues scrolling at a faster-than-normal rate indefinitely. It can produce a situation where two (or more) list items are given the .showHeadline class, so there's overlap on the list. I can see these happening (especially #2) because the tickForward() function can run concurrently with itself, producing different sets of $active and $next. So I think my question is: What would be the best way to prevent concurrent execution of the tickForward() method? Some things I have tried or considered: Setting a Flag: When tickForward() runs, it sets an isRunning flag to true, and sets it back to false right before it ends. The logic for the event handler is set to only call tickForward() if isRunning is false. I tried a simple implementation of this, and isRunning never appeared to be changed. The jQuery queue(): I think it would be useful to queue up the tickForward() commands, so if you clicked it five times quickly, it would still run as commanded but wouldn't run concurrently. However, in my cursory reading on the subject, it appears that a queue has to be attached to the object its queue applies to, and since my tickForward() method affects multiple lis, I don't know where I'd attach it.

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:
    Name      | Age | Position | Date Joined
    Jon Smith | 23  | Striker  | 18th Mar 2005
    John Doe  | 38  | Defender | 3rd Jan 1988
    In terms of functionality, primarily I’d like to give the user the ability to edit the data and, after the edit, commit the edit to the database and refresh the view. The reason I want to refresh the view is because the data is date ordered and I will need to re-sort if the user edits a date field. My main question is what architecture / tools would be best suited to fulfil my requirements at a high level? From the research I have done so far my initial conclusions were:
    - ADO.NET for data retrieval. This is something I have used before and feel comfortable with. I like the look of LINQ to SQL but don’t want to make the learning curve any steeper for my first outing into MVC land just yet.
    - Partial Views to create a template and then iterate through a datatable that I have pulled back from my database model.
    - jQuery to allow the user to edit data in the table, error check edited data entries etc.
    Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this? Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thoughts are that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, ability to sort by column using sort glyph in column headers etc.) and I don’t really see any benefit to this over and above the solution I have outlined above. Again, is there reason to reconsider this view? Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction and this seemed like it may be a better option than using Partial Views as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction or have I misunderstood its usage? Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. Partial View vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in context of each other. Thanks
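    For what it's worth, here is a hedged C# sketch of the kind of controller action that fits the approach described: after an edit is posted (for example via jQuery), the action commits the change, re-sorts by the date field and returns a partial view so the table can be refreshed in place. The repository interface, view name and [HttpPost] attribute (ASP.NET MVC 2) are assumptions, not part of the original post.

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public interface IPlayerRepository
        {
            void UpdateDateJoined(int playerId, DateTime dateJoined);
            IQueryable<Player> GetAll();
        }

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime DateJoined { get; set; }
        }

        public class SquadController : Controller
        {
            private readonly IPlayerRepository _repository;   // hypothetical data access

            public SquadController(IPlayerRepository repository)
            {
                _repository = repository;
            }

            // Called when a "Date Joined" cell is edited; commits the change,
            // then returns the re-sorted table markup for the client to swap in.
            [HttpPost]
            public ActionResult UpdateDateJoined(int playerId, DateTime dateJoined)
            {
                _repository.UpdateDateJoined(playerId, dateJoined);

                var players = _repository.GetAll()
                                         .OrderBy(p => p.DateJoined)
                                         .ToList();

                return PartialView("PlayerTable", players);
            }
        }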

    Read the article

  • C# Begin/EndReceive - how do I read large data?

    - by ryeguy
    When reading data in chunks of say, 1024, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way? edit: I thought Jon Skeet's link had the solution, but there is a bit of a speedbump with that code. The code I used is: public class StateObject { public Socket workSocket = null; public const int BUFFER_SIZE = 1024; public byte[] buffer = new byte[BUFFER_SIZE]; public StringBuilder sb = new StringBuilder(); } public static void Read_Callback(IAsyncResult ar) { StateObject so = (StateObject) ar.AsyncState; Socket s = so.workSocket; int read = s.EndReceive(ar); if (read > 0) { so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read)); if (read == StateObject.BUFFER_SIZE) { s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0, new AsyncCallback(Async_Send_Receive.Read_Callback), so); return; } } if (so.sb.Length > 0) { //All of the data has been read, so displays it to the console string strContent; strContent = so.sb.ToString(); Console.WriteLine(String.Format("Read {0} byte from socket" + "data = {1} ", strContent.Length, strContent)); } s.Close(); } Now this corrected code works fine most of the time, but it fails when the packet's size is a multiple of the buffer. The reason for this is if the buffer gets filled on a read it is assumed there is more data; but the same problem happens as before. A 2 byte buffer, for example, gets filled twice on a 4 byte packet, and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know when the end of the packet is. This got me thinking of two possible solutions: I could either have an end-of-packet delimiter or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested). There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work that into a packet in an input string from the app and screw it up. It also just seems kinda sloppy to me. The length header sounds ok, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc.. What should I do?
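    On the length-header idea mentioned above: protocol buffers do not add their own length prefix on a raw socket, so the usual approach is to write one yourself (a fixed 4-byte length followed by the payload) and then read exactly that many bytes on the other side. A minimal synchronous sketch of that framing, with names and byte order chosen for illustration:

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class LengthPrefixedFraming
        {
            // Sends: [4-byte length in BitConverter (native) byte order][payload]
            public static void SendMessage(Socket socket, byte[] payload)
            {
                byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
                socket.Send(lengthPrefix);
                socket.Send(payload);
            }

            // Receives one framed message, however the bytes happen to be chunked.
            public static byte[] ReceiveMessage(Socket socket)
            {
                byte[] lengthPrefix = ReceiveExactly(socket, 4);
                int length = BitConverter.ToInt32(lengthPrefix, 0);
                return ReceiveExactly(socket, length);
            }

            private static byte[] ReceiveExactly(Socket socket, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
                    if (read == 0)
                        throw new EndOfStreamException("Socket closed before the full message arrived.");
                    offset += read;
                }
                return buffer;
            }
        }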

    Read the article

  • Convert a PDF to a Transparent PNG with GhostScript

    - by Jonathon Wolfe
    Hi all. I am attempting, unsuccessfully, to use Ghostscript to rasterize PDF files with a transparent background to PNG files with a transparent background. I've searched high and low for questions from others attempting the same thing and none of the posted solutions, which as far as I can tell come down to specifying -sDEVICE=pngalpha, have worked with my test files. At this point I would really appreciate any advice or tips a more experienced hand could provide. My test PDF is located here: http://www.kolossus.com/files/test.pdf It could be that the issue is with this file, but I doubt it. As far as I can tell, it has no specified background, and when I open the file with a transparency-aware app like Photoshop or Illustrator, sure enough it displays with a transparent background. However, when opened with an application like Adobe Reader the file is rendered with a white background. I believe that this has more to do with the application rendering the PDF than with the PDF itself -- apps like Adobe Reader assume you want to see what a printed document will look like and therefore always show a white canvas behind the artwork -- but I can't be sure. The gs command I'm using is: gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 -sOutputFile=test.png test.pdf This produces a PNG that has transparent pixels outside of the bounding box of the artwork in the file, but all pixels that are inside the artwork's bounding box are rasterized against a white background. This is a problem for me, as my artwork has drop shadows and antialiased edges that need to be preserved in the final output, and can't just be postprocessed out with ImageMagick. A sample of my PNG output is at the same location as the pdf above, with .png at the end (stackoverflow won't let me include more than one url in my post). Interestingly, I see no effects from using the -dBackgroundColor flag, even if I set it to something non-white like -dBackgroundColor=16#ff0000. Perhaps my understanding of the syntax of this flag is wrong. Also interestingly, I see no effects from using the -dTextAlphaBits=4 -dGraphicsAlphaBits=4 flags to try to enable subpixel antialiasing. I would also appreciate any advice on how to enable subpixel antialiasing, especially on text. Finally, I'm using GPL Ghostscript 8.64 on Mac OS 10.5.7, and the rendering workflow I'm trying to get set up is to generate transparent PNG images from PDFs output by PrinceXML. I'm calling Ghostscript directly for the rasterization instead of using ImageMagick because ImageMagick delegates to Ghostscript for PDF rasterization and I should be able to control the rasterization better by calling GS directly. Thanks for your help. -Jon Wolfe
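    Not an answer to the transparency question itself, but since the workflow described is .NET-adjacent (PrinceXML output, bypassing ImageMagick), here is a hedged sketch of driving the same gs command line from C# with System.Diagnostics.Process, including the anti-aliasing flags mentioned above. The executable name/path is an assumption for your system.

        using System;
        using System.Diagnostics;

        class RasterizePdf
        {
            static void Main()
            {
                var startInfo = new ProcessStartInfo
                {
                    FileName = "gs",   // assumed to be on PATH; use the full path otherwise
                    Arguments = "-dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 " +
                                "-dTextAlphaBits=4 -dGraphicsAlphaBits=4 " +
                                "-sOutputFile=test.png test.pdf",
                    UseShellExecute = false,
                    RedirectStandardOutput = true
                };

                using (Process gs = Process.Start(startInfo))
                {
                    string log = gs.StandardOutput.ReadToEnd();  // Ghostscript chatter, useful for debugging
                    gs.WaitForExit();
                    Console.WriteLine("Exit code: {0}", gs.ExitCode);
                }
            }
        }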

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar: public static class ObservableExtensions { private class ContinuousAverager { private double _mean; private long _count; public ContinuousAverager() { _mean = 0.0; _count = 0L; } // undecided whether this method needs to be made thread-safe or not // seems that ought to be the responsibility of the IObservable (?) public double Add(double value) { double delta = value - _mean; _mean += (delta / (double)(++_count)); return _mean; } } public static IObservable<double> ContinuousAverage(this IObservable<double> source) { var averager = new ContinuousAverager(); return source.Select(x => averager.Add(x)); } } I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that? Original Question I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this: var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick"); var bids = ticks .Where(e => e.EventArgs.Quote.HasBid) .Select(e => e.EventArgs.Quote.Bid); var bidsSubscription = bids.Subscribe( b => Console.WriteLine("Bid: {0}", b) ); var avgOfBids = bids.Average(); var avgOfBidsSubscription = avgOfBids.Subscribe( b => Console.WriteLine("Avg Bid: {0}", b) ); I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this:
    Bid | Avg Bid
    1   | 1
    2   | 1.5
    1   | 1.33
    2   | 1.5
    It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
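    A quick sanity check of the ContinuousAverage extension above. The exact Rx namespace varies by release (later versions use System.Reactive.Linq), so adjust the using directive for your version:

        using System;
        using System.Reactive.Linq;   // adjust for your Rx version

        class Program
        {
            static void Main()
            {
                Observable.Range(1, 4)
                          .Select(x => (double)x)
                          .ContinuousAverage()
                          .Subscribe(avg => Console.WriteLine("Avg: {0}", avg));
                // Prints the running mean: 1, 1.5, 2, 2.5
            }
        }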

    Read the article

  • Call HttpWebRequest in another thread as UI with Task class - avoid to dispose object created in Task scope

    - by John
    I would like to call HttpWebRequest on another thread than the UI, because I must make 200 requests to the server and download images. My scenario is that I make a request to the server, create an image and return the image. This I do in another thread. I use the Task class, but it automatically calls the Dispose method on all objects created in the task scope. So I return a null object from this method. public BitmapImage CreateAvatar(Uri imageUri, int sex) { if (imageUri == null) return CreateDefaultAvatar(sex); BitmapImage image = null; new Task(() => { var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { Byte[] buffer = new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); image = new BitmapImage { CreateOptions = BitmapCreateOptions.None, CacheOption = BitmapCacheOption.OnLoad }; image.BeginInit(); image.StreamSource = new MemoryStream(buffer); image.EndInit(); image.Freeze(); } }).Start(); return image; } How do I avoid it? Thanks. Mr. Jon Skeet, I tried this: private Stream GetImageStream(Uri imageUri) { Byte[] buffer = null; //new Task(() => //{ var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { buffer= new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); } //}).Start(); return new MemoryStream(buffer); } It returns an object which is null, and then I tried this: private Stream GetImageStream(Uri imageUri) { Byte[] buffer = null; new Task(() => { var request = WebRequest.Create(imageUri); var response = request.GetResponse(); using (var stream = response.GetResponseStream()) { buffer= new Byte[response.ContentLength]; int offset = 0, actuallyRead = 0; do { actuallyRead = stream.Read(buffer, offset, buffer.Length - offset); offset += actuallyRead; } while (actuallyRead > 0); } }).Start(); return new MemoryStream(buffer); } The method above also returns null
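    For comparison, one hedged way to keep the download off the UI thread without returning before the work finishes is to return the bytes from a Task<byte[]> and build the BitmapImage in a continuation scheduled back on the UI thread. The sketch below assumes .NET 4 and WPF; names are placeholders and error handling is omitted:

        using System;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;
        using System.Windows.Media.Imaging;

        public static class AvatarLoader
        {
            // Downloads the bytes on a worker thread, then builds the BitmapImage on the
            // calling (UI) thread via the continuation, and hands it to the callback.
            public static void LoadAvatar(Uri imageUri, Action<BitmapImage> onLoaded)
            {
                Task<byte[]> download = Task.Factory.StartNew(() =>
                {
                    var request = WebRequest.Create(imageUri);
                    using (var response = request.GetResponse())
                    using (var stream = response.GetResponseStream())
                    using (var memory = new MemoryStream())
                    {
                        stream.CopyTo(memory);
                        return memory.ToArray();
                    }
                });

                download.ContinueWith(t =>
                {
                    var image = new BitmapImage();
                    image.BeginInit();
                    image.CacheOption = BitmapCacheOption.OnLoad;
                    image.StreamSource = new MemoryStream(t.Result);
                    image.EndInit();
                    image.Freeze();
                    onLoaded(image);
                }, TaskScheduler.FromCurrentSynchronizationContext());
            }
        }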

    Read the article

  • Delegates in Action - Help

    - by Amutha
    I am learning delegates. I am very curious to apply delegates to the following chain-of-responsibility pattern. Kindly show me how to apply delegates to the following piece. Thanks in advance, and thanks for your effort. #region Chain of Responsibility Pattern namespace Chain { public class Player { public string Name { get; set; } public int Score { get; set; } } public abstract class PlayerHandler { protected PlayerHandler _Successor = null; public abstract void HandlePlayer(Player _player); public void SetupHandler(PlayerHandler _handler) { _Successor = _handler; } } public class Employee : PlayerHandler { public override void HandlePlayer(Player _player) { if (_player.Score <= 100) { MessageBox.Show(string.Format("{0} is greeted by Employee", _player.Name)); } else { _Successor.HandlePlayer(_player); } } } public class Supervisor : PlayerHandler { public override void HandlePlayer(Player _player) { if (_player.Score >100 && _player.Score<=200) { MessageBox.Show(string.Format("{0} is greeted by Supervisor", _player.Name)); } else { _Successor.HandlePlayer(_player); } } } public class Manager : PlayerHandler { public override void HandlePlayer(Player _player) { if (_player.Score > 200) { MessageBox.Show(string.Format("{0} is greeted by Manager", _player.Name)); } else { MessageBox.Show(string.Format("{0} got low score", _player.Name)); } } } } #endregion #region Main() void Main() { Chain.Player p1 = new Chain.Player(); p1.Name = "Jon"; p1.Score = 100; Chain.Player p2 = new Chain.Player(); p2.Name = "William"; p2.Score = 170; Chain.Player p3 = new Chain.Player(); p3.Name = "Robert"; p3.Score = 300; Chain.Employee emp = new Chain.Employee(); Chain.Manager mgr = new Chain.Manager(); Chain.Supervisor sup = new Chain.Supervisor(); emp.SetupHandler(sup); sup.SetupHandler(mgr); emp.HandlePlayer(p1); emp.HandlePlayer(p2); emp.HandlePlayer(p3); } #endregion
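
    For what it's worth, here is a minimal sketch of one possible delegate-based take on the same chain (an illustration, not the canonical answer): each handler is a Func<Player, bool> that returns true when it has claimed the player, and the chain is just an ordered composition of those functions. Console.WriteLine stands in for MessageBox.Show so the sketch runs anywhere.

        using System;

        namespace Chain
        {
            public class Player
            {
                public string Name { get; set; }
                public int Score { get; set; }
            }

            public static class DelegateChain
            {
                // Builds a single Action<Player> out of an ordered list of handlers.
                public static Action<Player> Build(params Func<Player, bool>[] handlers)
                {
                    return player =>
                    {
                        foreach (var handler in handlers)
                            if (handler(player)) return; // stop at the first handler that claims the player
                    };
                }
            }

            class Program
            {
                static void Main()
                {
                    var chain = DelegateChain.Build(
                        p => p.Score <= 100 && Greet(p, "Employee"),
                        p => p.Score > 100 && p.Score <= 200 && Greet(p, "Supervisor"),
                        p => p.Score > 200 && Greet(p, "Manager"));

                    chain(new Player { Name = "Jon", Score = 100 });
                    chain(new Player { Name = "William", Score = 170 });
                    chain(new Player { Name = "Robert", Score = 300 });
                }

                static bool Greet(Player p, string greeter)
                {
                    Console.WriteLine("{0} is greeted by {1}", p.Name, greeter);
                    return true;
                }
            }
        }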

    Read the article

  • Is this a valid pattern for raising events in C#?

    - by Will Vousden
    Update: For the benefit of anyone reading this, since .NET 4, the lock is unnecessary due to changes in synchronization of auto-generated events, so I just use this now: public static void Raise<T>(this EventHandler<T> handler, object sender, T e) where T : EventArgs { if (handler != null) { handler(sender, e); } } And to raise it: SomeEvent.Raise(this, new FooEventArgs()); Having been reading one of Jon Skeet's articles on multithreading, I've tried to encapsulate the approach he advocates to raising an event in an extension method like so (with a similar generic version): public static void Raise(this EventHandler handler, object @lock, object sender, EventArgs e) { EventHandler handlerCopy; lock (@lock) { handlerCopy = handler; } if (handlerCopy != null) { handlerCopy(sender, e); } } This can then be called like so: protected virtual void OnSomeEvent(EventArgs e) { this.someEvent.Raise(this.eventLock, this, e); } Are there any problems with doing this? Also, I'm a little confused about the necessity of the lock in the first place. As I understand it, the delegate is copied in the example in the article to avoid the possibility of it changing (and becoming null) between the null check and the delegate call. However, I was under the impression that access/assignment of this kind is atomic, so why is the lock necessary? Update: With regards to Mark Simpson's comment below, I threw together a test: static class Program { private static Action foo; private static Action bar; private static Action test; static void Main(string[] args) { foo = () => Console.WriteLine("Foo"); bar = () => Console.WriteLine("Bar"); test += foo; test += bar; test.Test(); Console.ReadKey(true); } public static void Test(this Action action) { action(); test -= foo; Console.WriteLine(); action(); } } This outputs: Foo Bar Foo Bar This illustrates that the delegate parameter to the method (action) does not mirror the argument that was passed into it (test), which is kind of expected, I guess. My question is will this affect the validity of the lock in the context of my Raise extension method? Update: Here is the code I'm now using. It's not quite as elegant as I'd have liked, but it seems to work: public static void Raise<T>(this object sender, ref EventHandler<T> handler, object eventLock, T e) where T : EventArgs { EventHandler<T> copy; lock (eventLock) { copy = handler; } if (copy != null) { copy(sender, e); } }
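
    As a side note, from C# 6 onwards the copy-then-null-check dance can be written with the null-conditional operator, which reads the event field once before checking and invoking it, so it is safe against the race the lock was guarding. A minimal sketch, assuming FooEventArgs derives from EventArgs as in the question:

        using System;

        public class FooEventArgs : EventArgs { }

        public class Publisher
        {
            public event EventHandler<FooEventArgs> SomeEvent;

            protected virtual void OnSomeEvent(FooEventArgs e)
            {
                // ?. copies the delegate into a temporary, null-checks it, then invokes it.
                SomeEvent?.Invoke(this, e);
            }
        }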

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelpers controls (like .TextBoxFor() etc.) don't bind to model values on Postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a Postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address: This is what the form displays on a GET operation and as expected I get the email value displayed in both the textbox and plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address which needs to be manipulated via the model in the Controller code. Here's the Razor markup: <div class="fieldcontainer"> <label> Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small> </label> <div> @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"}) @Model.User.Email </div> </div>   So, I have this form and the user can change their email address. On postback the Post controller code then asks the business layer whether the change is allowed. If it's not I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that: // did user change email? if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail) { if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; } else // allow email change but require verification by forcing a login user.IsVerified = false; }… model.user = user; return View(model); The logic is straightforward - if the new email address is not valid because it already exists I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model which effectively does this: model.user.Email = oldEmail; return View(model); So when I press the Save button after entering my new email address ([email protected]) here's what comes back in the rendered view: Notice that the textbox value and the raw displayed model value are different. The TextBox displays the POST value, the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an Http POST is active. Now I don't know about you but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has in effect - no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation) What should the behavior be? After getting quite a few comments on this post I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed. 
So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still it is a little non-obvious because of the way you create the UI elements with MVC; it certainly looks like you are binding to the model value: @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" }) and so unless one understands a little bit about how the model binder works this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value. Workarounds The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code and so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression: <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" /> And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for ModelBinding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores). Use the ModelState Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the postback values submitted on the page that can be mapped to the model. In other words there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally you can actually get it by clearing the ModelState in the controller code: ModelState.Clear(); This clears out all the captured ModelState values, and effectively binds to the model. Note this will produce very similar results - in fact if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already and binding to the updated values most likely produces the same values you would get with POST back values. The big difference though is that any values that couldn't bind - like say putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed. 
You might want to fix up one or two values in a model but rarely would you want the entire model to update from the model. So, you can also clear out individual values on an as-needed basis: if (userBus.DoesEmailExist(user.Email)) { userBus.ValidationErrors.Add("New email address exists already. Please…"); user.Email = oldEmail; ModelState.Remove("User.Email"); } This allows you to remove a single value from the ModelState and effectively allows you to replace that value for display from the model. Why? While researching this I came across a post from Microsoft's Brad Wilson who describes the default binding behavior best in a forum post: The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers. There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one but you have to know about some of the innards of ModelState and how it actually works to figure that out. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET MVC
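
    For readers who want the single-field pattern in one self-contained piece, here is a minimal sketch. The controller, model, and helper names are hypothetical (not from the post above), and a System.Web.Mvc reference is assumed:

        using System.Web.Mvc;

        public class ProfileModel
        {
            public string Email { get; set; }
        }

        public class ProfileController : Controller
        {
            [HttpPost]
            public ActionResult Edit(ProfileModel model)
            {
                if (EmailAlreadyTaken(model.Email))
                {
                    // Remove first: Remove() clears the whole ModelState entry, errors included,
                    // so the error message has to be added back afterwards.
                    ModelState.Remove("Email");
                    ModelState.AddModelError("Email", "New email address exists already.");
                    model.Email = LoadCurrentEmail(); // the textbox now renders this model value
                }
                return View(model);
            }

            // Stubs standing in for the business layer, just to keep the sketch compilable.
            private bool EmailAlreadyTaken(string email) { return true; }
            private string LoadCurrentEmail() { return "old@example.com"; }
        }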

    Read the article

  • Top tweets SOA Partner Community – October 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity SOA Community Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds http://wp.me/p10C8u-vA leonsmiers ?Cant wait to test it >> 't waiRT @OracleSOA: Case Management patterns, session coverage from #OOW #OracleBPM #ACM #BPM http://bit.ly/OdcZL6 Danilo Schmiedel Bye bye San Francisco. #oow was a great conference in a wonderful city! Thanks! @soacommunity pic.twitter.com/lcYSe9xC OPITZ CONSULTING ?The Journey towards #Oracle #BPM @OpenWorld 2012 - Slides by @t_winterberg & H. Normann: http://ow.ly/edkWE #oow demed Full house at the SOA Customer Advisory Board! #oow12 http://instagr.am/p/QX9B8eLMLS/ Danilo Schmiedel "@whitehorsesnl: Had some great talks with the BPM guys at the DEMOgrounds. It is one of the best things at #oow" -> I agree!! @soacommunity Mark Simpson ?Fusion Middleware Global Innovation Awards: nice to pick up a soa and bpm with our customer. #oow Mark Simpson ?RT @SOASimone: #oraclesoa #oow hands on lab fully booked pic.twitter.com/pwI94Ew7 <--quick, provision some more compute power on the cloud! Oracle SOA ?Join us for BPM and Analytics: Process Dashboards. BAM, and Intelligent OptimizationMoscone South - 308#OracleBPM #OOW Oracle SOA ?Real-time public safety demo! License plate recognition and processing in London via Oracle Event Processing. #oow pic.twitter.com/WufesDBq Marc ?Nice session on customer success stories on #SOA11g on with @SOASimone Pro and cons and architectural overview. #oow pic.twitter.com/bzuhsujm Lucas Jellema Full length Keynote on Middleware #oow : http://medianetwork.oracle.com/video/player/1873556035001 … #oow_amis OracleBlogs ?Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers? http://ow.ly/2stVQ0 OracleBlogs ?Open World Session - BPM, SOA and ADF Combined:Patterns learned from Fusion Applications http://ow.ly/2suhzf Ronald Luttikhuizen ?VENNSTER BLOG | Presentations at OpenWorld 2012 | http://blog.vennster.nl/2012/10/presentations-at-openworld-2012.html … Andrejus Baranovskis @dschmied @soacommunity next OOW for sure, and may be SOA community event ! @soacommunity Danilo Schmiedel ?@andrejusb Thanks Andrejus - I really enjoyed having a session with you at #oow. When is next time :-) ? @soacommunity Lionel Dubreuil ?@soacommunity #oow12 Today-1:15pm-Marriott Marquis Salon 7 Jump-starting Integration with Oracle Foundation Pack http://bit.ly/QKKJzF Ronald Luttikhuizen ?Impression from our fault handling session in OSB and SOA Suite from the audience @soacommunity @gschmutz #oow pic.twitter.com/WSg1Z89E Marc Nice session on Oracle Virtual Assembly for #SOA11g, @soacommunity Works with #exalogic but not required SOA Community ?Send your #soacommunity #oow pictures and blog posts @soacommunity or http://www.facebook.com/soacommunity Enjoy OOW ;-) Jon petter hjulstad Oracle BPM- Big leap forward in 11.1.1.7 ! Whitehorses ?Common BPM Use Cases from Oracle #bpm #oow pic.twitter.com/ofOv04EF Whitehorses ?Oracle BPM 11.1.1.7 top new features. 
Interesting #oow #oowbenelux pic.twitter.com/HY9QN5un SOA Community Industrialized SOA - topic of Business Technology Magazine http://wp.me/p10C8u-vi orclateamsoa ?A-Team Blog #ateam: The curious case of SOA Human tasks' automatic completion http://ow.ly/1mq6YU Simone Geib Look for this sign #oow #oraclesoa pic.twitter.com/MJsPV4PO Lucas Jellema My summary of Larry Ellison's keynote at #oow on the AMIS Blog: http://technology.amis.nl/2012/10/01/oow-2012-larry-ellisons-keynote-announcements-exa-cloud-database/ … #oow_amis gschmutz ?Join my #oow session "Five Cool Use Cases for the Spring Component" to see the power of Spring and SOA Suite combined! Moscone 310 - 3:15 PM Ronald Luttikhuizen Thanks to @soacommunity for great SOA/BPM dinner event yesterday night! #oow pic.twitter.com/v7x3i0DC OracleBlogs ?OSB, Service Callouts and OQL http://ow.ly/2sq6B2 OracleBlogs ?Cloud and On-Premises Applications Integration using Oracle Integration Adapters http://ow.ly/2sqiDy OracleBlogs ?Adapters, SOA Suite and More @Openworld 2012 http://ow.ly/2srdTg Eric Elzinga ?OSB, Service Callouts and OQL - Part 3, http://see.sc/JodzEx #oracleservicebus Donatas Valys interesting articles about soa industrialization to read #soa #industrialization http://it-republik.de/business-technology/bt-magazin-ausgaben/Industrialized-SOA-000516.html … gschmutz ?“@techsymp: 2012 Symposium Presentation Download Page Now Available! 75% of presentations published. http://www.servicetechsymposium.com ” find mine there.. Oracle BPM Customer Experience and BPM – From Efficiency to Engagement #bpm #oraclebpm #processmanagement #socialbpm http://pub.vitrue.com/Tahi SOA Community ?@soacommunity SOA Community Newsletter September 2012 http://wp.me/p10C8u-wa SOA Community again again again.... it is Oracle Open World 2012 http://wp.me/p10C8u-wk OracleBlogs ?SOA Proactive support http://ow.ly/2smrSJ demed ?@gschmutz on NoSQL at @techsymp http://lockerz.com/s/247601661 demed ?Just finished "#BigData and its impact on #SOA" talk @techsymp. Really enjoyed getting out of beaten path. #london #oep http://lockerz.com/s/247636974 OTNArchBeat ?Need help selling SOA to business stakeholders? Give them this free eBook. #soasuite http://pub.vitrue.com/hsQY SOA Community top Tweets SOA Partner Community &ndash; September 2012 http://wp.me/p10C8u-vc SOA Community Move Data into the grid for scalable, predictable response times http://wp.me/p10C8u-vv ServiceTechSymposium ?The September issue of the Service Technology Magazine is now published with six new items! Read them at http://www.servicetechmag.com Marc ?Reviewed @Packt_OracleFMW new book on SOA11g administration! Very good ! http://tinyurl.com/8pzd5ww SOA Community ?BPM Solution Catalogue&ndash;promote your process templates http://wp.me/p10C8u-vt OTNArchBeat ?BPM ADF Task forms: Checking whether the current user is in a BPM Swimlane | @ChrisKarlChan http://pub.vitrue.com/aPMG OTNArchBeat ?Cloud, automation drive new growth in SOA governance market | @JoeMcKendrick http://pub.vitrue.com/hNPv Simon Haslam ?Looking for "oak style"(!) advanced content but you're a middleware specialist? See #ukoug2012 #middlewaresunday http://2012.ukoug.org/default.asp?p=9355 … Simon Haslam ?The #ukoug2012 agenda is "go, go, go!" (as Murray would say!) http://2012.ukoug.org/agendagrid Germán Gazzoni SOA Spezial II verfügbar – Industralized SOA: Die überarbeitete und ergänzte Neuauflage des SOA Spezial Sonderhe... 
http://bit.ly/PAWwN9 Oracle SOA ?Flip thru new interactive "Oracle SOA Suite eBook-In the Customers Words" #middleware #soa #oraclesoa http://pub.vitrue.com/NzFZ SOA Community Follow SOA Community on Facebook http://www.facebook.com/soacommunity #soacommunity #opn SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved finding a person in the forest or Limiting the AD result in SharePoint People Picker There are times when we need to limit the SharePoint audience of certain farms or servers or site collections to a particular audience. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts – but we must be able to handle the “audience restrictions” as required. So here is how it’s done in a nutshell. Important note. Not all could be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is: stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther Note the long-hyphenated property name. Now to filling the ADQuery.   LDAP Query in a nutshell Syntax LDAP is no older than SQL and an LDAP query is actually a query against the LDAP Database. LDAP attributes are the equivalent of Database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. The syntax of an LDAP query string is made of individual statements with relational operators including: = Equal <= Lower than or equal >= Greater than or equal… and memberOf – a group membership. ! Not * Wildcard Equal and memberOf are the most commonly used. Checking for absence uses the ! – not and the * - wildcard Example: (SN=Grant) All whose last name – SurName – is Grant Example: (!(SN=Grant)) All except Grant Example: (!(SN=*)) all where there is no SurName i.e. SurName is absent (probably Rappers). Example: (CN=MyGroup) Common Name is MyGroup.  Example: (GN=J*) all the Given Names that start with J (JJ, Jane, Jon, John, etc.) The cryptic SN, CN, GN, etc. are attributes and more about them later All the queries are enclosed in parentheses (Query). Complex queries are comprised of sets that are in AND or OR conditions. AND is denoted by the ampersand (&) and the OR is denoted by the vertical pipe (|). The general syntax is that of the Prefix polish notation where the operator precedes the operands. E.g +ab is the sum of a and b. In an LDAP query (&(A)(B)) will garner the objects for which both A and B are true. In an LDAP query (&(A)(B)(C)) will garner the objects for which A, B and C are true. There’s no limit to the number of conditions. In an LDAP query (|(A)(B)) will garner the objects for which either A or B are true. In an LDAP query (|(A)(B)(C)) will garner the objects for which at least one of A, B and C is true. There’s no limit to the number of conditions. More complex queries have both types of conditions and the parentheses determine the order of operations. Attributes Now let’s get into the SN, CN, GN, and other attributes of the query SN – is the SurName (last name) GN – is the Given Name (first name) CN – is the Common Name, usually GN followed by SN OU – is an Organizational Unit such as division, department etc. DC – is a Domain Component in the AD forest l – lower case ‘L’ stands for location. Jerusalem anybody? Or Katmandu. UPN – User Principal Name, is usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved. Some limit the total to 8 characters. If we have many ‘jsmith’ we have to somehow distinguish them from each other. DN – is the distinguished name – a name unique to the AD forest in which it lives. 
Usually it’s a CN with some domain or group distinguishers. DN is important in conjunction with the memberOf relation. Groups have stricter requirement. Each group has to have a unique name - its CN and it has to be unique regardless of its place. See more below. All of the attributes are case insensitive. CN, cn, Cn, and cN are identical. objectCategory is an element that requires special consideration. AD contains many different object like computers, printers, and of course people and groups. In the queries below, we’re limiting our search to people (person). Putting it altogether Let’s get a list of all the Johns in the SPAdmin group of the Jerusalem that local domain. (&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local)) The memberOf=cn=SPAdmin uses the cn (Common Name) of the SPAdmin group. This is how the memberOf relation is used. ‘SPAdmin’ is actually the DN of the group. Also the memberOf relation does not allow wild cards (*) in the group name. Also, you are limited to at most one ‘OU’ entry. Let’s add Marvin Minsky to the search above. |(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky) Here I added the or pipeline at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he’s not going to be listed twice. One last note: You may see a dryer but more complete list of attributes rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667 And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property using the same ADQuery. This works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes) And the third is –pn peoplepicker-serviceaccountdirectorypaths –pv “OU=ou1,DC=dc1…..” Again you are limited to at most one ‘OU’ phrase – no OU=ou1,OU=ou2… And now the real end. The main property discussed in this sprawling and seemingly endless monogram – peoplepicker-searchadcustomquery - is the most general way of getting the job done. Here are a few examples of command lines that worked and some that didn’t. Can you see why? C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (!Title=David) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC) Command line error. Too many OUs C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName) Operation completed successfully. C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully. 
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsa dm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfi lter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC) Operation completed successfully.   That’s all folks!
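
    As a small aside, an LDAP filter can be smoke-tested from C# before it is handed to stsadm; a minimal sketch using System.DirectoryServices (the LDAP path below mirrors the example group and OU above, but the exact values are assumptions to be replaced with your own forest):

        using System;
        using System.DirectoryServices;

        class LdapFilterSmokeTest
        {
            static void Main()
            {
                // Point the path at your own domain; the filter matches the example earlier in the article.
                using (var root = new DirectoryEntry("LDAP://DC=local"))
                using (var searcher = new DirectorySearcher(
                    root, "(&(objectCategory=person)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))"))
                {
                    searcher.PropertiesToLoad.Add("cn");
                    foreach (SearchResult result in searcher.FindAll())
                    {
                        Console.WriteLine(result.Properties["cn"][0]);
                    }
                }
            }
        }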

    Read the article

  • Stuck removing ffmpeg from system due to package problems

    - by Michiel
    I'd be very grateful to get a bit of support on the following situation: Using Ubuntu 12.04, and I used the PPA of Jon Severinson for ffmpeg. I had a problem playing movies with mplayer, so I decided to remove the PPA and try to use libav only after I read this article about libav vs ffmpeg. I first removed the PPA from the list, then apt-get update, etc. But I couldn't remove ffmpeg due to dependency-errors and got stuck in what seemed some kind of loop. Then I found a suggestion here. I followed the steps, and ended up force-removing libavcodec-extra-53. Because, apparently that was "what got stuff moving again". At the moment I can't. Now Ubuntu is reporting a broken package (BrokenCount 0). Errors: De afhankelijkheden van de volgende pakketten konden niet geïnstalleerd worden: audacious-plugins: Depends: audacious-plugins-data (= 3.2.1-4ubuntu1) maar 3.2.1-4ubuntu1 is geïnstalleerd Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libcurl3-gnutls (= 7.16.2-1) maar 7.22.0-3ubuntu4 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libpulse0 (= 1:0.99.1) maar 1:1.1-0ubuntu15.1 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: libwavpack1 (= 4.40.0) maar 4.60.1-2 is geïnstalleerd Depends: libxcomposite1 (= 1:0.3-1) maar 1:0.4.3-2build1 is geïnstalleerd Depends: zlib1g (= 1:1.1.4) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd libavfilter-extra-2: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libswresample-extra-0 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libswscale-extra-2 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libavformat-extra-53: Depends: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Depends: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 6:0.10.4.0ubuntu0jon2.2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libk3b6-extracodecs: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.14) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libkdecore5 (= 4:4.4.4) maar 4:4.8.4a-0ubuntu0.2 is geïnstalleerd Depends: libkio5 (= 4:4.4.4) maar 4:4.8.4a-0ubuntu0.2 is geïnstalleerd Depends: libqtcore4 (= 4:4.7.0~beta1) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libstdc++6 (= 4.1.1) maar 4.6.3-1ubuntu5 is geïnstalleerd libquicktime2: Depends: libavcodec-extra-53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8~beta2-2) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd libxine1-ffmpeg: Depends: libavcodec-extra-53 (= 4:0.7.3-1) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 4:0.7.3-1) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd 
Depends: libc6 (= 2.4) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.7.3-1) maar het is niet geïnstalleerd Depends: libxine1-bin (= 1.1.20-2build1) maar 1.1.20-2build1 is geïnstalleerd mencoder: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd mplayer: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd vlc: Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) maar 2.0.3-0ubuntu0.12.04.1 is geïnstalleerd Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.15) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libfreetype6 (= 2.2.1) maar 2.4.8-1ubuntu2 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libqtcore4 (= 4:4.8.0) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libqtgui4 (= 4:4.7.0~beta1) maar 4:4.8.1-0ubuntu4.2 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: zlib1g (= 1:1.2.3.3.dfsg) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd vlc-nox: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libc6 (= 2.15) maar 2.15-0ubuntu10 is geïnstalleerd Depends: libfontconfig1 (= 2.8.0) maar 2.8.0-3ubuntu9.1 is geïnstalleerd Depends: libfreetype6 (= 2.2.1) maar 2.4.8-1ubuntu2 is geïnstalleerd Depends: libgcc1 (= 1:4.1.1) maar 1:4.6.3-1ubuntu5 is geïnstalleerd Depends: libgnutls26 (= 2.12.6.1-0) maar 2.12.14-5ubuntu3.1 is geïnstalleerd Depends: libmpcdec6 (= 1:0.1~r435) maar 2:0.1~r459-1ubuntu1 is geïnstalleerd Depends: libncursesw5 (= 5.6+20070908) maar 5.9-4 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libsmbclient (= 3.0.24) maar 2:3.6.3-2ubuntu2.3 is geïnstalleerd Depends: libstdc++6 (= 4.6) maar 4.6.3-1ubuntu5 is geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libudev0 (= 147) maar 175-0ubuntu9.1 is geïnstalleerd Depends: libxml2 (= 2.7.4) maar 2.7.8.dfsg-5.1ubuntu4.1 is geïnstalleerd Depends: zlib1g (= 1:1.2.0.2) maar 1:1.2.3.4.dfsg-3ubuntu4 is geïnstalleerd xbmc-bin: Depends: libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libavfilter-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavformat-extra-53 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libavutil-extra-51 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 is geïnstalleerd Depends: libpostproc-extra-52 (= 4:0.8-1~) maar het is niet geïnstalleerd Depends: libswscale-extra-2 (= 4:0.8-1~) maar 6:0.10.4.0ubuntu0jon2.2 
is geïnstalleerd And apt-get is reporting: root@LAPTOP:~# apt-get -f install Pakketlijsten worden ingelezen... Klaar Boom van vereisten wordt opgebouwd De status informatie wordt gelezen... Klaar Vereisten worden gecorrigeerd... mislukt. De volgende pakketten hebben niet-voldane vereisten: audacious-plugins : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd libavfilter-extra-2 : Vereisten: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Vereisten: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd libavformat-extra-53 : Vereisten: libavcodec-extra-53 (= 6:0.10.4.0ubuntu0jon2.2) maar het is niet geïnstalleerd Vereisten: libavcodec-extra-53 (< 6:0.10.4.0ubuntu0jon2.2-99) maar het is niet geïnstalleerd libk3b6-extracodecs : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd libquicktime2 : Vereisten: libavcodec53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8~beta2-2) maar het is niet geïnstalleerd libxine1-ffmpeg : Vereisten: libavcodec53 (= 4:0.7.3-1) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.7.3-1) maar het is niet geïnstalleerd mencoder : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd mplayer : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd vlc : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd vlc-nox : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd xbmc-bin : Vereisten: libavcodec53 (= 4:0.8-1~) maar het is niet geïnstalleerd of libavcodec-extra-53 (= 4:0.8-1~) maar het is niet geïnstalleerd E: Fout, pkgProblemResolver::Resolve maakte scheidingen aan, dit kan veroorzaakt worden door vastgehouden pakketten. E: Kan vereisten niet corrigeren How to proceed?? Thanks in advance, Michiel

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc. which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an Interface, but also no effect. I'm beginning to think that the reason is because I am encapsulating my model within a ViewModel. My Controller (Create Action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the Validation annotations not being valid. (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (Am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is and suspect it is for MVC 1)

    Read the article

  • Grafting LINQ onto C# 2 library

    - by P Daddy
    I'm writing a data access layer. It will have C# 2 and C# 3 clients, so I'm compiling against the 2.0 framework. Although encouraging the use of stored procedures, I'm still trying to provide a fairly complete ability to perform ad-hoc queries. I have this working fairly well, already. For the convenience of C# 3 clients, I'm trying to provide as much compatibility with LINQ query syntax as I can. Jon Skeet noticed that LINQ query expressions are duck typed, so I don't have to have an IQueryable and IQueryProvider (or IEnumerable<T>) to use them. I just have to provide methods with the correct signatures. So I got Select, Where, OrderBy, OrderByDescending, ThenBy, and ThenByDescending working. Where I need help are with Join and GroupJoin. I've got them working, but only for one join. A brief compilable example of what I have is this: // .NET 2.0 doesn't define the Func<...> delegates, so let's define some workalikes delegate TResult FakeFunc<T, TResult>(T arg); delegate TResult FakeFunc<T1, T2, TResult>(T1 arg1, T2 arg2); abstract class Projection{ public static Condition operator==(Projection a, Projection b){ return new EqualsCondition(a, b); } public static Condition operator!=(Projection a, Projection b){ throw new NotImplementedException(); } } class ColumnProjection : Projection{ readonly Table table; readonly string columnName; public ColumnProjection(Table table, string columnName){ this.table = table; this.columnName = columnName; } } abstract class Condition{} class EqualsCondition : Condition{ readonly Projection a; readonly Projection b; public EqualsCondition(Projection a, Projection b){ this.a = a; this.b = b; } } class TableView{ readonly Table table; readonly Projection[] projections; public TableView(Table table, Projection[] projections){ this.table = table; this.projections = projections; } } class Table{ public Projection this[string columnName]{ get{return new ColumnProjection(this, columnName);} } public TableView Select(params Projection[] projections){ return new TableView(this, projections); } public TableView Select(FakeFunc<Table, Projection[]> projections){ return new TableView(this, projections(this)); } public Table Join(Table other, Condition condition){ return new JoinedTable(this, other, condition); } public TableView Join(Table inner, FakeFunc<Table, Projection> outerKeySelector, FakeFunc<Table, Projection> innerKeySelector, FakeFunc<Table, Table, Projection[]> resultSelector){ Table join = new JoinedTable(this, inner, new EqualsCondition(outerKeySelector(this), innerKeySelector(inner))); return join.Select(resultSelector(this, inner)); } } class JoinedTable : Table{ readonly Table left; readonly Table right; readonly Condition condition; public JoinedTable(Table left, Table right, Condition condition){ this.left = left; this.right = right; this.condition = condition; } } This allows me to use a fairly decent syntax in C# 2: Table table1 = new Table(); Table table2 = new Table(); TableView result = table1 .Join(table2, table1["ID"] == table2["ID"]) .Select(table1["ID"], table2["Description"]); But an even nicer syntax in C# 3: TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] select new[]{t1["ID"], t2["Description"]}; This works well and gives me identical results to the first case. The problem is if I want to join in a third table. 
TableView result = from t1 in table1 join t2 in table2 on t1["ID"] equals t2["ID"] join t3 in table3 on t1["ID"] equals t3["ID"] select new[]{t1["ID"], t2["Description"], t3["Foo"]}; Now I get an error (Cannot implicitly convert type 'AnonymousType#1' to 'Projection[]'), presumably because the second join is trying to join the third table to an anonymous type containing the first two tables. This anonymous type, of course, doesn't have a Join method. Any hints on how I can do this?
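
    For what it's worth, the error falls out of how the compiler translates a query with two joins: the first join's result selector is generated as (t1, t2) => new { t1, t2 }, and everything after that operates on the anonymous pair. A standalone LINQ-to-Objects sketch of that shape (plain int arrays, purely illustrative):

        using System;
        using System.Linq;

        class JoinTranslation
        {
            static void Main()
            {
                int[] a = { 1, 2, 3 }, b = { 2, 3, 4 }, c = { 3, 4, 5 };

                // A query with two joins is translated so that the first join's result selector
                // produces an anonymous pair, and the second join operates on that pair.
                var q = a.Join(b, x => x, y => y, (x, y) => new { x, y })
                         .Join(c, p => p.x, z => z, (p, z) => new { p.x, p.y, z });

                foreach (var item in q)
                    Console.WriteLine(item);
            }
        }

    Against the Table API above, the same translation asks the first Join's result selector to return new { t1, t2 }, which cannot convert to Projection[], hence the compiler error. One possible accommodation (an untested suggestion) is an additional generic Join overload whose result selector may return an arbitrary TResult that itself exposes Join and Select.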

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven’t seen before. A few weeks ago the following error message started showing up in our logs: “The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid.” This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only (no explicit time component) from the user, like reports that take a “start date” and “end date” parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a “placeholder” time. How is midnight an “invalid time”? Globalization Is Hard Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight many areas in Brazil began observing daylight savings time at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, always adjust any “local” dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC since that local time never actually existed. We hadn’t experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM which doesn’t conflict with our “placeholder” time of midnight. Detecting Invalid Times In .NET you might use code similar to the following for converting a local time to UTC: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); The code above throws the “invalid time” exception referenced above. We could try to detect whether or not the local time is invalid with something like this: var localDate = new DateTime(2011, 10, 16); //2011-10-16 @ midnight const string timeZoneId = "E. South America Standard Time"; //Windows system timezone Id for "Brasilia" timezone. var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId); if (localTimeZone.IsInvalidTime(localDate)) localDate = localDate.AddHours(1); var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone); This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour they record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again. 
In this scenario, how should we interpret February 18, 2012 11:30 PM? Enter Noda Time I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category.  Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code. The NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous): var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0); const string timeZoneId = "Brazil/East"; var timezone = DateTimeZone.ForId(timeZoneId); var localDateTimeMaping = timezone.MapLocalDateTime(localDateTime); ZonedDateTime unambiguousLocalDateTime; switch (localDateTimeMaping.Type) { case ZoneLocalMapping.ResultType.Unambiguous: unambiguousLocalDateTime = localDateTimeMaping.UnambiguousMapping; break; case ZoneLocalMapping.ResultType.Ambiguous: unambiguousLocalDateTime = localDateTimeMaping.EarlierMapping; break; case ZoneLocalMapping.ResultType.Skipped: unambiguousLocalDateTime = new ZonedDateTime( localDateTimeMaping.ZoneIntervalAfterTransition.Start, timezone); break; default: throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMaping.Type)); } var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc(); Let’s break this sample down: I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an “invalidated” date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity. The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future. I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.  The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to ensure to callers that each instance represents an unambiguous point in time. The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method. 
There are three possible outcomes: If the provided local date time is unambiguous in the provided time zone I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘Unambiguous Mapping’ property of the mapping returned by the ‘MapLocalDateTime’ method. If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time. If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time),  I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error depending on the needs of the application. The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDate’  variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An 'Instant’ represents a number of ticks since 1970-01-01 at midnight (Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method. This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being so I’m not rushing out to use it production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.
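
    As a follow-up illustration, newer Noda Time builds wrap this three-way mapping decision in ready-made resolvers. A minimal sketch is below; the API names reflect a later release than the beta discussed above, so treat the exact member names as an assumption:

        using System;
        using NodaTime;

        class LenientConversion
        {
            static void Main()
            {
                var zone = DateTimeZoneProviders.Tzdb["Brazil/East"];
                var skippedLocal = new LocalDateTime(2011, 10, 16, 0, 0); // the midnight Brazil skipped

                // InZoneLeniently picks a sensible mapping for skipped and ambiguous local times;
                // a strict mapping would throw instead, forcing the caller to decide.
                DateTime utc = skippedLocal.InZoneLeniently(zone).ToDateTimeUtc();
                Console.WriteLine(utc);
            }
        }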

    Read the article

< Previous Page | 44 45 46 47 48 49  | Next Page >