Search Results

Search found 664 results on 27 pages for 'no seriously'.


  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure.

    Malware is Still Common
    Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has generally been caused by problems with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but it is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products.

    Manufacturer-Installed Bloatware is Terrible
    Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than being delighted, you’re stuck reinstalling Windows and then installing the necessary drivers, or you’re forced to start uninstalling useless bloatware programs one by one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware from easily being removed.

    Finding a Desktop Program is Dangerous
    Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs.
    Even Reputable Desktop Programs Try to Install Junk
    Even if you do find an entirely reputable program, you’ll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser’s home page. Even Microsoft’s own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you’ve specifically chosen another search engine and home page. With Microsoft setting such an example, it’s no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there’s a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed.

    The Update Process is Confusing
    On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users’ software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up to date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn’t automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process.

    Browser Plugins Open Security Holes
    It’s no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don’t allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle’s Java plugin even tries to install the terrible Ask toolbar when installing security updates. That’s right — the security update process is also used to cram additional adware into users’ machines so unscrupulous companies like Oracle can make a quick buck. It’s no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed.

    Battery Life is Terrible
    Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft’s own Surface Pro 2 has bad battery life. Apple’s 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there’s no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component, and many older Windows components were never properly optimized.

    Windows Users Become Stuck on Old Windows Versions
    Apple’s new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free.
    In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista, and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actually a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free.

    If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people? This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support.

    Image Credit: Blanca Stella Mejia on Flickr, Collin Andserson on Flickr, Luca Conti on Flickr

  • CodePlex Daily Summary for Monday, August 04, 2014

    Popular Releases
    - Space Engineers Server Manager: SESM V1.11: Added the rename option in the map manager. Added the possibility to select a map without starting the server. Upgraded the diagnosis page, now more useful, more colorful, and with a big check/cross sign to tell you if your installation is healthy. Seriously, you must check it out ^^ Corrected issue #7, so the asteroid count is now the real number of asteroids in the current map. Corrected issue #8, so the CPU/RAM value on the stats page should be much more accurate now.
    - Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON. New feature - Added JValue.CreateNull and JValue.CreateUndefined. New feature - Added Windows Phone 8.1 support to the .NET 4.0 portable assembly. New feature - Added OverrideCreator to JsonObjectContract. New feature - Added support for overriding the creation of interfaces and abstract types. New feature - Added support for reading UUID BSON binary values as a Guid. New feature - Added MetadataPropertyHandling.Ignore. New feature - Improv...
    - VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.
    - PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.5: Added Send-Keys function to send a sequence of keys to an application window (thanks to mmashwani). Added 3 optimization/stability improvements to Execute-Process following MS best practice (thanks to mmashwani). Fixed issue where Execute-MSI did not use the value from the XML file for uninstall but instead ran all uninstalls silently by default. Fixed error on 1641 exit code (should be a success like 3010). Fixed issue with error handling in Invoke-SCCMTask. Fixed issue with deferral dates where th...
    - AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where the download continues even if you close the dialog. Added support for a new url field for 64-bit application setup; AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Developers can now handle update logic themselves using the event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...
    - HigLabo: HigLabo_20140801: Summary: Add feature to create SmtpMessage object from System.Net.Mail.MailMessage and HigLabo.Mime.MailMessage. Add feature to get the mail header only by using ImapClient. Add some Portable Class Library projects. Modify Async internal implementation. HigLabo.Converter: Bug fix of QuotedPrintableConverter when the last char of a line is whitespace or tab. HigLabo.Core: Modify code to add the HigLabo.Core.Pcl project. HigLabo.Mail: Add feature to create...
    - SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.
    - Dynamics CRM 2013 Global Advanced Find: Global Advanced Find Solution v1.0.0.0: Global Advanced Find Managed Solution v1.0.0.0
    - Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6. Breaking changes: changed arguments for the Map method of MagickImage; QuantizeSettings uses Riemersma by default.
    - Multiple Threads TCP Server: Project: this project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later.
    - Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 release notes. New: Expression Builder to design Flee expressions. New: Cryptographic helpers and configuration classes. Improvement: Many fixes and improvements to the property editor. Improvement: Token Replace Property explorer now has a restricted mode for additional security. Improvement: Better variables, types and object manipulation. Fixed: smart file and Flee bugs. Fixed: Removed exception while trying to read unsupported files. Improvement: several performance twe...
    - DataBind: DataBind 1.0: Efficiency improved.
    - Console Progress Bar: ConsoleProgressBar-0.3: Validate ProgressInPercentage.
    - Gum UI Tool: Gum 0.6.07: Fixed bug related to setting the parent to screen bounds.
    - Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. Uses the RSS feed http://www.divxtotal.com/rss.php
    - DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and for SQL directly, and supports Access, SQL Server, MySQL, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an ASP.NET DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
    - Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's new? Here are some important things to know about version 6. Open source: now being run as a full open source project, with full source code on CodePlex; collaboration encouraged! Updated code base: brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
    - Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
    - VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes: NEW: Added support for 'ImgMega.com' links. NEW: Added support for 'ImgCandy.net' links. NEW: Added support for 'ImgPit.com' links. NEW: Added support for 'Img.yt' links. FIXED: 'Radikal.ru' links. FIXED: 'ImageTeam.org' links. FIXED: 'ImgSee.com' links. FIXED: 'Img.yt' links.
    - Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC 4 architecture, code-first Entity Framework (ORM) and jqGrid for displaying the data.

    New Projects
    - Android_Games: Free downloads
    - DailyProgrammer Challenge 173: My solution to /r/dailyprogrammer Challenge #173
    - EmotionPing: A little piece of software that pings a range of network IPs
    - EntityMapper: Library for mapping (or projecting) Entity types (Domain Model) to DTO types (Contract). Made to be compatible with the LINQ provider for Entity Framework.
    - kevinsheldon: private collection
    - Keyboard Auto-Complete for PC: Did you ever use the auto-complete on mobile phones? Why not on PC? Now you can!
    - LetsPlay: What will you play today? Pick one of the games installed on your system and play as much as you like :)
    - Mini ORM like Enterprise library Data Block: A mini ORM like the Enterprise Library Data Block.
    - ProjkyAddin ssms addin script shortcut ssmsaddin: A Management Studio add-in supporting SSMS 2005, SSMS 2008, SSMS 2008 R2, SSMS 2012 and SSMS 2014.
    - Reddit Downvote Bot: Mass-downvote a certain reddit user.
    - SQL Server Dialog: This SQL Server Dialog is now released as v1.0.0
    - VNTalk Messenger: VNTalk Messenger
    - WindyPaper: Test Project

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. I When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). 
The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry, there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that last decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, its to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even less understand its importance. So if anything I have said is false then please inform me. If you think I come off as a masters or PhD, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

  • XamlParseException after ClickOnce deployment: application crashes after installation

    - by black sensei
    Hello good people, I've built a WPF application with Visual Studio 2008 and created an installer for it. It works fine so far. I realized it lacks an automatic update feature, and after trying several solutions, I decided to give ClickOnce deployment a try. After a successful deployment to a network server, I've noticed that the application crashes after installation of the downloaded app. It complains about this:

    Cannot create instance of 'Login' defined in assembly 'MyApplication, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Exception has been thrown by the target of an invocation. Error in markup file 'MyApplication;component/login.xaml' Line 1 Position 9.

    Here is the stack trace:

    at System.Windows.Markup.XamlParseException.ThrowException(String message, Exception innerException, Int32 lineNumber, Int32 linePosition, Uri baseUri, XamlObjectIds currentXamlObjectIds, XamlObjectIds contextXamlObjectIds, Type objectType)
    at System.Windows.Markup.XamlParseException.ThrowException(ParserContext parserContext, Int32 lineNumber, Int32 linePosition, String message, Exception innerException)
    at System.Windows.Markup.BamlRecordReader.ThrowExceptionWithLine(String message, Exception innerException)
    at System.Windows.Markup.BamlRecordReader.CreateInstanceFromType(Type type, Int16 typeId, Boolean throwOnFail)
    at System.Windows.Markup.BamlRecordReader.GetElementAndFlags(BamlElementStartRecord bamlElementStartRecord, Object& element, ReaderFlags& flags, Type& delayCreatedType, Int16& delayCreatedTypeId)
    at System.Windows.Markup.BamlRecordReader.BaseReadElementStartRecord(BamlElementStartRecord bamlElementRecord)
    at System.Windows.Markup.BamlRecordReader.ReadElementStartRecord(BamlElementStartRecord bamlElementRecord)
    at System.Windows.Markup.BamlRecordReader.ReadRecord(BamlRecord bamlRecord)
    at System.Windows.Markup.BamlRecordReader.Read(Boolean singleRecord)
    at System.Windows.Markup.TreeBuilderBamlTranslator.ParseFragment()
    at System.Windows.Markup.TreeBuilder.Parse()
    at System.Windows.Markup.XamlReader.LoadBaml(Stream stream, ParserContext parserContext, Object parent, Boolean closeStream)
    at System.Windows.Application.LoadBamlStreamWithSyncInfo(Stream stream, ParserContext pc)
    at System.Windows.Application.LoadComponent(Uri resourceLocator, Boolean bSkipJournaledProperties)
    at System.Windows.Application.DoStartup()
    at System.Windows.Application.<.ctor>b__0(Object unused)
    at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
    at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
    at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
    at System.Windows.Threading.DispatcherOperation.InvokeImpl()
    at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state)
    at System.Threading.ExecutionContext.runTryCode(Object userData)
    at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Windows.Threading.DispatcherOperation.Invoke()
    at System.Windows.Threading.Dispatcher.ProcessQueue()
    at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
    at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
    at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
    at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
    at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
    at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
    at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
    at System.Windows.Threading.Dispatcher.Invoke(DispatcherPriority priority, Delegate method, Object arg)
    at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
    at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
    at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
    at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame)
    at System.Windows.Threading.Dispatcher.Run()
    at System.Windows.Application.RunDispatcher(Object ignore)
    at System.Windows.Application.RunInternal(Window window)
    at System.Windows.Application.Run(Window window)
    at System.Windows.Application.Run()
    at myApplication.App.Main()

    Here is just the region the debugger is pointing to:

    <Window x:Class="MyApplication.Login"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:src="clr-namespace:MyApplication"
        xmlns:UI="clr-namespace:UI;assembly=UI"
        Title="My Application" Height="400" Width="550"
        ResizeMode="NoResize" WindowStyle="ThreeDBorderWindow"
        WindowStartupLocation="CenterScreen" Name="Logine"
        Loaded="Logine_Loaded" Closed="Logine_Closed" Icon="orLogo.ico">

    But the installer version (the MSI from the setup project) works fine, so I cannot see where the error is coming from, since the design view still works. Question 1: Does anyone have a similar issue, or is this a known issue? Question 2: If it's a known issue, then what are the alternatives? I might give up on ClickOnce, but then my automatic update feature will be lost (as in, there is no alternative which is not overkill or seriously outdated that I can find right now). Thanks for reading this and for pointing me in the right direction.
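    One way to see the real failure on the target machine (this is not from the original post, just a minimal sketch assuming a standard WPF App class) is to hook DispatcherUnhandledException in the App constructor and log the innermost exception, since the XamlParseException above normally wraps the actual cause, for example a referenced assembly or resource that ClickOnce did not publish.

    // Hypothetical App.xaml.cs code-behind -- illustration only, not the poster's code.
    using System;
    using System.Windows;

    public partial class App : Application
    {
        public App()
        {
            // Log the root cause behind the XamlParseException thrown while loading login.xaml.
            DispatcherUnhandledException += (sender, args) =>
            {
                Exception ex = args.Exception;
                while (ex.InnerException != null)   // walk down to the innermost exception
                    ex = ex.InnerException;
                MessageBox.Show(ex.ToString(), "Startup failure");
                args.Handled = true;                // keep the process alive long enough to read it
            };
        }
    }

    If the root cause turns out to be a missing assembly (for instance the UI assembly that login.xaml pulls in via clr-namespace:UI;assembly=UI), it may simply need to be set to Include in the ClickOnce Application Files list before publishing again.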

  • How do I catch this WPF Bitmap loading exception?

    - by mmr
    I'm developing an application that loads bitmaps off of the web using .NET 3.5 SP1 and C#. The loading code looks like:

    try
    {
        CurrentImage = pics[unChosenPics[index]];
        bi = new BitmapImage(CurrentImage.URI);
        // BitmapImage.UriSource must be in a BeginInit/EndInit block.
        bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
        AssessmentImage.Source = bi;
    }
    catch
    {
        System.Console.WriteLine("Something broke during the read!");
    }

    and the code to load on bi_DownloadCompleted is:

    void bi_DownloadCompleted(object sender, EventArgs e)
    {
        try
        {
            double dpi = 96;
            int width = bi.PixelWidth;
            int height = bi.PixelHeight;
            int stride = width * 4; // 4 bytes per pixel
            byte[] pixelData = new byte[stride * height];
            bi.CopyPixels(pixelData, stride, 0);
            BitmapSource bmpSource = BitmapSource.Create(width, height, dpi, dpi, PixelFormats.Bgra32, null, pixelData, stride);
            AssessmentImage.Source = bmpSource;
            Loading.Visibility = Visibility.Hidden;
            AssessmentImage.Visibility = Visibility.Visible;
        }
        catch
        {
            System.Console.WriteLine("Exception when viewing bitmap.");
        }
    }

    Every so often, an image comes along that breaks the reader. I guess that's to be expected. However, rather than being caught by either of those try/catch blocks, the exception is apparently getting thrown outside of where I can handle it. I could handle it using global WPF exceptions, like this SO question. However, that will seriously mess up the control flow of my program, and I'd like to avoid that if at all possible. I have to do the double source assignment because it appears that many images are lacking width/height parameters in the places where the Microsoft bitmap loader expects them to be. So, the first assignment appears to force the download, and the second assignment gets the dpi/image dimensions to come out properly. What can I do to catch and handle this exception?
Stack trace: at MS.Internal.HRESULT.Check(Int32 hr) at System.Windows.Media.Imaging.BitmapFrameDecode.get_ColorContexts() at System.Windows.Media.Imaging.BitmapImage.FinalizeCreation() at System.Windows.Media.Imaging.BitmapImage.OnDownloadCompleted(Object sender, EventArgs e) at System.Windows.Media.UniqueEventHelper.InvokeEvents(Object sender, EventArgs args) at System.Windows.Media.Imaging.LateBoundBitmapDecoder.DownloadCallback(Object arg) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Application.RunInternal(Window window) at LensComparison.App.Main() in C:\Users\Mark64\Documents\Visual Studio 2008\Projects\LensComparison\LensComparison\obj\Release\App.g.cs:line 48 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()
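    This is not from the original post, just a minimal sketch of one possible direction: the stack trace shows the throw happening in BitmapFrameDecode.get_ColorContexts() during the download callback, so building the BitmapImage inside BeginInit/EndInit with BitmapCreateOptions.IgnoreColorProfile, and listening for DownloadFailed/DecodeFailed, may turn the failure into an event you can handle instead of an exception on the dispatcher. CurrentImage, AssessmentImage and bi_DownloadCompleted are simply the names reused from the question above.

    // Sketch only -- assumes the color-profile read is what throws for the bad images.
    // Uses types from System.Windows.Media and System.Windows.Media.Imaging.
    bi = new BitmapImage();
    bi.BeginInit();
    bi.UriSource = CurrentImage.URI;
    bi.CreateOptions = BitmapCreateOptions.IgnoreColorProfile;  // skip embedded color profiles
    bi.EndInit();
    bi.DownloadFailed += (s, args) =>
        System.Console.WriteLine("Download failed: " + args.ErrorException.Message);
    bi.DecodeFailed += (s, args) =>
        System.Console.WriteLine("Decode failed: " + args.ErrorException.Message);
    bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
    AssessmentImage.Source = bi;

    Even if IgnoreColorProfile is not the right switch for these particular images, the two failure events at least report the problem through ExceptionEventArgs.ErrorException rather than unwinding through the dispatcher.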

  • What's the most unsound program you've had to maintain?

    - by Robert Rossney
    I periodically am called upon to do maintenance work on a system that was built by a real rocket surgeon. There's so much wrong with it that it's hard to know where to start. No, wait, I'll start at the beginning: in the early days of the project, the designer was told that the system would need to scale, and he'd read that a source of scalability problems was traffic between the application and database servers, so he made sure to minimize this traffic. How? By putting all of the application logic in SQL Server stored procedures. Seriously. The great bulk of the application functions by the HTML front end formulating XML messages. When the middle tier receives an XML message, it uses the document element's tag name as the name of the stored procedure it should call, and calls the SP, passing it the entire XML message as a parameter. It takes the XML message that the SP returns and returns it directly back to the front end. There is no other logic in the application tier. (There was some code in the middle tier to validate the incoming XML messages against a library of schemas. But I removed it, after ascertaining that 1) only a small handful of messages had corresponding schemas, 2) the messages didn't actually conform to these schemas, and 3) after validating the messages, if any errors were encountered, the method discarded them. "This fuse box is a real time-saver - it comes from the factory with pennies pre-installed!") I've seen software that does the wrong thing before. Lots of it. I've written quite a bit. But I've never seen anything like the steely-eyed determination to do the wrong thing, at every possible turn, that's embodied in the design and programming of this system. Well, at least he went with what he knew, right? Um. Apparently, what he knew was Access. And he didn't really understand Access. Or databases. Here's a common pattern in this code: SELECT @TestCodeID FROM TestCode WHERE TestCode = @TestCode SELECT @CountryID FROM Country WHERE CountryAbbr = @CountryAbbr SELECT Invoice.*, TestCode.*, Country.* FROM Invoice JOIN TestCode ON Invoice.TestCodeID = TestCode.ID JOIN Country ON Invoice.CountryID = Country.ID WHERE Invoice.TestCodeID = @TestCodeID AND Invoice.CountryID = @CountryID Okay, fine. You don't trust the query optimizer either. But how about this? (Originally, I was going to post this in What's the best comment in source code you have ever encountered? but I realized that there was so much more to write about than just this one comment, and things just got out of hand.) At the end of many of the utility stored procedures, you'll see code that looks like the following: -- Fix NULLs SET @TargetValue = ISNULL(@TargetValue, -9999) Yes, that code is doing exactly what you can't allow yourself to believe it's doing lest you be driven mad. If the variable contains a NULL, he's alerting the caller by changing its value to -9999. Here's how this number is commonly used: -- Get target value EXEC ap_GetTargetValue @Param1, @Param2, OUTPUT @TargetValue -- Check target value for NULL value IF @TargetValue = -9999 ... Really. For another dimension of this system, see the article on thedailywtf.com entitled I Think I'll Call Them "Transactions". I'm not making any of this up. I swear. I'm often reminded, when I work on this system, of Wolfgang Pauli's famous response to a student: "That isn't right. It isn't even wrong." This can't really be the very worst program ever. It's definitely the worst one I've worked

  • Why does my finite state machine take so long to execute?

    - by BillyONeal
    Hello all :) I'm working on a state machine which is supposed to extract function calls of the form

    /* I am a comment */
    //I am a comment
    perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    where the extracted data would be

    perf("this.is.a.string.which\"can have QUOTES\"", 123456);

    from a file. Currently, to process a 41kb file, this process is taking close to a minute and a half. Is there something I'm seriously misunderstanding here about this finite state machine?

    #include <boost/algorithm/string.hpp>

    std::vector<std::string> Foo()
    {
        std::string fileData;
        //Fill filedata with the contents of a file
        std::vector<std::string> results;
        std::string::iterator begin = fileData.begin();
        std::string::iterator end = fileData.end();
        std::string::iterator stateZeroFoundLocation = fileData.begin();
        std::size_t state = 0;
        for(; begin < end; begin++)
        {
            switch (state)
            {
            case 0:
                if (boost::starts_with(boost::make_iterator_range(begin, end), "pref("))
                {
                    stateZeroFoundLocation = begin;
                    begin += 4;
                    state = 2;
                }
                else if (*begin == '/')
                    state = 1;
                break;
            case 1:
                state = 0;
                switch (*begin)
                {
                case '*':
                    begin = boost::find_first(boost::make_iterator_range(begin, end), "*/").end();
                    break;
                case '/':
                    begin = std::find(begin, end, L'\n');
                }
                break;
            case 2:
                if (*begin == '"') state = 3;
                break;
            case 3:
                switch(*begin)
                {
                case '\\': state = 4; break;
                case '"': state = 5;
                }
                break;
            case 4:
                state = 3;
                break;
            case 5:
                if (*begin == ',') state = 6;
                break;
            case 6:
                if (*begin != ' ') state = 7;
                break;
            case 7:
                switch(*begin)
                {
                case '"': state = 8; break;
                default: state = 10; break;
                }
                break;
            case 8:
                switch(*begin)
                {
                case '\\': state = 9; break;
                case '"': state = 10;
                }
                break;
            case 9:
                state = 8;
                break;
            case 10:
                if (*begin == ')') state = 11;
                break;
            case 11:
                if (*begin == ';') state = 12;
                break;
            case 12:
                state = 0;
                results.push_back(std::string(stateZeroFoundLocation, begin));
            };
        }
        return results;
    }

    Billy3

  • What is the best way to test with Grails using IDEA?

    - by egervari
    I am seriously having a very non-pleasant time testing using Grails. I will describe my experience, and I'd like to know if there's a better way. The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself. So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced by everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on for code that is running as part of a unit test automatically. Integration Tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb. It is hard to run all the unit tests and not integration tests in IDEA. Integration tests are a massive pain. Idea actually shows a GREEN BAR when integration tests fail. The output given by grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid html file down. What a pain. Then once you got the html file and click to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you gotta go back and find it yourself. ARGGH!@!@! Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

  • ublas::bounded_vector<> being resized?

    - by n2liquid
    Now, seriously... I'll refrain from using bad words here because we're talking about the Boost fellows. It MUST be my mistake to see things this way, but I can't understand why, so I'll ask it here; maybe someone can enlighten me in this matter. Here it goes: uBLAS has this nice class template called bounded_vector<> that's used to create fixed-size vectors (or so I thought). From the Effective uBLAS wiki (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Effective_UBLAS):

    The default uBLAS vector and matrix types are of variable size. Many linear algebra problems involve vectors with fixed size. 2 and 3 elements are common in geometry! Fixed size storage (akin to C arrays) can be implemented efficiently as it does not involve the overheads (heap management) associated with dynamic storage. uBLAS implements fixed sizes by changing the underlying storage of a vector/matrix to a "bounded_array" from the default "unbounded_array".

    Alright, this bounded_vector<> thing is used to free you from specifying the underlying storage of the vector to a bounded_array<> of the specified size. Here I ask you: doesn't it look like this bounded vector thing has fixed size to you? Well, it doesn't. At first I felt betrayed by the wiki, but then I reconsidered the meaning of "bounded" and I think I can let it pass. But in case you, like me (I'm still uncertain), are still wondering if this makes sense, what I found out is that the bounded_vector<> can actually be resized; it just may not be resized beyond the size specified as the template parameter.

    So, first off, do you think they've had a good reason not to make a real fixed-size vector or matrix type? Do you think it's okay to "sell" this bounded -- as opposed to fixed-size -- vector to the users of my library as a "fixed-size" vector replacement, even named "Vector3" or "Vector2", like the Effective uBLAS wiki did? Do you think I should somehow implement a vector with fixed size for this purpose? If so, how? (Sorry, but I'm really new to uBLAS; just tried it today.) I am developing a 3D game. Should uBLAS be used for the calculations involved in this ("hey, geometry!", per Effective uBLAS wiki)? What replacement would you suggest, if not?

    -- edit

    And just in case, yes, I've read this warning: It should be noted that this only changes the storage uBLAS uses for the vector3. uBLAS will still use all the same algorithms (which assume a variable size) to manipulate the vector3. In practice this seems to have no negative impact on speed. The above runs just as quickly as a hand crafted vector3 which does not use uBLAS. The only negative impact is that the vector3 always stores a "size" member which in this case is redundant [or isn't it? I mean......]. I see it uses the same algorithms, assuming a variable size, but if an operation were to actually change its size, shouldn't it be stopped (assertion)?

    ublas::bounded_vector<float,3> v3;
    ublas::bounded_vector<float,2> v2;
    v3 = v2;
    std::cout << v3.size() << '\n'; // prints 2

    Oh, come on, isn't this just plain betrayal?

    Read the article

  • Retrieve XML from URL

    - by Pl4za
    I am having some problems retrieving a xml from url with the following code: private static String getAlbumArt(String artistName, String albumName){ try{ XMLParser xml_parser = new XMLParser(); String xml = xml_parser.getXmlFromUrl(getAlbumURL(artistName, albumName)); Document doc = xml_parser.getDomElement(xml); NodeList N = doc.getElementsByTagName("album"); Node node = N.item(0); NodeList N2 = node.getChildNodes(); System.out.println("1------"); for (int i = 0; i < N2.getLength(); i++) { Node detailNode = N2.item(i); if (detailNode.getNodeType() == Node.ELEMENT_NODE) { System.out.println("2------"); if (detailNode.getNodeName().equalsIgnoreCase("image")) { String sizeVal = ((Element) detailNode).getAttribute("size"); String url = detailNode.getTextContent(); if (sizeVal.equalsIgnoreCase("large")) { return url; } } } } } catch (Exception e){ } return null; } The xml function which i call in the above code: public String getXmlFromUrl(String url) { String xml = null; try { DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); xml = EntityUtils.toString(httpEntity); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return xml; } getAlbumURL: public static String getAlbumURL(String artist, String album){ return URL_METHOD + METHOD_GETALBUM + AMPERSAND + API_KEY + AMPERSAND + PARAM_ARTIST + artist + AMPERSAND + PARAM_ALBUM + album; } XMLparser: public class XMLParser { // constructor public XMLParser() { } //Get XML from URL public String getXmlFromUrl(String url) { String xml = null; try { DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); xml = EntityUtils.toString(httpEntity); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return xml; } //Get dom element public Document getDomElement(String xml){ Document doc = null; DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); try { DocumentBuilder db = dbf.newDocumentBuilder(); InputSource is = new InputSource(); is.setCharacterStream(new StringReader(xml)); doc = db.parse(is); } catch (ParserConfigurationException e) { Log.e("Error: ", e.getMessage()); return null; } catch (SAXException e) { Log.e("Error: ", e.getMessage()); return null; } catch (IOException e) { Log.e("Error: ", e.getMessage()); return null; } return doc; } //Get nod element public final String getElementValue(Node elem ) { Node child; if( elem != null){ if (elem.hasChildNodes()){ for( child = elem.getFirstChild(); child != null; child = child.getNextSibling() ){ if( child.getNodeType() == Node.TEXT_NODE ){ return child.getNodeValue(); } } } } return ""; } //Get element value public String getValue(Element item, String str) { NodeList nlList = item.getElementsByTagName(str).item(0).getChildNodes(); Node nValue = (Node) nlList.item(0); return nValue.getNodeValue(); } } Any ideas ? I seriously don't know what is wrong.. I used this before and it worked.
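    Two things worth checking here: the empty catch (Exception e){ } swallows whatever is actually going wrong (at least log it with e.printStackTrace() or Log.e()), and artist or album names containing spaces or ampersands will produce a broken request URL unless they are encoded. A hedged variant of getAlbumURL (buildAlbumUrl is just an illustrative name; the constants are the ones from the code above):

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        // Same concatenation as getAlbumURL(), but with the user-supplied parts URL-encoded.
        public static String buildAlbumUrl(String artist, String album)
                throws UnsupportedEncodingException {
            return URL_METHOD + METHOD_GETALBUM + AMPERSAND + API_KEY + AMPERSAND
                    + PARAM_ARTIST + URLEncoder.encode(artist, "UTF-8") + AMPERSAND
                    + PARAM_ALBUM + URLEncoder.encode(album, "UTF-8");
        }

    It may also be worth trying HttpGet instead of HttpPost for this request, since the parameters are already carried in the query string.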

    Read the article

  • Create / build / generate a web form that can be on my server and has modern looks and could be impl

    - by Luay
    I have a small web site and would like to add a 'contact us' form and a 'feedback' form. I would like the forms to satisfy the following: 1- be modern looking (with beautiful CSS effects) 2- the form fields are validated properly and 'inline'. What I mean is that once a user skips a required field or enters an email address incorrectly, some kind of tooltip or icon is displayed asking him to correct the error (as opposed to a message box that appears after the user clicks 'submit') 3- once the submit button is clicked the form contents are emailed to me 4- the whole thing can be set up by a noob like myself 5- no ads on the form. I have been searching for at least 5 days for a solution but I can't seem to find anything that would satisfy the above 5 conditions. I don't mind paying for a solution as long as it is hosted on my site and it is a one-off payment and not a monthly payment. So far my search has led me to the following: 1- wufoo: the generated forms seem to look okay but not the best there is. The bad: the free service is limited to 100 submissions, there are ads on the form, it is not hosted on my server, and the paid service requires monthly payments. 2- emailmeform: almost the same as above except the generated form looks old. They do have an offer where you pay only $4 to get the form and set it up on your own site, but that doesn't change the fact that the forms look old. 3- formAssembly: same as above with minor variations (the generated form looks better). 4- formchamp, formthis, kontaktr, and other similar online services: the same problem. Either the generated form looks outdated, or they require monthly payments, or they put ads on it... they don't satisfy my conditions. 5- coffeecup form builder: desktop software. The problem is the generated forms look too old and use Flash. 6- simfatic: another desktop program. Much better than coffeecup; it almost satisfies my conditions but the forms are not as good as I'd like. 7- many, many PHP scripts or HTML templates that look outdated or fail when tested (probably because they are too old). Seriously guys, how hard is this? At least 90% of websites contain at least a 'contact us' form. Why aren't there better solutions? If there are, I can't seem to find them. In terms of looks I want something similar to this: http://web-kreation.com/articles/lightform-free-ajaxphp-contact-form/ It is called lightform, and it is a perfect example of what I mean by 'inline' validation. The only problem: there is no script to handle sending the mail. Even if I find one, I don't know how to modify it for my needs. So could you please help me out. I really can't search anymore; I've reached rock bottom with this issue. I need a complete solution. If nothing exists then at least: 1- a form template (HTML) that looks nice and can easily be modified 2- a validation script that does 'inline' validation like the example above (or similar to it) and can be easily implemented by a noob like me to work with the HTML form 3- a PHP script that will handle sending the email and can be easily implemented (all three working in harmony). I hope there is a complete solution, but am I asking for too much? Pretty please... help...
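    For the third piece -- the mail-sending script -- a minimal, hedged sketch is below. It assumes the form posts fields named name, email and message, and the destination address is a placeholder; whatever inline JavaScript validation sits in front of it, the same checks still need to run on the server:

        <?php
        // contact.php - minimal handler; assumes the form posts name, email and message.
        $name    = isset($_POST['name'])    ? trim($_POST['name'])    : '';
        $email   = isset($_POST['email'])   ? trim($_POST['email'])   : '';
        $message = isset($_POST['message']) ? trim($_POST['message']) : '';

        // Re-validate on the server even if the form already validated inline in the browser.
        if ($name === '' || $message === '' || !filter_var($email, FILTER_VALIDATE_EMAIL)) {
            exit('Please fill in every field and use a valid email address.');
        }

        $headers = "From: contact-form@example.com\r\nReply-To: " . $email;
        $sent = mail('you@example.com', 'Contact form message from ' . $name, $message, $headers);

        echo $sent ? 'Thanks, your message has been sent.' : 'Sorry, the message could not be sent.';

    A template like lightform can post to a script of this shape; only the field names and the destination address need to line up.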

    Read the article

  • Is it possible to reference a LinkButton outside an UpdatePanel as the update trigger?

    - by Selase
    I have a page based on a master page and as such i can only see the content place holders i used in the master page showing up in the aspx pages based on the master page. the source code shown below: <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="CaseAdmin.aspx.cs" Inherits="Prototype4.CaseAdmin" %> <%@PreviousPageType VirtualPath="~/Account/Login.aspx"%> <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server"> </asp:Content> <asp:Content ID="CaseRightNews" ContentPlaceHolderID="RightNewsItem" runat="server"> </asp:Content> <asp:Content ID="CaseLeftNav" ContentPlaceHolderID="LeftNavigation" runat="server"> <div style="margin-top:20px; margin-bottom:20px;"> <p class="actionButton"> <asp:LinkButton ID="OpenCaseLinkButton" runat="server" onclick="OpenCaseLinkButton_Click">Open Case</asp:LinkButton> </p> <p class="actionButton"><asp:LinkButton ID="RegisterExhibitLinkButton" runat="server" onclick="RegisterExhibitLinkButton_Click">Register Exhibit</asp:LinkButton> </p> </div> </asp:Content> <asp:Content ID="CaseMainContnt" ContentPlaceHolderID="MainContent" runat="server"> <asp:ScriptManager ID="ScriptManager" runat="server" /> <asp:UpdatePanel ID="CaseMainCntntUpdatePanel" UpdateMode="Conditional" runat="server"> <Triggers> <asp:AsyncPostBackTrigger ControlID="" eventname="Click"/> </Triggers> <ContentTemplate> <%--Some text here to inform user to click on the open case botton to display open case form--%> </ContentTemplate> </asp:UpdatePanel> <asp:UpdatePanel runat="server" id="UpdatePanel1" updatemode="Conditional"> <Triggers> <asp:AsyncPostBackTrigger ControlID="" eventname="Click"/> </Triggers> <ContentTemplate> <%--some text here to inform users to click on the add exhibit botton to display add exhibit form--%> </ContentTemplate> </asp:UpdatePanel> </asp:Content> the section of the entire page i wish to change upon update is the (this is the main content of the page). for this reason i placed the updatepanel inside the content place holder since it cant be sitting outside and not wrapped in a content place holder. However, the buttons that i wish to apply the trigger that fires the update to, are in another content place holder(). How can i possibly get those buttons to act as the trigger while changing only what appears in the main content area. Plus, i tried getting the updatepanel to work just so i could see if it does the update well but it turned out really bad. i added some linkbottons in the content template area and used them as the triggers for testing reasons. i tested and the changes took over the entire page in contrast to just appearing in the content area. I actually just wanted to load a form that is created in another asp. page into the main content area... I seriously need help with this... Every little help, detail and information is dearly appreciated... thanks so much in advance
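    One approach that may work here -- sketched below, not tested against this exact page -- is to drop the declarative <Triggers> entries (their empty ControlID can't resolve to anything anyway), register the two LinkButtons as async-postback controls in the code-behind, and call Update() on the conditional panel from each click handler:

        // CaseAdmin.aspx.cs (code-behind sketch; control names are taken from the markup above)
        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager sm = ScriptManager.GetCurrent(Page);
            sm.RegisterAsyncPostBackControl(OpenCaseLinkButton);
            sm.RegisterAsyncPostBackControl(RegisterExhibitLinkButton);
        }

        protected void OpenCaseLinkButton_Click(object sender, EventArgs e)
        {
            // ...swap in the "open case" form/content here...
            CaseMainCntntUpdatePanel.Update();   // required because UpdateMode="Conditional"
        }

    RegisterAsyncPostBackControl lets the buttons post back asynchronously even though they live in a different ContentPlaceHolder, and the explicit Update() limits the refresh to the main content panel instead of the whole page.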

    Read the article

  • Java algorithm for normalizing audio

    - by Marty Pitt
    I'm trying to normalize an audio file of speech. Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter. I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak. I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here) @Override public void onAudioSamples(IAudioSamplesEvent event) { // get the raw audio bytes and adjust their values ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer(); for (int i = 0; i < buffer.limit(); ++i) buffer.put(i, (short)(buffer.get(i) * mVolume)); super.onAudioSamples(event); } Here, they modify the bytes in getAudioSamples() by a constant factor, mVolume. Building on this approach, I've attempted a normalisation that modifies the bytes in getAudioSamples() to a normalised value, considering the max/min in the file. (See below for details.) I have a simple filter to leave "silence" alone (i.e., anything below a threshold value). I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm, or the way I manipulate the bytes. However, I'm unsure of where to go next. Here's an abridged version of what I'm currently doing. Step 1: Find peaks in file: Reads the full audio file and finds the highest and lowest values of buffer.get() for all AudioSamples @Override public void onAudioSamples(IAudioSamplesEvent event) { IAudioSamples audioSamples = event.getAudioSamples(); ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer(); short min = Short.MAX_VALUE; short max = Short.MIN_VALUE; for (int i = 0; i < buffer.limit(); ++i) { short value = buffer.get(i); min = (short) Math.min(min, value); max = (short) Math.max(max, value); } // assignment of min/max omitted for brevity. super.onAudioSamples(event); } Step 2: Normalize all values: In a loop similar to step 1, replace the buffer with normalized values, calling: buffer.put(i, normalize(buffer.get(i))); public short normalize(short value) { if (isBackgroundNoise(value)) return value; short rawMin = // min from step1 short rawMax = // max from step1 short targetRangeMin = 1000; short targetRangeMax = 8000; int abs = Math.abs(value); double a = (abs - rawMin) * (targetRangeMax - targetRangeMin); double b = (rawMax - rawMin); double result = targetRangeMin + ( a/b ); // Copy the sign of value to result. result = Math.copySign(result,value); return (short) result; } Questions: Is this a valid approach for attempting to normalize an audio file? Is my math in normalize() valid? Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
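    The noise most likely comes from normalize() itself: remapping each sample's magnitude independently into the 1000-8000 range is a non-linear reshaping of the waveform, not a volume change, so the signal gets distorted. A gentler sketch -- assuming maxAbsSample is the largest absolute sample value found in step 1 -- applies one gain factor to every sample, which changes loudness without bending the waveform:

        import java.nio.ShortBuffer;

        // Scale the whole buffer by a single gain factor so the waveform shape is preserved.
        public static void applyGain(ShortBuffer buffer, int maxAbsSample, int targetPeak) {
            double gain = (double) targetPeak / (double) maxAbsSample;
            for (int i = 0; i < buffer.limit(); ++i) {
                int scaled = (int) Math.round(buffer.get(i) * gain);
                // Clamp in case rounding pushes a sample just past the 16-bit range.
                scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
                buffer.put(i, (short) scaled);
            }
        }

    Levelling quiet sections up and peaks down, as described at the top of the question, is really dynamic-range compression rather than normalisation; that needs a gain that varies smoothly with a windowed average of the signal, not a single per-sample remapping.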

    Read the article

  • Calling Object Methods in Code

    - by Mister R2
    I'm a bit new to PHP, and I'm more experienced with strongly-typed languages such as JAVA, C# or C++.I'm currently writing a web tool in PHP, and I am having an issue trying to do what I want. The simple idea of what I want to do in code is run through some emails I used PHP-IMAP to get. I then create email objects (a class I defined), and put them in an array. Later on the code, however, I cycle through those emails to display them. However, as you might have guessed I'd have an issue with, I try to use an Email Class object method in that later loop -- and I'm pretty sure PHP doesn't know that the variables in the array happen to be Email Class objects! I wrote a toString method, and I want to call it in the loop. While I don't need to do this for the final version of this tool, I would like to find out what I'm missing. This is the class and the loop where I'm calling the method: include 'imap_email_interface.php'; class ImapEmail implements imap_email_interface { // Email data var $msgno; var $to; var $from; var $subject; var $body; var $attachment; // Email behavior /* PHP 4 ~ legacy constructor */ public function ImapEmail($message_number) { $this->__construct(); $this->msgno = $message_number; } /* PHP 5 Constructor */ public function __construct($message_number) { $this->msgno = $message_number; } public function send($send_to) { // Not Yet Needed! Seriously! } public function setHeaderDirectly($TO, $FROM, $SUBJECT) { $this->to = $TO; $this->from = $FROM; $this->subject = $SUBJECT; } public function setHeaderIndirectly($HEADER) { if (isset($HEADER->to[0]->personal)) $this->to = '"'.$HEADER->to[0]->personal.'", '.$HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host; else $this->to = $HEADER->to[0]->mailbox.'@'.$HEADER->to[0]->host; $this->from = '"'.$HEADER->from[0]->personal.'", '.$HEADER->from[0]->mailbox.'@'.$HEADER->from[0]->host; $this->subject = $HEADER->subject; } public function setBody($BODY) { $this->body = $BODY; } public function setAttachment($ATTCH) { $this->attachment = $ATTCH; } public function toString() { $str = '[TO]: ' . $this->to . '<br />' . '[FROM]: ' . $this->from . '<br />' . '[SUBJECT]: ' . $this->subject . '<br />'; $str .= '[Attachment]: '.$this->attachment.'<br />'; return $str; } } ?> The Loop: foreach ($orderFileEmails as $x) { $x->toString(); echo '<br /><br />'; } Any ideas?
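    The loop itself is probably fine; toString() returns its string rather than printing it, so nothing appears unless the result is echoed. A minimal change (note, separately, that the PHP 4-style ImapEmail() constructor calls $this->__construct() without passing $message_number along):

        foreach ($orderFileEmails as $x) {
            echo $x->toString();
            echo '<br /><br />';
        }

    If the array elements somehow are not ImapEmail objects, a var_dump($x) inside the loop (or an instanceof check) will show what is actually stored there.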

    Read the article

  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambda's lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value: class Simple { public: Simple( int value ) { puts( "Constructing simple!" ); this->value = value; } Simple( const Simple& rhs ) { puts( "Copying simple!" ); this->value = rhs.value; } Simple( Simple&& rhs ) { puts( "Moving simple!" ); this->value = rhs.value; } ~Simple() { puts( "Destroying simple!" ); } int Get() const { return this->value; } private: int value; }; Now, consider this simple program: int main() { Simple test( 5 ); std::function<int ()> f = [test] () { return test.Get(); }; printf( "%d\n", f() ); } This is the output I would hope to see from this program: Constructing simple! Copying simple! Moving simple! Destroying simple! 5 Destroying simple! Destroying simple! First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output: Constructing simple! Copying simple! Copying simple! Copying simple! Copying simple! Destroying simple! Destroying simple! Destroying simple! 5 Destroying simple! Destroying simple! Holy crap that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!
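    Two things may reduce the copying, sketched below against the Simple class above; actual copy/move counts still depend on the compiler and its std::function implementation, and the VC++ 2010 library is known not to take full advantage of moves. Either capture by reference when the lambda doesn't need to own the object, or build the closure first and move it into the std::function:

        #include <cstdio>
        #include <functional>
        #include <utility>

        int main()
        {
            Simple test(5);

            // 1. Capture by reference: the closure stores a reference, so Simple is never copied.
            std::function<int()> byRef = [&test]() { return test.Get(); };

            // 2. Capture by value, but move the finished closure into the std::function.
            auto closure = [test]() { return test.Get(); };
            std::function<int()> byMove = std::move(closure);

            std::printf("%d %d\n", byRef(), byMove());
        }

    Capturing by reference is only safe while test outlives the std::function, which is fine in a scope like this one.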

    Read the article

  • Alphabetically Ordering an array of words

    - by Genesis
    I'm studying C on my own in preparation for my upcoming semester of school and was wondering what I was doing wrong with my code so far. If Things look weird it is because this is part of a much bigger grab bag of sorting functions I'm creating to get a sense of how to sort numbers,letters,arrays,and the like! I'm basically having some troubles with the manipulation of strings in C currently. Also, I'm quite limited in my knowledge of C at the moment! My main Consists of this: #include <stdio.h> #include <stdio.h> #include <stdlib.h> int numbers[10]; int size; int main(void){ setvbuf(stdout,NULL,_IONBF,0); //This is magical code that allows me to input. int wordNumber; int lengthOfWord = 50; printf("How many words do you want to enter: "); scanf("%i", &wordNumber); printf("%i\n",wordNumber); char words[wordNumber][lengthOfWord]; printf("Enter %i words:",wordNumber); int i; for(i=0;i<wordNumber+1;i++){ //+1 is because my words[0] is blank. fgets(&words[i], 50, stdin); } for(i=1;i<wordNumber+1;i++){ // Same as the above comment! printf("%s", words[i]); //prints my words out! } alphabetize(words,wordNumber); //I want to sort these arrays with this function. } My sorting "method" I am trying to construct is below! This function is seriously flawed, but I'd thought I'd keep it all to show you where my mind was headed when writing this. void alphabetize(char a[][],int size){ // This wont fly. size = size+1; int wordNumber; int lengthOfWord; char sortedWords[wordNumber][lengthOfWord]; //In effort for the for loop int i; int j; for(i=1;i<size;i++){ //My effort to copy over this array for manipulation for(j=1;j<size;j++){ sortedWords[i][j] = a[i][j]; } } //This should be kinda what I want when ordering words alphabetically, right? for(i=1;i<size;i++){ for(j=2;j<size;j++){ if(strcmp(sortedWords[i],sortedWords[j]) > 0){ char* temp = sortedWords[i]; sortedWords[i] = sortedWords[j]; sortedWords[j] = temp; } } } for(i=1;i<size;i++){ printf("%s, ",sortedWords[i]); } } I guess I also have another question as well... When I use fgets() it's doing this thing where I get a null word for the first spot of the array. I have had other issues recently trying to scanf() char[] in certain ways specifically spacing my input word variables which "magically" gets rid of the first null space before the character. An example of this is using scanf() to write "Hello" and getting " Hello" or " ""Hello"... Appreciate any thoughts on this, I've got all summer to study up so this doesn't need to be answered with haste! Also, thank you stack overflow as a whole for being so helpful in the past. This may be my first post, but I have been a frequent visitor for the past couple of years and it's been one of the best places for helpful advice/tips.
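    A couple of notes that may help, plus a sketch. The blank entry in words[0] comes from the newline that scanf("%i", &wordNumber) leaves in stdin -- the first fgets() call reads just that newline -- so consuming it (for example with a getchar() right after the scanf) removes the need for the +1 offsets. Also, the rows of a 2D char array are not reseatable pointers, so swapping char* values as in the sorting loop won't reorder them; letting qsort() move whole rows sidesteps that:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define WORD_LEN 50

        /* qsort hands the comparator pointers to whole rows (char[WORD_LEN]),
           so strcmp can compare them directly. */
        static int compare_words(const void *a, const void *b)
        {
            return strcmp((const char *)a, (const char *)b);
        }

        void alphabetize(char words[][WORD_LEN], int count)
        {
            qsort(words, count, WORD_LEN, compare_words);
            for (int i = 0; i < count; i++)
                printf("%s", words[i]);
        }

    The words still carry the trailing newline that fgets() stores, which is why the printf format adds no separator; trimming it with word[strcspn(word, "\n")] = '\0' is a common follow-up.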

    Read the article

  • map with string is broken?[solved]

    - by teritriano
    Yes. I can't see what I'm doing wrong; the map is a map<string, int>. Here's the method: bool bange::function::Add(lua_State *vm){ //userdata, function if (!lua_isfunction(vm, 2)){ cout << "bange: AddFunction: First argument isn't a function." << endl; return false;} void *pfunction = const_cast<void *>(lua_topointer(vm, 2)); char key[32] = {0}; snprintf(key, 32, "%p", pfunction); cout << "Key: " << key << endl; string strkey = key; if (this->functions.find(strkey) != this->functions.end()){ luaL_unref(vm, LUA_REGISTRYINDEX, this->functions[strkey]);} this->functions[strkey] = luaL_ref(vm, LUA_REGISTRYINDEX); return true;} OK, when the code is executed... Program received signal SIGSEGV, Segmentation fault. 0x00007ffff6e6caa9 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::compare(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const () from /usr/lib/libstdc++.so.6 Seriously, what's wrong with my code? Thanks for the help. Edit 1: OK, I've tried the suggested fix and it still fails. I've tried directly inserting a string but it gives the same error. Let's see, the object is a bange::scene derived from bange::function. I create the object with lua_newuserdata: bange::scene *scene = static_cast<bange::scene *>(lua_newuserdata(vm, sizeof(bange::scene))); (...) scene = new (scene) bange::scene(width, height, nlayers, vm); I need this for Lua garbage collection. Now the access to bange::function::Add from Lua: static int bangefunction_Add(lua_State *vm){ //userdata, function bange::function *function = reinterpret_cast<bange::function *>(lua_touserdata(vm, 1)); cout << "object with bange::function: " << function << endl; bool added = function->bange::function::Add(vm); lua_pushboolean(vm, static_cast<int>(added)); return 1; } The userdata is the bange::scene stored in Lua. Knowing that the userdata is the scene, the object's address is in fact the same as when I created the scene earlier. I need the reinterpret_cast, and then I call the method. The pointer "this" is still the same address inside the method. Solved: I did a small test in the bange::function constructor which works without problems. bange::function::function(){ string test("test"); this->functions["test"] = 2; } I finally noticed that the problem is bange::function *function = reinterpret_cast<bange::function *>(lua_touserdata(vm, 1)); because the object is a bange::scene and not a bange::function (I admit it: pointer corruption), and this seems more like a code design issue. So this, in a way, is solved. Thanks everybody.

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() } ); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to product proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.Net into Web API and making that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of now at the last minute integrating with it especially given that Json.Net has a similar set of lower level JSON objects JsonValue/JsonObject etc. which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official endorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX, ASP.NET

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically making it much easier than previous versions to take advantage of compression that’s built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available: Static Compression Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn’t changed. The overhead for this is minimal and should be aggressively enabled. Dynamic Compression Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching. How Compression is configured Compression in IIS 7.x  is configured with two .config file elements in the <system.WebServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config forlder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression): <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> </configuration> You can find documentation on the httpCompression and urlCompression keys here respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types which looks at returned Content-Type headers in HTTP responses. 
For example, I added the application/json to mime type to my dynamic compression types above to allow that content to be compressed as well since I have quite a bit of AJAX content that gets sent to the client. The UrlCompression Element – Enables and Disables Compression The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor whether compression is enabled, so it’s a good idea to be explcit! The default for doDynamicCompression='false”, but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting: <urlCompression doDynamicCompression="true" /> in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression=”true” in web.config!!! How Static Compression works Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at: %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\ If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. 
You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Fortsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server: dynamicCompressionDisableCpuUsage dynamicCompressionEnableCpuUsage By default these are set to 90 and 50 which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power from compression when available and falling off when the threshold has been hit. It’s a good way some of that extra CPU power on your big servers to use when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90% maybe around 70% to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate these CPU readings that IIS uses are as Cpu usage on Web Servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site  right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. 
But once you know the basic settings I've described here and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic Compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression. Related Content: Code based ASP.NET GZip Caveats, HttpWebRequest and GZip Responses. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET

    Read the article

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables’ mother says you should always sanitise your data input. Except that I think she’s wrong. The SQL Injection aspect is for another post, where I’ll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports. Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports. Let’s look at the data input sanitisation aspect – except that I’m going to call it ‘parameter validation’. I’m talking about code that looks like this: create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     /* First check that @eomdate is a valid date */     if isdate(@eomdate) != 1     begin         select 'Please enter a valid date' as ErrorMessage;         return;     end     /* Then check that time has passed since @eomdate */     if datediff(day,@eomdate,sysdatetime()) < 5     begin         select 'Sorry - EOM is not complete yet' as ErrorMessage;         return;     end         /* If those checks have succeeded, return the data */     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where OrderDate >= dateadd(month,-1,@eomdate)         and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don’t like it. The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. But let’s consider the first one of these problems for starters. I’ll get to the second one in a moment. If you read Jes Borland (@grrl_geek)’s recent post about returning multiple result sets in Reporting Services, you’ll be aware that Reporting Services doesn’t support multiple results sets from a single query. And when it says ‘single query’, it includes ‘stored procedure call’. It’ll only handle the first result set that comes back. But that’s okay – we have RETURN statements, so our stored procedure will only ever return a single result set.  Sometimes that result set might contain a single field called ErrorMessage, but it’s still only one result set. Except that it’s not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs to have a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn’t want to have to run the whole thing. That could take ages. (Maybe it’s seen some of the stored procedures I’ve had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to be able to figure out the shape of the output, without actually running the contents. It’s very useful, you might think. 
set fmtonly on exec dbo.GetMonthSummaryPerSalesPerson '20030401'; set fmtonly off Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. But with FMTONLY turned on, those rows don’t come back. But what I do get back hurts Reporting Services. It doesn’t run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I’ve made sure that the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them. It would have been much better to push these checks down into the query itself. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     order by SalesPersonID; end Now if we run it with FMTONLY turned on, we get the single result set back. But let’s consider the execution plan when we pass in an invalid date. First let’s look at one that returns data. I’ve got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. …compared to one that uses a future date: You might notice that the estimated costs are similar – the Index Seek is still 28%, the Sort is still 71%. But the size of that arrow coming out of the Index Seek is a whole bunch smaller. The coolest thing here is what’s going on with that Index Seek. Let’s look at some of the properties of it. Glance down it with me… Estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That’s right – it doesn’t run. So much for reading plans right-to-left... The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn’t call anything further down the plan (to the right) if the predicate evaluates to false. Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message. alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) as begin     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /*Placeholder: */ '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     /* Now include the error messages */     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5     order by SalesPersonID; end But still I don’t like it, because it’s now a stored procedure with a single query. 
And I don’t like stored procedures that should be functions. That’s right – I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes’ night has already passed this year, so I think you miss out. (And I’m not going to remind you about when the PASS Summit is in 2012.) create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime) returns table as return (     select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage     from Sales.SalesOrderHeader     where     /* Make sure that @eomdate is valid */         isdate(@eomdate) = 1     /* And that it's sufficiently past */     and datediff(day,@eomdate,sysdatetime()) >= 5     /* And now use it in the filter as appropriate */     and OrderDate >= dateadd(month,-1,@eomdate)     and OrderDate < @eomdate     group by SalesPersonID     union all     select 0, 0, 0, 'Please enter a valid date' as ErrorMessage     where isdate(@eomdate) != 1     union all     select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage     where datediff(day,@eomdate,sysdatetime()) < 5 ); We’ve had to lose the ORDER BY – but that’s fine, as that’s a client thing anyway. We can have our reports leverage this stored query still, but we’re recognising that it’s a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data. And we get so much more flexibility with this. If you haven’t seen the simplification stuff that I’ve preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You’ll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn’t follow the link from a few paragraphs ago. So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like: select SalesPersonID from dbo.GetMonthSummaryPerSalesPerson(@eomonth) order by SalesPersonID; This doesn’t need to look up the TotalDue field, which makes a simpler plan. select * from dbo.GetMonthSummaryPerSalesPerson(@eomonth) where SalesPersonID is not null order by SalesPersonID; This one can avoid having to do the work on the rows that don’t have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query – and isn’t that one of the main points of modularisation? Stored procedures in Reporting Services are just a bit limited for my liking. They’re useful in plenty of ways, but if you insist on using stored procedures all the time rather that queries that use functions – that’s rubbish. @rob_farley

    Read the article

  • The Changing Face of PASS

    - by Bill Graziano
    I’m starting my sixth year on the PASS Board.  I served two years as the Program Director, two years as the Vice-President of Marketing and I’m starting my second year as the Executive Vice-President of Finance.  There’s a pretty good chance that if PASS has done something you don’t like or is doing something you don’t like, that I’m involved in one way or another. Andy Leonard asked in a comment on his blog if the Board had ever reversed itself based on community input.  He asserted that it hadn’t.  I disagree.  I’m not going to try and list all the changes we make inside portfolios based on feedback from and meetings with the community.  I’m going to focus on major governance issues since I was elected to the Board. Management Company The first big change was our management company.  Our old management company had a standard approach to running a non-profit.  It worked well when PASS was launched.  Having a ready-made structure and process to run the organization enabled the organization to grow quickly.  As time went on we were limited in some of the things we wanted to do.  The more involved you were with PASS, the more you saw these limitations.  Key volunteers were regularly providing feedback that they wanted certain changes that were difficult for us to accomplish.  The Board at that time wanted changes that were difficult or impossible to accomplish under that structure. This was not a simple change.  Imagine a $2.5 million dollar company letting all its employees go on a Friday and starting with a new staff on Monday.  We also had a very narrow window to accomplish that so that we wouldn’t affect the Summit – our only source of revenue.  We spent the year after the change rebuilding processes and putting on the Summit in Denver.  That’s a concrete example of a huge change that PASS made to better serve its members.  And it was a change that many in the community were telling us we needed to make. Financials We heard regularly from our members that they wanted our financials posted.  Today on our web site you can find audited financials going back to 2004.  We publish our budget at the start of each year.  If you ask a question about the financials on the PASS site I do my best to answer it.  I’m also trying to do a better job answering financial questions posted in other locations.  (And yes, I know I owe a few of you some blog posts.) That’s another concrete example of a change that our members asked for that the Board agreed was a good decision. Minutes When I started on the Board the meeting minutes were very limited.  The minutes from a two day Board meeting might fit on one page.  I think we did the bare minimum we were legally required to do.  Today Board meeting minutes run from 5 to 12 pages and go into incredible detail on what we talk about.  There are certain topics that are NDA but where possible we try to list the topic we discussed but that the actual discussion was under NDA.  We also publish the agenda of Board meetings ahead of time. This is another specific example where input from the community influenced the decision.  It was certainly easier to have limited minutes but I think the extra effort helps our members understand what’s going on. Board Q&A At the 2009 Summit the Board held its first public Q&A with our members.  We’d always been available individually to answer questions.  There’s a benefit to getting us all in one room and asking the really hard questions to watch us squirm.  We learn what questions we don’t have good answers for.  
We get to see how many people in the crowd look interested in the various questions and answers. I don’t recall the genesis of how this came about. I’m fairly certain there was some community pressure though. Board Votes Until last November, the Board only reported the vote totals and not how individual Board members voted. That was one of the topics at a great lunch I had with Tim Mitchell and Kendal van Dyke at the Summit. That was also the topic of the first question asked at the Board Q&A by Kendal. Kendal expressed his opposition to anonymous votes clearly and passionately and without trying to paint anyone into a corner. Less than 24 hours later the PASS Board voted to make individual votes public unless the topic was under NDA. That’s another area where the Board decided to change based on feedback from our members. Summit Location While this isn’t actually a governance issue it is one of the more public decisions we make that has taken some public criticism. There is a significant portion of our members that want the Summit near them. There is a significant portion of our members that like the Summit in Seattle. There is a significant portion of our members that think it should move around the country. I was one that felt strongly that there were significant, tangible benefits to our attendees to being in Seattle every year. I’m also one that has been swayed by some very compelling arguments that we need to have at least one outside Seattle and then revisit the decision. I can’t tell you how the Board will vote but I know the opinion of our members weighs heavily on the decision. Elections And that brings us to the grand-daddy of all governance issues. My thesis for this blog post is that the PASS Board has implemented policy changes in response to member feedback. It isn’t to defend or criticize our election process. It’s just to say that it has been undergoing continuous change since I’ve been on the Board. I ran for the Board in the fall of 2005. I don’t know much about what happened before then. I was actively volunteering for PASS for four years prior to that as a chapter leader and on the program committee. I don’t recall any complaints about elections but that doesn’t mean they didn’t occur. The questions from the Nominating Committee (NomCom) were trivial and the selection process rudimentary (for example, "Tell us about your accomplishments"). I don’t even remember who I ran against or how many other people ran. I ran for the VP of Marketing in the fall of 2007. I don’t recall any significant changes the Board made in the election process for that election. I think a lot of the changes in 2007 came from us asking the management company to work on the election process. I was expecting a similar set of puff ball questions from my previous election. Boy, was I in for a shock. The NomCom had found a much better set of questions and really made the interview portion difficult. The questions were much more behavioral in nature. I’d already written about my vision for PASS and my goals. They wanted to know how I handled adversity, how I handled criticism, how I handled conflict, how I handled troublesome volunteers, how I motivated people and how I responded to motivation. And many, many other things. They grilled me for over an hour. I’ve done a fair bit of technical sales in my time. I feel I speak well under pressure addressing pointed questions. This interview intentionally put me under pressure.
In addition to wanting to know about my interpersonal skills, my work experience, my volunteer experience and my supervisory experience, they wanted to see how I’d do under pressure. They wanted to see who would respond under pressure and who wouldn’t. It was a bit of a shock. That was the first big change I remember in the election process. I know there were other improvements around the process but none of them stick in my mind quite like the unexpected hour-long grilling. The next big change I remember was after the 2009 elections. Andy Warren was unhappy with the election process and wanted to make some changes. He worked with Hannes at HQ and they came up with a better set of processes. I think Andy moved PASS in the right direction. Nonetheless, after the 2010 election even more people were very publicly clamoring for changes to our election process. In August of 2010 we had a choice to make. There were numerous bloggers criticizing the Board and our upcoming election. The easy change would be to announce that we were changing the process in a way that would satisfy our critics. I believe that a knee-jerk response to criticism is seldom correct. Instead the Board spent August and September and October and November listening to the community. I visited two SQLSaturdays and asked questions of everyone I could. I attended chapter meetings and asked questions of as many people as they’d let me. At Summit I made it a point to introduce myself to strangers and ask them about the election. At every breakfast I’d sit down at a table full of strangers and ask about the election. I’m happy to say that I left most tables arguing about the election. Most days I managed to get 2 or 3 breakfasts in. I spent less time talking to people that had already written about the election. They were already expressing their opinion. I wanted to talk to people that hadn’t spoken up. I wanted to know what the silent majority thought. The Board all attended the Q&A session where our members expressed their concerns about a variety of issues including the election. The PASS Board also chose to create the Election Review Committee. We wanted people from the community that had been involved with PASS to look at our election process with fresh eyes while listening to what the community had to say and give us some advice on how we could improve the process. I’m a part of this, as is Andy Warren. None of the other members are on the Board. I’ve sat in numerous calls and interviews with this group and attended an open meeting at the Summit. We asked anyone that wanted to discuss the election to come speak with us. The ERC held an open meeting at the Summit and invited anyone to attend. There are forums on the ERC web site where we’ve invited people to participate. The ERC has reached out to key people involved in recent elections. The years that I haven’t mentioned also saw minor improvements in the election process. Off the top of my head I don’t recall what exact changes were made each year. Specifically since the 2010 election we’ve gone out of our way to seek input from the community about the process. I’m not sure what more we could have done to invite feedback from the community. I think to say that we haven’t "fixed" the election process isn’t a fair criticism at this time. We haven’t rushed any changes through the process.
If you don’t see any changes in our election process in July or August then I think it’s fair to criticize us for ignoring the community or ask for an explanation for what we’ve done. In Summary Andy’s main point was that the PASS Board hasn’t changed in response to our members wishes.  I think I’ve shown that time and time again the PASS Board has changed in response to what our members want.  There are only two outstanding issues: Summit location and elections.  The 2013 Summit location hasn’t been decided yet.  Our work on the elections is also in progress.  And at every step in the election review we’ve gone out of our way to listen to the community and incorporate their feedback on the process. I also hope I’m not encouraging everyone that wants some change in the organization to organize a “blog rush” against the Board.  We take public suggestions very seriously but we also take the time to evaluate those suggestions and learn what the rest of our members think and make a measured decision.

    Read the article

  • Job conditions conflicting with personal principles on software-development - how much is too much?

    - by Baelnorn
Sorry for the incoming wall'o'text (and for my probably bad English) but I just need to get this off my chest somehow. I also accept that this question will probably be closed as subjective and argumentative, but I need to know one thing: "how much BS are programmers supposed to put up with before breaking?" My background I'm 27 years old and have a B.Sc. in Computer engineering with a graduation grade of 1.8 from a university of applied sciences. I went looking for a job right after graduation. I got three offers right away, with two offers paying vastly more than the last one, but that last one seemed more interesting so I went for that. My situation I've been working for the company for 17 months now, but it feels like a drag more and more each day. Primarily because the company (which has only 5 other developers besides me, and of these I work with 4) turned out to be pretty much the antithesis of what I expected (and was taught in university) from a modern software company. I agreed to accept less than half of the usual payment appropriate for my qualification for the first year because I was promised a trainee program. However, the trainee program turned out to be "here you've got a computer, there are some links on the stuff we use, now do what your colleagues tell you". Further, during my whole time there (trainee or not) I haven't been given the grace of even a single code review - apparently nobody's interested in my work as long as it "just works". I was told in the job interview that "Microsoft technology played a central role in the company", yet I've been slowly eroding my cognitive functions with Flex/Actionscript/Cairngorm ever since I started (despite having applied as a C#/.NET developer). Actually, the company's primary projects are based on Java/XSLT and Flex/Actionscript (with some SAP/ABAP stuff here and there, but I'm not involved in that) and they'd been working on these since before I even applied. Having had no experience with that particular technology, nor the framework, nor the field (RIA), nor with developing business-scale applications, I obviously made several mistakes. However, my boss told me that he let me make those mistakes (which ate at least 2 months of development time on their own) on purpose, to provide some "learning experience". Even when I was still a trainee I was already tasked with working on a business-critical application. On my own. Without supervision. Without code reviews. My boss thinks agile methods are a waste of time/money and deems putting more than one developer on any project inefficient. Documentation is not necessary, and each developer should only document what he himself needs for his work. Recently he wanted us to do bug tracking with Excel and email instead of using an already existing Bugzilla, overriding a unanimous decision made by all developers and testers involved in the process - only after another senior developer had another hour-long private discussion with him did he agree to let us use the bugtracker. Project management is basically not present; there are only a few Excel sheets floating around where the senior developer lists some things (not all, mind you) with time estimates ranging from days to months, trying to at least somehow organize the whole mess. A development process is also basically not present; each developer just works on his own, however he wants. There are not even coding conventions in the company.
Testing is done manually with a single tester (sometimes two testers) per project, because automated testing wasn't given the least thought when the whole project was started. I guess it's not a big surprise when I say that each developer also has his own share of hundreds of hours of overtime (which are, of course, unpaid). Each developer is tasked with working on his own project(s), which in turn leads to very extensive knowledge monopolization - if one developer were to have an accident or become ill, there would be absolutely no one who could even hope to do his work. Considering that each developer has his own business-critical application to work on, I guess that's a pretty bad situation. I've been trying to change things for the better. I tried to introduce a development process, but my first attempt was pretty much shot down by my boss with "I don't want to discuss agile methods". After that I put together a process that at least resembled how most of the developers were already working, and then included stuff like automated (or at least organized) testing, coding conventions, etc. However, this was also shot down because it wasn't "simple" enough to be shown on a business slide (actually, I wasn't even given the 15 minutes I'd have needed to present the process in the meeting). My problem I can't stand working there any longer. Seriously, I'm considering resigning on Monday, which still leaves me with 3 months to work there due to the notice period. My primary goal since I started studying computer science has been to be a good computer scientist, working with modern technologies and adhering to modern and proven principles and methods. However, the company I'm working for seems to make that impossible. Some days I feel as if I was living in a perverted real-life version of the Dilbert comics. My question Am I overreacting? Is this the reality each graduate from university has to face? Should I betray my sound principles and just accept these working conditions? Or should I gtfo of there? What's the opinion of other developers on this matter? Would you put up with all that stuff?

    Read the article

  • What is stopping data flow with .NET 3.5 asynchronous System.Net.Sockets.Socket?

    - by TonyG
I have a .NET 3.5 client/server socket interface using the asynchronous methods. The client connects to the server and the connection should remain open until the app terminates. The protocol consists of the following pattern:

send stx
receive ack
send data1
receive ack
send data2
(repeat 5-6 while more data)
receive ack
send etx

So a single transaction with two datablocks as above would consist of 4 sends from the client. After sending etx the client simply waits for more data to send out, then begins the next transmission with stx. I do not want to break the connection between individual exchanges or after each stx/data/etx payload. Right now, after connection, the client can send the first stx, and get a single ack, but I can't put more data onto the wire after that. Neither side disconnects, the socket is still intact. The client code is seriously abbreviated as follows - I'm following the pattern commonly available in online code samples.

private void SendReceive(string data)
{
    // ...
    SocketAsyncEventArgs completeArgs;
    completeArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnSend);
    clientSocket.SendAsync(completeArgs);
    // two AutoResetEvents, one for send, one for receive
    if ( !AutoResetEvent.WaitAll(autoSendReceiveEvents , -1) )
        Log("failed");
    else
        Log("success");
    // ...
}

private void OnSend( object sender , SocketAsyncEventArgs e )
{
    // ...
    Socket s = e.UserToken as Socket;
    byte[] receiveBuffer = new byte[ 4096 ];
    e.SetBuffer(receiveBuffer , 0 , receiveBuffer.Length);
    e.Completed += new EventHandler<SocketAsyncEventArgs>(OnReceive);
    s.ReceiveAsync(e);
    // ...
}

private void OnReceive( object sender , SocketAsyncEventArgs e )
{
    // ...
    if ( e.BytesTransferred > 0 )
    {
        Int32 bytesTransferred = e.BytesTransferred;
        String received = Encoding.ASCII.GetString(e.Buffer , e.Offset , bytesTransferred);
        dataReceived += received;
    }
    autoSendReceiveEvents[ SendOperation ].Set();    // could be moved elsewhere
    autoSendReceiveEvents[ ReceiveOperation ].Set(); // releases mutexes
}

The code on the server is very similar except that it receives first and then sends a response - the server is not doing anything (that I can tell) to modify the connection after it sends a response. The problem is that the second time I hit SendReceive in the client, the connection is already in a weird state. Do I need to do something in the client to preserve the SocketAsyncEventArgs, and re-use the same object for the lifetime of the socket/connection? I'm not sure which eventargs object should hang around during the life of the connection or a given exchange. Do I need to do something, or Not do something in the server to ensure it continues to Receive data? The server setup and response processing looks like this:

void Start()
{
    // ...
    listenSocket.Bind(...);
    listenSocket.Listen(0);
    StartAccept(null);   // note accept as soon as we start. OK?
    mutex.WaitOne();
}

void StartAccept(SocketAsyncEventArgs acceptEventArg)
{
    if ( acceptEventArg == null )
    {
        acceptEventArg = new SocketAsyncEventArgs();
        acceptEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(OnAcceptCompleted);
    }
    Boolean willRaiseEvent = this.listenSocket.AcceptAsync(acceptEventArg);
    if ( !willRaiseEvent )
        ProcessAccept(acceptEventArg);
    // ...
}

private void OnAcceptCompleted( object sender , SocketAsyncEventArgs e )
{
    ProcessAccept(e);
}

private void ProcessAccept( SocketAsyncEventArgs e )
{
    // ...
    SocketAsyncEventArgs readEventArgs = new SocketAsyncEventArgs();
    readEventArgs.SetBuffer(dataBuffer , 0 , Int16.MaxValue);
    readEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(OnIOCompleted);
    readEventArgs.UserToken = e.AcceptSocket;
    dataReceived = "";   // note server is degraded for single client/thread use

    // As soon as the client is connected, post a receive to the connection.
    Boolean willRaiseEvent = e.AcceptSocket.ReceiveAsync(readEventArgs);
    if ( !willRaiseEvent )
        this.ProcessReceive(readEventArgs);

    // Accept the next connection request.
    this.StartAccept(e);
}

private void OnIOCompleted( object sender , SocketAsyncEventArgs e )
{
    // switch ( e.LastOperation )
    case SocketAsyncOperation.Receive:
        ProcessReceive(e);   // similar to client code
        // operate on dataReceived here
    case SocketAsyncOperation.Send:
        ProcessSend(e);      // similar to client code
}

// execute this when a data has been processed into a response (ack, etc)
private SendResponseToClient(string response)
{
    // create buffer with response
    // currentEventArgs has class scope and is re-used
    currentEventArgs.SetBuffer(sendBuffer , 0 , sendBuffer.Length);
    Boolean willRaiseEvent = currentClient.SendAsync(currentEventArgs);
    if ( !willRaiseEvent )
        ProcessSend(currentEventArgs);
}

A .NET trace shows the following when sending ABC\r\n:

Socket#7588182::SendAsync()
Socket#7588182::SendAsync(True#1)
Data from Socket#7588182::FinishOperation(SendAsync)
00000000 : 41 42 43 0D 0A
Socket#7588182::ReceiveAsync()
Exiting Socket#7588182::ReceiveAsync() - True#1

And it stops there. It looks just like the first send from the client but the server shows no activity. I think that could be info overload for now but I'll be happy to provide more details as required. Thanks!

    Read the article

  • Cobol: science and fiction

    - by user847
There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made. Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 60ies and 70ies. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory. Question 1: How many mainframes have actually been sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand results in a couple of tens of thousands of machines (and that is pretty optimistic)! Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus instruction (2, maybe 3 bytes for that?). If you factor in the operating system and data for the program, how many lines of code would have fit into the main memory of half a megabyte? Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here? Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this!
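As a rough check of that arithmetic (a sketch only; it assumes people write code every single day of the year, which is roughly what the 55 million figure implies):

\[
\frac{200 \times 10^{9}\ \text{lines}}{10\ \text{lines/day}} = 2 \times 10^{10}\ \text{man-days}, \qquad
\frac{2 \times 10^{10}\ \text{man-days}}{365\ \text{days/year}} \approx 5.5 \times 10^{7}\ \text{man-years} \approx 55\ \text{million man-years.}
\]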
In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it? Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code has been written exclusively using Cobol. However, all other things being equal, I wonder if IBM marketing had really pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada? I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.

    Read the article

< Previous Page | 22 23 24 25 26 27  | Next Page >