Search Results

Search found 3574 results on 143 pages for 'difficult'.


  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I’m a pot stirrer… a rabble rouser *rabble rabble*. jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE “it depends”. But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let’s see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now before reading the rest of the blog, which is great! Iron sharpens iron… These discussions hopefully open us up to understanding the entire process better and thinking about things in a different way.

    You should not be writing jQuery in SharePoint if you are not a developer…

    Let’s start off with my most polarizing and rant-filled portion of the blog post. If you don’t know what you are doing, or you don’t have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don’t know what you are doing you can do some NASTY things! One of the best stories I’ve heard about this is from my good friend John Ferringer (@ferringer). John tells this story during the Mythbusters session we do together. One of his clients was undergoing a Denial of Service attack and they couldn’t figure out what was going on! After much searching they found that some genius jQuery developer wrote some code for an image rotator, but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done! Now, I’m NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don’t know what you are doing, there are very REAL consequences that can have a major impact on your organization AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.

    Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and “runs faster” (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won’t deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer you just aren’t going to take advantage of all of that and use it correctly. Ahhh… but there is hope! There are a lot of jQuery resources out there to help you learn and learn well! There are many experts on the subject who will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to “jQuery Fundamentals”. I just glanced through it and this may be a great primer for you aspiring jQuery devs.
    Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

    You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

    I’ve said it once and I’ll say it over and over until you understand. jQuery is executed on the client’s computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can’t give the user a good experience they will not use the site. So, if jQuery is causing the user to have a bad experience, don’t use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user’s computer will be. You just can’t guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience:

    Avoid screen scraping. This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time-consuming and client-intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

    Fine-tune your DOM searches. A lot of time can be eaten up just searching the DOM and looping over table rows that you don’t need. Write better jQuery to only loop through the table rows you need, or only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for instead of walking the whole DOM over and over again (see the sketch after this list).

    Write better jQuery. Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also applies when using arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, we developers had to take performance into account in everything we did. It’s a lost art that really needs to be used here.
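    To make the DOM-search tip concrete, here is a minimal sketch; the #tasksGrid ID, the status-cell class and the “Overdue” value are made up for illustration. Jumping straight to a known container by ID and caching the wrapped elements is typically far cheaper than re-scanning the whole page:

        // Slow: scans every matching <td> in the entire page on every call
        $("td.status-cell").each(function () {
            if ($(this).text() === "Overdue") {
                $(this).addClass("highlight");
            }
        });

        // Faster: an ID lookup is a single hash lookup, then search only inside it
        var $grid = $("#tasksGrid");
        $grid.find("td.status-cell").each(function () {
            var $cell = $(this);          // cache the wrapped element instead of re-wrapping
            if ($cell.text() === "Overdue") {
                $cell.addClass("highlight");
            }
        });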
    You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

    Developer: “Awesome… you can easily call SharePoint’s web services to retrieve and write data using SPServices!”

    Administrator: “Crap! You can easily call SharePoint’s web services to retrieve and write data using SPServices!”

    SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spiderman reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. These problems, thankfully, are not difficult to rectify if you are careful:

    Limit the list data retrieved. Use CAML to reduce the number of rows returned and limit the fields returned using ViewFields (see the sketch after this list). You should definitely be doing this regardless. If you aren’t, I hope your admin thumps you upside the head.

    Batch large list updates. You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don’t know if there is a magic number for best performance; it really depends on how much data you are sending back more than on the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

    Delay web service calls when possible. One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don’t load the data until it’s needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?
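    To illustrate the first tip, here is a minimal, hypothetical SPServices sketch (it assumes the SPServices plugin is loaded; the list name, field names, filter value and row limit are all made up). The ViewFields, the CAML query and the row limit keep the response small instead of pulling back every field of every row:

        $().SPServices({
            operation: "GetListItems",
            listName: "Tasks",
            // Only ask for the columns the page actually uses
            CAMLViewFields: "<ViewFields>" +
                            "<FieldRef Name='Title' />" +
                            "<FieldRef Name='Status' />" +
                            "</ViewFields>",
            // Only ask for the rows the page actually needs
            CAMLQuery: "<Query><Where>" +
                       "<Eq><FieldRef Name='Status' /><Value Type='Text'>Open</Value></Eq>" +
                       "</Where></Query>",
            CAMLRowLimit: 50,
            completefunc: function (xData, Status) {
                // Process only the rows that came back
                $(xData.responseXML).SPFilterNode("z:row").each(function () {
                    console.log($(this).attr("ows_Title"));
                });
            }
        });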
    You should not be writing jQuery in SharePoint if there is a better solution…

    jQuery is NOT the silver bullet in SharePoint, it is not the answer to every question, it is just another tool in the developer’s toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

    When you can’t get away from performance problems. Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP or do I have to crack open Visual Studio?

    When you need to do something that jQuery can’t do. There are lots of things you can’t do in jQuery, like elevating privileges, event handlers, workflows, or interacting with back-end systems that have no web service interface. It just can’t do everything.

    When it can be done faster and more efficiently another way. Why are you spending time writing jQuery to do something a DataViewWebPart could do in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don’t have the option, okay. BUT if you do have the option, don’t reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

    You should not be using jQuery in SharePoint if you are a moron…

    Let’s finish up the blog on a high note… Yes, it’s true, I sometimes type things just to get a reaction… I guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So… don’t be that guy! Another good buddy of mine who works for Microsoft told me, “I loved jQuery in SharePoint… until I had to support it.” He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! What do you expect to happen when you are pushing that much data over the wire and making that many web service calls at once?! It’s one thing to write that kind of code and accept that it’s just going to take a while; it’s COMPLETELY another issue to do that and then complain when it’s not lightning fast! Someone’s gene pool needs some chlorine.

    So, I think this is a nice summary of the blog… DON’T be that guy… don’t be a moron. How can you stop yourself from being a moron? Ah, glad you asked; here are some tips:

    Think. Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

    Search. What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

    Plan. Write good jQuery. Limit calculations and data sent over the wire, and don’t reinvent the wheel when possible.

    Test. Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor those server resources to see the impact there as well.

    Ask the experts. As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren’t doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren’t doing something stupid.

    Repeat. So, when you think you have the best solution possible, repeat the steps above just to be safe.

    Conclusion

    jQuery is an awesome tool and has come in handy on many occasions. I’m even teaching a 1/2 day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston if you want to berate me in person. However, it’s only as awesome as the developer behind the keyboard. It IS development and has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let’s face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time as some of the more vocal .NET evangelists as well as some very high profile start-ups ( ie. Twitter ) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it.  This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If its not MVC, it’s crap”. Now, this is a rather ignorant position to take as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. 
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision, that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in a less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product which has been under active development for more than 18 months and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( ie. PHP or Ruby ). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads, and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. 
And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume; whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like JQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, that all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack” coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The  “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API, even within the same web project. 
Basically the message is utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and Sharepoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from the higher level to see that it really makes no sense. MVC is a server-side page compositing framework; whereas, Web API promotes client-side page compositing with a heavy focus on web services. In order to make MVC work well with Web API, would require a complete rewrite of MVC and at the end of the day, there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time provide a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Service Framework which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.
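    For readers who have not used it, here is a minimal, hypothetical sketch of the convention-based, HTTP-centric style the post attributes to Web API; the Product type, the controller name and the sample data are made up. With the default route (api/{controller}/{id}), a GET request to /api/products is dispatched to the Get() method purely by naming convention, with no explicit wiring:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Deriving from ApiController and following the Get/Post/Put/Delete naming
        // convention is enough for the framework to map standard HTTP verbs here.
        public class ProductsController : ApiController
        {
            private static readonly List<Product> Products = new List<Product>
            {
                new Product { Id = 1, Name = "Widget" },
                new Product { Id = 2, Name = "Gadget" }
            };

            // GET api/products
            public IEnumerable<Product> Get()
            {
                return Products;
            }

            // GET api/products/1
            public Product Get(int id)
            {
                return Products.FirstOrDefault(p => p.Id == id);
            }
        }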

    Read the article

  • CodePlex Daily Summary for Sunday, February 13, 2011

    CodePlex Daily Summary for Sunday, February 13, 2011Popular ReleasesTV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1 including the broken per-desktop backgrounds Further improves the speed of switching desktops A few UI performance improvements Added donations linksNuGet: NuGet 1.1: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit this our issues listEnhSim: EnhSim 2.4.0: 2.4.0This release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd...Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as guide...PDF Rider: PDF Rider 0.5.1: Changes from the previous version * Use dynamic layout to better fit text in other languages * Includes French and Spanish localizations Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructionsChoose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (reccomended) 2. Down...Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further Documentation on the MVVM part can be found on the Blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). 
Please post all issues and suggestions in the issue tracker.SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box There are 2 downloads for this release SLK-1.5-2010.zip for SharePoint 2010 SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007)Facebook C# SDK: 5.0.3 (BETA): This is fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application Facebook C# SDK v5 Beta Internals Facebook C# SDK V5.0.0 (BETA) Released We have spend time trying ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file...WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via MercurialyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved Supports YouTube's new layout (january 2011) Internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details. Webdeploy package sha1 checksum: 28785b7248052465ea0738a7775e8e8744d84c27fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroAutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! 
News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...New ProjectsAbstract | .NET DDD abstraction for infra-structure (Data, Blobs, Queues): In the last few years we have seen many tools abstract access to infra-structures. They are all very different - what makes it difficult for you to move from Azure or to Azure. Abstract makes migration easier by standardising access to these infra-structures.Apex APRS: Apex APRS is a new APRS client application that is unlike any other. Key Features: Online and offline-cached map viewing from multiple popular sources Fast, simple, intuitive & powerful user interface Customizable Notification System: Customizable Notification SystemDaniel Singleton for C++: An elegant solution for C++ singletons using dependency declaration to control lifetime. One object created during any execution, lazy-init, thread safety... nice and compact.Deduplicator: Deduplicator helps to organize your file system. Create one folder organized by choice containing unique files. To be used for photo's, mp3's or any other binary format. Deduplicator is released yet, user interface is limited and some hardcoding is still in placeDigitypon (ASP.NET MVC 3): Digitypon will be a new web application specialized to be used by those who want to set an e-newspaper or an e-magazine. 
The main difference among other CMSs is that Digitypon’s workflow is a virtualized way of how employees of printed matters (newspaperes, magazinews) work.EdgeJournalImporter: Import journal files written on the Entourage (Pocket) Edge into Microsoft OneNote 2007+FlatFileSerializer: Serialize and deserialize flat file records from and to self defined classes using Attributes.Google Chart Helper: Controls to insert Google Charts to your web application. No Javascript code to do. We do it for you !How to display records from MySQL 5.1 database in asp.net using VB.net or CSharp: How to display records from MySQl 5.1+ database in asp.net with vb.net or C# code.HTTP Filer: HTTP Filer is a utility that allow users to share files and documents over http protocol. This utility was designed especially for Windows Phone users to send files from computer to their phone easily without send emails with attachments or upload files to an internet server.ibamonitoring: Source code for the avian point-count data collection web site www.ibamonitoring.org.JoPack Ultra Light Packaging for large teams: JoPack is an opensource ultra light package management software – that is targeted for simplifying development with large teams sharing volatile assemblies across several solutions. Latest project source code can be found on project home site: http://code.google.com/p/jo-pack/ L-System Turtle Based Fractal Tool (L-Fractal Tool): A tool to help you play with L-System turtle graphic based fractal curves( http://en.wikipedia.org/wiki/L-system) This tool helps you look into some of the well known curves & lets you define new patterns & production rules to build your own. Have a fun-fractal day !mailer: mailer is a application to mail. It's developed in Python.NJamb: A C# DSL for more rigorous tests: NJamb is a C# syntax for tests and DDD specifications. It makes them more readable, faster to write, and more rigorous. Its Linq-style expressions can assert preconditions and postconditions. IntelliSense makes the syntax almost foolproof. And, it's designed to be extended.NUpdater: NUpdater makes it easier for .NET Framework developers to add auto-updating capability to their software. Putting together numerous patching capabilities, this library is an all-around updater. Developed in C# with CLS compliance (this library is fully compatible with Mono).Perihelia - The .NET & Silverlight Socket Project: Perihelia is an open-source socket framework. The framework includes (or will include) all the necessities you need to satisfy your networking needs. Windows and WPF applications are currently supported, and Silverlight applications will be supported soon.PLogger: PLogger is a light, fast configuration-less file appender logger build using a parallel pipeline architecture. It is much easier and faster to set up and use then Log4Net or the enterprise libraryQuasar: Quasar is a professional .Net utility library which adds sugar on .net framework.Sectors Game Engine: Sectors is a XNA-based 2.5D (Doom-like) game engine with console and scripting support for Windows.SharePoint 2010 Server-Side-Scanner WebPart - embDocumentInhalator: embDocumentInhalator makes it possible for SharePoint 2010 users to scan documents from scanners attached directly to the server. For developers it may help to see the relationship between the individual components required. 
SIAJUR: Projeto Web para controle de documentossvcutil2: svcutil2 generates Wcf client proxies from Wsdl2 documents.TemporalMemoryNetwork: TemporalMemoryNetwork is a research project exploring how dynamical systems can store and represent patterns that occur through time.WebDAV#: This project aims to implement WebDAV support for .NET, both for client software as well as software hosting their own WebDAV server. The project will start with the server portion. The project will be developed in C# 3.5 for .NET 3.5 and 4.0.

    Read the article

  • Running SSIS packages from C#

    - by Piotr Rodak
    Most developers and DBAs know about two ways of deploying packages: you can deploy them to the database server and run them using a SQL Server Agent job, or you can deploy the packages to the file system and run them using the dtexec.exe utility. Both approaches have their pros and cons. However, I would like to show you that there is a third way (sort of) that is often overlooked, and it can give you capabilities the ‘traditional’ approaches can’t. I have been working for a few years with applications that run packages from host applications implemented in .NET. As you know, SSIS provides a programming model that you can use to implement more flexible solutions. SSIS applications are usually thought to be batch-oriented, with a fairly rigid architecture and processing model, with fixed timeframes when the packages are executed to process data. It doesn’t have to be the case; you don’t have to limit yourself to a batch-oriented architecture. I have very good experiences with service-oriented architectures processing large amounts of data. These applications are more complex than what I would like to show here, but the principle stays the same: you can execute packages as a service, on an ad-hoc basis. You can also implement and schedule various signals, HTTP calls, file drops, time schedules, Tibco messages and others to run the packages. You can implement an event handler that will trigger execution of SSIS when a certain event occurs in a StreamInsight stream. This post is just a small example of how you can use the API and other features to create a service that can run SSIS packages on demand.

    I thought it might be a good idea to implement a RESTful service that would listen to requests and execute appropriate actions. As it turns out, it is trivial in C#. The application is implemented as a console application for ease of debugging and running. In reality, you might want to implement the application as a Windows service. To begin, you have to reference the System.ServiceModel.Web namespace and then add a few lines of code:

        Uri baseAddress = new Uri("http://localhost:8011/");

        WebServiceHost svcHost = new WebServiceHost(typeof(PackRunner), baseAddress);

        try
        {
            svcHost.Open();

            Console.WriteLine("Service is running");
            Console.WriteLine("Press enter to stop the service.");
            Console.ReadLine();

            svcHost.Close();
        }
        catch (CommunicationException cex)
        {
            Console.WriteLine("An exception occurred: {0}", cex.Message);
            svcHost.Abort();
        }

    The interesting lines are 3, 7 and 13. In line 3 you create a WebServiceHost object. In line 7 you start listening on the defined URL, and then in line 13 you shut down the service. As you have noticed, the WebServiceHost constructor accepts the type of an object (here: PackRunner) that will be instantiated as a singleton and subsequently used to process the requests. This is the class where you put your logic, but to tell WebServiceHost how to use it, the class must implement an interface which declares the methods to be used by the host. The interface itself must be decorated with the ServiceContract attribute.
        [ServiceContract]
        public interface IPackRunner
        {
            [OperationContract]
            [WebGet(UriTemplate = "runpack?package={name}")]
            string RunPackage1(string name);

            [OperationContract]
            [WebGet(UriTemplate = "runpackwithparams?package={name}&rows={rows}")]
            string RunPackage2(string name, int rows);
        }

    Each method that is going to be used by WebServiceHost has to have the OperationContract attribute, as well as a WebGet or WebInvoke attribute. A detailed discussion of the available options is outside the scope of this post. I also recommend using more descriptive names for your methods. Then, you have to provide the implementation of the interface:

        public class PackRunner : IPackRunner
        {
            ...

    There are two methods defined in this class. Since the full code is attached to the post, I will show only the more interesting method, RunPackage2.

        /// <summary>
        /// Runs package and sets some of its variables.
        /// </summary>
        /// <param name="name">Name of the package</param>
        /// <param name="rows">Number of rows to export</param>
        /// <returns></returns>
        public string RunPackage2(string name, int rows)
        {
            try
            {
                string pkgLocation = ConfigurationManager.AppSettings["PackagePath"];

                pkgLocation = Path.Combine(pkgLocation, name.Replace("\"", ""));

                Console.WriteLine();
                Console.WriteLine("Calling package {0} with parameter {1}.", name, rows);

                Application app = new Application();
                Package pkg = app.LoadPackage(pkgLocation, null);

                pkg.Variables["User::ExportRows"].Value = rows;
                DTSExecResult pkgResults = pkg.Execute();
                Console.WriteLine();
                Console.WriteLine(pkgResults.ToString());
                if (pkgResults == DTSExecResult.Failure)
                {
                    Console.WriteLine();
                    Console.WriteLine("Errors occured during execution of the package:");
                    foreach (DtsError er in pkg.Errors)
                        Console.WriteLine("{0}: {1}", er.ErrorCode, er.Description);
                    Console.WriteLine();
                    return "Errors occured during execution. Contact your support.";
                }

                Console.WriteLine();
                Console.WriteLine();
                return "OK";
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex);
                return ex.ToString();
            }
        }

    The method accepts the package name and the number of rows to export. The packages are deployed to the file system. The path to the packages is configured in the application configuration file. This way, you can implement multiple services on the same machine, provided you also configure the URL for each instance appropriately. To run a package, you have to reference the Microsoft.SqlServer.Dts.Runtime namespace. This namespace is implemented in Microsoft.SQLServer.ManagedDTS.dll, which in my case was installed in the folder “C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies”. Once you have done that, you can create an instance of Microsoft.SqlServer.Dts.Runtime.Application, as in line 18 of the above snippet. It may be a good idea to create the Application object in the constructor of the PackRunner class, to avoid the necessity of recreating it each time the service is invoked. Then, in line 19 you see that an instance of Microsoft.SqlServer.Dts.Runtime.Package is created. The LoadPackage method in its simplest form just takes the package file name as the first parameter. Before you run the package, you can set its variables to certain values.
    This is a great way of configuring your packages without all the hassle with dtsConfig files. In the above code sample, the variable “User::ExportRows” is set to the value of the method’s “rows” parameter. Eventually, you execute the package. The method doesn’t throw exceptions; you have to test the result of execution yourself. If the execution wasn’t successful, you can examine the collection of errors exposed by the package. These are the familiar errors you often see during development and debugging of the package. If you run the package from code, you have the opportunity to persist them or log them using your favourite logging framework. The package itself is very simple; it connects to my AdventureWorks database and saves the number of rows specified in the variable “User::ExportRows” to a file. You should know that before you run the package, you can change its connection strings, logging, events and much more. I have attached a solution with the test service, as well as a project with two test packages. To test the service, you have to run it and wait for the message saying that the host is started. Then, just type (or copy and paste) the command below into your browser:

        http://localhost:8011/runpackwithparams?package=%22ExportEmployees.dtsx%22&rows=12

    When everything works fine, and you have modified the package to point to your AdventureWorks database, you should see “OK” wrapped in XML. I stopped the database service to simulate an invalid connection string situation; the output of the request is different then, and the service console window shows more information. As you see, implementing a service-oriented ETL framework is not a very difficult task. You have the ability to configure the packages before you run them, and you can implement logging that is consistent with the rest of your system. In the applications I have worked with we also have resource monitoring and execution control. We don’t allow more than a certain number of packages to run simultaneously. This ensures we don’t strain the server and we use memory and CPUs efficiently. The attached zip file contains two projects. One is the package runner; it has to be executed with administrative privileges as it registers an HTTP namespace. The other project contains two simple packages. This is really a cool thing, you should check it out!
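    As a small addendum to the remark above about changing connection strings before execution, here is a minimal, hypothetical sketch using the same Microsoft.SqlServer.Dts.Runtime types; the package path, the connection manager name “AdventureWorks” and the connection string itself are made up for illustration:

        Application app = new Application();
        Package pkg = app.LoadPackage(@"C:\Packages\ExportEmployees.dtsx", null);

        // Point the package at a different server/database before running it
        pkg.Connections["AdventureWorks"].ConnectionString =
            "Data Source=MYSERVER;Initial Catalog=AdventureWorks;Provider=SQLNCLI10.1;Integrated Security=SSPI;";

        // Override a package variable, as shown in RunPackage2
        pkg.Variables["User::ExportRows"].Value = 12;

        DTSExecResult result = pkg.Execute();
        Console.WriteLine(result);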

    Read the article

  • Abstracting functionality

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/22/abstracting-functionality.aspxWhat is more important than data? Functionality. Yes, I strongly believe we should switch to a functionality over data mindset in programming. Or actually switch back to it. Focus on functionality Functionality once was at the core of software development. Back when algorithms were the first thing you heard about in CS classes. Sure, data structures, too, were important - but always from the point of view of algorithms. (Niklaus Wirth gave one of his books the title “Algorithms + Data Structures” instead of “Data Structures + Algorithms” for a reason.) The reason for the focus on functionality? Firstly, because software was and is about doing stuff. Secondly because sufficient performance was hard to achieve, and only thirdly memory efficiency. But then hardware became more powerful. That gave rise to a new mindset: object orientation. And with it functionality was devalued. Data took over its place as the most important aspect. Now discussions revolved around structures motivated by data relationships. (John Beidler gave his book the title “Data Structures and Algorithms: An Object Oriented Approach” instead of the other way around for a reason.) Sure, this data could be embellished with functionality. But nevertheless functionality was second. When you look at (domain) object models what you mostly find is (domain) data object models. The common object oriented approach is: data aka structure over functionality. This is true even for the most modern modeling approaches like Domain Driven Design. Look at the literature and what you find is recommendations on how to get data structures right: aggregates, entities, value objects. I´m not saying this is what object orientation was invented for. But I´m saying that´s what I happen to see across many teams now some 25 years after object orientation became mainstream through C++, Delphi, and Java. But why should we switch back? Because software development cannot become truly agile with a data focus. The reason for that lies in what customers need first: functionality, behavior, operations. To be clear, that´s not why software is built. The purpose of software is to be more efficient than the alternative. Money mainly is spent to get a certain level of quality (e.g. performance, scalability, security etc.). But without functionality being present, there is nothing to work on the quality of. What customers want is functionality of a certain quality. ASAP. And tomorrow new functionality needs to be added, existing functionality needs to be changed, and quality needs to be increased. No customer ever wanted data or structures. Of course data should be processed. Data is there, data gets generated, transformed, stored. But how the data is structured for this to happen efficiently is of no concern to the customer. Ask a customer (or user) whether she likes the data structured this way or that way. She´ll say, “I don´t care.” But ask a customer (or user) whether he likes the functionality and its quality this way or that way. He´ll say, “I like it” (or “I don´t like it”). Build software incrementally From this very natural focus of customers and users on functionality and its quality follows we should develop software incrementally. That´s what Agility is about. Deliver small increments quickly and often to get frequent feedback. 
That way less waste is produced, and learning can take place much easier (on the side of the customer as well as on the side of developers). An increment is some added functionality or quality of functionality.[1] So as it turns out, Agility is about functionality over whatever. But software developers’ thinking is still stuck in the object oriented mindset of whatever over functionality. Bummer. I guess that (at least partly) explains why Agility always hits a glass ceiling in projects. It´s a clash of mindsets, of cultures. Driving software development by demanding small increases in functionality runs against thinking about software as growing (data) structures sprinkled with functionality. (Excuse me, if this sounds a bit broad-brush. But you get my point.) The need for abstraction In the end there need to be data structures. Of course. Small and large ones. The phrase functionality over data does not deny that. It´s not functionality instead of data or something. It´s just over, i.e. functionality should be thought of first. It´s a tad more important. It´s what the customer wants. That´s why we need a way to design functionality. Small and large. We need to be able to think about functionality before implementing it. We need to be able to reason about it among team members. We need to be able to communicate our mental models of functionality not just by speaking about them, but also on paper. Otherwise reasoning about it does not scale. We learned thinking about functionality in the small using flow charts, Nassi-Shneiderman diagrams, pseudo code, or UML sequence diagrams. That´s nice and well. But it does not scale. You can use these tools to describe manageable algorithms. But it does not work for the functionality triggered by pressing the “1-Click Order” on an amazon product page for example. There are several reasons for that, I´d say. Firstly, the level of abstraction over code is negligible. It´s essentially non-existent. Drawing a flow chart or writing pseudo code or writing actual code is very, very much alike. All these tools are about control flow like code is.[2] In addition all tools are computationally complete. They are about logic which is expressions and especially control statements. Whatever you code in Java you can fully (!) describe using a flow chart. And then there is no data. They are about control flow and leave out the data altogether. Thus data mostly is assumed to be global. That´s shooting yourself in the foot, as I hope you agree. Even if it´s functionality over data that does not mean “don´t think about data”. Right to the contrary! Functionality only makes sense with regard to data. So data needs to be in the picture right from the start - but it must not dominate the thinking. The above tools fail on this. Bottom line: So far we´re unable to reason in a scalable and abstract manner about functionality. That´s why programmers are so driven to start coding once they are presented with a problem. Programming languages are the only tool they´ve learned to use to reason about functional solutions. Or, well, there might be exceptions. Mathematical notation and SQL may have come to your mind already. Indeed they are tools on a higher level of abstraction than flow charts etc. That´s because they are declarative and not computationally complete. They leave out details - in order to deliver higher efficiency in devising overall solutions. We can easily reason about functionality using mathematics and SQL. That´s great. 
Except for that they are domain specific languages. They are not general purpose. (And they don´t scale either, I´d say.) Bummer. So to be more precise we need a scalable general purpose tool on a higher than code level of abstraction not neglecting data. Enter: Flow Design. Abstracting functionality using data flows I believe the solution to the problem of abstracting functionality lies in switching from control flow to data flow. Data flow very naturally is not about logic details anymore. There are no expressions and no control statements anymore. There are not even statements anymore. Data flow is declarative by nature. With data flow we get rid of all the limiting traits of former approaches to modeling functionality. In addition, nomen est omen, data flows include data in the functionality picture. With data flows, data is visibly flowing from processing step to processing step. Control is not flowing. Control is wherever it´s needed to process data coming in. That´s a crucial difference and needs some rewiring in your head to be fully appreciated.[2] Since data flows are declarative they are not the right tool to describe algorithms, though, I´d say. With them you don´t design functionality on a low level. During design data flow processing steps are black boxes. They get fleshed out during coding. Data flow design thus is more coarse grained than flow chart design. It starts on a higher level of abstraction - but then is not limited. By nesting data flows indefinitely you can design functionality of any size, without losing sight of your data. Data flows scale very well during design. They can be used on any level of granularity. And they can easily be depicted. Communicating designs using data flows is easy and scales well, too. The result of functional design using data flows is not algorithms (too low level), but processes. Think of data flows as descriptions of industrial production lines. Data as material runs through a number of processing steps to be analyzed, enhances, transformed. On the top level of a data flow design might be just one processing step, e.g. “execute 1-click order”. But below that are arbitrary levels of flows with smaller and smaller steps. That´s not layering as in “layered architecture”, though. Rather it´s a stratified design à la Abelson/Sussman. Refining data flows is not your grandpa´s functional decomposition. That was rooted in control flows. Refining data flows does not suffer from the limits of functional decomposition against which object orientation was supposed to be an antidote. Summary I´ve been working exclusively with data flows for functional design for the past 4 years. It has changed my life as a programmer. What once was difficult is now easy. And, no, I´m not using Clojure or F#. And I´m not a async/parallel execution buff. Designing the functionality of increments using data flows works great with teams. It produces design documentation which can easily be translated into code - in which then the smallest data flow processing steps have to be fleshed out - which is comparatively easy. Using a systematic translation approach code can mirror the data flow design. That way later on the design can easily be reproduced from the code if need be. And finally, data flow designs play well with object orientation. They are a great starting point for class design. But that´s a story for another day. To me data flow design simply is one of the missing links of systematic lightweight software design. 
[1] There are also other artifacts software development can produce to get feedback, e.g. process descriptions, test cases. But customers can be delighted more easily with code-based increments in functionality.
[2] No, I´m not talking about the endless possibilities this opens for parallel processing. Data flows are useful independently of multi-core processors and Actor-based designs. That´s my whole point here. Data flows are good for reasoning and evolvability. So forget about any special frameworks you might need to reap benefits from data flows. None are necessary. Translating data flow designs even into plain old Java is possible.

    Read the article

  • R Package Installation with Oracle R Enterprise

    - by Sherry LaMonica-Oracle
     Programming languages give developers the opportunity to write reusable functions and to bundle those functions into logical deployable entities. In R, these are called packages. R has thousands of such packages provided by an almost equally large group of third-party contributors. To allow others to benefit from these packages, users can share packages on the CRAN system for use by the vast R development community worldwide. R's package system along with the CRAN framework provides a process for authoring, documenting and distributing packages to millions of users. In this post, we'll illustrate the various ways in which such R packages can be installed for use with R and together with Oracle R Enterprise. In the following, the same instructions apply when using either open source R or Oracle R Distribution. Specifically, we cover the following package installation scenarios: the R command line, the Linux shell command line, use with Oracle R Enterprise, installation on Exadata or RAC, installing all packages in a CRAN Task View, and troubleshooting common errors. 1. R Package Installation Basics. R package installation basics are outlined in Chapter 6 of the R Installation and Administration Guide. There are two ways to install packages from the command line: from the R command line and from the shell command line. For this first example on Oracle Linux using Oracle R Distribution, we’ll install the arules package as root so that packages will be installed in the default R system-wide location where all users can access it, /usr/lib64/R/library. Within R, using the install.packages function always attempts to install the latest version of the requested package available on CRAN: R> install.packages("arules") If the arules package depends upon other packages that are not already installed locally, the R installer automatically downloads and installs those required packages. This is a huge benefit that frees users from the task of identifying and resolving those dependencies. You can also install R packages from the shell command line. This is useful for some packages when an internet connection is not available or for installing packages not uploaded to CRAN. To install packages this way, first locate the package on CRAN and then download the package source to your local machine. For example: $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz Then, install the package using the command R CMD INSTALL: $ R CMD INSTALL arules_1.1-2.tar.gz A major difference between installing R packages using the R package installer at the R command line and at the shell command line is that package dependencies must be resolved manually at the shell command line. Package dependencies are listed in the Depends section of the package’s CRAN site. If dependencies are not identified and installed prior to the package’s installation, you will see an error similar to: ERROR: dependency ‘xxx’ is not available for package ‘yyy’ As a best practice and to save time, always refer to the package’s CRAN site to understand the package dependencies prior to attempting an installation. If you don’t run R as root, you won’t have permission to write packages into the default system-wide location and you will be prompted to create a personal library accessible by your userid. You can accept the personal library path chosen by R, or specify the library location by passing parameters to the install.packages function. 
For example, to create an R package repository in your home directory: R> install.packages("arules", lib="/home/username/Rpackages") or $ R CMD INSTALL arules_1.1-2.tar.gz --library=/home/username/Rpackages Refer to the install.packages help file in R or execute R CMD INSTALL --help at the shell command line for a full list of command line options. To set the library location and avoid having to specify this at every package install, simply create the R startup environment file .Renviron in your home area if it does not already exist, and add the following piece of code to it: R_LIBS_USER = "/home/username/Rpackages" 2. Setting the Repository. Each time you install an R package from the R command line, you are asked which CRAN mirror, or server, R should use. To set the repository and avoid having to specify this during every package installation, create the R startup command file .Rprofile in your home directory and add the following R code to it: cat("Setting Seattle repository") r = getOption("repos") r["CRAN"] = "http://cran.fhcrc.org/" options(repos = r) rm(r) This code snippet sets the R package repository to the Seattle CRAN mirror at the start of each R session. 3. Installing R Packages for use with Oracle R Enterprise. Embedded R execution with Oracle R Enterprise allows the use of CRAN or other third-party R packages in user-defined R functions executed on the Oracle Database server. The steps for installing and configuring packages for use with Oracle R Enterprise are the same as for open source R. The database-side R engine just needs to know where to find the R packages. The Oracle R Enterprise installation is performed by user oracle, which typically does not have write permission to the default site-wide library, /usr/lib64/R/library. On Linux and UNIX platforms, the Oracle R Enterprise Server installation provides the ORE script, which is executed from the operating system shell to install R packages and to start R. The ORE script is a wrapper for the default R script, a shell wrapper for the R executable. It can be used to start R, run batch scripts, and build or install R packages. Unlike the default R script, the ORE script installs packages to a location writable by user oracle and accessible by all ORE users - $ORACLE_HOME/R/library. To install a package on the database server so that it can be used by any R user and for use in embedded R execution, an Oracle DBA would typically download the package source from CRAN using wget. If the package depends on any packages that are not in the R distribution in use, download the sources for those packages, also. For a single Oracle Database instance, replace the R script with ORE to install the packages in the same location as the Oracle R Enterprise packages. $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz $ ORE CMD INSTALL arules_1.1-2.tar.gz Behind the scenes, the ORE script performs the equivalent of setting R_LIBS_USER to the value of $ORACLE_HOME/R/library, and all R packages installed with the ORE script are installed to this location. 
For installing a package on multiple database servers, such as those in an Oracle Real Application Clusters (Oracle RAC) or a multinode Oracle Exadata Database Machine environment, use the ORE script in conjunction with the Exadata Distributed Command Line Interface (DCLI) utility. $ dcli -g nodes -l oracle ORE CMD INSTALL arules_1.1-1.tar.gz The DCLI -g flag designates a file containing a list of nodes to install on, and the -l flag specifies the user id to use when executing the commands. For more information on using DCLI with Oracle R Enterprise, see Chapter 5 in the Oracle R Enterprise Installation Guide. If you are using an Oracle R Enterprise client, install the package the same as any R package, bearing in mind that you must install the same version of the package on both the client and server machines to avoid incompatibilities. 4. CRAN Task Views. CRAN also maintains a set of Task Views that identify packages associated with a particular task or methodology. Task Views are helpful in guiding users through the huge set of available R packages. They are actively maintained by volunteers who include detailed annotations for routines and packages. If you find one of the task views is a perfect match, you can install every package in that view using the ctv package - an R package for automating package installation. To use the ctv package to install a task view, first install and load the ctv package: R> install.packages("ctv") R> library(ctv) Then query the names of the available task views and install the view you choose: R> available.views() R> install.views("TimeSeries") 5. Using and Managing R Packages. To use a package, start up R and load packages one at a time with the library command. Load the arules package in your R session: R> library(arules) Verify the version of arules installed: R> packageVersion("arules") [1] '1.1.2' Verify the version of arules installed on the database server using embedded R execution: R> ore.doEval(function() packageVersion("arules")) View the help file for the apriori function in the arules package: R> ?apriori Over time, your package repository will contain more and more packages, especially if you are using the system-wide repository where others are adding additional packages. It’s good to know the entire set of R packages accessible in your environment. To list all available packages in your local R session, use the installed.packages command: R> myLocalPackages <- row.names(installed.packages()) R> myLocalPackages To access the list of available packages on the ORE database server from the ORE client, use the following embedded R syntax: R> myServerPackages <- ore.doEval(function() row.names(installed.packages())) R> myServerPackages 6. Troubleshooting Common Problems. Installing Older Versions of R Packages. If you immediately upgrade to the latest version of R, you will have no problem installing the most recent versions of R packages. 
However, if your version of R is older, some of the more recent package releases will not work and install.packages will generate a message such as: Warning message: In install.packages("arules") : package ‘arules’ is not available. This is when you have to go to the Old sources link on the CRAN page for the arules package and determine which version is compatible with your version of R. Begin by determining what version of R you are using: $ R --version Oracle Distribution of R version 3.0.1 (--) -- "Good Sport" Copyright (C) The R Foundation for Statistical Computing Platform: x86_64-unknown-linux-gnu (64-bit) Given that R-3.0.1 was released May 16, 2013, any version of the arules package released after this date may work. Scanning the arules archive, we might try installing version 1.1-1, released in January of 2014: $ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz $ R CMD INSTALL arules_1.1-1.tar.gz For use with ORE: $ ORE CMD INSTALL arules_1.1-1.tar.gz The "package not available" error can also be thrown if the package you’re trying to install lives elsewhere, either another R package site, or it’s been removed from CRAN. A quick Google search usually leads to more information on the package’s location and status. Oracle R Enterprise is not in the R library path. On Linux hosts, after installing the ORE server components, starting R, and attempting to load the ORE packages, you may receive the error: R> library(ORE) Error in library(ORE) : there is no package called ‘ORE’ If you know the ORE packages have been installed and you receive this error, this is the result of not starting R with the ORE script. To resolve this problem, exit R and restart using the ORE script. After restarting R and running the command to load the ORE packages, you should not receive any errors. $ ORE R> library(ORE) On Windows servers, the solution is to make the location of the ORE packages visible to R by adding them to the R library paths. To accomplish this, exit R, then add the following lines to the .Rprofile file. On Windows, the .Rprofile file is located in the R\etc directory, e.g. C:\Program Files\R\R-<version>\etc. Add the following line: .libPaths("<path to $ORACLE_HOME>/R/library") The above line will tell R to include the R directory in the Oracle home as part of its search path. When you start R, the path above will be included, and future R package installations will also be saved to $ORACLE_HOME/R/library. This path should be writable by the user oracle, or the userid for the DBA tasked with installing R packages. Binary package compiled with a different version of R. By default, R will install pre-compiled versions of packages if they are found. If the version of R under which the package was compiled does not match your installed version of R, you will get an error message: Warning message: package ‘xxx’ was built under R version 3.0.0 The solution is to download the package source and build it for your version of R. $ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz $ R CMD INSTALL arules_1.1-1.tar.gz For use with ORE: $ ORE CMD INSTALL arules_1.1-1.tar.gz Unable to execute files in the /tmp directory. By default, R uses the /tmp directory to install packages. On security-conscious machines, the /tmp directory is often marked as "noexec" in the /etc/fstab file. 
This means that no file under /tmp can ever be executed, and users who attempt to install an R package will receive an error: ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual' The solution is to set the TMP and TMPDIR environment variables to a location which R will use as the compilation directory. For example: $ mkdir <some path>/tmp $ export TMPDIR=<some path>/tmp $ export TMP=<some path>/tmp This error typically appears on Linux client machines and not database servers, as Oracle Database writes to the location given by the TMP environment variable for several tasks, including holding temporary files during database installation. 7. Creating your own R package. Creating your own package and submitting it to CRAN is for advanced users, but it is not difficult. The procedure to follow, along with details of R's package system, is detailed in the Writing R Extensions manual.

    Read the article

  • CodePlex Daily Summary for Friday, April 13, 2012

    CodePlex Daily Summary for Friday, April 13, 2012Popular ReleasesCatel - WPF, Silverlight and Windows Phone 7 MVVM toolkit: 3.1 beta 1: Catel history ============= (+) Added (*) Changed (-) Removed (x) Error / bug (fix) For more information about issues or new feature requests, please visit: http://catel.codeplex.com Documentation can be found at: http://catel.catenalogic.com ********************************************************** =========== Version 3.1 =========== Release date: ============= 2012/xx/xx Added/fixed: ============ (+) Added OnDataContextChanged and OnPropertyChanged to UserControl, DataWindow, Page ...Visual Studio Team Foundation Server Branching and Merging Guide: v2 - For Visual Studio 11: Welcome to the BETA of the Branching and Merging Guide preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has not been through an independent technical review Documentation has been reviewed by the quality and recording te...Media Companion: MC 3.435b Release: This release should be the last beta for 3.4xx. A handful of problems have been sorted out since last weeks release. If there are no major problems this time, it will upgraded to 3.500 Stable at the end of the week! General The .NET Framework has been modified to use the Client profile, as provided by normal Windows updates; no longer is there a requirement to download and install the Full profile! mc_com.exe has been worked on to mimic proper Media Companion output (a big thanks to vbat99...THE NVL Maker: The NVL Maker Ver 3.12: SIM??????,TRA??????,ZIP????。 ????????????????,??????~(??????????????????) ??????? simpatch1440x900 trapatch1440x900 ?????1400x900??1440x900,?????????????Data.xp3。 ???? ?????3.12?EXE????????????????, ??????????????,??Tool/krkrconf.exe,??Editor.exe, ???????????????「??????」。 ?????Editor.exe??????。 ???? ???? http://etale.us/gameupload/THE_NVL_Maker_ver3.12_sim.zip ???? http://www.mediafire.com/?je51683g22bz8vo ??Infinite Creation?? http://bbs.etale.us/forum.php ?????? ???? 3.12 ??? ???、????...SQL DAC Examples: DAC SQL Azure Import Export Service Client v 1.5: Latest version for the service client. Changes Refactored the sources to make the client implemenation as simple and streamlined as possible Fixed "type initializer" configuration issues in the previous release Updated SQL Azure datacenter mappingsSnmpMessenger: 0.1.1.1: Project Description SnmpMessenger, a messenger. Using the SNMP protocol to exchange messages. It's developed in C#. SnmpMessenger For .Net 4.0, Mono 2.8. Support SNMP V1, V2, V3. Features Send get, set and other requests and get the response. Send and receive traps. Handle requests and return the response. Note This library is compliant with the Common Language Specification(CLS). The latest version is 0.1.1.1. It is only a messenger, does not involve VACM. Any problems, Please mailto: wa...Python Tools for Visual Studio: 1.1.1: We’re pleased to announce the release of Python Tools for Visual Studio 1.1.1. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. 
PTVS supports a broad range of features including: • Supports CPython and IronPython • Python editor with advanced member and signature intellisense • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging • Profiling with multiple view...Supporting Guidance and Whitepapers: v1 - Team Foundation Service Whitepapers: Welcome to the BETA release of the Team Foundation Service Whitepapers preview As this is a BETA release and the quality bar for the final Release has not been achieved, we value your candid feedback and recommend that you do not use or deploy these BETA artifacts in a production environment. Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review All critical bugs have been resolved Known Issue...Microsoft .NET Gadgeteer: .NET Gadgeteer Core 2.42.550 (BETA): Microsoft .NET Gadgeteer Core RELEASE NOTES Version 2.42.550 11 April 2012 BETA VERSION WARNING: This is a beta version! Please note: - API changes may be made before the next version (2.42.600) - The designer will not show modules/mainboards for NETMF 4.2 until you get upgraded libraries from the module/mainboard vendors - Install NETMF 4.2 (see link below) to use the new features of this release That warning aside, this version should continue to sup...LINQ to Twitter: LINQ to Twitter Beta v2.0.24: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, and Client Profile. 100% Twitter API coverage. Also available via NuGet.Kendo UI ASP.NET Sample Applications: Sample Applications (2012-04-11): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.Json.NET: Json.NET 4.5 Release 2: New feature - Added support for the SerializableAttribute and serializing a type's internal fields New feature - Added MaxDepth to JsonReader/JsonSerializer/JsonSerializerSettings New feature - Added support for ignoring properties with the NonSerializableAttribute Fix - Fixed deserializing a null string throwing a NullReferenceException Fix - Fixed JsonTextReader reading from a slow stream Fix - Fixed CultureInfo not being overridden on JsonSerializerProxy Fix - Fixed full trust ...SCCM Client Actions Tool: SCCM Client Actions Tool v1.12: SCCM Client Actions Tool v1.12 is the latest version. It comes with following changes since last version: Improved WMI date conversion to be aware of timezone differences and DST. Fixed new version check. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively availab...Dual Browsing: Dual Browser: Please note the following: I setup the address bar temporarily to only accepts http:// .com addresses. Just type in the name of the website excluding: http://, www., and .com; (Ex: for www.youtube.com just type: youtube then click OK). The page splitter can be grabbed by holding down your left mouse button and move left or right. By right clicking on the page background, you can choose to refresh, go back a page and so on. Demo video: http://youtu.be/L7NTFVM3JUYPhoenix Service Bus: PServiceBus 2.0.0: Note Before installing 2.0.0, please uninstall 1.0.2 to make sure that 2.0.0 is not corrupted when installed. If you download the 2.0.0 version from 4/10/2012 9am-2pm, you might want to re-download because the version was corrupted. 
Feature/Changes Replace WCF Gateway Service with a ZeroMQ implementation Improve performance of TCP based transports such as (Low Level TCP Itself, RabbitMQ, Redis, e.t.c) Improve performance of message publishing when dealing with single message rather...Liberty: v3.2.0.1 Release 9th April 2012: Change Log-Fixed -Reach Fixed a bug where the object editor did not work on non-English operating systemsPath Copy Copy: 10.1: This release addresses the following work items: 11357 11358 11359 This release is a recommended upgrade, especially for users who didn't install the 10.0.1 version.ExtAspNet: ExtAspNet v3.1.3: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-08 v3.1.3 -??Language="zh_TW"?JS???BUG(??)。 +?D...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.5.5: New Controls ChatBubble ChatBubbleTextBox OpacityToggleButton New Stuff TimeSpan languages added: RU, SK, CS Expose the physics math from TimeSpanPicker Image Stretch now on buttons Bug Fixes Layout fix so RoundToggleButton and RoundButton are exactly the same Fix for ColorPicker when set via code behind ToastPrompt bug fix with OnNavigatedTo Toast now adjusts its layout if the SIP is up Fixed some issues with Expression Blend supportHarness - Internet Explorer Automation: Harness 2.0.3: support the operation fo frameset, frame and iframe Add commands SwitchFrame GetUrl GoBack GoForward Refresh SetTimeout GetTimeout Rename commands GetActiveWindow to GetActiveBrowser SetActiveWindow to SetActiveBrowser FindWindowAll to FindBrowser NewWindow to NewBrowser GetMajorVersion to GetVersionNew Projects.NET Gadgeteer Light: This is a light weight version of the Gadgeteer framework. Although it lacks quite some support (making it lighter), it can be very useful to use Gadgeteer drivers and modules on non-gadgeteer hardware.Bloog: Yet another frickin' blog appC# compiler improvements: This project is a proof of concept which demonstrate how to improve a compiler using Roslyn. CallBack: Callback is a library written in pure Lua which helps you trigger custom defined functions automatically when your code is running according to the time. Functions can be runned once after a set time, periodically after an amount time, or many times successively.CeairCarbin: One Project About A Carbin Department Manager System About News WrokFlow SaraleChinchilla: Advanced Programming using Small Basic to create a cool 2-D video game. With enough depth (no pun intended) to branch into a 3-D version. This is a project based example for teaching a home school class for 13-17 year olds.ChlodnyWebApi: Created to demostrate ASP.NET WebAPI usefulness in a multi-targeted client scenario. See Examples and documentation at: http://researchaholic.com/Delete SharePoint List Column: preliminary on-going project to provide an easy to delete columns from a SharePoint list. Upload into SharePoint makes it easier for administrators to delete stuck columns. It's developed in C#.Directories Creater: <dirCreater> create lots of directories in simple way! <c#> <vs2010>Eclipse Project: Eclipse project Team ExplorerFlot.Net: Flot.Net provides a .Net wrapper around the Flot jQuery charting library. 
It is developed in C# 3.5 I created this project as I found the javascript notation difficult to create, and so developed an opbject model around the flot objects so I could cerate charts in a common language. I have used this project in an MVC environment and it serves my purpose for this. I wanted to keep the API as simple as possible.GRE Word Study: MVC application using ASP.NETJHWF Admin: Back end for jhwfKKZCodeHelper: KKOMZI Code HelperKkzSSIKORHelper: KKOMZI VB CS Helper AppLiMiao jiesuanshu: limiao de jiesuanshuManagement tool for MWT: This project though focused on creating a tool for the MWT management to use is a place to exercise the latest technology trends in the .NET community. I intend to use the best design practices the technologies like WCF, regex, HTML5, Jquery, WPF(maybe).MyHydroServer: MyHydroServer (HydroServer Lite) is a lightweight version of the CUAHSI HydroServer written in PHP. It can be run on any webhosting service that supports PHP and MySQL. The goal of this project is to make it easier to set up your own HydroServer.nanoCMS: nanoCMS is free, community driven modular CMS and platform designed for delivering rich web applications. It serves two main purposes. First for developers it provides easy expandable and customizable platform for creating web applications. Second for normal user it provides a siNetSysInfo: NetSysInfo is a free software which displays information about system like Uptime, CPU, Memory, Drives devices, Network adapters, Disk Usage, Processes, Services and more.NewFifa: My fifa the best technologyOrdinapoche: An implementation of the Ordinapoche (also known as CARDIAC in English) cardboard computer.Plan 9 Software: Home of our Open Source CodePrettyFormat: PrettyFormat is a small library of string formatters for .NET. It's developed in C# and is fully localizable.pruebasandroid: pruebas con eclipseRMath and RMath for .Net: We provide pre-compiled Windows binaries for RMath library. RMath provides stable implementation for commonly used special functions, e.g. bessel family. We also provide a .Net wrapper for the native RMath DLL with documentation and usage examples in C# and F#.Security with Visual Understanding: A Kinect home security camera. Security with Visual Understanding (SVU) is a hardware/software solution which provides a more accurate security camera. SVU uses the Microsoft Kinect to provide these capabilities. SVU recognizes when a human enters the image and furthermore, is able to differentiate between known and unknown persons by maintaining a database of known persons’ skeleton dimensions. These combined capabilities allow SVU to deliver an intelligent, autonomous security system ca...SPDeveloperDashboardFilter: This project contains the javascript code to enhance the useability of the developer dashboard.SSIS Checksum Transformation: The Checksum Transformation Calculates hash values for one or more rows using a variety of methods like MD5, RIPEMD160, SHA1, SHA256, SHA384 and SHA512.test01: firsttesttom04122012hg01: testtom04122012hg01testtom04122012tfs01: testtom04122012tfs01Tinter: Tinter is a online tool about personal information management. Such as post to-do list or notes, record financial activities, etc. It's developed in C# and will involve more new technologies as a practice project.Twicko: Simple twitter client.Vaffanculo: None.web2call: Providing live chat support over website is now an old technique to facilitate customer. 
web2call will allow you to place a callback button on your website where user can click and connect automatically to one of your call center representative.

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.

    Read the article

  • Asynchronous Streaming in ASP.NET WebApi

    - by andresv
     Hi everyone, if you use the cool MVC4 WebApi you might find yourself in a common situation where you need to return a rather large amount of data (most probably from a database) and you want to accomplish two things: (1) use streaming so the client fetches the data as needed, which directly correlates to more fetching on the server side (from our database, for example) without consuming large amounts of memory; and (2) leverage the new MVC4 WebApi and .NET 4.5 async/await asynchronous execution model to free ASP.NET Threadpool threads (if possible). So, #1 and #2 are not directly related to each other and we could implement our code fulfilling one or the other, or both. The main point about #1 is that we want our method to immediately return a stream to the caller, and have that client-side stream be backed by a server-side stream that gets written (and its related database fetch performed) only when needed. In this case we would need some form of "state machine" that keeps running in the server and "knows" what is the next thing to fetch into the output stream when the client asks for more content. This technique is generally called a "continuation" and is nothing new in .NET; in fact, using an IEnumerable<> interface and the "yield return" keyword does exactly that, so our first impulse might be to write our WebApi method more or less like this:

        public IEnumerable<Metadata> Get([FromUri] int accountId)
        {
            // Execute the command and get a reader
            using (var reader = GetMetadataListReader(accountId))
            {
                // Read rows one at a time and hand each mapped record to the caller
                while (reader.Read())
                {
                    yield return MapRecord(reader);
                }
            }
        }

While the above method works, unfortunately it doesn't accomplish our objective of returning immediately to the caller, and that's because the MVC WebApi infrastructure doesn't yet recognize our intentions: when it finds an IEnumerable return value, it enumerates it before returning its values to the client. To prove my point, I can code a test method that calls this method, for example:

        [TestMethod]
        public void StreamedDownload()
        {
            var baseUrl = @"http://localhost:57771/api/metadata/1";
            var client = new HttpClient();
            var sw = Stopwatch.StartNew();
            var stream = client.GetStreamAsync(baseUrl).Result;
            sw.Stop();
            Debug.WriteLine("Elapsed time Call: {0}ms", sw.ElapsedMilliseconds);
        }

I would expect the line "var stream = client.GetStreamAsync(baseUrl).Result" to return immediately, without server-side fetching of all the data in the database reader - but this didn't happen. To make the behavior more evident, you could insert a wait time (like Thread.Sleep(1000);) inside the "while" loop, and you will see that the client call (GetStreamAsync) does not return control until after n seconds (where n == the number of reader records being fetched). Ok, we know this doesn't work, and the question would be: is there a way to do it? Fortunately, YES! And it is not very difficult, although a little more convoluted than our simple IEnumerable return value. 
Maybe in the future this scenario will be automatically detected and supported in MVC/WebApi.The solution to our needs is to use a very handy class named PushStreamContent and then our method signature needs to change to accommodate this, returning an HttpResponseMessage instead of our previously used IEnumerable<>. The final code will be something like this: public HttpResponseMessage Get([FromUri] int accountId)         {             HttpResponseMessage response = Request.CreateResponse();             // Create push content with a delegate that will get called when it is time to write out              // the response.             response.Content = new PushStreamContent(                 async (outputStream, httpContent, transportContext) =>                 {                     try                     {                         // Execute the command and get a reader                         using (var reader = GetMetadataListReader(accountId))                         {                             // Read rows asynchronously, put data into buffer and write asynchronously                             while (await reader.ReadAsync())                             {                                 var rec = MapRecord(reader);                                 var str = await JsonConvert.SerializeObjectAsync(rec);                                 var buffer = UTF8Encoding.UTF8.GetBytes(str);                                 // Write out data to output stream                                 await outputStream.WriteAsync(buffer, 0, buffer.Length);                             }                         }                     }                     catch(HttpException ex)                     {                         if (ex.ErrorCode == -2147023667) // The remote host closed the connection.                          {                             return;                         }                     }                     finally                     {                         // Close output stream as we are done                         outputStream.Close();                     }                 });             return response;         } As an extra bonus, all involved classes used already support async/await asynchronous execution model, so taking advantage of that was very easy. Please note that the PushStreamContent class receives in its constructor a lambda (specifically an Action) and we decorated our anonymous method with the async keyword (not a very well known technique but quite handy) so we can await over the I/O intensive calls we execute like reading from the database reader, serializing our entity and finally writing to the output stream.  Well, if we execute the test again we will immediately notice that the a line returns immediately and then the rest of the server code is executed only when the client reads through the obtained stream, therefore we get low memory usage and far greater scalability for our beloved application serving big chunks of data.Enjoy!Andrés.        
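For readers who want to see the streaming effect from the consuming side, here is a small, hypothetical client sketch (not part of the original post; the buffer size and the use of async Task Main are assumptions, and the URL is just the one from the test above). It requests only the response headers up front and then reads the body in chunks as the server produces them:

        // Minimal console client: observes the streamed response chunk by chunk.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class StreamingClient
        {
            static async Task Main()
            {
                var url = "http://localhost:57771/api/metadata/1"; // placeholder endpoint
                using (var client = new HttpClient())
                {
                    // ResponseHeadersRead returns as soon as the headers arrive,
                    // before the server has written (or even fetched) all the data.
                    using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
                    using (var stream = await response.Content.ReadAsStreamAsync())
                    {
                        var buffer = new byte[8192];
                        int read;
                        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                        {
                            // Each chunk corresponds to rows the server has serialized so far.
                            Console.WriteLine("Received {0} bytes", read);
                        }
                    }
                }
            }
        }

With the PushStreamContent version above, the first chunks arrive while the server is still reading from the database; with the plain IEnumerable version, they only arrive once the full enumeration has completed.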

    Read the article

  • CodePlex Daily Summary for Thursday, June 23, 2011

    CodePlex Daily Summary for Thursday, June 23, 2011Popular ReleasesMiniTwitter: 1.70: MiniTwitter 1.70 ???? ?? ????? xAuth ?? OAuth ??????? 1.70 ??????????????????????????。 ???????????????? Twitter ? Web ??????????、PIN ????????????????????。??????????????????、???????????????????????????。Total Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.7b: Total Commander SkyDrive File System Plugin version 0.8.7b. Bug fixes: - BROKEN PLUGIN by upgrading SkyDriveServiceClient version 2.0.1b. Please do not forget to express your opinion of the plugin by rating it! Donate (EUR)SkyDrive .Net API Client: SkyDrive .Net API Client 2.0.1b (RELOADED): SkyDrive .Net API Client assembly has been RELOADED in version 2.0.1b as a REAL API. It supports the followings: - Creating root and sub folders - Uploading and downloading files - Renaming and deleting folders and files Bug fixes: - BROKEN API (issue 6834) Please do not forget to express your opinion of the assembly by rating it! Donate (EUR)Mini SQL Query: Mini SQL Query v1.0.0.59794: This release includes the following enhancements: Added a Most Recently Used file list Added Row counts to the query (per tab) and table view windows Added the Command Timeout option, only valid for MSSQL for now - see options If you have no idea what this thing is make sure you check out http://pksoftware.net/MiniSqlQuery/Help/MiniSqlQueryQuickStart.docx for an introduction. PK :-]HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.2.591 Beta Release: 1.2.591 Beta ReleaseJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (v1.0.521.54): Initial releasepatterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...MapWinGIS ActiveX Map and GIS Component: Second Release Candidate for v4.8: This is the second release candidate for v4.8. This is the ocx installer only, with the new dependancies like GDAL v1.8, GEOS v3.3 and updated libraries for MrSid and ECW.Epinova.CRMFramework: Epinova.CRMFramework 0.5: Beta Release Candidate. Issues solved in this release: http://crmframework.codeplex.com/workitem/593SQL Server HowTo: Version 1.0: Initial ReleaseDropBox Linker: DropBox Linker 1.3: Added "Get links..." dialog, that provides selective public files links copying Get links link added to tray menu as the default option Fixed URL encoding .NET Framework 4.0 Client Profile requiredDotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsTerraria World Viewer: Version 1.4: Update June 21st World file will be stored in memory to minimize chances of Terraria writing to it while we read it. Different set of APIs allow the program to draw the world much quicker. 
Loading world information (world variables, chest list) won't cause the GUI to freeze at all anymore. Re-introduced the "Filter chests" checkbox: Allow disabling of chest filter/finder so all chest symbos are always drawn. First-time users will have a default world path suggested to them: C:\Users\U...BlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...IronPython: 2.7.1 Beta 1: This is the first beta release of IronPython 2.7. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. The highlights of this release are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython Tools for Visual Studio are disabled by default. See http://pytools.codeplex.com for the next generation of Python Visual Studio support. See...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.EffectControls-Silverlight controls with animation effects: EffectControls: EffectControlsNLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesNew ProjectsA SharePoint Client Object Model Demo in Silverlight: This demo shows how to utilize the SharePoint 2010 Foundation's Client Object Model within a Silverlight application.Azure SDK Extentions (ja): Azure SDK Tools ????????VS??????????????????。 ??????????????????、????????????????。Bango Apple iOS Application Analytics SDK: Bango application analytics is an analytics solution for mobile applications. This SDK provides a framework you can use in your application to add analytics capabilities to your mobile applications. It's developed in Object C and targets the Apple iOS operating system. 
Binsembler: Binsembler allows assembler programmers to easily import files into their source code, so that they'll no longer have to use external flash memory just for a few bytes of data. The project is developed in C#.DataTool: Liver.DTEMICHAG: This project contains the .NET Microframework solution for the EMIC Home Automation Gateway (EMICHAG). This project created within the co-funded research project called HOMEPLANE (http://homeplane.org/). html5_stady: html5 DemoJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011 helps CRM developers to extract javascript web resources to disk, maintain them and import changes back to CRM database. You will no longer have to perform mutliple clicks operation to update you js web resourcesKiefer SharePoint 2010 Templates: The Kiefer Consulting SharePoint 2010 Site Templates allow you to quickly create SharePoint sites pre-configured to meet specific business needs. These sites use only out of the box SharePoint Foundation 2010 features and will run on SharePoint Server 2010 and Office 365.KxFrame: KxFrame is intended to be free, easy-to-use, rapid business application framework. How many times were you starting to develop application from scratch? And every time you say to yourself: "Just this time, next time I'll make some framework"? It's over now! Lets make business application framework together and enjoy creating applications faster than ever.Made In Hell: Small console game. Just a training in code writings.Merula SmartPages: When you want to remember your data in an ASP.NET site after a postback, you will have to use Session or ViewState to remember your data. It’s a lot work to use Session and ViewStates, I just want to declare variables like in a desktop application. So I created a library called Merula SmartPages. This library will remember the properties and variables of your page. MVC ASPX to Razor View Converter: So this project will convert your MVC ASPX pages you worked so hard to make into cshtml files. You simply drop the exe into the folder and it works. I hope you enjoy, thanks! Netduino Utilities: Netduino classes I use with various odds of hardware I have. Hope it's useful ;-) NetEssentialsBugtrackerProject: Bug tracker for telerikPin My Site: Site to help demonstrate pinning capabilities of IE9Python3 ????: Python3 ????RDI Open CTe: RDI Open CTeRDI Open SPED (Contábil): RDI Open SPED (Contábil)RDI Open SPED (Fiscal): RDI Open SPED (Fiscal)sbc: sbcSimply Helpdesk UserControl: Simply Helpdesk is a user control that allows people to embed a helpdesk directly into their active projects. It is design to be as simple as possible, Minimal configuration is required to get it up and running.SQL Server HowTo: Short T-SQL scripts for difficult tasks from everyday job of programmers and database administrators. Also contains links to external resources with such useful information.Stashy: Stash data simply. Stashy is a Micro-NoSql framework. Stashy is a tiny interface, plus four reference implementations, for adding data storage and retrieval to all your .net types. Drop a single stashy class into your application then you can save and load the lot. storeBags01: hacer pedido de bolsos maletas y morrales compra de bolsos y maletas storebags UMTS Net: Program optimizes deployment of UMTS devicesvAddins - A little different addins framework: vAddins transforms C# and VB into powerful scripting languages for making addins or scripts. 
With this, you no longer need to open Visual Studio or other IDEs, reference assemblies, write the code and then built, move and test... Now you only have to write the code and test!Visual Studio Tags: A Visual Studio extension for tagging your source code files, and then explore your files using the tags.Window Phone Private Media Library: Windows Phone application to protect your media files from unauthorized eyes using your phone.Windows Touch/Mouse/TUIO Multi-touch Gesture recognizer & Trainer: This projects based on Single point Gesture recognizer paper of http://faculty.washington.edu/wobbrock/pubs/uist-07.1.pdf of unistrokes alphabets. The project expand capabilities to allow multi-touch gestures. C# library for rapid use. Mono Client. And C++ Client.WPF, Silverlight PDF viewer: WPF, Silverlight PDF viewer

    Read the article

  • PASS: Bylaw Changes

    - by Bill Graziano
    While you’re reading this, a post should be going up on the PASS blog on the plans to change our bylaws.  You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes.  We plan to listen to feedback until March 31st.  At that point we’ll decide whether to vote on these changes or take other action. The executive summary is that we’re adding a restriction to prevent more than two people from the same company on the Board and eliminating the Board’s Officer Appointment Committee to have Officers directly elected by the Board.  This second change better matches how officer elections have been conducted in the past. The Gritty Details Our scope was to change bylaws to match how PASS actually works and tackle a limited set of issues.  Changing the bylaws is hard.  We’ve been working on these changes since the March board meeting last year.  At that meeting we met and talked through the issues we wanted to address.  In years past the Board has tried to come up with language and then we’ve discussed and negotiated to get to the result.  In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point.  Hannes worked on building us an initial set of changes that we could work our way through.  Discussing changes like this over email is difficult wasn’t very productive.  We do a much better job on this at the in-person Board meetings.  Unfortunately there are only 2 or 3 of those a year. In August we met in Nashville and spent time discussing the changes.  That was also the day after we released the slate for the 2010 election. The discussion around that colored what we talked about in terms of these changes.  We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January.  This is the result of those changes and discussions. We made numerous small changes to clean up language and make wording more clear.  We also made two big changes. Director Employment Restrictions The first is that only two people from the same company can serve on the Board at the same time.  The actual language in section VI.3 reads: A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of “employed” is at the sole discretion of the Board. And what a mess this turns out to be in practice.  Our membership is a hodgepodge of interlocking relationships.  Let’s say three Board members get together and start a blog service for SQL Server bloggers.  It’s technically for-profit.  Let’s assume it makes $8 in the first year.  Does that trigger this clause?  (Technically yes.)  We had a horrible time trying to write language that covered everything.  All the sample bylaws that we found were just as vague as this. That led to the third clause in this section.  The first sentence reads: The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director’s conflict of employment. We needed some way to handle the trivial issues and exercise some judgment.  It seems like a public vote is the best way.  This discloses the relationship and gets each Board member on record on the issue.   In practice I think this clause will rarely be used.  
I think this entire section will only be invoked for actual employment issues and not for small side projects.  In either case we have the mechanisms in place to handle it in a public, transparent way. That’s the first and third clauses.  The second clause says that if your situation changes and you fall afoul of this restriction you need to notify the Board.  The clause further states that if this new job means a Board members violates the “two-per-company” rule the Board may request their resignation.  The Board can also  allow the person to continue serving with a majority vote.  I think this will also take some judgment.  Consider a person switching jobs that leads to three people from the same company.  I’m very likely to ask for someone to resign if all three are two weeks into a two year term.  I’m unlikely to ask anyone to resign if one is two weeks away from ending their term.  In either case, the decision will be a public vote that we can be held accountable for. One concern that was raised was whether this would affect someone choosing to accept a job.  I think that’s a choice for them to make.  PASS is clearly stating its intent that only two directors from any one organization should serve at any time.  Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change.  This clause isn’t perfect.  The biggest hole is business relationships that aren’t defined above.  Let’s say that two employees from company “X” serve on the Board.  What happens if I accept a full-time consulting contract with that company?  Let’s assume I’m working directly for one of the two existing Board members.  That doesn’t violate section VI.3.  But I think it’s clearly the kind of relationship we’d like to prevent.  Unfortunately that was even harder to write than what we have now.  I fully expect that in the next revision of the bylaws we’ll address this.  It just didn’t make it into this one. Officer Elections The officer election process received a slightly different rewrite.  Our goal was to codify in the bylaws the actual process we used to elect the officers.  The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing.  The Immediate Past President (IPP) is also an officer but isn’t elected.  The IPP serves in that role for two years after completing their term as President.  We do that for continuity’s sake.  Some organizations have a President-elect that serves for one or two years.  The group that founded PASS chose to have an IPP. When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers.  There was always one candidate for each officer position.  It wasn’t really an election so much as the NomCom decided who the next person would be for each officer position.  Behind the scenes the Board worked to select the best people for the role. In June 2009 that process was changed to bring it line with what actually happens.  An Officer Appointment Committee was created that was a subset of the Board.  That committee would take time to interview the candidates and present a slate to the Board for approval.  The majority vote of the Board would determine the officers for the next two years.  In practice the Board itself interviewed the candidates and conducted the elections.  That means it was time to change the bylaws again. Section VII.2 and VII.3 spell out the process used to select the officers.  
We use the phrase “Officer Appointment” to separate it from the Director election but the end result is that the Board elects the officers.  Section VII.3 starts: Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors. Everything else revolves around that sentence.  We use the word appoint but they truly are elected.  There are details in the bylaws for term limits, minimum requirements for President (1 prior term as an officer), tie breakers and filling vacancies. In practice we will have an election for President, then an election for EVP and then an election for VP Marketing.  That means that losing candidates will be able to fall down the ladder and run for the next open position.  Another point to note is that officers aren’t at-large directors.  That means if a current sitting officer loses all three elections they are off the Board.  Having Board member votes public will help with the transparency of this approach. This process has a number of positive and negatives.  The biggest concern I expect to hear is that our members don’t directly choose the officers.  I’m going to try and list all the positives and negatives of this approach. Many non-profits value continuity and are slower to change than a business.  On the plus side this promotes that.  On the negative side this promotes that.  If we change too slowly the members complain that we aren’t responsive.  If we change too quickly we make mistakes and fail at various things.  We’ve been criticized for both of those lately so I’m not entirely sure where to draw the line.  My rough assumption to this point is that we’re going too slow on governance and too quickly on becoming “more than a Summit.”  This approach creates competition in the officer elections.  If you are an at-large director there is no consequence to losing an election.  If you are an officer the only way to stay on the Board is to win an officer election or an at-large election.  If you are an officer and lose an election you can always run for the next office down.  This makes it very easy for multiple people to contest an election. There is value in a person moving through the officer positions up to the Presidency.  Having the Board select the officers promotes this.  The down side is that it takes a LOT of time to get to the Presidency.  We’ve had good people struggle with burnout.  We’ve had lots of discussion around this.  The process as we’ve described it here makes it possible for someone to move quickly through the ranks but doesn’t prevent people from working their way up through each role. We talked long and hard about having the officers elected by the members.  We had a self-imposed deadline to complete these changes prior to elections this summer. The other challenge was that our original goal was to make the bylaws reflect our actual process rather than create a new one.  I believe we accomplished this goal. We ran out of time to consider this option in the detail it needs.  Having member elections for officers needs a number of problems solved.  We would need a way for candidates to fall through the election.  This is what promotes competition.  Without this few people would risk an election and we’ll be back to one candidate per slot.  We need to do this without having multiple elections.  We may be able to copy what other organizations are doing but I was surprised at how little I could find on other organizations.  
We also need a way for people that lose an officer election to win an at-large election.  Otherwise we’ll have very little competition for officers. This brings me to an area that I think we as a Board haven’t done a good job.  We haven’t built a strong process to tell you who is doing a good job and who isn’t.  This is a double-edged sword.  I don’t want to highlight Board members that are failing.  That’s not a good way to get people to volunteer and run for the Board.  But I also need a way let the members make an informed choice about who is doing a good job and would make a good officer.  Encouraging Board members to blog, publishing minutes and making votes public helps in that regard but isn’t the final answer.  I don’t know what the final answer is yet.  I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work.  They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.  What I Could Do Better I’ve learned a lot writing this about how we communicated with our members.  The next time we revise the bylaws I’d do a few things differently.  The biggest change would be to provide better documentation.  The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws.  Looking back, I’m a little surprised at how closely they matched our final changes and covered the various arguments.  If you just read those you’d get 90% of what we eventually changed.  Nearly everything else was just details around implementation.  I’d also consider publishing a scope document defining exactly what we were doing any why.  I think it really helped that we had a limited, defined goal in mind.  I don’t think we did a good job communicating that goal outside the meeting minutes though. That said, I wish I’d blogged more after the August and January meeting.  I think it would have helped more people to know that this change was coming and to be ready for it. Conclusion These changes address two big concerns that the Board had.  First, it prevents a single organization from dominating the Board.  Second, it codifies and clearly spells out how officers are elected.  This is the process that was previously followed but it was somewhat murky.  These changes bring clarity to this and clearly explain the process the Board will follow. We’re going to listen to feedback until March 31st.  At that time we’ll decide whether to approve these changes.  I’m also assuming that we’ll start another round of changes in the next year or two.  Are there other issues in the bylaws that we should tackle in the future?

    Read the article

  • CodePlex Daily Summary for Saturday, January 29, 2011

    CodePlex Daily Summary for Saturday, January 29, 2011Popular ReleasesRawr: Rawr 4.0.17 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 and on the Version Notes page: http://rawr.codeplex.com/wikipage?title=VersionNotes As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you...VivoSocial: VivoSocial 7.4.2: Version 7.4.2 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.4.2 release and see if they persist. If you have any questions about this release, please post them in our Support forums. If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. Web Controls * Updated Business Objects and added a new SQL Data Provider File. Groups * Fixed a security issue whe...PHP Manager for IIS: PHP Manager 1.1.1 for IIS 7: This is a minor release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus several bug fixes (see change list for more details). Also, this release includes Russian language support. SHA1 codes for the downloads are: PHPManagerForIIS-1.1.0-x86.msi - 6570B4A8AC8B5B776171C2BA0572C190F0900DE2 PHPManagerForIIS-1.1.0-x64.msi - 12EDE004EFEE57282EF11A8BAD1DC1ADFD66A654BloodSim: WControls.dll update: Priority Update It's just come to my attention that the latest version of WControls.dll was not included in the 1.4 release and as a result, BloodSim has been unuseable. Please download WControls.dll from here, and this will rectify the issue.VFPX: VFP2C32 2.0.0.8 Release Candidate: This release includes several bugfixes, new functions and finally a CHM help file for the complete library.DB>doc for Microsoft SQL Server: 1.0.0.0: Initial release Supported output HTML WikiPlex markup Raw XML Supported objects Tables Primary Keys Foreign Keys ViewsmojoPortal: 2.3.6.1: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2361-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Office Web.UI: Alpha preview: This is the first alpha release. Very exciting moment... This download is just the demo application : "Contoso backoffice Web App". No source included for the moment, just the app, for testing purposes and for feedbacks !! This package includes a set of official MS Office icons (143 png 16x16 and 132 png 32x32) to really make a great app ! Please rate and give feedback ThanksParallel Programming with Microsoft Visual C++: Drop 6 - Chapters 4 and 5: This is Drop 6. It includes: Drafts of the Preface, Introduction, Chapters 2-7, Appendix B & C and the glossary Sample code for chapters 2-7 and Appendix A & B. The new material we'd like feedback on is: Chapter 4 - Parallel Aggregation Chapter 5 - Futures The source code requires Visual Studio 2010 in order to run. There is a known bug in the A-Dash sample when the user attempts to cancel a parallel calculation. 
We are working to fix this.Catel - WPF and Silverlight MVVM library: 1.1: (+) Styles can now be changed dynamically, see Examples application for a how-to (+) ViewModelBase class now have a constructor that allows services injection (+) ViewModelBase services can now be configured by IoC (via Microsoft.Unity) (+) All ViewModelBase services now have a unit test implementation (+) Added IProcessService to run processes from a viewmodel with directly using the process class (which makes it easier to unit test view models) (*) If the HasErrors property of DataObjec...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...Kooboo CMS: Kooboo CMS 3.0 CTP: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL, RavenDB and SQLCE. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder to enable...UOB & ME: UOB ME 2.6: UOB ME 2.6????: ???? V1.0: ???? V1.0 ??Password Generator: 2.2: Parallel password generation Password strength calculation ( Same method used by Microsoft here : https://www.microsoft.com/protect/fraud/passwords/checker.aspx ) Minor code refactoringVisual Studio 2010 Architecture Tooling Guidance: Spanish - Architecture Guidance: Francisco Fagas http://geeks.ms/blogs/ffagas, Microsoft Most Valuable Professional (MVP), localized the Visual Studio 2010 Quick Reference Guidance for the Spanish communities, based on http://vsarchitectureguide.codeplex.com/releases/view/47828. Release Notes The guidance is available in a xps-only (default) or complete package. The complete package contains the files in xps, pdf and Office 2007 formats. 2011-01-24 Publish version 1.0 of the Spanish localized bits.ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. 
These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager the html generation has been optimized, the html page size is much smaller nowFacebook Graph Toolkit: Facebook Graph Toolkit 0.6: new Facebook Graph objects: Application, Page, Post, Comment Improved Intellisense documentation new Graph Api connections: albums, photos, posts, feed, home, friends JSON Toolkit upgraded to version 0.9 (beta release) with bug fixes and new features bug fixed: error when handling empty JSON arrays bug fixed: error when handling JSON array with square or large brackets in the message bug fixed: error when handling JSON obejcts with double quotation in the message bug fixed: erro...Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-01-23: Code samples for Visual Studio 2008MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (3): Instructions for installation: http://www.galasoft.ch/mvvm/installing/manually/ Includes the hotfix templates for Windows Phone 7 development. This is only relevant if you didn't already install the hotfix described at http://blog.galasoft.ch/archive/2010/07/22/mvvm-light-hotfix-for-windows-phone-7-developer-tools-beta.aspx.New ProjectsAsp.Net MvcRoute Debugger Visualizer: Asp.Net MvcRoute Debugger Visualizer is an extension for Visual Studio 2010.It displays the matched route data and datatokens of all defined routes in your mvc applicationAutoLaunch for Windows Embedded Compact (CE): AutoLaunch component for Windows Embedded CE 6.0 and Windows Embedded Compact 7BicubicResizeSilverlight: A client-side Silverlight library for performing a bicubic resize. Cloud Monitoring Studio: Cloud Monitoring Studio provides real-time performance monitoring mechanism for Azure deployed applications/Azure roles with the help of easy-to-understand graphs. Users can configure required performance counters for individual Azure roles through easy-to-use & intuitive GUI.CoreCon for Windows Embedded Compact (CE): CoreCon component for Windows Embedded CE 6.0 and Windows Embedded Compact 7, to simplify the tasks needed to include CoreCon files to the OS image.DokanDiscUtils Bridge: This a a bridge for the Dokan library and the DiscUtils library that allows Any partition/image that discutils can open to be mounted using dokanEverything About My Share: Everything About My ShareExtraordinary Ideas: This project contains some Extra oradinary solutions for difficult software problems.. Article's are in Turkish Language tahircakmak.blogspot.comIE9 Helper: The IE9 Helper is an ASP.NET Helper to makes adding IE9 features simple.ITSolutions: .NET Sandbox SolutionsKitty - Async Server: Kitty is an Async Server with C# and Java versions. It builds on Parallelism to deliver a Internet Scale Server StarterKit.mediainfocollector: Retrieves info from mediafilesMini - Async WebSocket Server: Mini is an Async WebSocket Server with versions in C# and Java. It builds on Parallelism and Kitty to deliver an Internet Scale WebSocket Server StarterKit. MVC N-tier EMR Sample Application: This sample uses EMR to showcase a MVC N-Tier application. 
It uses WCF to separate the presentation from the DB/Backend.MyPhotoGallery: MyPhotoGallery is a photo gallery developed in ASP.Net MVC.NuGet: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development.OneCode CodeBox: OneCode CodeBoxonecode toolkit: toolsOrchard SSL: Orchard Secured Sockets Layer (SSL) adds new features to force the usage of SSL in sections like authentication pages or the dashboard.OrchardCMS - Lokalizacija: OrchardCMS - Lokalizacija je projekat lokalizacije Orchard CMS-a na srpski, hrvatski, srpskohrvatski, hrvatskosrpski i sve ostale jezike sa podrucija ex-YU. Pokušacemo da iskorisitmo slicnosti u jezicima kako bi jedni drugima pomogli i olakšali lokalizaciju Orchard-a. Pyxis Install Maker: This utility creates single file installs for Pyxis 2 applications.SaleCard: TestSharePoint Social Networking: A Collection of solutions aimed at adding social network to your SharePoint 2010 site. The collection will include web parts, ribbon extensions and other type of solutions that will add or enhance the social networking capabilities of SharePoint. The solution is written in c#.SmartScreenTerminator: Pissed off by the stupid "smart screen" asking you to "remember to protect your password" every time you click someone else's blog link in your Windows Live homepage? Use this three line appication to allow your IE to automatically navigate to the destination page.StreetScene: StreetScene is a sample project showing how to use Microsoft technologies such as ASP.NET, Azure and Silverlight to build rich applications.svmnetx: testtest_project: test projectTweet-Links: Tweet-Links is designed for ultra-fast sending a twitter selected text and files and images.WCMF - Windows Content Management Framework: WCMF is a framework that allows one to build CMS appllications that target the Windows platform (whereas a typical CMS system is usually web based). Used technologies are MEF, WCF, WPF (including Silverlight) and Caliburn.Micro.WebNoteAOP: A sample project for aspect oriented programming with PostSharp. The main purpose is a short introduction. All aspects are made in a very simple way. They should be useful for simple applications. Please keep in mind, that complex problems often require a complex solution, too.

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"; string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations throw the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components? 
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron
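As an aside on the parsing and persistence code above, here is a minimal sketch of how the same property list could be read with an XML parser and written with parameterized commands instead of regex matching and concatenated SQL. The file path, the assumption that the VacationProperties table already exists, and the open SqlConnection are illustrative only, not taken from the question:

    using System;
    using System.Data.SqlClient;
    using System.Xml.Linq;

    static class VacationConfig
    {
        // Reads the <PropertyNames> element out of the VacationProperties XML document.
        public static string[] LoadPropertyNames(string path)
        {
            var doc = XDocument.Load(path);                        // e.g. "properties.xml"
            var names = (string)doc.Root.Element("PropertyNames");
            return names.Split(',');
        }

        // Writes one property row using parameters instead of concatenated SQL.
        public static void SaveProperty(SqlConnection openConn, string name, string type, bool required)
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO VacationProperties (PropertyName, PropertyType, IsRequired) " +
                "VALUES (@name, @type, @required)", openConn))
            {
                cmd.Parameters.AddWithValue("@name", name);
                cmd.Parameters.AddWithValue("@type", type);
                // IsRequired is a varchar column in the sketch above, so store "true"/"false" text.
                cmd.Parameters.AddWithValue("@required", required ? "true" : "false");
                cmd.ExecuteNonQuery();
            }
        }
    }

Parameterized commands sidestep both SQL injection and the quoting bugs that string-built INSERTs tend to produce.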

    Read the article

  • Ajax, Lizard Brain Web Design, JSF, Struts, JavaScript, Mobile Web, Flash, jQuery, GWT, Harmony at I

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Web Conference and Workshops has announced the complete program of over 30 sessions on how browser and rich web technologies such as AJAX, DHTML, Mashups, Web 2.0, Enterprise 2.0 technologies, and Rich UI technologies are making money and gaining market-share for some of the leading businesses in the world. The GIDS.Web track at Great Indian Developer Summit takes place 21 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Web at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 21 and 23 April 2010, GIDS.Web offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Web is led by the leading Java EE and Ajax developer, speaker, and author Marty Hall. The best of India's Java and RIA programmers have learnt the subject from Marty's seminal books Core Servlets and JavaServer Pages (first and second editions), More Servlets and JavaServer Pages, and Core Web Programming (first and second editions) from Prentice Hall and Sun Microsystems Press. Marty's keynote address is a comparison of approaches to building rich Internet applications with Ajax. Marty says Ajax development is difficult, and there are several fundamentally different strategies to building Ajaxified Web applications. The keynote address will survey the three most important of these approaches: using an Ajax-enabled JavaScript library such as jQuery, Prototype, Scriptaculous, Dojo, or Ext/JS; using a Web framework such as JSF 2.0 or Struts 2 that has integrated Ajax support; using the Google Web Toolkit (GWT) to build "pure Java" Ajax applications. The talk will compare and contrast these three approaches, discussing the types of applications that fit best for each option. Over the course of the summit Marty will conduct several more sessions on "Choosing an Ajax/JavaScript Toolkit: A Comparison of the Most Popular JavaScript Libraries", "Pure Java Ajax: An Overview of GWT 2.0", "Integrated Ajax Support in JSF 2.0" and "Ajax Support in the Prototype JavaScript Library". The second keynote by the head of Adobe's Flash initiative in India, Ramesh Srinivasaraghavan, explores the state of art in web application development and identify trends that could transform the way we create and use web applications. The talk explains how the Adobe Flash Platform has fuelled this revolution with an integrated set of technologies for delivering the most compelling applications, content and video to the widest possible audience. The Director of Forum Nokia will explain how cloud computing coupled with mobile applications enable consumers to have access to powerful services and improved user experiences never before thought possible. IEEE's 2010 President-Elect Sorel Reisman's afternoon address steps to improve the IT profession in India. 
Featured talks at GID.Web also include: Web 2.0 Checklist - Deconstructing Modern Websites, Scott Davis Choosing an Ajax/JavaScript Toolkit: Comparison of Popular JavaScript Libraries, Marty Hall Lizard Brain Web Design, Scott Davis Effective Design Processes and Resources for Mobile Web Development, Arabella David NoSQL: The Shift to a Non-relational World, Nosh Petigara Open Source Web Debugging Tools, Matthew McCullough Building Line of Business Applications with Silverlight 4.0, Stephen Forte Hadoop - Divide and Conquer, Matthew McCullough Adobe Flash Catalyst for Agile Interaction Design, Harish Sivaramakrishnan Using jQuery and AJAX to Build Front-ends for ASP.NET and ASP.NET MVC, Pandurang Nayak First Steps to IT Heaven Through the Cloud. Part II: .WEB, Simone Brunozzi Building Rich Internet Applications with SL RIA Web Services, Pandurang Nayak Enriching Cloud Applications with Adobe Flash Platform, Ramesh Srinivasaraghavan Payments for the Web.future, Khurram Khan and Praveen Alavilli Longevity of Scalable Systems, Nishad Kamat Transform yourself into a Mobile App Developer Using Web Run Time, Balagopal K S Developing Multi Screen Applications on Adobe Flash Platform, Hemanth Sharma Why Harmony and For Whom?, Himanshu Goyal IIS Hosting Solution for ASP.net and PHP Web Sites, Nahas Mohammed Building Pluggable Web applications using Django, Lakshman Prasad Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: Windows Azure Deep Dive, Ramaprasanna Chellamuthu Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. 
For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system. So I guess the question goes out to you: how do you name your packages, and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can so that I can move on. Thanks in advance!
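For illustration only (hypothetical package names, not from the question), the two conventions look roughly like this:

    Functional (by layer):
        com.example.app.ui.OrderScreen
        com.example.app.domain.Order
        com.example.app.orm.OrderDao

    Logical (by feature):
        com.example.app.orders.OrderScreen
        com.example.app.orders.Order
        com.example.app.orders.OrderDao    (can stay package-private, hidden from other features)

In the second layout the whole order-processing slice lives in one package, which is what makes the access-modifier argument above possible.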

    Read the article

  • ASP.NET Web Page Not Available

    - by hahuang65
    It's pretty difficult to show code for ASP.NET here, so I will try my best to describe my problem. I have a FileUploadControl and a Button that calls a function when it's clicked. It seems that the Button function works when there is nothing chosen for my FileUploadControl. However, when there is something chosen in the FileUploadControl (I have selected a file to upload), there is a problem when I click the button. It completely does not matter what the function does (it could just be writing to a label, even when it has nothing to do with the FileUploadControl). The error I get is: This webpage is not available. The webpage at http://localhost:2134/UploadMedia/Default.aspx might be temporarily down or it may have moved permanently to a new web address. I have searched on Google, and people seem to have had problems with this, but different causes from me. They have said that their ASP.NET Development Server port is actually different from their port in the address bar. This is not the case for me. Also, another problem people have had is with Use Dynamic Ports. I have tried both true and false. I have also tried different ports, and I have always gotten the same error. This is really driving me crazy because it doesn't matter what the code in the buttonFunction is, it doesn't work as long as there is something in the FileUploadControl. If there is nothing, it seems to work fine. Here is the code for the ASP.NET Controls: <asp:FileUpload id="FileUploadControl" runat="server" /> <asp:Button runat="server" id="UploadButton" text="Upload" OnClick="uploadClicked" /> <br /><br /> <asp:Label runat="server" id="StatusLabel" text="Upload status: " /> And this is the code for the button function: protected void uploadClicked(object sender, EventArgs e) { if (FileUploadControl.HasFile) { string filename = Path.GetFileName(FileUploadControl.FileName); //Check if the entered username already exists in the database. String sqlDupStmt = "Select songPath from Songs where songPath ='" + Server.MapPath("~/Uploads/") + filename + "'"; SqlConnection sqlDupConn = new SqlConnection(@"Data Source = .\SQLEXPRESS; AttachDbFilename = |DataDirectory|\Database.mdf; Integrated Security = True; User Instance = True;"); SqlCommand sqlDupCmd = new SqlCommand(sqlDupStmt, sqlDupConn); sqlDupCmd.Connection.Open(); SqlDataReader sqlDupReader = sqlDupCmd.ExecuteReader(CommandBehavior.CloseConnection); if (sqlDupReader.Read()) { StatusLabel.Text = "Upload status: The file already exists."; sqlDupReader.Close(); } else { sqlDupReader.Close(); //See "How To Use DPAPI (Machine Store) from ASP.NET" for information about securely storing connection strings. String sqlStmt = "Insert into Songs values (@songpath);"; SqlConnection sqlConn = new SqlConnection(@"Data Source = .\SQLEXPRESS; AttachDbFilename = |DataDirectory|\Database.mdf; Integrated Security = True; User Instance = True; uid=sa; pwd=password;"); SqlCommand cmd = new SqlCommand(sqlStmt, sqlConn); SqlParameter sqlParam = null; //Usage of Sql parameters also helps avoid SQL Injection attacks. sqlParam = cmd.Parameters.Add("@userName", SqlDbType.VarChar, 150); sqlParam.Value = Server.MapPath("~/Uploads/") + filename; //Attempt to add the song to the database. try { sqlConn.Open(); cmd.ExecuteNonQuery(); FileUploadControl.SaveAs(Server.MapPath("~/Uploads/") + filename); songList.Items.Add(filename); StatusLabel.Text = "Upload status: File uploaded!"; } catch (Exception ex) { StatusLabel.Text = "Upload status: The file could not be uploaded. 
The following error occurred: " + ex.Message; } finally { sqlConn.Close(); } } } } But this button function produces the same result: protected void uploadClicked(object sender, EventArgs e) { StatusLabel.Text = "FooBar"; } Has anyone had this problem before, or might know what the cause is? Thanks!
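One commonly suggested thing to check in a case like this, offered here only as an assumption about the setup and not a confirmed diagnosis: ASP.NET rejects requests larger than the default 4 MB maxRequestLength, and an oversized upload can make the development server drop the connection before the click handler ever runs, which surfaces in the browser as "this webpage is not available." The limit can be raised in web.config, for example:

    <configuration>
      <system.web>
        <!-- maxRequestLength is in KB; 51200 KB = 50 MB (illustrative value) -->
        <httpRuntime maxRequestLength="51200" executionTimeout="300" />
      </system.web>
    </configuration>

If small files upload fine and only larger ones reproduce the error, this setting is the first thing worth ruling out.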

    Read the article

  • Using WeakReference to resolve issue with .NET unregistered event handlers causing memory leaks.

    - by Eric
    The problem: Registered event handlers create a reference from the event to the event handler's instance. If that instance fails to unregister the event handler (via Dispose, presumably), then the instance memory will not be freed by the garbage collector. Example: class Foo { public event Action AnEvent; public void DoEvent() { if (AnEvent != null) AnEvent(); } } class Bar { public Bar(Foo l) { l.AnEvent += l_AnEvent; } void l_AnEvent() { } } If I instantiate a Foo, and pass this to a new Bar constructor, then let go of the Bar object, it will not be freed by the garbage collector because of the AnEvent registration. I consider this a memory leak, and seems just like my old C++ days. I can, of course, make Bar IDisposable, unregister the event in the Dispose() method, and make sure to call Dispose() on instances of it, but why should I have to do this? I first question why events are implemented with strong references? Why not use weak references? An event is used to abstractly notify an object of changes in another object. It seems to me that if the event handler's instance is no longer in use (i.e., there are no non-event references to the object), then any events that it is registered with should automatically be unregistered. What am I missing? I have looked at WeakEventManager. Wow, what a pain. Not only is it very difficult to use, but its documentation is inadequate (see http://msdn.microsoft.com/en-us/library/system.windows.weakeventmanager.aspx -- noticing the "Notes to Inheritors" section that has 6 vaguely described bullets). I have seen other discussions in various places, but nothing I felt I could use. I propose a simpler solution based on WeakReference, as described here. My question is: Does this not meet the requirements with significantly less complexity? To use the solution, the above code is modified as follows: class Foo { public WeakReferenceEvent AnEvent = new WeakReferenceEvent(); internal void DoEvent() { AnEvent.Invoke(); } } class Bar { public Bar(Foo l) { l.AnEvent += l_AnEvent; } void l_AnEvent() { } } Notice two things: 1. The Foo class is modified in two ways: The event is replaced with an instance of WeakReferenceEvent, shown below; and the invocation of the event is changed. 2. The Bar class is UNCHANGED. No need to subclass WeakEventManager, implement IWeakEventListener, etc. OK, so on to the implementation of WeakReferenceEvent. This is shown here. Note that it uses the generic WeakReference that I borrowed from here: http://damieng.com/blog/2006/08/01/implementingweakreferencet I had to add Equals() and GetHashCode() to his class, which I include below for reference. 
class WeakReferenceEvent { public static WeakReferenceEvent operator +(WeakReferenceEvent wre, Action handler) { wre._delegates.Add(new WeakReference<Action>(handler)); return wre; } public static WeakReferenceEvent operator -(WeakReferenceEvent wre, Action handler) { foreach (var del in wre._delegates) if (del.Target == handler) { wre._delegates.Remove(del); return wre; } return wre; } HashSet<WeakReference<Action>> _delegates = new HashSet<WeakReference<Action>>(); internal void Invoke() { HashSet<WeakReference<Action>> toRemove = null; foreach (var del in _delegates) { if (del.IsAlive) del.Target(); else { if (toRemove == null) toRemove = new HashSet<WeakReference<Action>>(); toRemove.Add(del); } } if (toRemove != null) foreach (var del in toRemove) _delegates.Remove(del); } } public class WeakReference<T> : IDisposable { private GCHandle handle; private bool trackResurrection; public WeakReference(T target) : this(target, false) { } public WeakReference(T target, bool trackResurrection) { this.trackResurrection = trackResurrection; this.Target = target; } ~WeakReference() { Dispose(); } public void Dispose() { handle.Free(); GC.SuppressFinalize(this); } public virtual bool IsAlive { get { return (handle.Target != null); } } public virtual bool TrackResurrection { get { return this.trackResurrection; } } public virtual T Target { get { object o = handle.Target; if ((o == null) || (!(o is T))) return default(T); else return (T)o; } set { handle = GCHandle.Alloc(value, this.trackResurrection ? GCHandleType.WeakTrackResurrection : GCHandleType.Weak); } } public override bool Equals(object obj) { var other = obj as WeakReference<T>; return other != null && Target.Equals(other.Target); } public override int GetHashCode() { return Target.GetHashCode(); } } Its functionality is trivial. I override operator + and - to get the += and -= syntactic sugar matching events. These create WeakReferences to the Action delegate. This allows the garbage collector to free the event target object (Bar in this example) when nobody else is holding on to it. In the Invoke() method, simply run through the weak references and call their Target Action. If any dead (i.e., garbage collected) references are found, remove them from the list. Of course, this only works with delegates of type Action. I tried making this generic, but ran into the missing where T : delegate constraint in C#! As an alternative, simply modify class WeakReferenceEvent to be a WeakReferenceEvent<T>, and replace the Action with Action<T>. Fix the compiler errors and you have a class that can be used like so: class Foo { public WeakReferenceEvent<int> AnEvent = new WeakReferenceEvent<int>(); internal void DoEvent() { AnEvent.Invoke(5); } } Hopefully this will help someone else when they run into the mystery .NET event memory leak!
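For completeness, a rough way to see the difference in behavior, assuming the Foo/Bar/WeakReferenceEvent classes above (best run as a release build, since the debug JIT can keep locals alive longer than expected):

    var foo = new Foo();
    var bar = new Bar(foo);
    var barTracker = new System.WeakReference(bar);   // plain BCL WeakReference, only used to observe collection
    bar = null;

    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    // With an ordinary event, barTracker.IsAlive stays true, because foo's invocation list pins Bar.
    // With WeakReferenceEvent, the expectation is that it becomes false.
    Console.WriteLine(barTracker.IsAlive);

The usual caveat with weak-delegate schemes also applies here: the weak reference is to the delegate itself, so if nothing else holds the handler delegate, it can be collected and the event silently stop firing even while its target object is still alive. It is worth testing against the real subscription pattern.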

    Read the article

  • how to link with static mySQL C library with Visual Studio 2008?

    - by Jean-Denis Muys
Hi, My project is running fine, but its requirement for some DLLs means it cannot be simply dragged and dropped by the end user. The DLLs are not loaded when put side by side with my executable, because my executable is not an application, and its location is not in the few locations where Windows looks for DLLs. I already asked a question about how to make their loading happen. None of the suggestions worked (see the question at http://stackoverflow.com/questions/2637499/how-can-a-win32-app-plugin-load-its-dll-in-its-own-directory). So I am now exploring another way: get rid of the DLLs altogether, and link with static versions of them. This is failing for the last of those DLLs. So I am at this point where all but one of the libraries are statically linked, and everything is fine. The last library is the standard C library for mySQL, aka Connector/C. The problem I have may or may not be related to that origin. Whenever I switch to the static library in the linker's additional dependencies, I get the following errors (log at the end): 1- about 40 duplicate symbols (e.g. _toupper) mutually between LIBCMT.lib and MSVCRT.lib. Interestingly, I can't control the inclusion of these two libraries: they are from Visual Studio and automatically included. So why are these symbols duplicated when I include mySQL's static lib, but not its DLL? Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\OLDNAMES.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\msvcprt.lib: Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\LIBCMT.lib: LIBCMT.lib(setlocal.obj) : error LNK2005: _setlocale already defined in MSVCRT.lib(MSVCR90.dll) Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: MSVCRT.lib(MSVCR90.dll) : error LNK2005: _toupper already defined in LIBCMT.lib(toupper.obj) 2- two warnings that MSVCRT and LIBCMT conflict with use of other libs, with a suggestion to use /NODEFAULTLIB:library. I don't understand that suggestion: what am I supposed to do and how? LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library 3- an external symbol is undefined: _main. So does that mean that the static mySQL lib (but not the DLL) references a _main symbol? For the sake of it, I tried to define an empty function named _main() in my code, with no difference. LIBCMT.lib(crt0.obj) : error LNK2001: unresolved external symbol _main As mentioned in my first question, my code is a port of a fully working Mac version of the code. It's a plugin for a host application that I don't control. The port currently works, albeit with installation issues due to that lone remaining DLL. As a Mac programmer I am rather disoriented with Visual Studio and Windows, which I find confusing, poorly designed and documented, with error messages that are very difficult to grasp and act upon. So I will be very grateful for any help.
Here is the full set of errors: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\OLDNAMES.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\msvcprt.lib: 1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\LIBCMT.lib: 1LIBCMT.lib(setlocal.obj) : error LNK2005: _setlocale already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __encode_pointer already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __encoded_null already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tidtable.obj) : error LNK2005: __decode_pointer already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tolower.obj) : error LNK2005: _tolower already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(invarg.obj) : error LNK2005: __set_invalid_parameter_handler already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(invarg.obj) : error LNK2005: __invalid_parameter_noinfo already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: __amsg_exit already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: __initterm_e already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0dat.obj) : error LNK2005: _exit already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crtheap.obj) : error LNK2005: __malloc_crt already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(dosmap.obj) : error LNK2005: __errno already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(file.obj) : error LNK2005: __iob_func already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(mlock.obj) : error LNK2005: __unlock already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(mlock.obj) : error LNK2005: _lock already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(winxfltr.obj) : error LNK2005: __CppXcptFilter already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_a already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xi_z already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_a already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(crt0init.obj) : error LNK2005: ___xc_z already defined in MSVCRT.lib(cinitexe.obj) 1LIBCMT.lib(hooks.obj) : error LNK2005: "void __cdecl terminate(void)" (?terminate@@YAXXZ) already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(winsig.obj) : error LNK2005: _signal already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(fflush.obj) : error LNK2005: _fflush already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(tzset.obj) : error LNK2005: __tzset already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(_ctype.obj) : error LNK2005: _isspace already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(_ctype.obj) : error LNK2005: _iscntrl already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(getenv.obj) : error LNK2005: _getenv already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(strnicmp.obj) : error LNK2005: __strnicmp already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(osfinfo.obj) : error LNK2005: __get_osfhandle already defined in MSVCRT.lib(MSVCR90.dll) 1LIBCMT.lib(osfinfo.obj) : error LNK2005: __open_osfhandle already defined in MSVCRT.lib(MSVCR90.dll) [...] 
1 Searching C:\Program Files\Microsoft Visual Studio 9.0\VC\lib\MSVCRT.lib: 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _toupper already defined in LIBCMT.lib(toupper.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isalpha already defined in LIBCMT.lib(_ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _wcschr already defined in LIBCMT.lib(wcschr.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isdigit already defined in LIBCMT.lib(_ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _islower already defined in LIBCMT.lib(ctype.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: __doserrno already defined in LIBCMT.lib(dosmap.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _strftime already defined in LIBCMT.lib(strftime.obj) 1MSVCRT.lib(MSVCR90.dll) : error LNK2005: _isupper already defined in LIBCMT.lib(_ctype.obj) [...] 1Finished searching libraries 1 Creating library z:\PCdev\Test\RK_Demo_2004\plugins\Test.bundle\contents\windows\Test.lib and object z:\PCdev\Test\RK_Demo_2004\plugins\Test.bundle\contents\windows\Test.exp 1Searching libraries [...] 1Finished searching libraries 1LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library 1LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library 1LIBCMT.lib(crt0.obj) : error LNK2001: unresolved external symbol _main
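For what it is worth, the usual reading of this error pattern is a C runtime mismatch: LIBCMT is the static CRT (/MT) and MSVCRT the DLL CRT (/MD), and the static Connector/C library appears to drag in the one the rest of the project is not using. Two hedged things to try in Visual Studio 2008 (menu paths below; which CRT variant of the mySQL static lib exists depends on the Connector/C build you downloaded):

    Project Properties > C/C++ > Code Generation > Runtime Library
        pick /MD (or /MDd for Debug) if the rest of the plugin uses the DLL CRT,
        and use a mysql static lib built the same way, if one is available
    Project Properties > Linker > Input > Ignore Specific Library: LIBCMT.lib
        (equivalent to adding /NODEFAULTLIB:LIBCMT.lib on the linker command line,
        which is what the LNK4098 hint is pointing at)

The unresolved _main usually disappears with the same change, because it is LIBCMT's crt0.obj (an EXE entry point, not a DLL one) that demands it once LIBCMT gets pulled in.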

    Read the article

  • Fast multi-window rendering with C#

    - by seb
I've been searching and testing different kinds of rendering libraries for C# for many weeks now. So far I haven't found a single library that works well on multi-windowed rendering setups. The requirement is to be able to run the program on 12+ monitor setups (financial charting) without latencies on a fast computer. Each window needs to update multiple times every second. While doing this the CPU needs to do lots of intensive and time-critical tasks, so some of the burden has to be shifted to GPUs. That's where hardware rendering steps in, in other words DirectX or OpenGL. I have tried GDI+ with Windows Forms and figured it's way too slow for my needs. I have tried OpenGL via OpenTK (on a Windows Forms control) which seemed decently quick (I still have some tests to run on it) but painfully difficult to get working properly (hard to find/program good text rendering libraries). Recently I tried DirectX9, DirectX10 and Direct2D with Windows Forms via SharpDX. I tried both a separate device for each window and a single device/multiple swap chains approach. All of these resulted in very poor performance on multiple windows. For example, if I set the target FPS to 20 and open 4 full-screen windows on different monitors, the whole operating system starts lagging very badly. Rendering is simply clearing the screen to black, no primitives rendered. CPU usage on this test was about 0% and GPU usage about 10%, so I don't understand what the bottleneck is here. My development computer is very fast, i7 2700k, AMD HD7900, 16GB RAM, so the tests should definitely run on this one. In comparison I did some DirectX9 tests on the C++/Win32 API with one device/multiple swap chains and I could open 100 windows spread all over the 4-monitor workspace (with a 3D teapot rotating on them) and still had a perfectly responsive operating system (fps was dropping of course on the rendering windows quite badly, to around 5, which is what I would expect running 100 simultaneous renderings). Does anyone know any good ways to do multi-windowed rendering in C#, or am I forced to rewrite my program in C++ to get that performance (major pain)? I guess I'm giving OpenGL another shot before I go the C++ route... I'll report any findings here.
Test methods for reference: For C# DirectX one-device multiple swapchain test I used the method from this excellent answer: Display Different images per monitor directX 10 Direct3D10 version: I created the d3d10device and DXGIFactory like this: D3DDev = new SharpDX.Direct3D10.Device(SharpDX.Direct3D10.DriverType.Hardware, SharpDX.Direct3D10.DeviceCreationFlags.None); DXGIFac = new SharpDX.DXGI.Factory(); Then initialized the rendering windows like this: var scd = new SwapChainDescription(); scd.BufferCount = 1; scd.ModeDescription = new ModeDescription(control.Width, control.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm); scd.IsWindowed = true; scd.OutputHandle = control.Handle; scd.SampleDescription = new SampleDescription(1, 0); scd.SwapEffect = SwapEffect.Discard; scd.Usage = Usage.RenderTargetOutput; SC = new SwapChain(Parent.DXGIFac, Parent.D3DDev, scd); var backBuffer = Texture2D.FromSwapChain<Texture2D>(SC, 0); _rt = new RenderTargetView(Parent.D3DDev, backBuffer); Drawing command executed on each rendering iteration is simply: Parent.D3DDev.ClearRenderTargetView(_rt, new Color4(0, 0, 0, 0)); SC.Present(0, SharpDX.DXGI.PresentFlags.None); DirectX9 version is very similar: Device initialization: PresentParameters par = new PresentParameters(); par.PresentationInterval = PresentInterval.Immediate; par.Windowed = true; par.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard; par.PresentationInterval = PresentInterval.Immediate; par.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16; par.EnableAutoDepthStencil = true; par.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8; // firsthandle is the handle of first rendering window D3DDev = new SharpDX.Direct3D9.Device(new Direct3D(), 0, DeviceType.Hardware, firsthandle, CreateFlags.SoftwareVertexProcessing, par); Rendering window initialization: if (parent.D3DDev.SwapChainCount == 0) { SC = parent.D3DDev.GetSwapChain(0); } else { PresentParameters pp = new PresentParameters(); pp.Windowed = true; pp.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard; pp.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8; pp.EnableAutoDepthStencil = true; pp.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16; pp.PresentationInterval = PresentInterval.Immediate; SC = new SharpDX.Direct3D9.SwapChain(parent.D3DDev, pp); } Code for drawing loop: SharpDX.Direct3D9.Surface bb = SC.GetBackBuffer(0); Parent.D3DDev.SetRenderTarget(0, bb); Parent.D3DDev.Clear(ClearFlags.Target, Color.Black, 1f, 0); SC.Present(Present.None, new SharpDX.Rectangle(), new SharpDX.Rectangle(), HWND); bb.Dispose(); C++ DirectX9/Win32 API test with multiple swapchains and one device code is here: http://pastebin.com/tjnRvATJ It's a modified version from Kevin Harris's nice example code.
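Since both CPU and GPU look idle, it may help to time the present call itself before guessing further. A hedged probe around the Direct3D10 drawing command above (Stopwatch and Debug are from the BCL; Parent.D3DDev, _rt and SC are the fields from the snippet):

    var sw = System.Diagnostics.Stopwatch.StartNew();
    Parent.D3DDev.ClearRenderTargetView(_rt, new SharpDX.Color4(0, 0, 0, 0));
    SC.Present(0, SharpDX.DXGI.PresentFlags.None);
    sw.Stop();
    System.Diagnostics.Debug.WriteLine("Clear+Present took " + sw.Elapsed.TotalMilliseconds + " ms");

If that number stays small while the desktop still lags, the stall is more likely in how the windows are scheduled and invalidated than in the rendering work itself.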

    Read the article

  • How to hide jQuery Sub-Menus(ddsmoothmenu)?

    - by Tim
    I'm new to jQuery and i must admit that i've understood nothing yet, the syntax appears to me as an unknown language although i thought that i had my experiences with javascript. Nevertheless i managed it to implement this menu in my asp.net masterpage's header. Even got it to work that the content-page is loaded with ajax with help from here. But finally i'm failing with the menu to disappear when the new page was loaded asynchronously. I dont know how to hide this accursed jQuery Menu. Following the part of the js-file where the events are registered for hiding/disappearing. I dont know how to get the part that is responsible for it and even i dont know how to implement that part in my Anchor-onclick function where i dont have a reference to the jQuery Object. buildmenu:function($, setting){ var smoothmenu=ddsmoothmenu var $mainmenu=$("#"+setting.mainmenuid+">ul") //reference main menu UL $mainmenu.parent().get(0).className=setting.classname || "ddsmoothmenu" var $headers=$mainmenu.find("ul").parent() $headers.hover( function(e){ $(this).children('a:eq(0)').addClass('selected') }, function(e){ $(this).children('a:eq(0)').removeClass('selected') } ) $headers.each(function(i){ //loop through each LI header var $curobj=$(this).css({zIndex: 100-i}) //reference current LI header var $subul=$(this).find('ul:eq(0)').css({display:'block'}) $subul.data('timers', {}) this._dimensions={w:this.offsetWidth, h:this.offsetHeight, subulw:$subul.outerWidth(), subulh:$subul.outerHeight()} this.istopheader=$curobj.parents("ul").length==1? true : false //is top level header? $subul.css({top:this.istopheader && setting.orientation!='v'? this._dimensions.h+"px" : 0}) $curobj.children("a:eq(0)").css(this.istopheader? {paddingRight: smoothmenu.arrowimages.down[2]} : {}).append( //add arrow images '<img src="'+ (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[1] : smoothmenu.arrowimages.right[1]) +'" class="' + (this.istopheader && setting.orientation!='v'? smoothmenu.arrowimages.down[0] : smoothmenu.arrowimages.right[0]) + '" style="border:0;" />' ) if (smoothmenu.shadow.enable){ this._shadowoffset={x:(this.istopheader?$subul.offset().left+smoothmenu.shadow.offsetx : this._dimensions.w), y:(this.istopheader? $subul.offset().top+smoothmenu.shadow.offsety : $curobj.position().top)} //store this shadow's offsets if (this.istopheader) $parentshadow=$(document.body) else{ var $parentLi=$curobj.parents("li:eq(0)") $parentshadow=$parentLi.get(0).$shadow } this.$shadow=$('<div class="ddshadow'+(this.istopheader? ' toplevelshadow' : '')+'"></div>').prependTo($parentshadow).css({left:this._shadowoffset.x+'px', top:this._shadowoffset.y+'px'}) //insert shadow DIV and set it to parent node for the next shadow div } $curobj.hover( function(e){ var $targetul=$subul //reference UL to reveal var header=$curobj.get(0) //reference header LI as DOM object clearTimeout($targetul.data('timers').hidetimer) $targetul.data('timers').showtimer=setTimeout(function(){ header._offsets={left:$curobj.offset().left, top:$curobj.offset().top} var menuleft=header.istopheader && setting.orientation!='v'? 0 : header._dimensions.w menuleft=(header._offsets.left+menuleft+header._dimensions.subulw>$(window).width())? (header.istopheader && setting.orientation!='v'? 
-header._dimensions.subulw+header._dimensions.w : -header._dimensions.w) : menuleft //calculate this sub menu's offsets from its parent if ($targetul.queue().length<=1){ //if 1 or less queued animations $targetul.css({left:menuleft+"px", width:header._dimensions.subulw+'px'}).animate({height:'show',opacity:'show'}, ddsmoothmenu.transition.overtime) if (smoothmenu.shadow.enable){ var shadowleft=header.istopheader? $targetul.offset().left+ddsmoothmenu.shadow.offsetx : menuleft var shadowtop=header.istopheader?$targetul.offset().top+smoothmenu.shadow.offsety : header._shadowoffset.y if (!header.istopheader && ddsmoothmenu.detectwebkit){ //in WebKit browsers, restore shadow's opacity to full header.$shadow.css({opacity:1}) } header.$shadow.css({overflow:'', width:header._dimensions.subulw+'px', left:shadowleft+'px', top:shadowtop+'px'}).animate({height:header._dimensions.subulh+'px'}, ddsmoothmenu.transition.overtime) } } }, ddsmoothmenu.showhidedelay.showdelay) }, function(e){ var $targetul=$subul var header=$curobj.get(0) clearTimeout($targetul.data('timers').showtimer) $targetul.data('timers').hidetimer=setTimeout(function(){ $targetul.animate({height:'hide', opacity:'hide'}, ddsmoothmenu.transition.outtime) if (smoothmenu.shadow.enable){ if (ddsmoothmenu.detectwebkit){ //in WebKit browsers, set first child shadow's opacity to 0, as "overflow:hidden" doesn't work in them header.$shadow.children('div:eq(0)').css({opacity:0}) } header.$shadow.css({overflow:'hidden'}).animate({height:0}, ddsmoothmenu.transition.outtime) } }, ddsmoothmenu.showhidedelay.hidedelay) } ) //end hover }) //end $headers.each() $mainmenu.find("ul").css({display:'none', visibility:'visible'}) } one link of my menu what i want to hide when the content is redirected to another page(i need "closeMenu-function"): <li><a href="DeliveryControl.aspx" onclick="AjaxContent.getContent(this.href);closeMenu();return false;">Delivery Control</a></li> In short: I want to fade out the submenus the same way they do automatically onblur, so that only the headermenu stays visible but i dont know how. Thanks, Tim EDIT: thanks to Starx' private-lesson in jQuery for beginners i solved it: I forgot the # in $("#smoothmenu1"). After that it was not difficult to find and call the hover-function from the menu's headers to let them fade out smoothly: $("#smoothmenu1").find("ul").hover(); Regards, Tim

    Read the article

  • Sorting/Paginating/Filtering Complex Multi-AR Object Tables in Rails

    - by Matt Rogish
    I have a complex table pulled from a multi-ActiveRecord object array. This listing is a combined display of all of a particular user's "favorite" items (songs, messages, blog postings, whatever). Each of these items is a full-fledged AR object. My goal is to present the user with a simplified search, sort, and pagination interface. The user need not know that the Song has a singer, and that the Message has an author -- to the end user both entries in the table will be displayed as "User". Thus, the search box will simply be a dropdown list asking them which to search on (User name, created at, etc.). Internally, I would need to convert that to the appropriate object search, combine the results, and display. I can, separately, do pagination (mislav will_paginate), sorting, and filtering, but together I'm having some problems combining them. For example, if I paginate the combined list of items, the pagination plugin handles it just fine. It is not efficient since the pagination is happening in the app vs. the DB, but let's assume the intended use-case would indicate the vast majority of the users will have less than 30 favorited items and all other behavior, server capabilities, etc. indicates this will not be a bottleneck. However, if I wish to sort the list I cannot sort it via the pagination plugin because it relies on the assumption that the result set is derived from a single SQL query, and also that the field name is consistent throughout. Thus, I must sort the merged array via ruby, e.g. @items.sort_by{ |i| i.whatever } But, since the items do not share common names, I must first interrogate the object and then call the correct sort by. For example, if the user wishes to sort by user name, if the sorted object is a message, I sort by author but if the object is a song, I sort by singer. This is all very gross and feels quite un-ruby-like. This same problem comes into play with the filter. If the user filters on the "parent item" (the message's thread, the song's album), I must translate that to the appropriate collection object method. Also gross. This is not the exact set-up but is close enough. Note that this is a legacy app so changing it is quite difficult, although not impossible. Also, yes there is some DRY that can be done, but don't focus on the style or elegance of the following code. Style/elegance of the SOLUTION is important, however! :D models: class User < ActiveRecord::Base ... has_and_belongs_to_many :favorite_messages, :class_name => "Message" has_and_belongs_to_many :favorite_songs, :class_name => "Song" has_many :authored_messages, :class_name => "Message" has_many :sung_songs, :class_name => "Song" end class Message < ActiveRecord::Base has_and_belongs_to_many :favorite_messages belongs_to :author, :class_name => "User" belongs_to :thread end class Song < ActiveRecord::Base has_and_belongs_to_many :favorite_songs belongs_to :singer, :class_name => "User" belongs_to :album end controller: def show u = User.find 123 @items = Array.new @items << u.favorite_messages @items << u.favorite_songs # etc. etc. @items.flatten! @items = @items.sort_by{ |i| i.created_at } @items = @items.paginate :page => params[:page], :per_page => 20 end def search # Assume user is searching for username like 'Bob' u = User.find 123 @items = Array.new @items << u.favorite_messages.find( :all, :conditions => "LOWER( author ) LIKE LOWER('%bob%')" ) @items << u.favorite_songs.find( :all, :conditions => "LOWER( singer ) LIKE ... " ) # etc. etc. @items.flatten! 
@items = @items.sort_by{ |i| determine appropriate sorting based on user selection } @items = @items.paginate :page => params[:page], :per_page => 20 end view: #index.html.erb ... <table> <tr> <th>Title (sort ASC/DESC links)</th> <th>Created By (sort ASC/DESC links)</th> <th>Collection Title (sort ASC/DESC links)</th> <th>Created At (sort ASC/DESC links)</th> </tr> <% @items.each do |item| %> <%= render :partial => "message", :locals => { :item => item } if item.is_a? Message %> <%= render :partial => "song", :locals => { :item => item } if item.is_a? Song %> <% end %> ... </table> #message.html.erb # shorthand, not real ruby print out message title, author name, thread title, message created at #song.html.erb # shorthand print out song title, singer name, album title, song created at
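On the "interrogate the object" part, one hedged way to keep that in a single place is a lookup from the user-facing sort field to a per-class key (the attribute names such as author.name and singer.name are guesses at the real columns):

    # Illustrative mapping: sort field => { class => lambda returning the comparable value }
    SORT_KEYS = {
      'name'       => { Message => lambda { |m| m.author.name  }, Song => lambda { |s| s.singer.name } },
      'collection' => { Message => lambda { |m| m.thread.title }, Song => lambda { |s| s.album.title } },
      'created_at' => { Message => lambda { |m| m.created_at   }, Song => lambda { |s| s.created_at  } }
    }

    def sorted_items(items, field, direction = :asc)
      sorted = items.sort_by { |i| SORT_KEYS[field][i.class].call(i) }
      direction == :desc ? sorted.reverse : sorted
    end

The filtering case can reuse the same idea by mapping each field to a per-class :conditions fragment instead of a lambda, so the type checks stay out of the controller and view.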

    Read the article

  • Amazon java.lang.VerifyError Android

    - by easycheese
    I have been trying to submit 2 separate apps into the Amazon App store but they keep being rejected. Here is the stack trace for the first: 11-05 11:14:36.488 E/AndroidRuntime(28128): FATAL EXCEPTION: AsyncTask #1 11-05 11:14:36.488 E/AndroidRuntime(28128): java.lang.RuntimeException: An error occured while executing doInBackground() 11-05 11:14:36.488 E/AndroidRuntime(28128): at android.os.AsyncTask$3.done(AsyncTask.java:200) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.lang.Thread.run(Thread.java:1096) 11-05 11:14:36.488 E/AndroidRuntime(28128): Caused by: java.lang.VerifyError: com.companionfree.WLThemeViewer.AmazonClientManager 11-05 11:14:36.488 E/AndroidRuntime(28128): at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java) 11-05 11:14:36.488 E/AndroidRuntime(28128): at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java) 11-05 11:14:36.488 E/AndroidRuntime(28128): at android.os.AsyncTask$2.call(AsyncTask.java:185) 11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 11-05 11:14:36.488 E/AndroidRuntime(28128): ... 
4 more And the relevant logcat for the second 10-12 15:41:48.929 D/dalvikvm( 2451): GC_FOR_MALLOC freed 8099 objects / 524416 bytes in 34ms 10-12 15:41:49.327 I/RPC ( 1563): rx thread timeout (1 clients): 10-12 15:41:49.828 I/RPC ( 1563): rx thread timeout (1 clients): 10-12 15:41:50.089 I/ActivityManager( 1563): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.companionfree.pushup/.MainScreen } 10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1 10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1 10-12 15:41:50.139 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeafa50), index=0, pid=1563, w=480, h=800 success 10-12 15:41:50.189 I/ActivityManager( 1563): Start proc com.companionfree.pushup for activity com.companionfree.pushup/.MainScreen: pid=2644 uid=10129 gids={1015, 3003} 10-12 15:41:50.319 I/RPC ( 1563): rx thread timeout (1 clients): 10-12 15:41:50.359 W/dalvikvm( 2644): VFY: Lcom/companionfree/pushup/WorkoutDbAdapter; is not instance of Landroid/app/Activity; 10-12 15:41:50.369 W/dalvikvm( 2644): VFY: bad arg 0 (into Landroid/app/Activity;) 10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting call to Lcom/amazon/android/Kiwi;.onActivityResult (Landroid/app/Activity;IILandroid/content/Intent;)Z 10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting opcode 0x71 at 0x0000 10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejected Lcom/companionfree/pushup/WorkoutDbAdapter;.onActivityResult (IILandroid/content/Intent;)V 10-12 15:41:50.369 W/dalvikvm( 2644): Verifier rejected class Lcom/companionfree/pushup/WorkoutDbAdapter; 10-12 15:41:50.369 D/AndroidRuntime( 2644): Shutting down VM 10-12 15:41:50.369 W/dalvikvm( 2644): threadid=1: thread exiting with uncaught exception (group=0x40025a70) 10-12 15:41:50.369 E/AndroidRuntime( 2644): FATAL EXCEPTION: main 10-12 15:41:50.369 E/AndroidRuntime( 2644): java.lang.VerifyError: com.companionfree.pushup.WorkoutDbAdapter 10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.companionfree.pushup.MainScreen.onCreateMainScreen(MainScreen.java) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.companionfree.pushup.MainScreen.onCreate(MainScreen.java) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1088) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2802) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2859) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.access$2300(ActivityThread.java:136) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2179) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.os.Handler.dispatchMessage(Handler.java:99) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.os.Looper.loop(Looper.java:143) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.main(ActivityThread.java:5073) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at java.lang.reflect.Method.invokeNative(Native Method) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at java.lang.reflect.Method.invoke(Method.java:521) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:858) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at 
com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616) 10-12 15:41:50.369 E/AndroidRuntime( 2644): at dalvik.system.NativeStart.main(Native Method) 10-12 15:41:50.379 W/ActivityManager( 1563): Force finishing activity com.companionfree.pushup/.MainScreen 10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1 10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1 10-12 15:41:50.419 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeff6b8), index=0, pid=1563, w=480, h=337 success 10-12 15:41:50.469 D/dalvikvm( 2451): GC_FOR_MALLOC freed 7889 objects / 521072 bytes in 105ms 10-12 15:41:50.819 I/RPC ( 1563): rx thread timeout (1 clients): I see the same verify error on both but I can't figure it out. The only common library used between the 2 apps is the FlurryAgent.jar for analytics. For the top app I have For the bottom app I have in the manifests. The only information I have been able to find out is about libraries (GSON) and needing to use dx but I am using Eclipse so that doesn't help. To make this more difficult, the error does NOT occur on the Android Market. Yet the testers at Amazon say that it FC 5/5 times on each of their devices (I tried using an emulator for their test devices and they worked fine). I know they use "wrapper" code around my app and I think it must be interfering in some way. Does anyone have any experience with this?
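Reading the second logcat literally, the verifier is rejecting a call to com.amazon.android.Kiwi.onActivityResult(Activity, ...) that the Amazon wrapper injected into WorkoutDbAdapter, which is not an Activity. If that reading is right, a hedged workaround is to avoid declaring methods named like Activity lifecycle callbacks (onActivityResult, onCreate, ...) on non-Activity classes:

    import android.content.Intent;

    // Hypothetical rename inside the non-Activity helper class; the body stays as it was.
    public class WorkoutDbAdapter {
        // was: public void onActivityResult(int requestCode, int resultCode, Intent data)
        public void handleActivityResult(int requestCode, int resultCode, Intent data) {
            // ... unchanged ...
        }
    }

The same pattern would explain the first app's crash if AmazonClientManager declares a similarly named method.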

    Read the article

  • Android: who can help me with setting up this google maps class please??

    - by Capsud
    Hi, Firstly this has turned out to be quite a long post so please bear with me as its not too difficult but you may need to clarify something with me if i haven't explained it correctly. So with some help the other day from guys on this forum, i managed to partially set up my 'mapClass' class, but i'm having trouble with it and its not running correctly so i would like some help if possible. I will post the code below so you can see. What Ive got is a 'Dundrum' class which sets up the listView for an array of items. Then ive got a 'dundrumSelector' class which I use to set up the setOnClickListener() methods on the listItems and link them to their correct views. DundrumSelector class.. public static final int BUTTON1 = R.id.anandaAddressButton; public static final int BUTTON2 = R.id.bramblesCafeAddressButton; public static final int BUTTON3 = R.id.brannigansAddressButton; public void onCreate(Bundle savedInstanceState){ super.onCreate(savedInstanceState); int position = getIntent().getExtras().getInt("position"); if(position == 0){ setContentView(R.layout.ananda); }; if(position == 1){ setContentView(R.layout.bramblescafe); }; if(position == 2){ setContentView(R.layout.brannigans); Button anandabutton = (Button) findViewById(R.id.anandaAddressButton); anandabutton.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { Intent myIntent = new Intent(view.getContext(),MapClass.class); myIntent.putExtra("button", BUTTON1); startActivityForResult(myIntent,0); } }); Button bramblesbutton = (Button) findViewById(R.id.bramblesCafeAddressButton); bramblesbutton.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { Intent myIntent = new Intent(view.getContext(),MapClass.class); myIntent.putExtra("button", BUTTON2); startActivityForResult(myIntent, 0); } }); etc etc.... Then what i did was set up static ints to represent the buttons which you can see at the top of this class, the reason for this is because in my mapClass activity I just want to have one method, because the only thing that is varying is the coordinates to each location. ie. i dont want to have 100+ map classes essentially doing the same thing other than different coordinates into the method. So my map class is as follows... case DundrumSelector.BUTTON1: handleCoordinates("53.288719","-6.241179"); break; case DundrumSelector.BUTTON2: handleCoordinates("53.288719","-6.241179"); break; case DundrumSelector.BUTTON3: handleCoordinates("53.288719","-6.241179"); break; } } private void handleCoordinates(String l, String b){ mapView = (MapView) findViewById(R.id.mapView); LinearLayout zoomLayout = (LinearLayout)findViewById(R.id.zoom); View zoomView = mapView.getZoomControls(); zoomLayout.addView(zoomView, new LinearLayout.LayoutParams( LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT)); mapView.displayZoomControls(true); mc = mapView.getController(); String coordinates[] = {l, b}; double lat = Double.parseDouble(coordinates[0]); double lng = Double.parseDouble(coordinates[1]); p = new GeoPoint( (int) (lat*1E6), (int) (lng*1E6)); mc.animateTo(p); mc.setZoom(17); mapView.invalidate(); } Now this is where my problem is. The onClick() events don't even work from the listView to get into the correct views. I have to comment out the methods in 'DundrumSelector' before I can get into their views. And this is what I dont understand, firstly why wont the onClick() events work, because its not even on that next view where the map is. 
I know this is a very long post and it might be quite confusing, so let me know if you want any clarification. Just to recap, what I'm trying to do is have one class that sets up the map coordinates, like what I'm attempting in my 'mapClass'. Please can someone help or suggest another way of doing this! Thanks a lot everyone for reading this.
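On the "one map class" part, a hedged alternative to the BUTTON1/BUTTON2 constants is to put the coordinates themselves in the Intent, so MapClass never needs a switch at all (the literal values below are just the ones from the snippet above):

    // In DundrumSelector, inside each click listener:
    Intent myIntent = new Intent(view.getContext(), MapClass.class);
    myIntent.putExtra("lat", "53.288719");
    myIntent.putExtra("lng", "-6.241179");
    startActivityForResult(myIntent, 0);

    // In MapClass.onCreate(), after setContentView(...):
    String lat = getIntent().getStringExtra("lat");
    String lng = getIntent().getStringExtra("lng");
    handleCoordinates(lat, lng);

That still leaves the question of why the list's onClick handlers are not firing, but it does collapse the per-location plumbing into one place.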

    Read the article

  • Matplotlib plotting non uniform data in 3D surface

    - by Raj Tendulkar
    I have a simple code to plot the points in 3D for Matplotlib as below - from mpl_toolkits.mplot3d import axes3d import matplotlib.pyplot as plt import numpy as np from numpy import genfromtxt import csv fig = plt.figure() ax = fig.add_subplot(111, projection='3d') my_data = genfromtxt('points1.csv', delimiter=',') points1X = my_data[:,0] points1Y = my_data[:,1] points1Z = my_data[:,2] ## I remove the header of the CSV File. points1X = np.delete(points1X, 0) points1Y = np.delete(points1Y, 0) points1Z = np.delete(points1Z, 0) # Convert the array to 1D array points1X = np.reshape(points1X,points1X.size) points1Y = np.reshape(points1Y,points1Y.size) points1Z = np.reshape(points1Z,points1Z.size) my_data = genfromtxt('points2.csv', delimiter=',') points2X = my_data[:,0] points2Y = my_data[:,1] points2Z = my_data[:,2] ## I remove the header of the CSV File. points2X = np.delete(points2X, 0) points2Y = np.delete(points2Y, 0) points2Z = np.delete(points2Z, 0) # Convert the array to 1D array points2X = np.reshape(points2X,points2X.size) points2Y = np.reshape(points2Y,points2Y.size) points2Z = np.reshape(points2Z,points2Z.size) ax.plot(points1X, points1Y, points1Z, 'd', markersize=8, markerfacecolor='red', label='points1') ax.plot(points2X, points2Y, points2Z, 'd', markersize=8, markerfacecolor='blue', label='points2') plt.show() My problem is that I tried to make a decent surface plot out of these data points that I have. I already tried to use ax.plot_surface() function to make it look nice. For this I eliminated some points, and recalculated the matrix kind of input needed by this function. However, the graph I generated was far more difficult to interpret and understand. So there might be 2 possibilities: either I am not using the function correctly, or otherwise, the data I am trying to plot is not good for the surface plot. What I was expecting was 3D graph which would have an effect similar to that we have of 3D pie chart. We see that one piece (that which is extracted out) is part of another piece. I was not expecting it to be exactly same like that, but some kind of effect like that. What I would like to ask is: Do you think it will be possible to make such 3D graph? Is there any way better, I could express my data in 3 dimension? 
Here are the 2 files - points1.csv Dim1,Dim2,Dim3 3,8,1 3,8,2 3,8,3 3,8,4 3,8,5 3,9,1 3,9,2 3,9,3 3,9,4 3,9,5 3,10,1 3,10,2 3,10,3 3,10,4 3,10,5 3,11,1 3,11,2 3,11,3 3,11,4 3,11,5 3,12,1 3,12,2 3,13,1 3,13,2 3,14,1 3,14,2 3,15,1 3,15,2 3,16,1 3,16,2 3,17,1 3,17,2 3,18,1 3,18,2 4,8,1 4,8,2 4,8,3 4,8,4 4,8,5 4,9,1 4,9,2 4,9,3 4,9,4 4,9,5 4,10,1 4,10,2 4,10,3 4,10,4 4,10,5 4,11,1 4,11,2 4,11,3 4,11,4 4,11,5 4,12,1 4,13,1 4,14,1 4,15,1 4,16,1 4,17,1 4,18,1 5,8,1 5,8,2 5,8,3 5,8,4 5,8,5 5,9,1 5,9,2 5,9,3 5,9,4 5,9,5 5,10,1 5,10,2 5,10,3 5,10,4 5,10,5 5,11,1 5,11,2 5,11,3 5,11,4 5,11,5 5,12,1 5,13,1 5,14,1 5,15,1 5,16,1 5,17,1 5,18,1 6,8,1 6,8,2 6,8,3 6,8,4 6,8,5 6,9,1 6,9,2 6,9,3 6,9,4 6,9,5 6,10,1 6,11,1 6,12,1 6,13,1 6,14,1 6,15,1 6,16,1 6,17,1 6,18,1 7,8,1 7,8,2 7,8,3 7,8,4 7,8,5 7,9,1 7,9,2 7,9,3 7,9,4 7,9,5 and points2.csv Dim1,Dim2,Dim3 3,12,3 3,12,4 3,12,5 3,13,3 3,13,4 3,13,5 3,14,3 3,14,4 3,14,5 3,15,3 3,15,4 3,15,5 3,16,3 3,16,4 3,16,5 3,17,3 3,17,4 3,17,5 3,18,3 3,18,4 3,18,5 4,12,2 4,12,3 4,12,4 4,12,5 4,13,2 4,13,3 4,13,4 4,13,5 4,14,2 4,14,3 4,14,4 4,14,5 4,15,2 4,15,3 4,15,4 4,15,5 4,16,2 4,16,3 4,16,4 4,16,5 4,17,2 4,17,3 4,17,4 4,17,5 4,18,2 4,18,3 4,18,4 4,18,5 5,12,2 5,12,3 5,12,4 5,12,5 5,13,2 5,13,3 5,13,4 5,13,5 5,14,2 5,14,3 5,14,4 5,14,5 5,15,2 5,15,3 5,15,4 5,15,5 5,16,2 5,16,3 5,16,4 5,16,5 5,17,2 5,17,3 5,17,4 5,17,5 5,18,2 5,18,3 5,18,4 5,18,5 6,10,2 6,10,3 6,10,4 6,10,5 6,11,2 6,11,3 6,11,4 6,11,5 6,12,2 6,12,3 6,12,4 6,12,5 6,13,2 6,13,3 6,13,4 6,13,5 6,14,2 6,14,3 6,14,4 6,14,5 6,15,2 6,15,3 6,15,4 6,15,5 6,16,2 6,16,3 6,16,4 6,16,5 6,17,2 6,17,3 6,17,4 6,17,5 6,18,2 6,18,3 6,18,4 6,18,5 7,10,1 7,10,2 7,10,3 7,10,4 7,10,5 7,11,1 7,11,2 7,11,3 7,11,4 7,11,5 7,12,1 7,12,2 7,12,3 7,12,4 7,12,5 7,13,1 7,13,2 7,13,3 7,13,4 7,13,5 7,14,1 7,14,2 7,14,3 7,14,4 7,14,5 7,15,1 7,15,2 7,15,3 7,15,4 7,15,5 7,16,1 7,16,2 7,16,3 7,16,4 7,16,5 7,17,1 7,17,2 7,17,3 7,17,4 7,17,5 7,18,1 7,18,2 7,18,3 7,18,4 7,18,5
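Since several z values exist for the same (x, y) pair in these files, a single smooth surface is not really well defined, but for a first look a triangulated surface over the scattered points is the least-effort option. A hedged sketch (assumes a matplotlib recent enough to have plot_trisurf; column handling mirrors the CSV headers above):

    from mpl_toolkits.mplot3d import axes3d  # registers the 3d projection
    import matplotlib.pyplot as plt
    import numpy as np

    # skip_header=1 drops the Dim1,Dim2,Dim3 header row instead of deleting NaNs afterwards
    pts = np.genfromtxt('points1.csv', delimiter=',', skip_header=1)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_trisurf(x, y, z, cmap=plt.cm.jet, linewidth=0.2)
    plt.show()

If the goal is closer to the "one region sitting inside another" pie-chart feel, drawing the two CSV files as two differently colored trisurfs (or as bar3d columns) on the same axes may get nearer to it than any single surface.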

    Read the article

< Previous Page | 137 138 139 140 141 142 143  | Next Page >