Search Results

Search found 10978 results on 440 pages for 'collision testing'.

  • top tweets SOA Partner Community – June 2013

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    - Oracle SOA: Learn how Business Rules are used in Oracle SOA Suite. New free self-study course - Oracle Univ. #soa #oraclesoa http://pub.vitrue.com/ll9B
    - OPITZ CONSULTING: "How #BPM and #SOA belong together" - watch the 100-Seconds-Video-Lesson by @Rolfbaer - http://ow.ly/luSjK @soacommunity
    - Andrejus Baranovskis: Customized BPM 11g PS6 Workspace Application http://fb.me/2ukaSBXKs
    - Mark Nelson: Case Management Samples Released http://wp.me/pgVeO-Lv
    - Mark Nelson: Instance Patching Demo for BPM 11.1.1.7 http://wp.me/pgVeO-Lx
    - Simone Geib: Antony Reynolds: Target Verification #oraclesoa https://blogs.oracle.com/reynolds/
    - OPITZ CONSULTING: "It's all about Integration - Developing with Oracle #Cloud Services" @t_winterberg files: http://ow.ly/ljtEY #cloudworld @soacommunity
    - Arun Pareek: Functional Testing Business Processes In Oracle BPM Suite 11g http://wp.me/pkPu1-pc via @arrunpareek
    - SOA Proactive: Want to get started with Human Workflow? Check out the introductory video on OTN, http://pub.vitrue.com/enIL
    - C2B2 Consulting: Free tech workshop, London, 6th of June - Diagnosing Performance & Scalability Problems in Oracle SOA Suite http://www.c2b2.co.uk/oracle_fusion_middleware_performance_seminar … @soacommunity
    - Oracle BPM: Must-have technologies for delivering effective #CX: #BPM #Social #Mobile > #OracleBPM whitepaper http://pub.vitrue.com/6pF6
    - OracleBlogs: Introduction to Web Forms - Basic Tutorial http://ow.ly/2wQLTE
    - OTNArchBeat: Complete State of SOA podcast now available w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/PZFw
    - Ronald Luttikhuizen: VENNSTER Blog | Article published - Fault Handling and Prevention - Part 2 | http://blog.vennster.nl/2013/05/article-published-fault-handling-and.html …
    - Mark Nelson: Getting to know Maven http://wp.me/pgVeO-Lk
    - gschmutz: Cool! Our 2nd article has just been published: "Fault Handling and Prevention for Services in Oracle Service Bus" http://pub.vitrue.com/jMOy
    - David Shaffer: Interesting SOA Development and Delivery post on the A-Team Redstack site - http://bit.ly/18oqrAI . Would be great to get others to contribute!
    - Mark Nelson: BPM PS6 video showing the process lifecycle in more detail (30 min) http://wp.me/pgVeO-Ko
    - SOA Proactive: Webcast: 'Introduction and Troubleshooting of the SOA 11g Database Adapter', May 9th. Register now at http://pub.vitrue.com/8In7
    - Mark Nelson: SOA Development and Delivery http://wp.me/pgVeO-Kd
    - Oracle BPM: Manoj Das, VP Product Management, talks about the new #OracleBPM release #BPM #processmanagement http://pub.vitrue.com/FV3R
    - OTNArchBeat: Podcast: The State of SOA w/ @soacommunity @hajonormann @gschmutz @t_winterberg #industrialsoa http://pub.vitrue.com/OK2M
    - gschmutz: New article series on Industrial SOA started on OTN and Service Technology Magazine: http://guidoschmutz.wordpress.com/2013/04/22/first-two-chapters-of-industrial-soa-articles-series-have-been-published-both-on-otn-and-service-technology-magazine/ … #industrialSOA
    - Danilo Schmiedel: Article series #industrialSOA published on OTN and Service Technology Magazine http://inside-bpm-and-soa.blogspot.de/2013/04/industrial-soa_22.html … @soacommunity @OC_WIRE

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the Git Flow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We use Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch).

    The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers are working together on the same feature (say, on the front end and back end), if they are working on interdependent code that lives in separate branches, one developer ends up blocking the other.

    We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches into which each developer can pull multiple branches to test the integration as they develop. Finally, and this seems to work possibly the best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right off the bat. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those change sets into this feature integration branch. Then all interested developers are able to pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for any branch they are dependent upon to pass code review.

    I know this isn't necessarily a Git issue; it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, if back-end developer 2 finishes and his/her pull request sits in code review for a week, then front-end developer 1 technically can't create his/her pull request/code review, because the reviewer can't test until back-end developer 2's code has been merged into develop.

    Bottom line is we're finding ourselves taking a much more serial rather than parallel approach in these instances, depending on which route we go, and we would like to find a process that avoids this. Last thing I'll mention: we realize that by sharing code across branches that haven't been code reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc. are greatly appreciated. Thanks!
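
    For reference, the shared-integration-branch approach described above looks roughly like this in plain Git commands (the branch names are made up for illustration):

        # 'feature/login' is the Jira feature branch; create the shared
        # integration branch off of it right away
        git checkout feature/login
        git checkout -b integration/login
        git push origin integration/login

        # when a subtask branch is ready, open its pull request as usual,
        # and also merge the same commits into the integration branch
        git checkout integration/login
        git merge subtask/login-backend
        git push origin integration/login

        # other developers pull the not-yet-reviewed work from there
        git checkout subtask/login-frontend
        git merge integration/login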

    Read the article

  • Windows Phone 7 Developer Tools - January 2011 Update

    - by Nikita Polyakov
    Long time no talk? So to make up for it, here is something very new: an update to the WP7 Dev Tools! The Windows Phone 7 Developer Tools January 2011 Update provides bits that you install on TOP of the current WP7 Dev Tools on your machine. If you are just installing the tools for the first time, this update replaces the previously released October patch. In fact, that patch is no longer available, as the January 2011 update replaces it entirely.

    What is in this update?
    - TextBox support for Copy&Paste
    - An updated emulator image that contains Copy&Paste for your testing
    - Performance tweaks for the OS
    - Minor bug fixes

    How does it work? Copy&Paste extends an existing TextBox control with this new functionality. There is no current API access to the Clipboard, nor support for other controls that are not based on TextBox.

    If I have / do I need to:
    - A current application in the marketplace: no action is required.
    - An application that contains a TextBox on a Pivot or Panorama control surface: test your application in the provided emulator. The recommendation is to move TextBox controls from directly on top of controls that listen to gesture movement onto their own pop-off screens or entire pages, as the overlap might interfere with the select behavior for Copy&Paste.
    - Controls that do not inherit from TextBox: such controls will not get the new Copy&Paste behavior.

    Note: the update materials, FAQ and Q&A do not answer WHEN the update for the OS will be sent to the phones. Also to note: this update does NOT update your developer phone to enable Copy&Paste or any other features.

    Windows Phone 7 Training Kit February Update: the Windows Phone Training Kit has also been updated; you can grab a fresh copy here.

    Where do I find more good information, documentation and training? This very awesome blog post from the Windows Phone Developer Blog: Windows Phone 7 Documentation Landscape. The official blog post on the update is here.

    Happy coding! -Nikita

    PS: I am well aware that it is Feb 4th and not January :) If you were disappointed that Microsoft said nothing at all at CES about the future of WP7, don't forget that MWC 2011 is Feb 14th. I am going to be listening for Windows Phone announcements then, as that is where the original Windows Phone 7 announcements were made.

    Read the article

  • What is Database Continuous Integration?

    - by SQLDev
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing.

    Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' and touch the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue.

    How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:

    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.

    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
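
    To make point 2 concrete, a build-from-scratch check in a CI job might look something like the sketch below. This is illustrative only: the server name, the throwaway database name, and the assumption that each object lives as a .sql script under schema/ in source control are mine, not the article's.

        # recreate a throwaway database from the object scripts in source control
        sqlcmd -S localhost -Q "CREATE DATABASE ci_build" || exit 1
        for f in schema/*.sql; do
            # -b makes sqlcmd return a non-zero exit code on SQL errors,
            # so any script that fails to build fails the whole job
            sqlcmd -S localhost -d ci_build -b -i "$f" || exit 1
        done
        sqlcmd -S localhost -Q "DROP DATABASE ci_build"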

    Read the article

  • Dependency Injection/IoC container practices when writing frameworks

    - by Dave Hillier
    I've used various IoC containers (Castle.Windsor, Autofac, MEF, etc.) for .NET in a number of projects. I have found they tend to encourage a number of bad practices. Are there any established practices for IoC container use, particularly when providing a platform/framework? My aim as a framework writer is to make code as simple and as easy to use as possible. I'd rather write one line of code to construct an object than ten, or even just two. A couple of code smells that I've noticed and don't have good suggestions for:

    - A large number of parameters (5) for constructors. Creating services tends to be complex; all of the dependencies are injected via the constructor, despite the fact that the components are rarely optional (except maybe in testing).
    - Lack of private and internal classes. This one may be a specific limitation of using C# and Silverlight, but I'm interested in how it is solved. It's difficult to tell what a framework's interface is if all the classes are public; it allows me access to private parts that I probably shouldn't touch.
    - Coupling the object lifecycle to the IoC container. It is often difficult to manually construct the dependencies required to create objects, so the object lifecycle is too often managed by the IoC framework. I've seen projects where most classes are registered as singletons. You get a lack of explicit control and are also forced to manage the internals (this relates to the above point: all classes are public and you have to inject them).

    For example, the .NET Framework has many static methods, such as DateTime.UtcNow. Many times I have seen this wrapped and injected as a constructor parameter. Depending on the concrete implementation makes my code hard to test; injecting it as a dependency makes my code hard to use, particularly if the class has many parameters.

    How do I provide both a testable interface and one that is easy to use? What are the best practices?

    Read the article

  • How To Run A Shell Script Again And Again With A Time Interval Of X?

    - by Muhammad Hassan
    I have a shell script on my Ubuntu Server 14.04 LTS at ./ShellScript.sh. I set up /etc/rc.local to run the shell script after boot but before login, using the code below. Run sudo nano /etc/rc.local, then add the following and save:

        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.

        ./ShellScript.sh
        exit 0

    Now I want to run/execute this shell script again and again, with a 15 min interval between runs, after boot but before login. Can I do it?

    Update 1: When I run crontab -e I get the following. What should I do now?

        no crontab for root - using an empty one

        Select an editor. To change later, run 'select-editor'.
          1. /bin/ed
          2. /bin/nano        <---- easiest
          3. /usr/bin/vim.basic
          4. /usr/bin/vim.tiny

        Choose 1-4 [2]:

    After selecting 2, I got: crontab: "/usr/bin/sensible-editor" exited with status 2

    Update 2: Update ShellScript.sh like below...

        #!/bin/bash
        # Testing ShellScript...
        while true
        do
            echo "ShellScript Start Running..."

            # ... all my shell script codes/scripts/commands ...

            echo "ShellScript End Running..."
            sleep 900    # wait 15 minutes, then run again
        done

    Then run sudo nano /etc/rc.local, add the following and save:

        #!/bin/sh -e
        #
        # rc.local
        #
        # This script is executed at the end of each multiuser runlevel.
        # Make sure that the script will "exit 0" on success or any other
        # value on error.
        #
        # In order to enable or disable this script just change the execution
        # bits.
        #
        # By default this script does nothing.

        sh ./ShellScript.sh &
        exit 0
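
    As an alternative to the while/sleep loop, the same 15-minute cadence can be expressed as a single cron entry. A sketch only: the absolute path /root/ShellScript.sh is a guess and should be adjusted, and the script must be executable.

        # edit root's crontab with: sudo crontab -e
        # then add one line; cron runs the script every 15 minutes
        */15 * * * * /root/ShellScript.sh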

    Read the article

  • JavaOne Rock Star – Adam Bien

    - by Janice J. Heiss
    Among the most celebrated developers in recent years, especially in the domain of Java EE and JavaFX, is consultant Adam Bien, who, in addition to being a JavaOne Rock Star for Java EE sessions given in 2009 and 2011, is a Java Champion, the winner of Oracle Magazine's 2011 Top Java Developer of the Year Award, and recently won a 2012 JAX Innovation Award as a top Java Ambassador. Bien will be presenting the following sessions:

    - TUT3907 - Java EE 6/7: The Lean Parts
    - CON3906 - Stress-Testing Java EE 6 Applications Without Stress
    - CON3908 - Building Serious JavaFX 2 Applications
    - CON3896 - Interactive Onstage Java EE Overengineering

    I spoke with Bien to get his take on Java today. He expressed excitement that the smallest companies and startups are showing increasing interest in Java EE. "This is a very good sign," said Bien. "Only a few years ago J2EE was mostly used by larger companies -- now it becomes interesting even for one-person shows. Enterprise Java events are also extremely popular. On the Java SE side, I'm really excited about Project Nashorn." Nashorn is an upcoming JavaScript engine, developed fully in Java by Oracle, and based on the Da Vinci Machine (JSR 292), which is expected to be available for Java 8.

    Bien expressed concern about a common misconception regarding Java's mediocre productivity. "The problem is not Java," explained Bien, "but rather systems built with ancient patterns and approaches. Sometimes it really is 'Cargo Cult Programming.' Java SE/EE can be incredibly productive and lean without the unnecessary and hard-to-maintain bloat. The real problems are 'Ivory Towers' and not Java's lack of productivity."

    Bien remarked that if there is one thing he wanted Java developers to understand it is that, "Premature optimization is the root of all evil. Or at least of some evil. Modern JVMs and application servers are hard to optimize upfront. It is far easier to write simple code and measure the results continuously. Identify the hotspots first, then optimize."

    He advised Java EE developers to, "Rethink everything you know about Enterprise Java. Before you implement anything, ask the question: 'Why?' If there is no clear answer -- just don't do it. Most well known best practices are outdated. Focus your efforts on the domain problem and not the technology."

    Looking ahead, Bien remarked, "I would like to see open source application servers running directly on a hypervisor. Packaging the whole runtime in a single file would significantly simplify the deployment and operations." Check out a recent Java Magazine interview with Bien about his Java EE 6 stress monitoring tool here.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

    Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express
    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0
    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or reference it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests.

    Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine
    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- conditional code -->
            @if(true)
            {
                <!-- literal text embedding in code -->
                <text>
                true
                </text>
            }
            else
            {
                <!-- literal text embedding in code -->
                <text>
                false
                </text>
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it is easier to understand aesthetically what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment
    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites, in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access
    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql, 1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, which in turn are row objects with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism but, to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

    The good, the bad, the obnoxious - it's still .NET
    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy; there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question
    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well.

    The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks (and other hardcore PHP developers) back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad bit further along in your development endeavor, to the point when you run out of canned components supplied either by Microsoft or the community.

    The database helpers are interesting, and actually I've heard a lot of discussion from various developers, who've been resisting .NET for a really long time, perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
    And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, missing required functionality: debugging and IntelliSense especially, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it.

    If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns that people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC.

    As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and provide nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling.

    It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back and take another look and realize just how far we've come. WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • K2 4.5 Quick Thoughts

    I just finished attending a webcast on K2 4.5 and I thought I'd share a few quick thoughts.

    Power User Story Improved. Given it is just a presentation and I haven't actually played with it, the story seemed improved, and it is more believable that real-world power users would be able to define workflows in SharePoint. Power users who are comfortable with Excel functions may be able to build some more worthwhile workflows, since there is new support for inline functions and conditions. The new Silverlight K2 designer seems pretty user friendly, though the dialog windows can really stack up, which may get confusing. I thought the neatest part was that the workflow can be defined starting from a SharePoint list's settings, which may be okay for some organizations and for simpler workflows that don't need to be defined and pushed through lots of testing in different environments. The standalone K2 Studio is back. In K2 2003 it was required because Visual Studio integration didn't exist. It's back now for use by power users who need functionality up to the point of code. Not sure if this…

    Administration/Installation. Installation is supposed to be simplified, with unattended install and other details I didn't catch. Install and configuration has always seemed daunting to me, so anything to improve that is good. Related to that, there is a new tool that is meant to help diagnose issues in your installation. That may include figuring out missing permissions or services that aren't running. Also, all K2 SharePoint features are now deployed as solutions.

    Dynamic SQL Service Broker. Create a SmartObject to go against a table that you created, NOT the SmartBox. This seems promising and something that maybe should have been there all along.

    Reference Event. Allows you to call functionality that you've referenced; in the sample shown, it was calling a referenced web service. It seemed odd because it was really like writing code using dialogs (call constructor, set timeout, call web service method). Seemed a little odd to me.

    Help. We were reminded that help.k2.com is a newish site that is supposed to be the MSDN of K2 for partners and customers.

    VS 2010 Support. Still no hard date on this, but what we were told is approximately 90 days after VS 2010 is officially released.

    Read the article

  • Interviews: Going Beyond the Technical Quiz

    - by Tony Davis
    All developers will be familiar with the basic format of a technical interview. After a bout of CV-trawling to gauge basic experience, strengths and weaknesses, the interview turns technical. The whiteboard takes center stage and the challenge is set: design a function or query, or solve what on the face of it might seem a disarmingly simple programming puzzle. Most developers will have experienced those few panic-stricken moments, when one's mind goes as blank as the whiteboard, before un-popping the marker pen, and hopefully one's mental functions, to work through the problem. It is a way to probe the candidate's knowledge of basic programming structures and techniques and to challenge their critical thinking. However, these challenges or puzzles, often devised by some of the smartest brains in the development team, have a tendency to become unnecessarily 'tricksy'. They often seem somewhat academic in nature. While the candidate straight out of IT school might breeze through the construction of a Markov chain, a candidate with bags of practical experience but less in the way of formal training could become nonplussed. Also, a whiteboard and a marker pen make up only a very small part of the toolkit that a programmer will use in everyday work.

    I remember vividly my first job interview, for a position as technical editor. It went well, but after the usual CV grilling and technical questions, I was only halfway there. Later, they sat me alongside a team of editors, in front of a computer loaded with MS Word and a copy of SQL Server Query Analyzer, and my task was to edit a real chapter for a real SQL Server book that they planned to publish, including validating and testing all the code. It was a tough challenge, but I came away with a sound knowledge of the sort of work I'd do, and its context.

    It makes perfect sense, yet my impression is that many organizations don't do this. Indeed, it is only relatively recently that Red Gate started to move over to this model for developer interviews. Now, instead of, or perhaps in addition to, the whiteboard challenges, the candidate can expect to sit with their prospective team, in front of Visual Studio loaded with all the useful tools in the developer's kit (ReSharper and so on), and be asked to, for example, analyze and improve a real piece of software.

    The same principles should apply when interviewing for a database position. In addition to the usual questions challenging the candidate's knowledge of such things as B-trees, object permissions, database recovery models, and so on, sit the candidate down with the other database developers or DBAs. Arm them with a copy of Management Studio, and a few other tools, then challenge them to discover the flaws in a stored procedure and improve its performance. Or present them with a corrupt database and ask them to get the database back online, and discover the cause of the corruption.

    Read the article

  • ObjectStorageHelper<T> now available for Windows 8 RTM

    - by jamiet
    In October 2011 I wrote a blog post entitled ObjectStorageHelper<T> – A WinRT utility for Windows 8, where I introduced a little utility class called ObjectStorageHelper<T> that I had been working on while noodling around on the Developer Preview of Windows 8. ObjectStorageHelper<T> makes it easy for anyone building apps for Windows 8 to save data to files. How easy? As easy as this:

        var myPoco = new Poco() { IntProp = 1, StringProp = "one" };
        var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
        await objectStorageHelper.SaveAsync(myPoco);

    Compare that to the plumbing code that you would have to write otherwise:

        var Obj = new Poco() { IntProp = 1, StringProp = "one" };
        StorageFile file = null;
        StorageFolder folder = GetFolder(storageType);
        file = await folder.CreateFileAsync(FileName(Obj), CreationCollisionOption.ReplaceExisting);
        IRandomAccessStream writeStream = await file.OpenAsync(FileAccessMode.ReadWrite);
        using (Stream outStream = Task.Run(() => writeStream.AsStreamForWrite()).Result)
        {
            serializer.Serialize(outStream, Obj);
            await outStream.FlushAsync();
        }

    and you can see how ObjectStorageHelper<T> can save a Windows 8 developer quite a few headaches. ObjectStorageHelper<T> simply requires you to pass it an object to be saved and tell it where to save it (Roaming, Local or Temporary), and you're done. Retrieving an object from storage is equally simple:

        var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
        var myPoco = await objectStorageHelper.LoadAsync();

    Please check the homepage for the project at http://winrtstoragehelper.codeplex.com/ for (much) more info.

    A number of people have used and tested ObjectStorageHelper<T> since those early days, and one of those folks in particular, David Burela, was good enough to report a couple of bugs: saving asynchronously, and save failing when the class is in another project. As a result of David's bug reports and some more extensive testing on my side, I have overhauled the initial code that I wrote last October and am confident that it is now much more robust and ready for primetime (check the commit history if you're interested). The source code (which, again, you can find on Codeplex at http://winrtstoragehelper.codeplex.com/) includes a suite of unit tests covering all of the basic use cases (if you can think of any more please let me know). If you use this in any of your Windows 8 projects then please let me know. I love getting feedback and I'd also love to know if this is actually being used anywhere. @Jamiet

    Read the article

  • What is a reasonable workflow for designing webapps?

    - by Evan Plaice
    It has been a while since I have done any substantial web development, and I'd like to take advantage of the latest practices, but I'm struggling to visualize the workflow that incorporates everything. Here's what I'm looking to use:

    - CakePHP framework
    - jsmin (JavaScript Minify)
    - SASS (Syntactically Awesome StyleSheets)
    - Git

    CakePHP: Pretty self-explanatory; make modifications and update the source.

    jsmin: When you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin outputs of JavaScript files that have changed? Assume that I have no knowledge of implementing commit hooks (a minimal sketch of such a hook appears at the end of this post).

    SASS: I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so at some point the SASS code needs to be transformed to normal CSS. At what point in the workflow is this done?

    Git: I'm terrified to admit it, but the last time I did any substantial web development I didn't use SCM source control (i.e., I did use source control, but it consisted of a very detailed change log with backups). I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how to best implement it for web development. Is it common practice to implement a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to implement a remote repository? Do I need SSH access, etc.? I already know how to accomplish this on my own testing server with a remote repository and a separate remote tracking branch, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet.

    Extra: I was considering implementing a JavaScript framework where the separate JavaScript files used on a page are compiled into a single file for each page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there already an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big hit to performance), I'm guessing that there is some wizard hacker on the net who has already addressed this issue.
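
    Since the jsmin question above asks about commit hooks, here is a minimal sketch of one. The assumptions are mine: a jsmin binary on the PATH that filters stdin to stdout, and minified copies committed next to their sources as .min.js files.

        #!/bin/sh
        # save as .git/hooks/pre-commit and mark executable (chmod +x)
        # minify each staged .js file and stage the minified copy
        for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.js'); do
            case "$f" in *.min.js) continue ;; esac   # skip already-minified files
            jsmin < "$f" > "${f%.js}.min.js"
            git add "${f%.js}.min.js"
        done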

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics: the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems. Mainstream support for SQL Server 2005 ends on April 12; many DBAs, if they haven't done so already, will be considering the migration to SQL Server 2008 R2.

    Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities.

    Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With servers, the extra cost at the time of upgrade will generally pay you back in terms of increased performance of your business applications, and reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server license costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server license costs.

    Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows signs of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade.

    We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • 503.1 Service Unavailable Error Resolution

    - by Lee Brandt
    I was having a hell of a time tonight with IIS on my development laptop. I don't remember doing anything to change the IIS settings. I don't use IIS that much on my dev machine; usually Cassini is enough for testing my development efforts. But tonight I needed to replicate a problem that seems to stem from an x86 vs. x64 mismatch, so I went to create an IIS site pointed to my dev folder. When I did, I got a "503.1 Service Unavailable" error.

    The first thing I did was go over all my settings to make sure I didn't screw something up when I set up the site. It was pointing to the right place, and the app pool settings seemed to be alright. However, when I got the 503.1 error and went back to my app pool list, I saw that the app pool I was using was stopped again. I must've started and run it a dozen times to verify that I wasn't seeing things.

    After having a colleague look at it and not finding an answer, I started poking around Google. I came across a post from Phil Haack about the same error. His fix was not mine, however. When I ran his command on the CLI, I didn't see the reserved routes for HTTP.SYS there.

    Finally, I looked in the event viewer (where I should have looked as soon as I saw that my app pool was stopping) and saw an error in there. For the IIS-W3SVC-WP source I saw:

        The worker process for application pool 'DefaultAppPool' encountered an error 'Cannot read configuration file due to insufficient permissions' trying to read configuration data from file '\\?\C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config', line number '0'. The data field contains the error code.

    So I went to that path and saw a little lock on the file icon. I opened up the security tab of the file properties and saw that it was missing the IIS_IUSRS group. On a machine that was working correctly, I verified that it indeed had the IIS_IUSRS group set to Read and Read & Execute allowed. So I set mine up the same and voila! Hopefully this helps somebody else, too.
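
    If you prefer the command line over the Explorer security tab, the equivalent change can be made with icacls from an elevated prompt (a sketch only; the path below is the one from the event log entry above, so adjust it to match whatever path your own error reports):

        icacls "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config" /grant "IIS_IUSRS:(RX)"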

    Read the article

  • Java Spotlight Episode 78: Jasper Potts on the JavaFX Scene Builder

    - by Roger Brinkley
    An interview with Jasper Potts about the new JavaFX Scene Builder. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador, and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News JavaFX Scene Builder Developer Preview available for testing. Java EE Unlock the Java EE 6 Platform using NetBeans 7.1 Tuning GlassFish for Production JSF 2.2 Update from Ed Burns John Rose at Microsoft's Lang.NEXT summit Recording of John's Java 8 presentation Jeroen Frijters' presentation on IKVM.NET Martin Odersky's keynote JVM Language Summit 2012 July 30 – August 1; Oracle Santa Clara (same as last year) CFP coming in a few days JVM Language Summit 2011 Presentations & Recordings Proposed development schedule for JDK 8 Say hello to Mathias Axelsson Events April 11, Cleveland JUG, Cleveland, OH April 12, GreenJUG, Greenville, SC April 17-18, JavaOne Russia, Moscow, Russia April 18-20, Devoxx France, Paris, France April 17-20, GIDS, Bangalore April 21, Java Summit, Chennai April 26, Mix-IT, Lyon, France May 3-4, JavaOne India, Hyderabad, India May 5, Bangalore, Pune, ?? - JUG outreach May 7, OTN Developer Day, Mumbai May 8, OTN Developer Day, Delhi Feature Interview Jasper Potts is the Developer Experience Architect for the Java Client Group at Oracle, responsible for technical design for everything that sits on the core platform, including Controls, Tools, Samples and Blueprints. Formerly a lead engineer on the JavaFX & Swing teams working on the new JavaFX UI Controls and Graphics frameworks. Also responsible for designing, developing and presenting demos during the keynotes at JavaOne and Devoxx. A JavaOne Rockstar presenter, having presented many sessions on JavaFX and Swing at many conferences. Prior to Sun he founded Xerto, a desktop applications company developing Imagery, a professional Java photo management application. In this interview Jasper talks about the recently released JavaFX Scene Builder. Mail Bag What's Cool Contribute to GlassFish in Five Different Ways Stephen Chin and James Weaver join Oracle Adam Bien - Building Java FX 2 Libraries From Source With Maven 3 Paul Sandoz - Java Boomerang Building Jigsaw on Mac OS X using VirtualBox Mandy Chung: Jigsaw for Mac OS X

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (as it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In which case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even one for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it have different behavior? EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge to your fork's master branch, when you try to run hub -i <issue #> you get an error: User is not authorized to modify the issue. So, apparently that workflow won't work.
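    For concreteness, here is a sketch of the fork-per-developer flow being described, using hub; the org, user, branch, and issue number are hypothetical, and the flags are from hub's 2012-era CLI (the -i issue-to-pull-request conversion was later removed when GitHub deprecated that API), so check your version's documentation:

      git clone git@github.com:acme/widgets.git
      cd widgets
      hub fork                                  # creates youruser/widgets and adds it as a remote
      git checkout -b fix-login-bug
      # ...commit your work...
      git push youruser fix-login-bug
      hub pull-request -i 12 -b acme:master -h youruser:fix-login-bug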

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
    tl;dr - Forget any form of dynamic code emitting in Metro-style. It's not going to happen.

    Over the past week or so I've been trying to get Moq (the popular open source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released it was actually working on Consumer Preview. However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection. This kills off any project based on the popular Castle Project Dynamic Proxy component, of which Moq is one example. You cannot, at this point in time, perform any form of mocking using dynamic injection in your Metro-style unit testing endeavours. So let me take you through my journey on this, so that others don't have to...

    The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and that takes the name of an assembly, which has to be placed within the deployment folder of your app. You cannot give it a filename or stream. The methods are there, but private; try to invoke them using Reflection and you'll be met with a caspol exception. You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create. You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any endpoint on the local machine. (It just times out.) This militates against an approach where your Metro-style app signals a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere.

    The reason why Microsoft has done this is obvious - taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. There are plenty of reasons why enterprise apps might want to do that last one particularly. At CP, the assumption was that their inspection tools would prevent SRE being used as a malware vector - it now seems they are less confident about that. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.)

    So, two things:
    - System.Reflection.Emit is gone in Metro-style/WinRT. Get over it - dynamic, on-the-fly code generation is not going to happen.
    - I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt
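    To make the constraint concrete, here is a minimal C# sketch (an illustration, not code from the post) of the one load path described above; the assembly and type names are hypothetical. The assembly must already have been compiled and shipped inside the app package - there is no overload taking a path, stream, or byte array:

      using System;
      using System.Reflection;

      // Works only because MyProxies.dll was "baked" at build time and deployed
      // with the app package; nothing generated at runtime can be loaded this way.
      Assembly asm = Assembly.Load(new AssemblyName("MyProxies"));
      Type proxyType = asm.GetType("MyProxies.FooProxy");
      object proxy = Activator.CreateInstance(proxyType);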

    Read the article

  • Game of Phones

    - by Carlos Chang
    There's an excellent DZone article titled: 2014 Guide to Mobile Development. It's loaded with excellent information, including some results from a mobile-related survey of more than 1,000 IT professionals. Without giving away too much, these highlights should convince you to read the entire article. Web and hybrid apps are gaining tons of traction, particularly in the enterprise. If you want to better understand the differences between web, native and hybrid, this article has you covered. Enterprise developers are increasingly interested in cross-platform tools. Makes sense, right? I mean, unless you have infinite resources (e.g. Facebook) and can afford to write native apps for every platform, finding something that can meet your needs for iOS and Android makes sense. And toss in the possibility of Windows Phone... and, just to be current, the addition of Apple's new mobile language, Swift, on top of Objective-C... and oh boy. Why not check out cross-platform tools? BTW, don't forget testing on each platform, and maintenance, and the next versions of the app. It's not one and done. If you're successful, you're never done. Various mobile vendors are represented and many provide some great information. Oracle's own Suhas Uliyar, VP of Mobile Strategy, is represented with some great insights into the challenges of mobile back-end integration (SOA, mBaaS, etc.) and moving from "mobile first" to a mobile-plus world. BTW, Suhas was recently named one of the Top 100 Wireless Technology Experts for 2014 by Today's Wireless World magazine. And if you're not yet convinced, DZone did a very nice job with their mobile infographic, stylized after the insanely popular series, Game of Thrones. Even though there are no dragons illustrated, it's worth the price of admission just for that. Check it out here.

    Read the article

  • SSIS Expression Editor & Tester

    Published today on CodePlex is the SSIS Expression Editor & Tester project. If you want to try it, just pop over to CodePlex and download it. About five years ago I developed my own expression editor control. It first got used in our custom tasks, as the MS editor didn't become available until SQL 2005 SP1, but even then mine had some handy features I preferred. For example, resizable panes, so that if your expression result was more than two lines you could see them all. It also meant I could change the functions available in the tree view, the most obvious use being to add some handy snippets and samples that I used a lot. This quickly developed into a small expression testing tool. I'd develop complex expressions using my editor and then copy them back into the package itself. I have been meaning to make the tool available for some time and finally made the effort; the code is checked in and the signed downloads are published on CodePlex. There are two flavours, SQL 2005 or 2008, and just a simple zip file to download and extract. The tool doesn't need installing and is completely portable, but it does need SSIS to be installed on the local machine. Each zip file contains two files: ExpressionTester.exe - the tool itself, run this; ExpressionEditor.dll - the reusable editor control. A while ago the gentlemen behind BIDS Helper noticed the editor on a task and asked about using it. This became incorporated into their variable window extensions feature. To try and help them, and anyone else that wants to use the editor control, it is available as a single assembly that you can reference yourself, and of course all the source code is on CodePlex too. Just add a reference to the ExpressionEditor.dll assembly and you should be up and running in no time. There is a sample project, Package Test, in the source code, which shows how to use the editor control form in its simplest form, or if you want to host the control directly then the tester tool is a perfect example.

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - detailed below are both the problem and the solution, in case you ever run into this yourself.

    The Problem

    Imagine a stored procedure that is coded as follows in MySQL:

      StuTest(in DStr varchar(80))
      BEGIN
        Declare GRNID int;
        Select grn_id into GRNID from grn_header where distribution_number = DStr;
        Select GRNID;
      END

    This is quite a simple stored procedure, but it can be used to illustrate the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then, in the Statement Information window, typing the stored procedure name: StuTest. Pressing Generate then gives the following error: "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0". If you attempt to supply a value for the parameter, you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.

    The Solution

    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two-way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script, detailing the following script:

      Call StuTest(@InputParameter)

    By default this creates a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this is not going to be correct for the stored procedure being called. To change the type from the default, you need to select the "Override default query processing" check box when specifying the script in the wizard. This opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test out the query script. Once I had done this I was able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two-way send port. When supplied with an appropriate message instance (for the generated request schema), this send port successfully returned the expected response.

    Read the article

  • Tips On Using The Service Contracts Import Program

    - by LuciaC
    Prior to release 12.1 there was no supported way to import contracts into the EBS Service Contracts application - there were no public APIs nor contract load programs provided. From release 12.1 onwards the 'Service Contracts Import Program' is provided to load service contracts into the application. The Service Contracts Import functionality is explained in How to Use the Service Contracts Import Program - Scope and Limitations (Doc ID 1057242.1). This note includes an attached document which explains the program architecture, shows the Entity Relationship Diagram, and details the interface table definitions. The Import program takes data from the interface tables listed below and populates the contracts schema tables:

      OKS_USAGE_COUNTERS_INTERFACE
      OKS_SALES_CREDITS_INTERFACE
      OKS_NOTES_INTERFACE
      OKS_LINES_INTERFACE
      OKS_HEADERS_INTERFACE
      OKS_COVERED_LEVELS_INTERFACE

    These interface tables must be loaded via a custom load program. The Service Contracts Import concurrent request is then submitted to create contracts from this legacy data. The parameters to run the Import program are:

      Parameter           Description
      Mode                Validate only, Import
      Batch Number        Batch_Id (unique id populated into the OKS_HEADERS_INTERFACE table)
      Number of Workers   Number of workers required (these are spawned as separate sub-requests)
      Commit size         Number of successfully processed contracts committed to the database

    The program spawns sub-requests for the import worker(s) and the 'Service Contracts Import Report'. The data is validated prior to import into the Contracts tables, and errors are reported in the Service Contracts Import Report program output file (Import Execution Report). Troubleshooting tips are provided in R12.1 - Common Service Contract Import Errors (Doc ID 762545.1); this document lists some, but not all, import errors and will be updated over time. Additional help is given in Debugging Tip for Service Contracts Import Errors (Doc ID 971426.1). After you successfully import contracts, you can purge the records from the interface tables by running the Service Contracts Import Purge concurrent program. Note that there is no supported way to mass delete data from the Contracts schema tables once they are populated, so data loaded by the Import program must be fully tested and verified before the program is run to load data into a Production system. A Service Contracts Import Test program has been provided which will take an existing contract in the application and load the interface tables using the data from that contract. This can be used as an example for guidance on how to load the interface tables. The Test program functionality is explained in How to Use the Service Contracts Test Import Program Provided in Release 12.1 (Doc ID 761209.1). Note that the Test program has some limitations which do not apply to the full Import program; it is not a supported program, simply a testing tool.

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is software which fulfills 4 core quality attributes: reliability, flexibility, reusability, and maintainability.

    Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because testing is the only way to make sure a system is reliable, we can exchange the term reliability for the term testability.

    Flexibility is the ability to change a system's core features without violating unrelated features or components. Although flexibility can help us achieve interoperability easily, the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't mean it was flexible if it forced us to rewrite code in all components when we just want to change some aspect of one feature.

    Reusability is the ability to share one or more of a system's components with another system. We should open up a component's reusability only in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes implementing UI.

    Maintainability is the ability to add or remove features after a system has been released. Maintainability consists of many factors such as readability, analyzability, and extensibility, of which extensibility is critical. Maintainability may require us to write code that is longer and more complex than normal, but that doesn't mean we should introduce unnecessarily complex code. We always try to make our code clear and transparent to everyone.

    An enterprise application is built on an enterprise design which consists of two parts: low-level design and high-level design. At the low level, the focus is on building loosely-coupled classes and components. Particularly, it recommends: each class or component undertakes only a single responsibility (design based on unit test); classes and components implement and work through interfaces (design based on contract); dependency relationships between classes and components can be injected at run-time (design based on dependency). A minimal sketch of these three recommendations appears below. At the high level, the focus is on architecting the system into tiers and layers. Particularly, it recommends: divide the system into subsystems for deployment, where each subsystem is called a tier - typically, an enterprise application would have 3 tiers; and arrange classes and components into logical containers called layers - typically, an enterprise application would have 5 layers.
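    Here is a minimal C# sketch of the low-level recommendations above - single responsibility, programming to interfaces, and run-time dependency injection. The names are hypothetical illustrations, not from the article:

      using System;

      // Design by contract: callers depend on this interface, not a concrete class.
      public interface IOrderRepository
      {
          void Save(string orderId);
      }

      public class SqlOrderRepository : IOrderRepository
      {
          public void Save(string orderId)
          {
              Console.WriteLine("Saved " + orderId); // stand-in for real persistence
          }
      }

      // Single responsibility: this class only coordinates placing orders.
      public class OrderService
      {
          private readonly IOrderRepository repository;

          // Design by dependency: the concrete repository is injected at run-time,
          // by hand here, or by an IoC container in a real application.
          public OrderService(IOrderRepository repository)
          {
              this.repository = repository;
          }

          public void PlaceOrder(string orderId)
          {
              repository.Save(orderId);
          }
      }

      // Usage:
      //   var service = new OrderService(new SqlOrderRepository());
      //   service.PlaceOrder("A-100");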

    Read the article

  • Silverlight Cream for March 07, 2011 -- #1055

    - by Dave Campbell
    In this Issue: Max Paulousky, Chris Rouw, David Anson, Jesse Liberty, Shawn Wildermuth, Simon Guindon, and Dhananjay Kumar. Above the Fold: Silverlight: "Faster Databinding in WPF and Silverlight using OptimizedObservableCollection" Simon Guindon WP7: "Phoney Tools Updated (WP7 Open Source Library)" Shawn Wildermuth From SilverlightCream.com: Problems With Sharing Windows Phone 7 Applications Within A Large Group Of Beta Testers Max Paulousky has a post up discussing the issues surrounding beta testing a WP7 app with a large group of testers... and how to pull it all off. WP7 Insights #1: Consuming REST APIs within a WP7 app Chris Rouw is beginning a WP7 series based on his recent experience of getting a client's app into the marketplace. This first in his series is on consuming REST APIs ... lots of good code and explanations. Improving Windows Phone 7 application performance is even easier with these LowProfileImageLoader and DeferredLoadListBox updates David Anson has an update to his LowProfileImageLoader and DeferredLoadListBox after issues brought up by readers... so we all win with the great feedback from alert devs. When Isolated Storage Isn’t Enough Jesse Liberty started looking at Jeremy Likness' Sterling with this post in the WP7 From Scratch series. He starts with downloading it from CodePlex ... great way to get into Sterling if you haven't already. Phoney Tools Updated (WP7 Open Source Library) Shawn Wildermuth has the latest drop of his Phoney Tools up... this is the last Alpha. I've added a tag for it as well. He's fixed some things, added others... check out the post and go grab the code. Faster Databinding in WPF and Silverlight using OptimizedObservableCollection Simon Guindon is a blogger I've not been following, but this post on an OptimizedObservableCollection caught my eye. He added an AddRange() to the ObservableCollection to get a speed enhancement when adding items... and a pretty good speed enhancement it is. Reading files asynchronously using WebClient class in Silverlight Dhananjay Kumar is another prolific blogger that I've not been following, so we'll start with his latest... a step-by-step guide to reading an XML file asynchronously. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10
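    Since the AddRange() trick is the headline item above, here is a minimal C# sketch of the usual shape of that optimization - an assumed illustration, not Simon Guindon's actual code: per-item change notifications are suppressed during the bulk add, then a single Reset event tells bound controls to refresh once:

      using System.Collections.Generic;
      using System.Collections.ObjectModel;
      using System.Collections.Specialized;

      public class OptimizedObservableCollection<T> : ObservableCollection<T>
      {
          private bool suppressNotifications;

          public void AddRange(IEnumerable<T> items)
          {
              suppressNotifications = true;
              foreach (T item in items)
              {
                  Add(item); // no events raised while suppressed
              }
              suppressNotifications = false;
              OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
          }

          protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
          {
              if (!suppressNotifications)
              {
                  base.OnCollectionChanged(e);
              }
          }
      }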

    Read the article

  • Windows 7 - traceroute hop with high latency! [closed]

    - by Mac
    I've been experiencing this problem for quite a while, and it's quite frustrating. I'll do a traceroute to www.l.google.com, for example. This is the result (please note: I will replace some personal information with placeholder text - i.e. ISP.IP is in reality an actual IP address, and ISPNAME replaces the actual ISP name):

      Tracing route to www.l.google.com [173.194.34.212]
      over a maximum of 30 hops:

        1     1 ms     1 ms    <1 ms  192.168.1.1
        2     9 ms     8 ms    10 ms  ISP.EXCHANGE.NAME [ISP.IP.172.205]
        3   161 ms   171 ms   177 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
        4    12 ms     9 ms    10 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
        5    10 ms     9 ms    17 ms  host-ISP.IP.224.165.ISPNAME.net [ISP.IP.224.165]
        6    10 ms     9 ms    10 ms  10.42.0.3
        7     9 ms     9 ms    10 ms  host-ISP.IP.202.129.ISPNAME.net [ISP.IP.202.129]
        8    10 ms     9 ms     9 ms  host-ISP.IP.209.33.ISPNAME.net [ISP.IP.209.33]
        9    77 ms   129 ms   164 ms  host-ISP.IP.198.162.ISPNAME.net [ISP.IP.198.162]
       10    43 ms    42 ms    43 ms  72.14.212.13
       11    42 ms    42 ms    42 ms  209.85.252.36
       12    59 ms    59 ms    59 ms  209.85.241.210
       13    60 ms    76 ms    68 ms  72.14.237.124
       14    59 ms    59 ms    58 ms  mad01s08-in-f20.1e100.net [173.194.34.212]

      Trace complete.

    Notice that there is a spike on the 3rd hop, but also notice that the 3rd and 4th hops are to the exact same destination. Furthermore, when I ping the offending hop separately, I get the low latency I would expect to that server:

      Pinging ISP.IP.215.246 with 32 bytes of data:
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=12ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
      Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253

      Ping statistics for ISP.IP.215.246:
          Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 9ms, Maximum = 12ms, Average = 9ms

    I'm baffled as to why or how this is happening, and it seems to "fix itself" at random times. Here is an example of where it was working as expected: http://i.imgur.com/bysno.png - notice how many fewer hops were taken. Please note that all the posted results occurred within 10 minutes of testing. I've tried contacting my ISP, and they seem clueless; in their eyes, as long as "the download speed is not slow", they're doing everything right. Any insight would be very much appreciated, and thanks in advance!

    Read the article

< Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >