Search Results

Search found 9271 results on 371 pages for 'properties'.


  • New <%: %> Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the nineteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers a small, but very useful, new syntax feature being introduced with ASP.NET 4: the ability to automatically HTML encode output within code nuggets. This helps protect your applications and sites against cross-site script injection (XSS) and HTML injection attacks, and enables you to do so using a nice, concise syntax. HTML Encoding: Cross-site script injection (XSS) and HTML injection attacks are two of the most common security issues that plague web sites and applications. They occur when hackers find a way to inject client-side script or HTML markup into web pages that are then viewed by other visitors to a site. This can be used to vandalize a site, as well as to run client-side script that steals cookie data and/or exploits a user’s identity on a site to do bad things. One way to help mitigate cross-site scripting attacks is to make sure that rendered output is HTML encoded within a page. This helps ensure that any content that might have been input or modified by an end user cannot be output back onto a page containing tags like <script> or <img> elements. ASP.NET applications (especially those using ASP.NET MVC) often rely on <%= %> code-nugget expressions to render output. Developers today often use the Server.HtmlEncode() or HttpUtility.HtmlEncode() helper methods within these expressions to HTML encode the output before it is rendered, using code like the example below. While this works fine, there are two downsides to it: it is a little verbose, and developers often forget to call the HtmlEncode method. New <%: %> Code Nugget Syntax: With ASP.NET 4 we are introducing a new code expression syntax (<%: %>) that renders output like <%= %> blocks do, but which also automatically HTML encodes it before doing so. This eliminates the need to explicitly HTML encode content as in the example above; instead you can write more concise code to accomplish the same thing. We chose the <%: %> syntax so that it would be easy to quickly replace existing instances of <%= %> code blocks. It also enables you to easily search your code base for <%= %> elements to find and verify any cases where you are not using HTML encoding within your application, to ensure that you have the correct behavior. Avoiding Double Encoding: While HTML encoding content is often a good best practice, there are times when the content you are outputting is meant to be HTML or is already encoded, in which case you don’t want to HTML encode it again. ASP.NET 4 introduces a new IHtmlString interface (along with a concrete implementation: HtmlString) that you can implement on types to indicate that their values are already properly encoded (or otherwise examined) for display as HTML, and that the values should therefore not be HTML encoded again. The <%: %> code-nugget syntax checks for the presence of the IHtmlString interface and will not HTML encode the output of the code expression if its value implements this interface. This allows developers to avoid having to decide on a per-case basis whether to use <%= %> or <%: %> code nuggets. Instead you can always use <%: %> code nuggets, and then have any properties or data types that are already HTML encoded implement the IHtmlString interface.
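    For reference, the difference between the two approaches looks roughly like this (a hedged sketch; the Model.Comment property is illustrative, since the original post showed its code samples as images):

        <%-- ASP.NET 3.5 style: encode explicitly before rendering --%>
        <%= Server.HtmlEncode(Model.Comment) %>

        <%-- ASP.NET 4 style: the <%: %> nugget HTML encodes the value automatically --%>
        <%: Model.Comment %>

        <%-- A value wrapped in HtmlString implements IHtmlString and is rendered as-is, not re-encoded --%>
        <%: new HtmlString("<em>trusted markup</em>") %>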
Using ASP.NET MVC HTML Helper Methods with <%: %> For a practical example of where this HTML encoding escape mechanism is useful, consider scenarios where you use HTML helper methods with ASP.NET MVC.  These helper methods typically return HTML.  For example: the Html.TextBox() helper method returns markup like <input type=”text”/>.  With ASP.NET MVC 2 these helper methods now by default return HtmlString types – which indicates that the returned string content is safe for rendering and should not be encoded by <%: %> nuggets.  This allows you to use these methods within both <%= %> code nugget blocks: As well as within <%: %> code nugget blocks: In both cases above the HTML content returned from the helper method will be rendered to the client as HTML – and the <%: %> code nugget will avoid double-encoding it. This enables you to default to always using <%: %> code nuggets instead of <%= %> code blocks within your applications.  If you want to be really hardcore you can even create a build rule that searches your application looking for <%= %> usages and flags any cases it finds as an error to enforce that HTML encoding always takes place. Scaffolding ASP.NET MVC 2 Views When you use VS 2010 (or the free Visual Web Developer 2010 Express) you’ll find that the views that are scaffolded using the “Add View” dialog now by default always use <%: %> blocks when outputting any content.  For example, below I’ve scaffolded a simple “Edit” view for an article object.  Note the three usages of <%: %> code nuggets for the label, textbox, and validation message (all output with HTML helper methods): Summary The new <%: %> syntax provides a concise way to automatically HTML encode content and then render it as output.  It allows you to make your code a little less verbose, and to easily check/verify that you are always HTML encoding content throughout your site.  This can help protect your applications against cross-site script injection (XSS) and HTML injection attacks.  Hope this helps, Scott

    Read the article

  • Blend for Visual Studio 2013 Prototyping Applications with SketchFlow

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/10/blend-for-visual-studio-2013-prototyping-applications-with-sketchflow.aspxSketchFlow enables rapid creating of dynamic interface mockups very quickly. The SketchFlow workspace is the same as the standard Blend workspace with the inclusion of three panels: the SketchFlow Feedback panel, the SketchFlow Animation panel and the SketchFlow Map panel. By using SketchFlow to prototype, you can get feedback early in the process. It helps to surface possible issues, lower development iterations, and increase stakeholder buy in. SketchFlow prototypes not only provide an initial look but also provide a way to add additional ideas and input and make sure the team is on track prior to investing in complete development. When you have completed the prototyping, you can discard the prototype and just use the lessons learned to design the application from or extract individual elements from your prototype and include them in the application. I don’t recommend trying to transition the entire project into a development project. Objects that you add with the SketchFlow style have a hand-sketched look. The sketch style is used to remind stakeholders that this is a prototype. This encourages them to focus on the flow and functionality without getting distracted by design details. The sketchflow assets are under sketchflow in the asset panel and are identifiable by the postfix “–Sketch”. For example “Button-Sketch”. You can mix sketch and standard controls in your interface, if required. Be creative, if there is a missing control or your interface has a different look and feel than the out of the box one, reuse other sketch controls to mimic the functionality or look and feel. Only use standard controls if it doesn’t distract from the idea that this is a prototype and not a standard application. The SketchFlow Map panel provides information about the structure of your application. To create a new screen in your prototype: Right-click the map surface and choose “Create a Connected Screen”. Name the screens with names that are meaningful to the stakeholders. The start screen is the one that has the green arrow. To change the start screen, right click on any other screen and set to start screen. Only one screen can be the start screen at a time. Rounded screen are component screens to mimic reusable custom controls that will be built into the final application. You can change the colors of all of the boxes and should use colors to create functional groupings. The groupings can be identified in the SketchFlow Project Settings. To add connections between screens in the SketchFlow Map panel. Move the mouse over a screen in the SketchFlow and a menu will appear at the bottom of the screen node. In the menu, click Connect to an existing screen. Drag the arrow to another screen on the Map. You add navigation to your prototype by adding connections on the SketchFlow map or by adding navigation directly to items on your interface. To add navigation from objects on the artboard, right click the item then from the menu, choose “Navigate to”. This will expose a sub-menu with available screens, backward, or forward. When the map has connected screens, the SketchFlow Player displays the connected screens on the Navigate sidebar. All screens show in the SketchFlow Player Map. To see the SketchFlow Player, run your SketchFlow prototype. The Navigation sidebar is meant to show the desired user work flow. 
The map can be used to view the different screens regardless of suggested navigation in the navigation bar. The map is able to be hidden and shown. As mentioned, a component screen is a shared screen that is used in more than one screen and generally represents what will be a custom object in the application. To create a component screen, you can create a screen, right click on it in the SketchFlow Map and choose “Make into component screen”. You can mouse over a screen and from the menu that appears underneath, choose create and insert component screen. To use an existing screen, select if from the Asset panel under SketchFlow, Components. You can use Storyboards and Visual State animations in your SketchFlow project. However, SketchFlow also offers its own animation technique that is simpler and better suited for prototyping. The SketchFlow Animation panel is above your artboard by default. In SketchFlow animation, you create frames and then position the elements on your interface for each frame. You then specify elapsed time and any effects you want to apply to the transition. The + at the top is what creates new frames. Once you have a new Frame, select it and change the property you want to animate. In the example above, I changed the Text of the result box. You can adjust the time between frames in the lower area between the frames. The easing and effects functions are changed in the center between each frame. You edit the hold time for frames by clicking the clock icon in the lower left and the hold time will appear on each frame and can be edited. The FluidLayout icon (also located in the lower left) will create smooth transitions. Next to the FluidLayout icon is the name of that Animation. You can rename the animation by clicking on it and editing the name. The down arrow chevrons next to the name allow you to view the list of all animations in this prototype and select them for editing. To add the animation to the interface object (such as a button to start the animation), select the PlaySketchFlowAnimationAction from the SketchFlow behaviors in the Assets menu and drag it to an object on your interface. With the PlaySketchFlowAnimationAction that you just added selected in the Objects and Timeline, edit the properties to change the EventName to the event you want and choose the SketchFlowAnimation you want from the drop down list. You may want to add additional information to your screens that isn’t really part of the prototype but is relevant information or a request for clarification or feedback from the reviewer. You do this with annotations or notes. Both appear on the user interface, however, annotations can be switched on or off at design and review time. Notes cannot be switched off. To add an Annotation, chose the Create Annotation from the Tools menu. The annotation appears on the UI where you will add the notes. To display or Hide annotations, click the annotation toggle at the bottom right on the artboard . After to toggle annotations on, the identifier of the person who created them appears on the artboard and you must click that to expand the notes. To add a note to the artboard, simply select the Note-Sketch from Assets ->SketchFlow ->Styles ->Sketch Styles. Drag and drop it to the artboard and place where you want it. When you are ready for users to review the prototype, you have a few options available. Click File -> Export and choose one of the options from the list: Publish to Sharepoint, Package SketchFlowProject, Export to Microsoft Word, or Export as Images. 
    I suggest you play with as many of the options as you can to see what they do. Both the SharePoint and Package SketchFlowProject options allow you to collect feedback from one or more users that you can import into the project. The user can make notes on the UI and in the Feedback area in the bottom left corner of the player. When the user is done adding feedback, it is exported from the rightmost folder icon in the My Feedback panel. Feedback is imported into a panel named SketchFlow Feedback. To get that panel to show up, select Window -> SketchFlow Feedback. Once you have the panel showing, click the + in the upper right of the panel and find the notes you exported. When imported, they will show up in a list and on the artboard. To document your prototype, use the Export to Microsoft Word option from the File menu. That should get you started with prototyping.

    Read the article

  • Software Architecture: Quality Attributes

    Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project’s requirements. Because quality is defined by project requirements, the meaning of quality constantly changes based on the project. Software architecture factors that indicate relevance and effectiveness: The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked. Architectural relevance and effectiveness questions: Does the architectural concept meet the needs of the system for which it was designed? Out of the competing architectures for a system, which one is the most suitable? Consider the first question, regarding meeting the needs of the system for which the architecture was designed: a system that answers yes to this question must meet all of its quality goals. This means that it consistently meets or exceeds the performance goals for the system. In addition, the system meets all the other required attributes based on the system’s requirements. The suitability of a system is based on several factors. In order for a project to be suitable, the necessary resources must be available to complete the task. Standard project resources: money, trained staff, and time. Life cycle factors that affect the system and design: The development life cycle used on a project can drastically affect how a system’s architecture is created, as well as influence its design. In the case of the traditional software development life cycle (SDLC), each phase must be completed before the next can begin. This waterfall approach does not allow for changes in a system’s architecture after that phase is completed. This can lead to major system issues when the architecture is suboptimal because of missed quality attributes, which can occur when a project has poor requirements or makes misguided architectural decisions, to name a few examples. Once the architectural phase is complete, the concepts established in that phase move on to the design phase, which is bound to use the concepts and guidelines defined in the previous phase regardless of any missing quality attributes needed for the project. If any issues arise during this phase regarding the selected architectural concepts, they cannot be corrected during the current project. This directly affects the design of the system, because the proper quality attributes required for the project were not used when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts, regardless of their missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then fall down to the implementation phase, where the actual system is coded based on the approved architectural concepts established in the architecture phase, regardless of their architectural quality. 
    Conversely, projects using more of an iterative or agile methodology to implement a system have more flexibility to correct architectural decisions based on missing quality attributes. This is because each phase of the SDLC is executed more than once, so any issues identified in the architecture of a system can be corrected in the next architectural phase. Subsequently, the corresponding changes will be adjusted in the following design phase, so that when the project is completed the optimal architectural and design decisions are applied to the solution. Architecture factors that indicate functional suitability: Systems that have functional shortcomings do not have the proper functionality based on the project’s driving quality attributes. In plain terms, the system does not live up to what is required of it by the stakeholders, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to certify that all its requirements are fully articulated so that there is no chance for misconceptions or misinterpretations by any stakeholder. This will help prevent issues in interpreting the system requirements during the initial architectural concept phase, the design phase, and the implementation phase. Consider the applicability of other architectural models: When considering an architectural model for a project, it is also important to consider alternative architectural models to ensure that the model selected will meet the system’s required functionality and quality attributes. I recently remember talking about a project I was working on when a coworker suggested a different architectural approach that I had never considered. The new model allowed for the same functionality offered by the existing model, but made for a higher quality project because it fulfilled more quality attributes. It is always important to seek alternatives prior to committing to an architectural model. Factors used to identify high-risk components: A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by the web server and also allows the server to connect to a remote database server so that it can import data into the system. This component supports both the data assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If for some reason the TCP/IP component were to fail, the web application would fail on two quality attributes, accessibility and data assurance, in that the web site would not be accessible and data could not be updated as needed. Summary: As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. 
    One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system based on its quality attributes.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of NoSQL distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale," so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one of them, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage: a similar type of storage to Berkeley DB, except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across the nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage that manages key/value or document sets and generally has some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • CodePlex Daily Summary for Thursday, April 08, 2010

    CodePlex Daily Summary for Thursday, April 08, 2010New ProjectsBackUpAnyWhere: BackUpAnyWhereCustomFormbyEndUser: 在项目开发中,经常遇到不同的用户对同一报表有不同要求的情况,有时甚至用户需要从头生成一个报表,在以前可能使用第三方的开发工具来实现。在SQL Server2005中,通过使用Reporting Services可以使最终用户不通过编码,只要了解数据结构就能自行编辑报表。本例使用Adventur...DbExecutor - linq based database executor: IEnumerable based database reader. (linq like primitive sql executor)DeepZoomRenderingPack: A collection of libraries and plug-ins architecture that turns various files (like PDF, PS, etc.) into a "Visual" representation that the DeepZoom ...DotNetNuke Russian Language packs: DNNRussianLP - DotNetNuke Russian Language pack. F# Refactor: Deisgned to bring Code Refactoring capabilities to the F# Language in Visual Studio 2010. Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Invocando Site HTTP e WebService dinamicamente com HTTPWebRequest Passando o SoapAction e Envelope XML Escrito em C# www.biztalkbrasil.c...Jitbit WYSWYG BBCode Editor: "Jitbit WYSIWYG-BBCode" is a browser-based JavaScript-powered WYSIWYG BBCode editorMRDS Services for Phidgets: MRDS (Microsoft Robotics Developer Studio) Services for Phidgets provides additional services for Phidgets sensors and controllers that are not inc...MSBuild Addin: This tool is a simple addin for VisualStudio 2008 used in association with Microsoft MSBuild. It allows you to run MSBuild directly inside Visual S...NISHIL-BizTalk Custom Eventlog Functiod: While testing our maps at times when it fails we cant trace it because we don’t know what the output of the functiods are. Normally in a single ma...Northest GNSG: Supinfo B3C Paris Northest University project. Galego, Neveu, Simon, Geissmann.Oily: Composite application project for oil parameters. It's developed in C#Outlook.Utility: The MSDN article Outlook Customization for Integrating with Enterprise Applications at http://msdn.microsoft.com/en-us/library/Aa479345 has quite a...Particle Plot Pivot: Scan select particle physics experiment web sites for plots and generate a Pivot display for easy browsing.project tca: project tca - translating chat application. Satisfyr: A new way of performing assertions on tests so that they remain agnostic to the underlying test framework, and leverage .NET built-in lambda syntax.sejce2008: jce se course wiki and projects linksSGB Controls: SGB Controls is a set of standard .net controls that include a number of enhancements to make life easier for the developer. These controls incl...Syringe: Syringe is a lightweight service container and dependency injection library designed for use with ASP.NET MVC2. Supported features: Dependency inj...topicbox: topicboxUr-Index: Ur-Index makes it a lot easier to create onomastic indexes for books in pdf format.VietGeeks ZohoDocApis: Implement .NET Zoho Document Apis library to help developer can intergrate Zoho Docs easy with their websitesWebometrics Dashboard: Webometrics Dashboardwebpress: It is a WebBased CMS and Blog platform.WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar makes it easy for WPF developers to use pen input in TabletPC or UMPC applications. 
The WPF InkCanvas control has drawing, e...WS-TMS: WS GISG HTT TMSNew ReleasesBatterySaver: Version 1.0: Fixed battery increase/decrease events not firing Fixed memory corruption error Added working set trimming (used very sparingly) Fixed poorly rende...Chargify.NET: Chargify.NET 0.65: Added in Transactions, Subscription Re-activation, and finally XML documentation (which has been missing in the previous releases).DbExecutor - linq based database executor: DbExecutor ver.1.0.0.1: renameDotNetNuke Russian Language packs: Russian Language Pack for DotNetNuke 04.09.02: Russian Language Pack for DotNetNuke 04.09.02Encrypted Notes: Encrypted Notes 1.6.3: This is the latest version of Encrypted Notes (1.6.3), with general improvements. It has an installer that will create a directory 'CPascoe' in My ...Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Código projeto CallSiteHTTP: Código escrito em C#.NET 2.0 - VS2005 Contem: Solution completa(código e executável) XML de configuração - Config.xml ...Jitbit WYSWYG BBCode Editor: Main package: Contains the JS-file, CSS-file and a sample.Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.1.0: Changelog Communication with Picasa Web Albums is done directly via HTTP now (v1.0.0 used Google's GData .NET Libraries) The plugin can search fo...MRDS Services for Phidgets: Phidgets for RDS 2008 R3: First Beta Release This ZIP file contains a web page called Readme_CodePlex.html that explains how to install the RDS Phidgets services for RDS 200...MSBuild Addin: MsBuildAddin-v1.0.0: Initial versionMSBuild Addin: MsBuildAddin-v1.0.0-src.zip: Initial versionOutlook.Utility: Outlook.Utility v1: I have used most of the code in previous projects and seems to be quite stable. Of course the point of open sourcing this is so this project is use...Scrum Dashboard: Scrum Dashboard v3 Alpha 1: Scrum Dashboard v3 is targeting .NET 4, TFS 2010 and the brand new Scrum for Team System v3 process templates. Most of the code has been rewritten ...SharePoint Labs: SPLab4004A-FRA-Level100: SPLab4004A-FRA-Level100 This SharePoint Lab will teach you the 4th best practice you should apply when writing code with the SharePoint API. Lab La...SharePoint Labs: SPLab5012A-FRA-Level100: SPLab5012A-FRA-Level100 This SharePoint Lab will teach you how to provision a new welcome page (how to change and rename the default.aspx page) on ...Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet Source Code: Although the latest version pex and moles used with this project is not available, we thought it would be useful to provide a download to the source.Syringe: Syringe 1.0: Features Dependency injection on properties of services in container Dependency injection on constructors of services in container ASP.Net Mvc ...Text to HTML: 0.4.1.0: Cambios de la versiónOptimización del código de exportación reduciendo el código. Cambio en el icono de exportación. Añadido menú Seleccionar t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 23: Build 23 Fix: Executing "Blame" through the Solution Explorer on a file opens TortoiseMerge rather than TortoiseBlame. 
Build 22 (beta) New: Visua...WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar 1.0: First release - included custom colour selectionMost Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsGraffiti CMSnopCommerce. Open Source online shop e-commerce solution.RawrShweet: SharePoint 2010 Team Messaging built with Pexpatterns & practices – Enterprise LibraryAcadsysAutoPocoIonics Isapi Rewrite FilterNcqrs Framework - The CQRS framework for .NETFarseer Physics Engine

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    Minification and Concatination of JavaScript and CSS Files Versioning Combined Files Using Subversion Versioning Combined Files Using Mercurial – this post I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format:- 0.12.524.407 {major}.{year}.{month}{date}.{mercurial revision} Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly also had to have a new GUID on each build. So like in previous posts, we need to edit the csproj file, and add a couple of Default targets. 1: <?xml version="1.0" encoding="utf-8"?> 2: <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build" 3: xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 4: <PropertyGroup> Right below the closing tag of the entire project we add our two targets, the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild which can be downloaded from http://msbuildhg.codeplex.com/ 1: <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />   1: <Target Name="Hg-Revision"> 2: <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000" 3: LibraryLocation="C:\TortoiseHg\"> 4: <Output TaskParameter="Revision" PropertyName="Revision" /> 5: </HgVersion> 6: <Message Text="Last revision from HG: $(Revision)" /> 7: </Target> With the main Mercurial files being located at c:\TortoiseHg To get a valid GUID we need to escape from the csproj markup and call some c# code which we put in a property group for later reference. 1: <PropertyGroup> 2: <GuidGenFunction> 3: <![CDATA[ 4: public static string ScriptMain() { 5: return System.Guid.NewGuid().ToString().ToUpper(); 6: } 7: ]]> 8: </GuidGenFunction> 9: </PropertyGroup> Now we add in our target for generating the GUID. 
1: <Target Name="AssemblyInfo"> 2: <Script Language="C#" Code="$(GuidGenFunction)"> 3: <Output TaskParameter="ReturnValue" PropertyName="NewGuid" /> 4: </Script> 5: <Time Format="yy"> 6: <Output TaskParameter="FormattedTime" PropertyName="year" /> 7: </Time> 8: <Time Format="Mdd"> 9: <Output TaskParameter="FormattedTime" PropertyName="daymonth" /> 10: </Time> 11: <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs" 12: AssemblyTitle="name" AssemblyDescription="description" 13: AssemblyCompany="none" AssemblyProduct="product" 14: AssemblyCopyright="Copyright ©" 15: ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)" 16: AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)" 17: AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" /> 18: </Target> So this will give use an AssemblyInfo.cs file like this just prior to calling the Build task:- 1: using System; 2: using System.Reflection; 3: using System.Runtime.CompilerServices; 4: using System.Runtime.InteropServices; 5:  6: [assembly: AssemblyTitle("name")] 7: [assembly: AssemblyDescription("description")] 8: [assembly: AssemblyCompany("none")] 9: [assembly: AssemblyProduct("product")] 10: [assembly: AssemblyCopyright("Copyright ©")] 11: [assembly: ComVisible(false)] 12: [assembly: CLSCompliant(true)] 13: [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")] 14: [assembly: AssemblyVersion("0.12.524.407")] 15: [assembly: AssemblyFileVersion("0.12.524.407")] Therefore giving us the correct version for the assembly. This can be referenced within your project whether web or Windows based like this:- 1: public static string AppVersion() 2: { 3: return Assembly.GetExecutingAssembly().GetName().Version.ToString(); 4: } As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library build into the .Net framework. 1: <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll"> 2: <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" /> 3: </GetAssemblyIdentity> Then use this to write out the files:- 1: <WriteLinestoFile 2: File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css" 3: Lines="@(CSSLinesSite)" Overwrite="true" />
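    To close the loop on the web side, the versioned file generated above can then be referenced from a page using the AppVersion() helper shown earlier (a hedged sketch; it assumes AppVersion() is visible to the view and that the view's executing assembly is the one that was stamped with the version):

        <link rel="stylesheet" type="text/css"
              href="Client/site-style-<%= AppVersion() %>.combined.min.css" />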

    Read the article

  • Oracle Solaris Cluster 4.2 Event and its SNMP Interface

    - by user12609115
    Background The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog. Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it would take effect on WARNING or higher severity events. The events with WARNING or higher severity are usually for the status change of a cluster component from ONLINE to OFFLINE. The interface worked like an alert/alarm interface when some components in the cluster were out of service (changed to OFFLINE). The consumers of this interface could not get notifications for all status changes and configuration changes in the cluster. Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2 The user model of the cluster event SNMP interface is the same as what was provided in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually, you only need to enable it on one of the cluster nodes or a subset of the cluster nodes, because all cluster nodes get the same cluster events. When it is enabled, it is responsible for two basic tasks. • Logs up to the 100 most recent NOTICE or higher severity events to the MIB. • Sends SNMP traps to the hosts that are configured to receive the above events. The changes in the Oracle Solaris Cluster 4.2 release are: 1) Introduction of the NOTICE severity for the cluster configuration and status change events. The NOTICE severity is introduced for the cluster event in the 4.2 release. It is the severity between the INFO and WARNING severities. Now all severities for the cluster events are (from low to high): • INFO (not exposed to the SNMP interface) • NOTICE (newly introduced in the 4.2 release) • WARNING • ERROR • CRITICAL • FATAL In the 4.2 release, the cluster event system is enhanced to make sure at least one event with the NOTICE or a higher severity will be generated when there is a configuration or status change from a cluster component instance. In other words, the cluster events from a cluster with the NOTICE or higher severities will cover all status and configuration changes in the cluster (including all component instances). The cluster component instance here refers to an instance of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group. With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface will get notifications for all status and configuration changes in the cluster. A third-party system management platform with the cluster SNMP interface integration can generate alarms and clear alarms programmatically, because it can get notifications for the status change from ONLINE to OFFLINE and also from OFFLINE to ONLINE. 2) Customization for the cluster event SNMP interface • The number of events logged to the MIB is 100. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event will be removed before storing the new event to the MIB (FIFO, first in, first out). 
The 100 is the default and minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum number that can be set for the property is 500. • The cluster event SNMP interface takes effect on the NOTICE or high severity events. The NOTICE severity is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect on other higher severity events, such as WARNING or higher severity events by setting the min_severity property to the WARNING. When the min_severity property is set to the WARNING, the cluster event SNMP interface would behave the same as the previous releases (prior to the 4.2 release). Examples, • Set the number of events stored in the MIB to 200 # clsnmpmib set -p log_number=200 event • Set the interface to take effect on WARNING or higher severity events. # clsnmpmib set -p min_severity=WARNING event Administering the Cluster Event SNMP Interface Oracle Solaris Cluster provides the following three commands to administer the SNMP interface. • clsnmpmib: administer the SNMP interface, and the MIB configuration. • clsnmphost: administer hosts for the SNMP traps • clsnmpuser: administer SNMP users (specific for SNMP v3 protocol) Only clsnmpmib is changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands. Examples: 1. Enable the cluster event SNMP interface on the local node # clsnmpmib enable event 2. Display the status of the cluster event SNMP interface on the local node # clsnmpmib show -v 3. Configure my_host to receive the cluster event SNMP traps. # clsnmphost add my_host Cluster Event SNMP Interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm. For example, # cacaoadm list-params Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties. # cacaoadm set-param snmp-adaptor-port=1161 Set the SNMP MIB port number to 1161. # cacaoadm set-param snmp-adaptor-trap-port=1162 Set the SNMP trap port number to 1162. The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mibdirectory. Its OID is 1.3.6.1.4.1.42.2.80, that can be used to walk through the MIB data. Again, for more detail information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide. - Leland Chen 

    Read the article

  • Asserting with JustMock

    - by mehfuzh
    In this post, i will be digging in a bit deep on Mock.Assert. This is the continuation from previous post and covers up the ways you can use assert for your mock expectations. I have used another traditional sample of Talisker that has a warehouse [Collaborator] and an order class [SUT] that will call upon the warehouse to see the stock and fill it up with items. Our sample, interface of warehouse and order looks similar to : public interface IWarehouse {     bool HasInventory(string productName, int quantity);     void Remove(string productName, int quantity); }   public class Order {     public string ProductName { get; private set; }     public int Quantity { get; private set; }     public bool IsFilled { get; private set; }       public Order(string productName, int quantity)     {         this.ProductName = productName;         this.Quantity = quantity;     }       public void Fill(IWarehouse warehouse)     {         if (warehouse.HasInventory(ProductName, Quantity))         {             warehouse.Remove(ProductName, Quantity);             IsFilled = true;         }     }   }   Our first example deals with mock object assertion [my take] / assert all scenario. This will only act on the setups that has this “MustBeCalled” flag associated. To be more specific , let first consider the following test code:    var order = new Order(TALISKER, 0);    var wareHouse = Mock.Create<IWarehouse>();      Mock.Arrange(() => wareHouse.HasInventory(Arg.Any<string>(), 0)).Returns(true).MustBeCalled();    Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 0)).Throws(new InvalidOperationException()).MustBeCalled();    Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 100)).Throws(new InvalidOperationException());      //exercise    Assert.Throws<InvalidOperationException>(() => order.Fill(wareHouse));    // it will assert first and second setup.    Mock.Assert(wareHouse); Here, we have created the order object, created the mock of IWarehouse , then I setup our HasInventory and Remove calls of IWarehouse with my expected, which is called by the order.Fill internally. Now both of these setups are marked as “MustBeCalled”. There is one additional IWarehouse.Remove that is invalid and is not marked.   On line 9 ,  as we do order.Fill , the first and second setups will be invoked internally where the third one is left  un-invoked. Here, Mock.Assert will pass successfully as  both of the required ones are called as expected. But, if we marked the third one as must then it would fail with an  proper exception. Here, we can also see that I have used the same call for two different setups, this feature is called sequential mocking and will be covered later on. Moving forward, let’s say, we don’t want this must call, when we want to do it specifically with lamda. For that let’s consider the following code: //setup - data var order = new Order(TALISKER, 50); var wareHouse = Mock.Create<IWarehouse>();   Mock.Arrange(() => wareHouse.HasInventory(TALISKER, 50)).Returns(true);   //exercise order.Fill(wareHouse);   //verify state Assert.True(order.IsFilled); //verify interaction Mock.Assert(()=> wareHouse.HasInventory(TALISKER, 50));   Here, the snippet shows a case for successful order, i haven’t used “MustBeCalled” rather i used lamda specifically to assert the call that I have made, which is more justified for the cases where we exactly know the user code will behave. But, here goes a question that how we are going assert a mock call if we don’t know what item a user code may request for. 
In that case, we can combine the matchers with our assert calls, just as we do for arrange: //setup - data  var order = new Order(TALISKER, 50);  var wareHouse = Mock.Create<IWarehouse>();    Mock.Arrange(() => wareHouse.HasInventory(TALISKER, Arg.Matches<int>( x => x <= 50))).Returns(true);    //exercise  order.Fill(wareHouse);    //verify state  Assert.True(order.IsFilled);    //verify interaction  Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), Arg.Matches<int>(x => x <= 50)));   Here, I have asserted a mock call for which I don’t know the item name, but I know that the number of items the user will request is no more than 50. This kind of expression-based assertion is now possible with JustMock. We can extend this sample to properties as well, which will be covered shortly [in other posts]. In addition to simple assertions, we can also use filters to limit the number of times a call has occurred, or to check whether it occurred at all. For example, in the first test code we have one setup that is never invoked. For that, it is always valid to use the following assert call: Mock.Assert(() => wareHouse.Remove(Arg.Any<string>(), 100), Occurs.Never()); Or, for warehouse.HasInventory we can do the following: Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Once()); Or, to be more specific, it’s even better with: Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Exactly(1));   There are other filters that you can apply here using AtMost, AtLeast and AtLeastOnce, but I leave those to the reader. You can try the above sample, which is provided in the examples shipped with JustMock. Please do check it out and feel free to ping me about any issues.   Enjoy!!
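    For completeness, the remaining occurrence filters mentioned above follow the same pattern (a hedged sketch; it assumes AtMost, AtLeast and AtLeastOnce are exposed on the same Occurs class as Never, Once and Exactly, and the counts shown are illustrative):

        // no more than two calls with any product name and quantity 0
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtMost(2));

        // at least one call, expressed two equivalent ways
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtLeast(1));
        Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtLeastOnce());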

    Read the article

  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction Although it was possible to write extensions for Expression Blend and Expression Design, it wasn’t very easy and out of the box only one addin could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. Until today there’s no documentation on how to build these extensions, so look thru the code with Reflector is something you’ll have to do very often. Because Blend and Design are build using WPF searching the visual tree with Snoop and Mole belong to the tools you’ll be using a lot exploring the possibilities.  Configuring the extension project Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is build using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for addins named *.extension.dll  you’ll have to tell Visual Studio to use that in the Assembly Name. To change the Assembly Name right click your project and go to Properties. On the Application tab, add .Extension to name already in the Assembly name text field. To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder. Including all referenced assemblies that have the copy local property set to false. One last setting. To be able to debug your extension you could start Blend and attach the debugger by hand. I like it to be able to just hit F5. Go to the Debug tab and add the the full path to Blend.exe in the Start external program text field. Extension Class Add a new class to the project.  This class needs to be inherited from the IPackage interface. The IPackage interface can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace add Microsoft.Expression.Extensibility.dll to your references. This file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false in this reference. After implementing the interface the class would look something like: using Microsoft.Expression.Extensibility; namespace DemoExtension1 { public class DemoExtension1:IPackage { public void Load(IServices services) { } public void Unload() { } } } These two methods are called when your addin is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method. You will be using this method a lot. Almost every part of Blend can be accessed thru a service. For example, you can use to get to the commanding services of Blend by calling GetService<ICommandService>() or to get to the Windowing services by calling GetService<IWindowService>(). To get Blend to load the extension we have to implement MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.)  In the case of Blend extensions, all that needs to be done is mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace which is part of the .NET 4 framework. You need to add this to your references. 
using System.ComponentModel.Composition; using Microsoft.Expression.Extensibility;   namespace DemoExtension1 { [Export(typeof(IPackage))] public class DemoExtension1:IPackage { Blend is able to find your addin now. Adding UI: The addin doesn’t do very much at this point. The WPF User Control Library came with a UserControl, so let’s use that in this example. I just drop a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend it has to be registered with the WindowService. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl and a caption for the palette. public void Load(IServices services) { IWindowService windowService = services.GetService<IWindowService>(); UserControl1 uc = new UserControl1(); windowService.RegisterPalette("DemoExtension", uc, "Demo Extension"); } After hitting F5 to start debugging, Expression Blend will start. You should be able to find the addin in the Window menu now. Activating this window will show the “Demo Extension” palette with the UserControl, styled according to the settings of Blend. Now what? Because little is publicly known about how to access different parts of Blend, adding breakpoints in debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you will have to do very often. This demo extension can be used for that purpose very easily. Add the click event handler to the button on the UserControl. Change the constructor to take the IServices interface and store it in a field. Set a breakpoint in the button1_Click method. public partial class UserControl1 : UserControl { private readonly IServices _services;   public UserControl1(IServices services) { _services = services; InitializeComponent(); }   private void button1_Click(object sender, RoutedEventArgs e) { } } Change the call to the constructor in the Load method and pass it the services parameter. public void Load(IServices services) { IWindowService service = services.GetService<IWindowService>(); UserControl1 uc = new UserControl1(services); service.RegisterPalette("DemoExtension", uc, "Demo Extension"); } Hit F5 to compile and start Blend. Go to the Window menu and show the addin. Click on the button to hit the breakpoint. Now place the caret on the _services text in the code window and hit Shift+F9 to show the Quick Watch window. Now start exploring and discovering where to find everything you need. More Information: There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference, the Microsoft Expression Blend® Add-in Preview for Windows® Phone. This will install a .extension.dll file in the extensions folder of Blend. You can load this file with Reflector and have a peek at how Microsoft is building its addins. Conclusion: I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we’ll have to work on documenting it in the community.
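    Putting the walkthrough together, the complete extension class ends up looking roughly like the sketch below (it only combines the IPackage, Export and RegisterPalette pieces already shown above; no additional Blend APIs are assumed):

        using System.ComponentModel.Composition;
        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            [Export(typeof(IPackage))]
            public class DemoExtension1 : IPackage
            {
                public void Load(IServices services)
                {
                    // Register the user control as a palette so it appears under Blend's Window menu.
                    IWindowService windowService = services.GetService<IWindowService>();
                    windowService.RegisterPalette("DemoExtension", new UserControl1(services), "Demo Extension");
                }

                public void Unload()
                {
                    // Nothing to clean up in this demo.
                }
            }
        }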

    Read the article

  • Setup and Use SpecFlow BDD with DevExpress XAF

    - by Patrick Liekhus
    Let’s get started with using the SpecFlow BDD syntax for writing tests with the DevExpress XAF EasyTest scripting syntax.  In order for this to work you will need to download and install the prerequisites listed below.  Once they are installed, follow the steps outlined below and enjoy. Prerequisites Install the following items: DevExpress eXpress Application Framework (XAF) found here SpecFlow found here Liekhus BDD/XAF Testing library found here Assumptions I am going to assume at this point that you have created your XAF application and have your Module, Win.Module and Win ready for usage.  You should have also set any attributes and/or settings as you see fit. Setup So where to start. Create a new testing project within your solution. I typically name this following a convention similar to the one XAF uses: the project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests). Add the following references to your project.  It should look like the reference list below. DevExpress.Data.v11.x DevExpress.Persistent.Base.v11.x DevExpress.Persistent.BaseImpl.v11.x DevExpress.Xpo.v11.2 Liekhus.Testing.BDD.Core Liekhus.Testing.BDD.DevExpress TechTalk.SpecFlow TestExecutor.v11.x (found in %Program Files%\DevExpress 2011.x\eXpressApp Framework\Tools\EasyTest) Right click the TestExecutor reference and set the “Copy Local” setting to True.  This forces the TestExecutor executable to be available in the bin directory, which is where the EasyTest script will be executed further down in the process. Add an Application Configuration File (app.config) to your test application.  You will need to make a few modifications to have SpecFlow generate Microsoft-style unit tests.  First add the section handler for SpecFlow and then set your choice of testing framework.  I prefer MS Test for my projects. Add the EasyTest configuration file to your project.  Add a new XML file and call it Config.xml. Open the properties window for the Config.xml file and set the “Copy to Output Directory” property to “Copy Always”. You will set up the Config file according to the specifications of the EasyTest library by mapping it to your executable and other settings.  You can find the details for the configuration of EasyTest here.  My file looks like this Create a new folder in your test project called “StepDefinitions”.  Add a new SpecFlow Step Definition file item under the StepDefinitions folder.  I typically call this class StepDefinition.cs. Have your step definition inherit from the Liekhus.Testing.BDD.DevExpress.StepDefinition class.  This will give you the default behaviors for your test in the next section. OK.  Now that we have done this series of steps, we will work on simplifying this.  This is an early preview of this new project and is not fully ready for consumption.  If you would like to experiment with it, please feel free.  Our goals are to make this an installable project on its own with its own project templates and default settings.  This will be coming in later versions.  Currently this project is in Alpha release. Let’s write our first test Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow “Feature” files. Add a new item to your project and select the SpecFlow Feature file under C#. Name your feature file, as you do your class files, after the test it is performing. Writing a feature file uses the Cucumber syntax of Given… When… Then.  Think of it in these terms.  Givens are the pre-conditions for the test.  
The Whens are the actual steps for the test being performed.  The Thens are the verification steps that confirm your test either passed or failed.  All of these steps are generated into an EasyTest format and executed against your XAF project.  You can find more on the Cucumber syntax by using the Secret Ninja Cucumber Scrolls.  This document has several good styles of tests, plus you can get your fill of Chuck Norris vs Ninjas.  Pretty humorous document but full of great content. My first test is going to test the entry of a new Album into the application and is outlined below. The Feature section at the top is more for your documentation purposes.  Try to be descriptive of the test so that it makes sense to the next person behind you.  The Scenario outline is described in the Ninja Scrolls, but think of it as a test template.  You can write one test outline and have multiple datasets (Scenarios) executed against that test.  Here are the steps of my test and their descriptions Given I am starting a new test – tells our test to create a new EasyTest file And (Given) the application is open – tells EasyTest to open our application defined in the Config.xml When I am at the “Albums” screen – tells XAF to navigate to the Albums list view And (When) I click the “New:Album” button – tells XAF to click the New Album button on the ribbon And (When) I enter the following information – tells XAF to find the field on the screen and put the value in that field And (When) I click the “Save and Close” button – tells XAF to click the “Save and Close” button on the detail window Then I verify results as “user” – tells the testing framework to execute the EasyTest as your configured user Once you compile and prepare your tests you should see the following in your Test View.  For each of your CreateNewAlbum lines in your scenarios, you will see a new test ready to execute. From here you will use your testing framework of choice to execute the test.  This in turn will execute the EasyTest framework to call back into your XAF application and test your business application. Again, please remember that this is an early preview and we are still working out the details.  Please let us know if you have any comments/questions/concerns. Thanks and happy testing.
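For reference, here is a minimal sketch of what the step definition class described in the setup above might look like. The inheritance from Liekhus.Testing.BDD.DevExpress.StepDefinition is taken from the article; the extra step, its wording and its body are purely illustrative and are not part of that library, so treat this as a hedged starting point rather than the library's actual API surface.

using TechTalk.SpecFlow;

namespace AlbumManager.FunctionalTests.StepDefinitions
{
    // Inherits the default Given/When/Then bindings supplied by the
    // Liekhus BDD/XAF library; you only add bindings here when a scenario
    // needs a step the base class does not already provide.
    [Binding]
    public class StepDefinition : Liekhus.Testing.BDD.DevExpress.StepDefinition
    {
        // Hypothetical extra step -- illustrative only.
        [Given(@"the test data has been reset")]
        public void GivenTheTestDataHasBeenReset()
        {
            // e.g. remove any albums left behind by a previous test run
        }
    }
}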

    Read the article

  • Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

    - by Scott Elvington
    In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We'll also look at the integration from the Ops Center perspective. I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available. Installing and Configuring the Ops Center Plug-in The plug-in is available for download from the Self Update page (Setup → Extensibility → Self Update). The plug-in name is “Ops Center Infrastructure stack”. Once you have downloaded the plug-in you can navigate to the Plug-In management page (Setup → Extensibility → Plug-ins) to begin deployment. The plug-in must first be deployed on the Management Server. You will need to provide the repository password of the SYS user in order to deploy the plug-in to the Management Server. There are a few pre-requisites that need to be completed on the Ops Center side before the plug-in can be deployed and configured on the desired Enterprise Manager agents. Any servers, whether physical or virtual, for which you wish to see metrics and alerts need to be managed by Ops Center. This means that the Operating System needs to have an Ops Center management agent installed as a minimum. The plug-in can provide even more value when Ops Center is also managing the other “layers of the stack”, for example the service processor, the blade chassis or the XSCF of an M-Series server. The more information that Ops Center has about the stack, the more information that will be visible within Enterprise Manager via the plug-in. In order to access the information within Ops Center, the plug-in requires a user to connect as. This user does not require any particular Ops Center permissions or roles, it simply needs to exist. You can create a specific “EMPlugin” user within Ops Center or use an existing user. Oracle recommends creating a specific, non-privileged user account within Ops Center for this purpose. From the Ops Center Administration section, select Enterprise Controller, click the Users tab and finally click the Add User icon to create the desired user account. For the purpose of this article I have discovered and managed the OS and service processor of the server where my Enterprise Manager 12c installation is hosted. With the plug-in deployed to the Management Server and the setup done within Ops Center, we're now ready to deploy the plug-in to the agents and configure the targets to communicate with the Ops Center Enterprise Controller. From the Setup menu select Add Targets then Add Targets Manually. Select the bottom radio button “Add Targets Manually by Specifying Target Monitoring Properties”, select Infrastructure Stack from the Target Type dropdown and finally, select the Monitoring Agent where you wish to deploy the plug-in. Click the Add Manually... button and fill in the details for the new target using the appropriate hostname for your Enterprise Controller and the user and password details for the plug-in access user. After the target has been added to the agent you will need to allow a few minutes for the initial data collection to complete. Once completed you can see the new target in the All Targets list. All metric collections are enabled by default except one. 
To enable Infrastructure Stack Alarms collection, navigate to the newly added target and then to Target → Monitoring → Metric and Collection Settings. There you can click the “Disabled” link under Collection Schedule to enable collection and set your desired collection frequency. By default, a Warning level alert in Ops Center will equate to a Warning level event in Enterprise Manager and a Critical alert will equate to a Critical event. This mapping can be altered in the Metric and Collection Settings also. The default incident rules in Enterprise Manager only create incidents from Critical events, so keep this in mind in case you want to see incidents generated for Warning or Info level alerts from Ops Center. Also, because Enterprise Manager already monitors the OS through its Host target type, the plug-in does not pull OS alerts from Ops Center so as to prevent duplication. In addition to alert propagation, the plug-in also provides data for several reports detailing the topology and configuration of the stack as well as any hardware sensor data that is available. These are available from the Information Publisher Reports. Navigate there from the Enterprise → Reports menu or directly from the Infrastructure Stack target of interest. As an example, here is a sample of the Hardware Sensors report showing some of the available sensor data. The report can also be exported to a CSV file format if desired. Connecting Ops Center to Enterprise Manager Repository For an Enterprise Manager user, the plug-in provides deeper visibility into the state of the infrastructure underlying the databases and middleware. On the Ops Center side, there is also greater visibility into the targets running on the infrastructure. To set up the Ops Center data collection, just navigate to the Administration section and select the Grid Control link. Select the Configure/Connect action from the right-hand menu and complete the wizard forms to enable the connection to the Enterprise Manager repository and UI. Be sure to use the sysman account when configuring the database connection. Once the job completes and the initial data synchronization is done you will see new Target tabs on your OS assets. The new tab lists all the Enterprise Manager targets and any alerts, availability and performance data specific to the selected target. It is also possible to use the GoTo icon to launch the Enterprise Manager BUI in context of the specific target or alert to drill into more detail. Hopefully this brief overview of the integration between Enterprise Manager and Ops Center has provided a jumpstart to getting a more complete view of the full stack of your enterprise systems.

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially and every organization with growing data is thinking of the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts: SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the frequently asked questions I’ve received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database It dumps data from the source database It loads data into the target NuoDB database Let’s see how we can migrate our data from SQL Server to NuoDB using a simple three-step approach. But before we do that we will create a sample database in MSSQL and later we will migrate the same database to NuoDB: Setup Step 1: Build sample data CREATE DATABASE [Test]; CREATE TABLE [Department]( [DepartmentID] [smallint] NOT NULL, [Name] VARCHAR(100) NOT NULL, [GroupName] VARCHAR(100) NOT NULL, [ModifiedDate] [datetime] NOT NULL, CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ( [DepartmentID] ASC ) ) ON [PRIMARY]; INSERT INTO Department SELECT * FROM AdventureWorks2012.HumanResources.Department; Note that I am using the SQL Server AdventureWorks database to build this sample table but you can build this sample table any way you prefer. Setup Step 2: Install 64-bit Java Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer. This is due to the fact that the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure the JAVA_HOME path is set in your environment settings, or else the tool will not work. Here is how you can do it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values. Variable Name: JAVA_HOME Variable Value: C:\Program Files\Java\jre7 Make sure you enter your Java installation directory in the Variable Value field. Setup Step 3: Install a JDBC driver for SQL Server. 
There are two JDBC drivers available for SQL Server.  Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver jTDS JDBC Driver In this example we will be using the jTDS JDBC driver. Once you download the driver, move the driver to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB: Migration Step 1: NuoDB Schema Generation Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is ‘test’. Additionally my username and password are also ‘test’. You can see that my SQL Server database is running on my localhost on port 1433. Additionally, the schema of the table is ‘dbo’. nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql The above script will generate a schema of all my SQL Server tables and will put it in the file C:\tmp\schema.sql . You can open the schema.sql file and execute this file directly in your NuoDB instance. You can follow the link here to see how you can execute the SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step. Migration Step 2: Generate the Dump File of the Data Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV format dump file, which will contain all the data from all the tables from the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time based on your database size. nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything manually. The third and final step will take care of completing the migration process. Migration Step 3: Load the Data into NuoDB After building the schema and taking a dump of the data, the very next step is essential and crucial. It will take the CSV file and load it into the NuoDB database. nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat Please note that in the above script we are now targeting the NuoDB database, which we have already created with the name of “MyTest”. If the database does not exist, create it manually before executing the above script. I have kept the username and password as “test”, but please make sure that you create a more secure password for your database for security reasons. Voila!  You’re Done That’s it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB.  You can now start exploring the database and build excellent, scale-out applications. 
In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB. Download NuoDB I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases.  Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain. http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/ http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/ Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup. Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported.  I wanted to figure out another mechanism to support a "pseudo listener" of some sort. First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough. (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.) To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps: Step 1: Determine if this server is hosting the primary replica. This is a TSQL step using this script: declare @agName sysname = 'AGTest' set nocount on declare @primaryReplica sysname select @primaryReplica = agState.primary_replica from sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where ag.name = @AGname if not exists(   select *    from sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where @@Servername = agstate.primary_replica    and ag.name = @AGname) begin   raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@Agname, @@Servername, @primaryReplica) end This script determines if the primary replica value of the AG group is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @AgName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica, otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message. Job Step 2: Update the CNAME entry in DNS with this server's name. I used a PowerShell script to accomplish this: $cname = "SPSQL.contoso.com"; $query = "Select * from MicrosoftDNS_CNAMEType"; $dns1 = "dc01.contoso.com"; $dns2 = "dc02.contoso.com"; if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true) { $dnsServer = $dns1 } elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true) { $dnsServer = $dns2 } else { $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2; Throw $msg } $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer  | ? 
{ $_.Ownername -match $cname }; $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."; $currentServer = $record.RecordData; if ($currentServer -eq $thisServer ) { $cname + " CNAME is up to date: " + $currentServer } else { $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer; $record.RecordData = $thisServer; $record.put() } This script does a few things: finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter); makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject); gets the FQDN of this server (GetHostEntry); and checks if the CNAME record is correct and updates it if necessary. (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.) Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account. When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent --> Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down. I created this two step Job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute. When the server that is hosting the primary replica is running the job, the job history looks like this: The job history on the secondary server looks like this: When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent Job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).    

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and myself on the topics of architecture and code quality.  It was real fun participating in this session.  Although we know each other from the MVP ALM community, Marcel, Adam and I haven’t worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking.  The basics of ensuring you have a good architecture and how you could document it is one thing.  Also, the same agreement on the importance of having a high quality code base, and how we used the Visual Studio 2012 tools, and some others (NDepend for example), to measure and ensure that the code quality was where it should be.  As the tools, methods and thinking popped up during the interview it was a lot of “Hey!  I do that too!”.  The tools are not only for “after the fact” work; we use them during the coding.  That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked.  The video has a bunch of call outs, pinpointing important things to remember. These are also listed on the corresponding web page. I haven’t seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights.  Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video.  And thanks to Lei Xu for doing the camera and recording job.  Thanks guys! Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan in his session on “A modern architecture review: Using the new code review tools” Friday 29th, 10.15-11.30 and Marcel de Vries’ session on “Intellitrace, what is it and how can I use it to my benefit” Wednesday 27th, 5-6.15 The highlights point out some important practices.  I’ll elaborate on a few of them here: Add instructions on how to compile the solution.  You do this by adding a text file with instructions to the solution, and keep it under source control.  These instructions should contain what is needed on top of a standard install of Visual Studio.  I do a lot of code reviews, and more often than not, I am not even able to compile the program, because they have used some tool or library that needs to be installed.  The same applies to any new developer who enters the team, so do this to increase your productivity when the team changes, or a team member switches computer. Don’t forget to document what you have to configure on the computer, the IIS being a common one. The more of this you can automate, the better.  Use NuGet to pull down libraries. When the text document gets to be more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead.  The metrics warning levels.  These are very conservatively set by Microsoft.  You rarely see anything but green, and besides, you should have color scales for each of the metrics.  I have a blog post describing a more appropriate set of levels, based on both research work and industry “best practices”.  The essential limits are: Cyclomatic complexity and coupling:  Higher numbers are worse On method levels: Green:  From 0 to 10 Yellow:  From 10 to 20  (some say 15).   Acceptable, but have a look to see if there is something unneeded here. Red: From 20 to 40:   Action required, get these down. Bleeding Red: Above 40   This is the real red alert.  Immediate action!  (My invention, as people have asked what do I do when I have cyclomatic complexity of 150.  
The only answer I could think of was: RUN!) Maintainability index:  Lower numbers are worse, scale from 0 to 100. On method levels: Green:  60 to 100 Yellow:  40 – 60.    (You will always have methods here too; accept the higher ones, and take a look at those that are down near the lower limit.  Check up against the other metrics.) Red:  20 – 40:  Action required, fix these. Bleeding red:  Below 20.  Immediate action required. When doing metrics analysis, you should leave the generated code out.  You do this by adding attributes; unfortunately Microsoft has “forgotten” to add these to all their stuff, so you might have to add them to some of the code.  In most cases it can be done so that it is not overwritten by a new round of code generation.  Take a look at my blog post here for details on how to do that. Class level metrics might also be useful, at least for coupling and maintenance.  But it is much more difficult to set any fixed limits on those.  Any metric aggregations on a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad.  NDepend has a recommendation, but they say it may vary too.  And in these days of data binding, the number might be pretty high, as properties count as methods.  However, if you take the worst case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over. In the video we mention the SOLID principles, coined by “Uncle Bob” (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video.  It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not, it is about how you design the interfaces and interactions between your classes.  The Dependency Inversion Container is just one technique which is based on this principle, but whose main purpose is to isolate things you would like to change at runtime, for example if you implement a plug in architecture.  Overuse of a Dependency Inversion Container is, however, NOT a good thing.  It should be used for a purpose and not as a general DI solution.  The general DI solution and thinking however is useful far beyond the DIC.   You should always “program to an abstraction”, and not to the concreteness.  We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and design patterns. GRASP patterns stand for General Responsibility Assignment Software Patterns and describe fundamental principles of object design and responsibility assignment.  What I find great with these patterns is that they are another way to focus on the responsibility of a class.  One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of the other classes.  We also discuss the term “Code Smells”.  This term was invented by Kent Beck and Martin Fowler when they worked on Fowler’s “Refactoring” book. A code smell is a set of “bad” coding practices, which are the drivers behind a corresponding set of refactorings.  Here is a good list of the smells, and their corresponding refactor patterns. See also this.
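To make the "leave the generated code out" advice above concrete, here is a minimal sketch of the attribute-based approach. The usual candidate is System.CodeDom.Compiler.GeneratedCodeAttribute; the class name and tool/version strings below are placeholders, and whether a particular analysis or metrics tool honours the attribute should be verified against its documentation.

using System.CodeDom.Compiler;

namespace MyApp.Data
{
    // Marking this half of the partial class keeps the exclusion in a file the
    // code generator never overwrites; attributes on partial types are merged,
    // so the designer-generated half of the class is covered as well.
    [GeneratedCode("SomeDesignerTool", "1.0")]   // tool name and version are illustrative
    public partial class CustomerDataSet
    {
        // the generated members live in the designer-generated half of this class
    }
}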

    Read the article

  • Build an Organization Chart In Visio 2010

    - by Mysticgeek
    With trying to manage a business these days, it’s very important to have an Organization Chart to keep everything manageable. Here we’ll show you how to build one in Visio 2010. This Guest Article was written by our friends over at Office 2010 Club. Need for Organization Charts The need to create Organization Charts is becoming indispensable these days, as companies focus on extensive hiring for far-reaching availability, increased productivity and targeting diverse markets. Considering this rigorous change, creating an organization chart can help stakeholders comprehend the ever growing organization structure & hierarchy with ease. It shows the basic structure of the organization along with defining the relationships between employees working in different departments. Fortunately, Microsoft Visio 2010 offers an easy way to create an Organization chart. Until now, orthodox ways of listing organization hierarchy have been used for defining the structure of departments along with the possible communication, including horizontal and vertical communications. To transform these lists, which define organizational structure, into a detailed chart, Visio 2010 includes an add-in for importing an Excel spreadsheet, which comes in handy for pulling data out of a spreadsheet to create an organization chart. Importantly, you don’t need to lose yourself in a maze of defining organizational hierarchies and chalking out structure; you just need to specify the column & row headers along with the data you need to import, and it will automatically create a chart defining organizational hierarchies with the specified credentials of each employee, categorized in their corresponding departments. Creating Organization Charts in Visio 2010 To start off with, we have created an Excel spreadsheet having the fields Name, Supervisor, Designation, Department and Phone. The Name field contains the names of all the employees working in different departments, whereas the Supervisor field contains the names of supervisors or team leads. This field is vital for creating an Organization Chart, as it defines the basic structure & hierarchy in the chart. Now launch Visio 2010, head over to the View tab, and under the Add-Ons menu, from the Business options, click Organization Chart Wizard. This will start the Organization Chart Wizard; in the first step, enable the Information that’s already stored in a file or database option, and click Next. As we are importing an Excel sheet, select the second option for importing an Excel spreadsheet. Specify the Excel file path and click Next to continue. In this step, you need to specify the fields which actually define the structure of an organization. In our case, these are the Name & Supervisor fields. After specifying the fields, click Next to proceed further. An organization chart is primarily for showing the hierarchy of departments/employees working in an organization, along with how they are linked together and who supervises whom. Considering this, in this step we will leave out the Supervisor field, because its inclusion isn’t necessary as Visio automatically chalks out the basic structure defined in the Excel sheet. Add the rest of the fields under the Displayed fields category, and click Next. Now choose the fields which you want to include in the Organization Chart’s shapes and click Next. This step is about breaking the chart into multiple pages; if you are dealing with 100+ employees, you may want to specify the number of pages on which the Organization Chart will be displayed. 
But in our case, we are dealing with much less data, so we will enable the I want the wizard to automatically break my organization chart across pages option. Specify the name you need to show at the top of the page. If you have fewer than 20 hierarchy levels, enter the name of the highest ranked employee in the organization and click Finish to end the wizard. It will instantly create an Organization chart out of the specified Excel spreadsheet. The highest ranked employee will be shown at the top of the organization chart, supervising various employees from different departments. As shown below, his immediate subordinates in turn manage other employees, and so on. For advanced customizations, head over to the Org Chart tab; here you will find different groups for setting up the Org Chart’s hierarchy and managing other employees’ positions. Under the Arrange group, shape arrangements can be changed, and it provides easy navigation through the chart. You can also change the type of a position and hide the subordinates of a selected employee. From the Picture group, you can insert a picture of the employees, departments, etc. From the Synchronization group, you have the option of creating a synced copy and expanding the subordinates of a selected employee. Under the Organization Data group, you can change the whole layout of the Organization chart from Display Options, including shape display, show divider, enable/disable imported fields, change block position, and fill colors, etc. If at any point in time you need to insert a new position or announce a vacancy, the Organization Chart stencil is always available on the left sidebar. Drag the desired Organization Chart shape onto the main diagram page; to maintain the structure’s integrity, i.e., when inserting subordinates for a specific employee, drag the position shape over the existing employee’s shape box. For instance, we have added a consultant to the organization who is directly under the CEO; to maintain this, we have dragged the Consultant box and just dropped it over the CEO box to make it the immediate subordinate position. Adding details to a new position is a cinch: just right-click the new position box and click Properties. This will open up the Shape Data dialog; start filling in all the relevant information and click OK. Here you can see the newly created position is easily populated with all the specified information. Now expanding an Organization Chart doesn’t require maintenance of long lists any more. Under the Design tab, you can also try out different designs & layouts for the organization chart to make it look more flamboyant and professional.  Conclusion An Organization Chart is a great way of showing detailed organizational hierarchies, with defined credentials of employees, department structure, new vacancies, newly hired employees, and recently added departments, and importantly it shows the most convenient way of interaction between different departments & employees. 

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method, so I turned to Visual Studio’s Find/Replace… feature. Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet
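If you would rather script this reset than drive it through the IDE, a rough equivalent can be written with .NET regular expressions. Note that the sketch below uses the .NET group syntax (...) with $1/$2 substitutions rather than the Visual Studio tagged-expression syntax {…} and \1 shown above; the folder path is a placeholder, the pattern is simplified compared to the IDE expression, and as with any bulk edit you should have the packages under source control before running it.

using System.IO;
using System.Text.RegularExpressions;

class ResetEtlIfcHistId
{
    static void Main()
    {
        // Folder containing the .dtsx files -- adjust to your own project location.
        const string packageFolder = @"C:\MySsisProject";

        // Group 1 captures everything up to and including the opening of the
        // ParameterValue property for the ETLIfcHist_ID parameter, [^<]* skips
        // whatever the current value is, and group 2 captures the closing tag.
        var pattern = new Regex(
            "(<DTS:PackageParameter[^>]*DTS:ObjectName=\"ETLIfcHist_ID\">\\s*" +
            "<DTS:Property[^>]*DTS:Name=\"ParameterValue\">)" +
            "[^<]*" +
            "(</DTS:Property>)");

        foreach (var file in Directory.GetFiles(packageFolder, "*.dtsx"))
        {
            var xml = File.ReadAllText(file);
            // Put the two captured chunks back with the literal -1 in between.
            File.WriteAllText(file, pattern.Replace(xml, "$1-1$2"));
        }
    }
}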

    Read the article

  • Dynamic Code for type casting Generic Types 'generically' in C#

    - by Rick Strahl
    C# is a strongly typed language and while that's a fundamental feature of the language there are more and more situations where dynamic types make a lot of sense. I've written quite a bit about how I use dynamic for creating new type extensions: Dynamic Types and DynamicObject References in C# Creating a dynamic, extensible C# Expando Object Creating a dynamic DataReader for dynamic Property Access Today I want to point out an example of a much simpler usage for dynamic that I use occasionally to get around potential static typing issues in C# code, especially those concerning generic types. TypeCasting Generics Generic types have been around since .NET 2.0. I've run into a number of situations in the past - especially with generic types that don't implement specific interfaces that can be cast to - where I've been unable to properly cast an object when it's passed to a method or assigned to a property. Granted, often this can be a sign of bad design, but in at least some situations the code that needs to be integrated is not under my control so I have to make do with what's available, or the parent object is too complex or intermingled to be easily refactored to a new usage scenario. Here's an example that I ran into in my own RazorHosting library - so I have really no excuse, but I also don't see another clean way around it in this case. A Generic Example Imagine I've implemented a generic type like this: public class RazorEngine<TBaseTemplateType> where TBaseTemplateType : RazorTemplateBase, new() You can now happily instantiate new generic versions of this type with custom template bases or even a non-generic version which is implemented like this: public class RazorEngine : RazorEngine<RazorTemplateBase> { public RazorEngine() : base() { } } To instantiate one: var engine = new RazorEngine<MyCustomRazorTemplate>(); Now imagine that the template class receives a reference to the engine when it's instantiated. This code is fired as part of the Engine pipeline when it gets ready to execute the template. It instantiates the template and assigns itself to the template: var template = new TBaseTemplateType() { Engine = this } The problem here is that possibly many variations of RazorEngine<T> can be passed. I can have RazorTemplateBase, RazorFolderHostTemplateBase, CustomRazorTemplateBase etc. as generic parameters and the Engine property has to reflect that somehow. So, how would I cast that? My first inclination was to use an interface on the engine class and then cast to the interface.  Generally that works, but unfortunately here the engine class is generic and has a few members that require the template type in the member signatures. So while I certainly can implement an interface: public interface IRazorEngine<TBaseTemplateType> it doesn't really help for passing this generically templated object to the template class - I still can't cast it if multiple differently typed versions of the generic type could be passed. I have the exact same issue in that I can't specify a 'generic' generic parameter, since there's no underlying base type that's common. In light of this I decided on using object and the following syntax for the property (and the same would be true for a method parameter): public class RazorTemplateBase :MarshalByRefObject,IDisposable { public object Engine {get;set; } } Now because the Engine property is a non-typed object, when I need to do something with this value, I still have no way to cast it explicitly. 
What I really would need is: public RazorEngine<> Engine { get; set; } but that's not possible. Dynamic to the Rescue Luckily with the dynamic type this sort of thing can be mitigated fairly easily. For example here's a method that uses the Engine property and uses the well known class interface by simply casting the plain object reference to dynamic and then firing away on the properties and methods of the base template class that are common to all templates: /// <summary> /// Allows rendering a dynamic template from a string template /// passing in a model. This is like rendering a partial /// but providing the input as a /// </summary> public virtual string RenderTemplate(string template,object model) { if (template == null) return string.Empty; // if there's no template markup if(!template.Contains("@")) return template; // use dynamic to get around generic type casting dynamic engine = Engine; string result = engine.RenderTemplate(template, model); if (result == null) throw new ApplicationException("RenderTemplate failed: " + engine.ErrorMessage); return result; } Prior to .NET 4.0 I would have had to use Reflection for this sort of thing, which would have been a heck of a lot more verbose, but dynamic makes this so much easier and cleaner, and in this case at least the overhead is negligible since it's a single dynamic operation on an otherwise very complex operation call. Dynamic as a Bailout This sort of thing often reeks of a design flaw, and I agree that in hindsight this could have been designed differently. But as is often the case this particular scenario wasn't planned for originally and removing the generic signatures from the base type would break a ton of other code in the framework. Given the existing fairly complex engine design, refactoring an interface to remove generic types just to make this particular code work would have been overkill. Instead dynamic provides a nice and simple and relatively clean solution. Now if there were many other places where this occurs I would probably consider reworking the code to make this cleaner but given this isolated instance and relatively low profile operation use of dynamic seems a valid choice for me. This solution really works anywhere where you might end up with an inheritance structure that doesn't have a common base or interface that is sufficient. In the example above I know what I'm getting but there's no common base type that I can cast to. All that said, it's a good idea to think about use of dynamic before you rush in. In many situations there are alternatives that can still work with static typing. Dynamic definitely has some overhead compared to direct static access of objects, so if possible we should definitely stick to static typing. In the example above the application already uses dynamics extensively for dynamic page templating and passing models around so introducing dynamics here has very little additional overhead. The operation itself also fires off a fairly resource-heavy operation where the overhead of a couple of dynamic member accesses is not a performance issue. So, what's your experience with dynamic as a bailout mechanism? 
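As a stripped-down, self-contained illustration of the pattern (these are not the actual RazorHosting types; all class and member names here are invented for the example), the sketch below shows a template type that exposes its engine as a plain object, with a dynamic reference bridging the gap at the call site regardless of which generic engine instance was assigned:

using System;

public class TemplateBase
{
    // Typed as object because no common non-generic base or interface exists
    // for all Engine<T> variations that might be assigned here.
    public object Engine { get; set; }

    public string Render(string markup)
    {
        dynamic engine = Engine;               // late-bound: no explicit cast needed
        return engine.RenderTemplate(markup);  // resolved against the runtime type
    }
}

public class Engine<T> where T : TemplateBase, new()
{
    public string RenderTemplate(string markup)
    {
        return "[" + typeof(T).Name + "] " + markup;
    }

    public string Run(string markup)
    {
        // The engine assigns itself to the template it creates -- exactly the
        // situation described in the article.
        var template = new T { Engine = this };
        return template.Render(markup);
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new Engine<TemplateBase>().Run("Hello"));  // prints: [TemplateBase] Hello
    }
}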
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp

    Read the article

  • New OFM versions released SOA Suite 11.1.1.4 & BPM 11.1.1.4 & JDeveloper 11.1.1.4 WebLogic on JRockit 10.3.4 feedback from the community

    - by Jürgen Kress
    Oracle SOA Suite 11g Installations This is the latest release of the Oracle SOA Suite 11g. Please see the Documentation tab for Release Notes, Installation Guides and other release specific information. Please also see the List of New Features and Samples provided for this release. Release 11gR1 (11.1.1.4.0) Microsoft Windows (32-bit JVM) Linux (32-bit JVM) Generic Oracle JDeveloper 11g Rel 1 (11.1.1.x) (JDeveloper + ADF) Integrated development environment certified on Windows, Linux, and Macintosh. License is free (read the Pricing FAQ). Studio Edition for Windows (1.2 GB) | Studio Edition for Linux (1.3 GB) | See All See Additional Development Tools Oracle WebLogic Server 11g Rel 1 (10.3.4) Installers The WebLogic Server installers include Oracle Coherence and Oracle Enterprise Pack for Eclipse and supports development with other Fusion Middleware products . The zip includes WebLogic Server only and is intended for WebLogic Server development only. Linux x86 (1.1 GB) | Windows x86 (1 GB) Zip for Windows x86, Linux x86, Mac OS X (316 MB) | See All Oracle WebLogic Server 11gR1 (10.3.4) on JRockit Virtual Edition Download For additional downloads please visit the Oracle Fusion Middleware Products Update Center Share your feedback with the @soacommunity on twitter SOASimone Simone Geib SOA Suite 11gR1 (11.1.1.4.0) has just been released: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html gschmutz gschmutz My new blog post: WebLogic Server, JDev, SOA, BPM, OSB and CEP 11.1.1.4 (PS3) available! - http://tinyurl.com/4negnpn simon_haslam Simon Haslam I'm very pleased to see WLS 10.3.4 for JRockit VE launched at the same time as the rest of PS3 http://j.mp/gl1nQm (32bit anyway) lucasjellema Lucas Jellema See http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/156082.xml for PS3 extension downloads BPM, SOA Editor, WebCenter demed demed List of new features in @OracleSOA 11gR1 PS3: http://bit.ly/fVRwsP is not extremely long but huge release by # of bugs fixed. Go! biemond Edwin Biemond WebLogic 10.3.4 new features http://bit.ly/f7L1Eu Exalogic Elastic Cloud , JPA2 , Maven plugin, OWSM policies on WebLogic SCA applications JDeveloper JDeveloper & ADF JDeveloper and Oracle ADF 11g Release 1 Patch Set 3 (11.1.1.4.0): New Features and Bug Fixes http://bit.ly/feghnY simon_haslam Simon Haslam WebLogic Server 10.3.4 (i.e. 11gR1 PS3) available now too http://bit.ly/eeysZ2 JDeveloper JDeveloper & ADF Share your impressions on the new JDeveloper 11g Patchset 3 release that came out today! 
Download it here: http://bit.ly/dogRN8 VikasAatOracle Vikas Anand SOA Suite 11gR1PS3 is Hotpluggable ...see list of features that @Demed posted..#soa #soacommunity   New versions of Oracle Fusion Middleware 11g R1 (11.1.1.4.x)  include: Oracle WebLogic Server 11g R1 (10.3.4) Oracle SOA Suite 11g R1 (11.1.1.4.0) Oracle Business Process Management 11g R1 (11.1.1.4.0) Oracle Complex Event Processing 11g R1 (11.1.1.4.0) Oracle Application Integration Architecture Foundation Pack 11g R1 (11.1.1.4.0) Oracle Service Bus 11g R1 (11.1.1.4.0) Oracle Enterprise Repository 11g R1 (11.1.1.4.0) Oracle Identity Management 11g R1 (11.1.1.4.0) Oracle Enterprise Content Management 11g R1 (11.1.1.4.0) Oracle WebCenter 11g R1 (11.1.1.4.0) - coming soon Oracle Forms, Reports, Portal & Discoverer 11g R1 (11.1.1.4.0) Oracle Repository Creation Utility 11g R1 (11.1.1.4.0) Oracle JDeveloper & Application Development Runtime 11g R1 (11.1.1.4.0) Resources Download  (OTN) Certification Documentation   New Features in Oracle SOA Suite 11g Release 1 (11.1.1.4.0) Updated: January, 2011 Go to Oracle SOA Suite 11g Doc Introduction Oracle SOA Suite 11gR1 (11.1.1.4.0) includes both bug fixes as well as new features listed below - click on the title of each feature for more details. Downloads, documentation links and more information on the Oracle SOA Suite available on the SOA Suite OTN page and as always, we welcome your feedback on the SOA OTN forum. New in Oracle SOA Suite in this release BPEL Component BPEL 2.0 support in JDeveloper The BPEL editor in JDeveloper now generates BPEL 2.0 code and introduces several new activities. Augmented XML variables auto-initialization capabilities The XML variable auto-initialization capabilities have been enhanced to support two need additional use cases: to initialize the to-spec node if it doesn't exist during the rule and to initialize array elements. New Assign Activity dialog The new Assign Activity supports the same drag & drop paradigm used for the XSLT mapper, greatly streamlining the task of assigning multiple variables. Mediator Component Time window parameter for the resequencer This new parameter lets users initiate a best-effort resequencing based on a time window rather than a number of messages. Support for attachments in the Mediator assign dialog The Mediator assign dialog now supports attachment, enabling usage of the Mediator to transmit attachments even if source and target schemas are different. Adapters & Bindings ChunkSize property added to the File Adapter header properties The ChunkSize property of the File Adapter is now available as a header property, allowing in-process modification of the value for this property. Improved support for distributed WLS JMS topics though automatic rebalancing of listeners The JMS Adapter has been enhanced to subscribe to administrative events from WLS JMS. Based on these events, it dynamically rebalances listeners when there are changes to the members of a local or remote WLS JMS distributed destination. JDeveloper configuration wizard for custom JCA adapters A new wizard is available in JDeveloper to configure custom-built adapters Administration & Enterprise Manager Enhanced purging capabilities to manage database growth Historical instance data can now be purged using three different strategies: batch script, scheduled batch script or data partitioning. 
Asynchronous bulk instance deletion in Enterprise Manager Bulk deletion of instances in Enterprise Manager now executes as an asynchronous operation in Enterprise Manager, returning control to the user as soon as the action has been submitted and acknowledged. B2B Ability to schedule partner downtime This feature allows trading partners to notify each other about planned downtime and to delay delivery of messages during that period. Message sequencing B2B now supports both inbound and outbound message sequencing. Simplified BAM integration with B2B B2B ships with various pre-configured artifacts to simplify monitoring in BAM. Instance Message Java API for B2B The new instance message Java API supports programmatic access to B2B instance message data. Oracle Service Bus (OSB) Certification of the File and FTP JCA Adapters The File and FTP JCA adapters are now certified for use with Oracle Service Bus (in addition to the native transports). Security enhancements Oracle Service Bus now supports SAML 2.0 as well as the OWSM authorization policies. Check the Oracle Service Bus 11.1.1.4 Release Notes for a complete list of new features. Installation, Hot-Pluggability & Certifications Ability to run Oracle SOA Suite on IBM WebSphere Application Server Oracle SOA Suite can now be deployed on IBM WebSphere Application Server Network Deployment (ND) 7.0.11 and IBM WebSphere Application Server 7.0.11. Single JVM developer installation template Oracle SOA Suite can now be targeted to the WebLogic admin server - there is no requirement to also have a managed server. This topology is intended to minimize the memory foorprint of development environments. This is in addition to the list of supported browsers, operating systems and databases already certified in prior releases. Complex Event Processing (CEP) IDE enhancements This release introduces several enhancements to the development IDE, such as adapter wizards and event-type repository. CQL enhancements CQL enhancements include JDBC data cartridges and parametrized queries. Tracing and injecting events in the Event Processing Network (EPN) In the development environment you can now trace and inject events. Check the Oracle CEP 11.1.1.4 Release Notes for a complete list of new features. SOA Suite page on OTN For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: SOA Suite 11.1.1.4,JDeveloper 11.1.1.4,WebLogic 10.3.4,JRockit 10.3.4,SOA Community,Oracle,OPN,SOA,Simone Geib,Guido Schmutz,Edwin Biemond,Lucas Jellema,Simon Haslam,Demed,Vikas Anand,Jürgen Kress

    Read the article

  • Quartz.Net Windows Service Configure Logging

    - by Tarun Arora
    In this blog post I’ll be covering, Logging for Quartz.Net Windows Service 01 – Why doesn’t Quartz.Net Windows Service log by default 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc 03 – Results: Logging in action If you are new to Quartz.Net I would recommend going through, A brief Introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service Writing & Scheduling your First HelloWorld job with Quartz.Net   01 – Why doesn’t Quartz.Net Windows Service log by default If you are trying to figure out why… The Quartz.Net windows service isn’t logging The Quartz.Net windows service isn’t writing anything to the event log The Quartz.Net windows service isn’t writing anything to a file How do I configure Quartz.Net windows service to use log4Net How do I change the level of logging for Quartz.Net Look no further, This blog post should help you answer these questions. Quartz.NET uses the Common.Logging framework for all of its logging needs. If you navigate to the directory where Quartz.Net Windows Service is installed (I have the service installed in C:\Program Files (x86)\Quartz.net, you can find out the location by looking at the properties of the service) and open ‘Quartz.Server.exe.config’ you’ll see that the Quartz.Net is already set up for logging to ConsoleAppender and EventLogAppender, but only ‘ConsoleAppender’ is set up as active. So, unless you have the console associated to the Quartz.Net service you won’t be able to see any logging. <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <root> <level value="INFO" /> <appender-ref ref="ConsoleAppender" /> <!-- uncomment to enable event log appending --> <!-- <appender-ref ref="EventLogAppender" /> --> </root> </log4net> Problem: In the configuration above Quartz.Net Windows Service only has ConsoleAppender active. So, no logging will be done to EventLog. More over the RollingFileAppender isn’t setup at all. So, Quartz.Net will not log to an application trace log file. 02 – Configuring Quartz.Net windows service for logging to eventlog, file, console, etc Let’s change this behaviour by changing the config file… In the below config file, I have added the RollingFileAppender. This will configure Quartz.Net service to write to a log file. (<appender name="GeneralLog" type="log4net.Appender.RollingFileAppender">) I have specified the location for the log file (<arg key="configFile" value="Trace/application.log.txt"/>) I have enabled the EventLogAppender and RollingFileAppender to be written to by Quartz. Net windows service Changed the default level of logging from ‘Info’ to ‘All’. This means all activity performed by Quartz.Net Windows service will be logged. You might want to tune this back to ‘Debug’ or ‘Info’ later as logging ‘All’ will produce too much data to the logs. (<level value="ALL"/>) Since I have changed the logging level to ‘All’, I have added applicationSetting to remove logging log4Net internal debugging. 
(<add key="log4net.Internal.Debug" value="false"/>) <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> <sectionGroup name="common"> <section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" /> </sectionGroup> </configSections> <common> <logging> <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net"> <arg key="configType" value="INLINE" /> <arg key="configFile" value="Trace/application.log.txt"/> <arg key="level" value="ALL" /> </factoryAdapter> </logging> </common> <appSettings> <add key="log4net.Internal.Debug" value="false"/> </appSettings> <log4net> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender"> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d [%t] %-5p %l - %m%n" /> </layout> </appender> <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender"> <file value="Trace/application.log.txt"/> <appendToFile value="true"/> <maximumFileSize value="1024KB"/> <rollingStyle value="Size"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss} [%t] %-5p %c - %m%n"/> </layout> </appender> <root> <level value="ALL" /> <appender-ref ref="ConsoleAppender" /> <appender-ref ref="EventLogAppender" /> <appender-ref ref="GeneralLog"/> </root> </log4net> </configuration>   Note – Please ensure you restart the Quartz.Net Windows service for the config changes to be picked up by the service   03 – Results: Logging in action Once you start the Quartz.Net Windows Service up, the logging should be initiated to write all activities in the Console, EventLog and File… See screen shots below… Figure – Quartz.Net Windows Service logging all activity to the event log Figure – Quartz.Net Windows Service logging all activity to the application log file Where is the output from log4Net ConsoleAppender? As a default behaviour, the console isn't available in windows services, web services, windows forms. The output will simply be dismissed. Unless you are running the process interactively. Which you can do by firing up Quartz.Server.exe –i to see the output   This was fourth in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering troubleshooting why a scheduled task hasn’t fired on Quartz.net windows service. All Quartz.Net specific blog posts can listed here. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
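To confirm that the appenders configured above actually receive messages, it can also help to log something from your own job code. The following is a minimal sketch rather than code from the post: the HelloWorldJob name is made up, and the IJob signature assumes Quartz.Net 2.x. It writes through Common.Logging, so whatever is configured in Quartz.Server.exe.config (console, event log or rolling file) picks the entries up.

using System;
using Common.Logging;
using Quartz;

public class HelloWorldJob : IJob
{
    // Common.Logging routes this to log4net via the factoryAdapter configured above
    private static readonly ILog Log = LogManager.GetLogger(typeof(HelloWorldJob));

    public void Execute(IJobExecutionContext context)
    {
        Log.Info("HelloWorldJob fired by trigger " + context.Trigger.Key);
        try
        {
            // ... the actual work of the job goes here ...
        }
        catch (Exception ex)
        {
            Log.Error("HelloWorldJob failed", ex);
            throw new JobExecutionException(ex);
        }
    }
}

With the root level set to ALL, both the Info entry and any Error entries should then show up in the event log and in Trace/application.log.txt.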

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed. Downloading and Installing TeamCity TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it a great no-risk tool you can use to experiment with. 1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly. 2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease. 3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect. 4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup. Putting your database in source control using SQL Source Control 5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control. 6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes. 7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit. 8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to', in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes. Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes. 9. In TeamCity Adminstration, select Create Project and give it a name, such as "My first database CI", and click Create. 10. Click on Create Build Configuration, and name it something like "Integration build". 11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor. 12. In my case since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion. 13. In the URL field paste your repository location. 
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment 14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save. 15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the same of keeping it simple I will pick the simplest option. 16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe 17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing. 18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2 /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager otherwise the SQL Compare command line will not be able to connect. 19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save. 20. Now return to SQL Server Management Studio and make a schema change (eg add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered. 21. Click on Projects in TeamCity to get back to the overview screen: The build log will show you the console output, which is useful for troubleshooting any issues: That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below. Technorati Tags: SQL Server
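If you want to sanity-check the SQL Compare arguments from step 18 before wiring them into TeamCity, one option is to shell out to sqlcompare.exe from a small console program and inspect the exit code. This is only an illustrative sketch; the install path, scripts folder, server and database names are the ones assumed in this walkthrough and will differ on your machine.

using System;
using System.Diagnostics;

class VerifySqlCompare
{
    static int Main()
    {
        var psi = new ProcessStartInfo
        {
            // same executable and switches as the TeamCity command line build step
            FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
            Arguments = @"/scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose",
            WorkingDirectory = @"C:\path\to\your\scripts\folder",   // hypothetical path to the checked-out scripts
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        using (var proc = Process.Start(psi))
        {
            Console.WriteLine(proc.StandardOutput.ReadToEnd());
            proc.WaitForExit();
            // 0 generally indicates success; anything else is what would fail the TeamCity build
            return proc.ExitCode;
        }
    }
}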

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because, again, the book is sorted by LastName, not by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose names start with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is the ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9.
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.   if ($stillgoing)   {  $c=$pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");     $alphabet += $c;   } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and the fact that you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...
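If you would rather see the same ordering from .NET than from an execution plan, you can sort with a Swedish culture comparer. This is just an illustrative sketch, and .NET's sv-SE culture sort is not guaranteed to match the Finnish_Swedish_CI_AI collation in every detail, but it does show Å, Ä and Ö landing after Z as letters in their own right (the sample names are made up):

using System;
using System.Globalization;

class SwedishSortDemo
{
    static void Main()
    {
        var names = new[] { "Öberg", "Andersson", "Ådahl", "Zetterberg", "Ärlig", "Farley" };

        // case-insensitive comparer built from the Swedish culture
        var swedish = StringComparer.Create(CultureInfo.GetCultureInfo("sv-SE"), ignoreCase: true);
        Array.Sort(names, swedish);

        // expected order: Andersson, Farley, Zetterberg, Ådahl, Ärlig, Öberg
        Console.WriteLine(string.Join(", ", names));
    }
}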

    Read the article

  • Converting a generic list into JSON string and then handling it in java script

    - by Jalpesh P. Vadgama
We all know that JSON (JavaScript Object Notation) is very useful for manipulating strings on the client side with JavaScript, and its performance is very good across browsers, so let’s create a simple example where we build a generic list, convert that list into a JSON string, and then call the web service from JavaScript and handle the result there. To do this we need an info class (type), and for that class we are going to create a generic list. Here is the code for that – a simple class with two properties, UserId and UserName: public class UserInfo { public int UserId { get; set; } public string UserName { get; set; } } Now let’s create a web service whose web method builds the list and then converts it into a JSON string with the JavaScriptSerializer class. Here is the web service class. using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; namespace Experiment.WebService { /// <summary> /// Summary description for WsApplicationUser /// </summary> [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line. [System.Web.Script.Services.ScriptService] public class WsApplicationUser : System.Web.Services.WebService { [WebMethod] public string GetUserList() { List<UserInfo> userList = new List<UserInfo>(); for (int i = 1; i <= 5; i++) { UserInfo userInfo = new UserInfo(); userInfo.UserId = i; userInfo.UserName = string.Format("{0}{1}", "J", i.ToString()); userList.Add(userInfo); } System.Web.Script.Serialization.JavaScriptSerializer jSearializer = new System.Web.Script.Serialization.JavaScriptSerializer(); return jSearializer.Serialize(userList); } } } Note: You must have the ‘[System.Web.Script.Services.ScriptService]’ attribute on the web service class, as this attribute enables the web service to be called from client-side script. Now that we have created the web service class, let’s create a JavaScript function ‘GetUserList’ which will call the web service, like the following: function GetUserList() { Experiment.WebService.WsApplicationUser.GetUserList(ReuqestCompleteCallback, RequestFailedCallback); } As you can see, we have passed in two callback functions, ReuqestCompleteCallback and RequestFailedCallback, which handle the result and errors from the web service. ReuqestCompleteCallback handles the result of the web service, and if an error occurs then RequestFailedCallback prints the error. Following is the code for both functions. function ReuqestCompleteCallback(result) { result = eval(result); var divResult = document.getElementById("divUserList"); CreateUserListTable(result); } function RequestFailedCallback(error) { var stackTrace = error.get_stackTrace(); var message = error.get_message(); var statusCode = error.get_statusCode(); var exceptionType = error.get_exceptionType(); var timedout = error.get_timedOut(); // Display the error. var divResult = document.getElementById("divUserList"); divResult.innerHTML = "Stack Trace: " + stackTrace + "<br/>" + "Service Error: " + message + "<br/>" + "Status Code: " + statusCode + "<br/>" + "Exception Type: " + exceptionType + "<br/>" + "Timedout: " + timedout; } In ReuqestCompleteCallback above you can see that we have used the ‘eval’ function, which parses the JSON string into an enumerable (array) form.
Then we call a function named ‘CreateUserListTable’ which builds a table string and places that string in a div. Here is the code for that function. function CreateUserListTable(userList) { var tablestring = '<table ><tr><td>UserID</td><td>UserName</td></tr>'; for (var i = 0, len = userList.length; i < len; ++i) { tablestring=tablestring + "<tr>"; tablestring=tablestring + "<td>" + userList[i].UserId + "</td>"; tablestring=tablestring + "<td>" + userList[i].UserName + "</td>"; tablestring=tablestring + "</tr>"; } tablestring = tablestring + "</table>"; var divResult = document.getElementById("divUserList"); divResult.innerHTML = tablestring; } Now let’s create the div which will hold all the HTML generated by this function. Here is the code of my web page. We also need to add a service reference to enable calling the web service from the client side. Here is all the HTML code we have. <form id="form1" runat="server"> <asp:ScriptManager ID="myScirptManger" runat="Server"> <Services> <asp:ServiceReference Path="~/WebService/WsApplicationUser.asmx" /> </Services> </asp:ScriptManager> <div id="divUserList"> </div> </form> Now, as we have not yet defined where the ‘GetUserList’ function is called, let’s call it on the window onload event of JavaScript like the following. window.onload=GetUserList(); That’s it. Now let’s run it in the browser to see whether it works, and here is the output in the browser as expected. This was a very basic example, but you can create your own JavaScript-enabled grid from this, and you can see the possibilities are unlimited here. Stay tuned for more.. Happy programming.. Technorati Tags: JSON,Javascript,ASP.NET,WebService
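For completeness, the same JavaScriptSerializer can also turn the JSON string back into the generic list on the server side, which is handy if the browser ever posts the data back. The helper name and usage lines below are my own illustration and not part of the article's web service:

using System.Collections.Generic;
using System.Web.Script.Serialization;

public static class UserInfoJson
{
    // rehydrate the JSON string produced by GetUserList() into objects
    public static List<UserInfo> Parse(string json)
    {
        var serializer = new JavaScriptSerializer();
        return serializer.Deserialize<List<UserInfo>>(json);
    }
}

// usage:
// List<UserInfo> users = UserInfoJson.Parse(jsonFromClient);
// string firstName = users[0].UserName;   // "J1" for the sample data above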

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010New Projects3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...azaleas: AzaleasBlueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...iFree: This is a solution for Vietnamese network socialInfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.machgos dotNet Tests: Just some little test-projects for learningmim: TBAMinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex LiPeacock: A browser like tabbed applicationPrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...New ReleasesActivate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...Blueset Studio Opensource Projects: 多功能计算器 3.5: 稳定版本。Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented. 
Some routines like setting fds executable location are still not automated.Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In ArenaRx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adaptersSEO SiteMap: SEO SiteMap RC1: -SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties,security and custom ICommand class.SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1VCC: Latest build, v2.1.30515.0: Automatic drop of latest buildYet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRawrPHPExcelBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersMirror Testing SystemN2 CMSStyleCop

    Read the article

  • On StringComparison Values

    - by Jesse
When you use the .NET Framework’s String.Equals and String.Compare methods do you use an overload that takes a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories: Culture-sensitive comparison using a specific culture, defaulted to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase) Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase) Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase) There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say: “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.” I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is, “yes”, but in order to understand why we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed. Example: Prototype Names Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries.
Each year the company would work on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with a letter following the previously created name using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names would start back at “A”. The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this: Antílope (Spanish) Babouin (French) Čahoun (Czech) Diamond (English) Flosse (German) If you were to sort this list using ordinal rules you’d end up with: Antílope Babouin Diamond Flosse Čahoun This sort is no good because the entry for “C” appears at the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison.
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
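To make the difference concrete, here is a small sketch that sorts the prototype names from the example under both the ordinal and the invariant-culture rules (the Czech entry is written with its diacritic, Čahoun, as discussed above):

using System;

class PrototypeNameSort
{
    static void Main()
    {
        var names = new[] { "Flosse", "Diamond", "Antílope", "Čahoun", "Babouin" };

        // Ordinal: raw code-point comparison pushes Čahoun below Flosse
        var ordinal = (string[])names.Clone();
        Array.Sort(ordinal, StringComparer.Ordinal);
        Console.WriteLine(string.Join(", ", ordinal));
        // Antílope, Babouin, Diamond, Flosse, Čahoun

        // Invariant culture: linguistic comparison keeps Čahoun with the other "C" entries
        // and produces the same order on every machine
        var invariant = (string[])names.Clone();
        Array.Sort(invariant, StringComparer.InvariantCulture);
        Console.WriteLine(string.Join(", ", invariant));
        // Antílope, Babouin, Čahoun, Diamond, Flosse
    }
}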

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish if any pressure was being placed on the Disk, CPU or Memory subsystem when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb’s awesome chapters from “Expert Cube Development” that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry Simulating Connections To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip. Performance Monitor Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you find they are not displayed, because the default account has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand. The counters used (Object | Counter | Instance | Justification):
System | Processor Queue Length | N/A | Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
System | Context Switches/sec | N/A | Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec\(_Total) counter on your machine shows a similar unexplained increase.
Process | % Processor Time | sqlservr | Definitely should be used if Processor\% Processor Time\(_Total) is maxing at 100% to assess the effect of the SQL Server process on the processor.
Process | % Processor Time | msmdsrv | Definitely should be used if Processor\% Processor Time\(_Total) is maxing at 100% to assess the effect of the Analysis Services process on the processor.
Process | Working Set | sqlservr | If the Memory\Available bytes counter is decreasing, this counter can be run to indicate if the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
Process | Working Set | msmdsrv | If the Memory\Available bytes counter is decreasing, this counter can be run to indicate if the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
Processor | % Processor Time | _Total and individual cores | Measures the total utilization of your processor by all running processes. If multi-processor, be mindful that only an average is provided.
Processor | % Privileged Time | _Total | To see how the OS is handling basic IO requests. If kernel mode utilization is high, your machine is likely underpowered as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.
Processor | % User Time | _Total | To see how the applications are interacting from a processor perspective; a high percentage utilisation determines that the server is dealing with too many apps and may require increasing the hardware or scaling out.
Processor | Interrupts/sec | _Total | The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System\Context Switches/sec counter.
Memory | Pages/sec | N/A | Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and is the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to check the configuration of the page file on the server.
Memory | Available Mbytes | N/A | The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
Physical Disk | Disk Transfers/sec | For each physical disk | If it goes above 10 disk I/Os per second then you've got poor response time for your disk.
Physical Disk | Idle Time | _total | If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percent time that your hard disk is idle during the measurement interval; if you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk, which is unable to service these requests in a timely fashion.
Physical Disk | Disk Queue Length | For the OLAP and SQL physical disk | A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
Network Interface | Bytes Total/sec | For the NIC | Should be monitored over a period of time to see if there is an increase/decrease in network utilisation.
Network Interface | Current Bandwidth | For the NIC | An estimate of the current bandwidth of the network interface in bits per second (BPS).
MSAS 2005: Memory | Memory Limit High KB | N/A | Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
MSAS 2005: Memory | Memory Limit Low KB | N/A | Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
MSAS 2005: Memory | Memory Usage KB | N/A | Displays the memory usage of the server process.
MSAS 2005: Memory | File Store KB | N/A | Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
MSAS 2005: Storage Engine Query | Queries from Cache Direct / sec | N/A | Displays the rate of queries answered from the cache directly.
MSAS 2005: Storage Engine Query | Queries from Cache Filtered / sec | N/A | Displays the rate of queries answered by filtering an existing cache entry.
MSAS 2005: Storage Engine Query | Queries from File / sec | N/A | Displays the rate of queries answered from files.
MSAS 2005: Storage Engine Query | Average time / query | N/A | Displays the average time of a query.
MSAS 2005: Connection | Current connections | N/A | Displays the number of connections against the SSAS instance.
MSAS 2005: Connection | Requests / sec | N/A | Displays the rate of query requests per second.
MSAS 2005: Locks | Current Lock Waits | N/A | Displays the number of connections waiting on a lock.
MSAS 2005: Threads | Query pool job queue length | N/A | The number of queries in the job queue.
MSAS 2005: Proc Aggregations | Temp file bytes written/sec | N/A | Shows the number of bytes of data processed in a temporary file.
MSAS 2005: Proc Aggregations | Temp file rows written/sec | N/A | Shows the number of rows of data processed in a temporary file.
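If you ever need to pull one of these counters from code rather than from the Perfmon UI, for example to feed a simple dashboard, the System.Diagnostics.PerformanceCounter class can read the same categories. The sketch below is illustrative only: the category and counter names assume a default SQL Server 2008 Analysis Services instance and should be checked against what Perfmon shows on your server, and the process needs the same rights to the SSAS instance discussed for the counter log above.

using System;
using System.Diagnostics;
using System.Threading;

class SsasCounterSample
{
    static void Main()
    {
        // the same connection counters listed in the table above; verify exact names in Perfmon
        using (var connections = new PerformanceCounter("MSAS 2008:Connection", "Current connections"))
        using (var requests = new PerformanceCounter("MSAS 2008:Connection", "Requests/sec"))
        {
            for (int i = 0; i < 10; i++)
            {
                // rate counters need at least two samples, so poll on an interval
                Console.WriteLine("Connections: {0}  Requests/sec: {1}",
                                  connections.NextValue(), requests.NextValue());
                Thread.Sleep(1000);
            }
        }
    }
}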

    Read the article

< Previous Page | 342 343 344 345 346 347 348 349 350 351 352 353  | Next Page >