Search Results

Search found 4931 results on 198 pages for 'burnt hand'.

  • CodePlex Daily Summary for Monday, March 07, 2011

    CodePlex Daily Summary for Monday, March 07, 2011Popular ReleasesDotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0 ? Completely update DJME to DJME2, enhance user experience ,more beautiful and more interactively visit DJME project home to lean more about DJME http://www.dotnetage.com/sites/home/djme.html A new widget engine has came! Faster and easiler. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. BlogML supports added that allows you import/export your blog data to/from dotnetage publishi...Master Data Services Manager: stable 1.0.3: Update 2011-03-07 : bug fixes added external configuration File : configuration.config added TreeView Display of model (still in dev) http://img96.imageshack.us/img96/5067/screenshot073l.jpg added Connection Parameters (username, domain, password, stored encrypted in configuration file) http://img402.imageshack.us/img402/5350/screenshot072qc.jpgSharePoint Content Inventory: Release 1.1: Release 1.1Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.4 Beta: - Moved the core of the PopupMenu class to the new PopupMenuBase class. - Renamed the MenuTriggerElement class to MenuTriggerRelationship. - Renamed the ApplicationMenus property to MenuTriggers. - Renamed the ImageLeftOpacity property to ImageOpacity. - Renamed the ImageLeftVisibility property to ImageVisibility. - Renamed the ImageLeftMinWidth property to ImageMinWidth. - Renamed the ImagePathForRightMargin property to ImageRightPath. - Renamed the ImageSourceForRightMargin property to Ima...Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB and MongoDB. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder t...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added fullscreen for the popup and popupformIronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The releases contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.IIS Tuner: IIS Tuner 1.0: IIS and ASP.NET performance optimization toolMinemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support new biome format.CRM 2011 OData Query Designer: CRM 2011 OData Query Designer: The CRM 2011 OData Query Designer is a Silverlight 4 application that is packaged as a Managed CRM 2011 Solution. This tool allows you to build OData queries by selecting filter criteria, select attributes and order by attributes. 
The tool also allows you to Execute the query and view the ATOM and JSON data returned. The look and feel of this component will improve and new functionality will be added in the near future so please provide feedback on your experience. Import this solution int...Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.53.0 beta 2: New SEO Optimisation WEB.mytrip.mvc 1.0.53.0 Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) SRC.mytrip.mvc 1.0.53.0 System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.5, MVC3 RTM WARNING For run and debug SRC.mytrip.mvc 1.0.53.0 dow...AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window Fixed a critical bug in the open file dialog Removed the resize button Some UI changes 3D camera movement is now more intuitive (Trackball rotation) When an error occurs on the clicker it will attempt to focus AutoLoLYAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742) Official Discussion Thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5 Rev. #4661 - Added "Copy" function to forum administration -- Now instead of having to manually re-enter all the access masks, etc, you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New Setting to Enable/Disable Last Unread posts links Rev. #4641 - Added Arabic Language t...Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010This is a bug fix release. Change logFixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed Fixed bug where "Export as Snippet" was failing in non-english locales Fixed bug where opening a new .snippet file would fail in non-english localesChiave File Encryption: Chiave 1.0: Final Relase for Chave 1.0 Stable: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.2 to 1.0: ==================== Added: > Added Icon Overlay for Windows 7 Taskbar Icon. >Added Thumbnail Toolbar buttons to make the navigation easier...ASP.NET: Sprite and Image Optimization Preview 3: The ASP.NET Sprite and Image Optimization framework is designed to decrease the amount of time required to request and display a page from a web server by performing a variety of optimizations on the page’s images. This is the third preview of the feature and works with ASP.NET Web Forms 4, ASP.NET MVC 3, and ASP.NET Web Pages (Razor) projects. 
The binaries are also available via NuGet: AspNetSprites-Core AspNetSprites-WebFormsControl AspNetSprites-MvcAndRazorHelper It includes the foll...Network Monitor Open Source Parsers: Microsoft Network Monitor Parsers 3.4.2554: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, we have added 4 new protocol parsers and updated 79 existing parsers in the NetworkMonitor_Pa...Image Resizer for Windows: Image Resizer 3 Preview 1: Prepare to have your minds blown. This is the first preview of what will eventually become 39613. There are still a lot of rough edges and plenty of areas still under construction, but for your basic needs, it should be relativly stable. Note: You will need the .NET Framework 4 installed to use this version. Below is a status report of where this release is in terms of the overall goal for version 3. If you're feeling a bit technically ambitious and want to check out some of the features th...New ProjectsAppFactory: Die AppFactory Dient zur Vereinfachung der entwicklung von WPF Anwedungen. Es ist in C# entwickelt.Change the Default Playback Sound Device: ChangePlaybackDevice makes it easier for personal user to change the default playback sound device. You'll no longer have to change the default playback sound device by hand. It's developed in C#. Conectayas: Conectayas is an open source "Connect Four" alike game but transformable to "Tic-Tac-Toe" and to a lot of similar games that uses mouse. Written in DHTML (JavaScript, CSS and HTML). Very configurable. This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.Diamond: The all in one toolkit for WPF and Silverligth projects.Digital Disk File Format: Digital Disk is a File format that uses a simple key system, it is currently in development. It is written in vb.net, but will be expanded into other languagesdotnetMvcMalll: this is a asp.net mvc mallEasyCache .NET: EasyCache .NET is a simplified API over the ASP.NET Cache object. Its purpose is to offer a more concise syntax for adding and retrieving items from the cache.Eric Fang SharePoint workflow activities: Eric Fang SharePoint workflow activitiesExpert.NET: Expert.NET is an expert system framework for .NET applications. Written in F#, it provides constructs for defining probabilistic rulesets, as well as an inference engine. Expert.NET is ideal for encoding domain knowledge used by troubleshooting applications.GameGolem: The GameGolem is an XNA Casual Gamers portal. The purpose is to create a single ClickOnce deployed "Game Launcher" which exposes simple API for games to keep track of highscores, achivements, etc. GameGolem will become a Kongregate-like XNA-based casual games portal.Hundiyas: Hundiyas is an open source "Battleship" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses mouse. 
This cross-platform and cross-browser game was tested under BeOS, Linux, *BSD, Windows and others.ISBC: Practicas ISBC 10/11maocaijun.database: databaseNCLI: A simple API for command line argument parsing, written in C#.nEMO: nEMO is a pure C# framework for Evolutionary Multiobjective Optimization.N-tier architecture sample: A sample on how to practically design a system following an n-tier (multitier) architecture in line with the patterns and practices presented by Microsofts Application Architectural Guide 2.0. Focus is on a service application and it´s client applications of various types.PostsByMonth Widget: This is a simple widget for the Graffiti CMS application that allows you to get a monthly list of new posts to the site. It's configurable to allow for the # of posts to display as well as the the format of the month/year header, the title and the individual line entries. This is written in .Net 3.5 with Vb.Net.Puzzle Pal: Smartphone assistant for all your puzzling events.RavenDB Notification: Notification plugin for RavenDB. With this plugin you are able to subscribe to insert and delete notifications from the RavenDB server. Very helpfull if you need to process new documents on the remote clients and you do not like to query DB for new changes.Teamwork by Intrigue Deviation: A feature-rich team collaboration and project management effort built around Scrum methodology with MVC/2.test_flow: test flowTicari Uygulama Paketi: Ticari Uygulama Paketi (TUP), Microsoft Ofis 2010 ürünleri için gelistirilmis eklenti yazilimidir.XpsViewer: XpsVieweryaphan: yaphan cms.

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn't even looked at it.

    My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I'm ready to try out a new process.

    Deployment Manager is designed to massively simplify deployment: what could be lots of manual copying of files, managing of configuration files, and database upgrades comes down to a few clicks. It's a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along, you'll first need to download Deployment Manager from Red Gate. It has a free 'Starter Edition' which allows you to create up to 5 projects and agents (machines you deploy to), so it's really easy to get up and running with a fully-featured version.

    The Initial Set Up

    After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments.

    1. Setting up Environments

    The dashboard informed me that I needed to add new 'Environments', which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: 'staging' and 'live'.

    2. Add Target Machines

    Once I had created the environments, I was ready to add 'target machines' to them, which are the actual machines that the deployment will occur on. To enable me to deploy to a new machine, I needed to download and install an Agent on it. The 'Add target machine' form on the 'Environments' page helpfully provides a link for downloading an Agent.
    Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.

    3. Run Health Check

    If, after adding your new target machine, the 'Status' flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service. You can 'Check Health', which will give you more information on any issues. It is probably worth running this regardless of what status the 'Environments' dashboard is claiming, just to be on the safe side.

    4. Add Projects

    Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project. I clicked the 'project' link to get started, gave my new project a name and clicked 'Create'. I was then redirected to the 'Steps' page for the project under the Projects tab.

    5. Package Steps

    The 'Steps' page was fairly empty when it first loaded. Adding a 'step' allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don't worry; there is a nice guide included on how to go about doing all of this on the 'Package Feeds' page in 'Settings', if you need any help with setting these bits up.

    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My 'package step', therefore, is set up to look for this package on our feed. The final part of the package step was simply specifying which machines from which environments I wanted to be able to deploy the project to.

    Format SQL

    Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!

    Create a release

    Next I clicked on the 'Create release' button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release. As you can see underneath 'Version number', it keeps track of what version the previous release was given. Clicking 'Create' created the release and redirected me to a summary of it where I could check the details before deploying.

    I clicked 'Deploy this release', chose the environment I wanted to deploy to and... that's it. Deployment Manager went off and deployed it for me. Once I clicked 'Deploy release', Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That's it, I'm done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem.
    The screenshot below is not from my Format SQL deployment, but I thought I'd post one to demonstrate the logging output available.

    Features

    One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the 'live' environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines led to some odd behavior.

    Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything.

    Machine Specific Deployments

    'What about custom configuration files?' I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given that I had environments 'staging' and 'live', and that staging used the project's web.config file while production ('live') required the config file to undergo some transformations, I simply added a web.live.config file to the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest (a sketch of what such a transform file can look like appears at the end of this article).

    Another option is to set up 'variables' for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You'll find Variables as a full left-hand submenu within the 'Projects' tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn't get a chance to play around with, like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let's not forget that my use case in this article is a very simple one - deploying a single package. I don't believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.

    Conclusion

    In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I've tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that releases are immutable, which from a development point of view makes sense. However, this causes confusion when I have to create a new release to deploy to a newly set up environment - I cannot simply deploy an old release onto a new environment; the whole release needs to be recreated.
    I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time, especially when you compare it to how long it takes me to manually deploy the other site, as described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to 'create environments' or 'add some deployment steps' before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increases, it might become awkward to try to search through them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you'd like to take a look at Deployment Manager for yourself, you can download it here.
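    For readers who have not written XML document transforms before, the web.live.config idea mentioned above boils down to a small file that describes only the differences for the 'live' environment. The following is just a minimal sketch of the standard web.config transform syntax; the appSetting key and value are hypothetical placeholders, and only the file name and the idea of transforming the production configuration come from this article:

        <?xml version="1.0"?>
        <!-- web.live.config: transform applied to web.config for the 'live' environment (sketch) -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <appSettings>
            <!-- 'ApiBaseUrl' and its value are made-up examples -->
            <add key="ApiBaseUrl" value="https://live.example.com/api"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
          </appSettings>
          <system.web>
            <!-- switch off debug compilation on the live environment -->
            <compilation xdt:Transform="RemoveAttributes(debug)" />
          </system.web>
        </configuration>

    The same pattern extends to connection strings or any other section that differs between staging and live.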

  • CodePlex Daily Summary for Wednesday, June 08, 2011

    CodePlex Daily Summary for Wednesday, June 08, 2011Popular ReleasesHTML-IDEx: HTML-IDEx .15 ALPHA: This release fixes line counting a little bit and adds the masshighlight() sub, which highlights pasted and inserted code.AutoLoL: AutoLoL v2.0.3: - Improved summoner spells are now displayed - Fixed some of the startup errors people got - Double clicking an item selects it - Some usability changes that make using AutoLoL just a little easier - Bug fixesVidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. Fixed a problem that prevented target sizes above 2048 MB.SharePoint Search XSL Samples: SharePoint 2010 Samples: I have updated some of the samples from the 2007 release. These all work in SharePoint 2010. I removed the Pivot on File Extension because SharePoint 2010 search has refiners that perform the same function.SCCM Client Actions Tool: SCCM Client Actions Tool v0.5: SCCM Client Actions Tool v0.5 is currently the most stable version and includes all of the functionality requested so far. It comes as a ZIP file that contains three files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively available starting from Windows Vista. Config.ini – A configuration file for default settings. This file is...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta5: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta5 ?????????? ???? ?? ???????? ???"????????"?? ????????????? ????????/???? ?? ???"????"??? ?? ??????????? ?? ?? ??????????? ?? ?????????????????? ??????????????????? ???????????????? ????????????Discussions???????? ????AcDown??????????????Media Companion: MC 3.405-3 latest patch: -1 Added ability to choose to rename invalid nfos to info -2 Fix for multipart episodes not showing / Fix to skip invalid nfo's during rebuild -3 If movie plot empty use outline This file is just the mediacompanion.exe It has the cutting edge bug fixes as they are fixed during the week. The patch file number will be referred to in the relevant issue tracker comment. For the latest full program, you need to download the relevant weekly plus the patch.WatchersNET.TagCloud: WatchersNET.TagCloud 02.00.01: changes Module Packages are now generated with MSBuild Added Cancel Edit Button to Cancel an Custom Tag Edit Fixed Issue #14 editing of custom tags Fixed Issue with Flash File and Google BotVFPX: GoFish 4 Beta 1: Current beta is Build 144 (released 2011-06-07 ) See the GoFish4 info page for details and video link: http://vfpx.codeplex.com/wikipage?title=GoFishOnTopReplica: Release 3.3.2: Incremental update over 3.3 and 3.3.1. Added Polish language translation (thanks to Jan Romanczyk). Added German language translation (thanks to Eric Hoffmann). 
Fixed some localization issues.SQL Compact Query Analyzer: Build 0.3.0.0: Beta build of SQL Compact Query Analyzer Features: - Execute SQL Queries against a SQL Server Compact Edition database - Easily edit the contents of the database - Supports SQLCE 3.1, 3.5 and 4.0 Prerequisites: - .NET Framework 4.0ShowUI: Write-UI -in PowerShell: ShowUI: ShowUI is a PowerShell module to help you write rich user interfaces in script.Babylon Toolkit: Babylon.Toolkit v1.0.4: Note about samples: In order to run samples, you need to configure visual studio to run them as an "Out-of-browser application". in order to do that, go to the property page of a sample project, go to the Debug tab, and check the "Out-of-browser application" radio. New features : New Effects BasicEffect3Lights (3 dir lights instead of 1 position light) CartoonEffect (work in progress) SkinnedEffect (with normal and specular map support) SplattingEffect (for multi-texturing with smooth ...SizeOnDisk: 1.0.8.2: With installerTerrariViewer: TerrariViewer v2.5: Added new items associated with Terraria v1.0.3 to the character editor. Fixed multiple bugs with Piggy Bank EditorySterling NoSQL OODB for .NET 4.0, Silverlight 4 and 5, and Windows Phone 7: Sterling OODB v1.5: Welcome to the Sterling 1.5 RTM. This version is backwards compatible without modification to the 1.4 beta. For the 1.0, you will need to upgrade your database. Please see this discussion for details. You must modify your 1.0 code for persistence. The 1.5 version defaults to an in-memory driver. To save to isolated storage or use one of the new mechanisms, see the available drivers and pass an instance of the appropriate one to your database (different databases may use different drivers). ...Grammar and Spell Checking Plugin for Windows Live Writer: Grammar Checker Plugin v1.0: First version of the grammar checker plugin for Windows Live Writer. You can show your appreciation for this plugin and support further development by donating via PayPal. Any amount will be appreciated. Thank you. Donatepatterns & practices: Project Silk: Project Silk Community Drop 10 - June 3, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Application Notifications" chapter. Updated "Server-Side Implementation" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To install and run the reference implementation, you must perform the fol...Claims Based Identity & Access Control Guide: Release Candidate: Highlights of this release This is the release candidate drop of the new "Claims Identity Guide" edition. In this release you will find: All code samples, including all ACS v2: ACS as a Federation Provider - Showing authentication with LiveID, Google, etc. ACS as a FP with Multiple Business Partners. ACS and REST endpoints. Using a WP7 client with REST endpoints. All ACS specific chapters. Two new chapters on SharePoint (SSO and Federation) All revised v1 chapters We are now ...Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 2) Added the ability to use parallelization during generation. This will cause the program to use as many threads as there are physical cores. 
3) Fixed some background overdraw.New ProjectsAmur: Amur is a programming language project that allows the team members to explore functional programming language design. The compiler will target .NET and, for now, be developed in C#.Arche: Arche makes it easier for devlopers to architecture base with code generators. It's developed in C#. Avignon: A WPF version of the board game Carcassonne.Code Exercises in C#: A few c# files to demonstrate coding abilityCows And Bulls Project: Per tutti gli sviluppatori dotNET Che vogliono ritrovarsi per parlare di qualsiasi metodologia di sviluppo Agile Il Cows And Bulls Project È un progetto CodePlex Che si pone l'obiettivo di promuovere la discussione e lo scambio di esperienze sullo sviluppo Agile del software A differenza di altri progetti open source Il nostro progetto non utilizza la comunità per migliorare il codice sorgente ma usa il codice sorgente per migliorare una comunità Grazie molte a Stefania Menocci che h...CRM 2011 Workflow Utilities: This project includes custom workflow activities for CRM 2011 which provide additional workflow steps for actions such as "delete" and "share" within the CRM Process designer. These activities can be used in both workflows and dialogs but are not supported in CRM Online. Dot Net Reflector: OPEN SOURCE AND FREE Reflector. Let it be said right now. Dot Net Reflector forever will be free and here on codeplex.Exemplo de TFS do Papo: Projeto para testar conexao do TFS com Windows Phone 7Extended WCF Discovery: Extend the WCF discovery to support: 1. Service publish its real service address - such as external IP when service is behind NAT 2. Client discovery over any network topology (behind NAT) Also (in the roadmap): Binding discovery-clients will receive the binding from the server.FileProtector: This will make it easy for any user to protect their personal files. It's developed completely using C#Fingertip detection via OpenNI: "Fingertip Detection" prject is intended to give usual PC user availability to control PC unsing only hand and finger gestures. Project is built on .NET framework. Used technologies : 1. OpenNI 2. NITE 3. EmguCVHomeData: Stores and displays data from the household.Informicus LibraryHub: 4VasiliyLifeHelper: lifehelperMaxZhang.EasyEntities: EasyEntitiesMedianamik: MedianamikMethodWorx CMS: Open Source .NET CMS for fully hosted solutions, or integration into ASP.NET and ASP.NET MVC projects. Microsoft Tag API for BlackBerry: Microsoft Tag is a compact, yet, data rich and user friendly tag system. This API allows for accessing and using Tags.MVVM Demo: A MVVM demoMyReportSL: My Report SiverlightNSS College Website: An effort to develop an open source website for NSS College, Rajakumari with the contributions from its current students and alumni...Oldies: Deletes or moves old files from current folderPhaLinks a Modern Native Lisp: PhaLinks is a modern lips with a custom VM that runs on PyPy. ProdUI: ProdUI is designed to be used when a GUI needs to be manipulated automatically (Prodded), either for testing, or to perform human UI interactions for data entry on systems that don't allow back-end access. The ProdUI toolset is developed in C#, using the UI Automation API and failing back on Win32 calls if that fails. 
Every attempt is made to verify that the action was actually performed, and proper notification if not.The system is designed to allow for single "off the cuff" prods, as well ...ResourceManager: A tool to allow relationships between various resources to be established and to show the effects of altering one resource on others. This is initially intended for use of keeping track of servers and licenses used by applications, but I'm hoping to leave it open to expansion.seriesCounter: SeriesCounter makes it easier for people that watch series to keep track at what episode of a series they are. You'll no longer have to remember it yourself. It's developed in C#.Shai Chi Android: Shai Chi AndroidSistemaControleMultas_LPUNICAP: Nada a declararSmarx Role: Smarx Role is a Windows Azure role that supports publishing web applications written in Node.js, Ruby, and Python. Apps are published/synchronized via Git or blob storage, allowing nearly instantaneous changes to published applications. It automatically pulls in dependent modules using each language's package manager (npm, Gem, or pip).Tetris3D: try to complete a 3D tetris cloneTrackMania WebServices SDK .NET: TrackMania WebServices SDK .NET is a .NET 4 library which provide every tools to get statistics from TrackMania ForeverTV Program Analyst: A TV program analyzing software, based on specific program log from tv stations. VB.net Roguelike: This is an attempt to make a Roguelike in VB.net. This is in it's very early stages, and any help would be appreciated! Definition of Roguelike: (from Wikipedia) The roguelike is a sub-genre of role-playing video games, characterized by randomization for replayability, permanent death, and turn-based movement. Most roguelikes feature ASCII graphics, with newer ones increasingly offering tile-based graphics. Games are typically dungeon crawls, with many monsters, items, and environmental f...Wbfs Engine: Provides a simple and easy to use library for accessing games and wbfs partitions with .NET

  • CodePlex Daily Summary for Tuesday, October 29, 2013

    CodePlex Daily Summary for Tuesday, October 29, 2013Popular ReleasesCrowd CMS: Crowd CMS FREE - Official Release: This is the original source files for Crowd CMS Free (v1.0.0) and is the latest stable release which has been bug-tested and fixed.Event-Based Components AppBuilder: AB3.AppDesigner.57.12: Iteration 57.12 (Redesign): I'm still not satisfied with the WireDecorator because of it's complexity. Therefore I need some iterations to simplify this class. In this iteration I introduced a new WireArrowDecorator which contains everything that's needed to act with the arrow of a wire. Coming soon: Iteration 58: Add a new wire by mouse (without text editing) Iteration 59: Add a wire segment (without text editing) Iteration 60: Edit existing wire definition (without text editing) ...HandJS: Hand.js v1.1.3: New configuration: HANDJS.doNotProcessCSS to disable CSS scanning.Layered Architecture Solution Guidance (LASG): LASG 1.0.1.0 for Visual Studio 2012: PRE-REQUISITES Open GAX (Please install Oct 4, 2012 version) Microsoft® System CLR Types for Microsoft® SQL Server® 2012 Microsoft® SQL Server® 2012 Shared Management Objects Microsoft Enterprise Library 6.0 (for the generated code) Windows Azure SDK (for layered cloud applications) Silverlight 5 SDK (for Silverlight applications) THE RELEASE This release only works on Visual Studio 2012. Known Issue If you choose the Database project, the solution unfolding time will be slow....DirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.8 binaries: Fixed issue with auto filter select all checkbox was getting checked automatically after reopening the filter.ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 +?????Collapsed???????????????。 -????:window/group_panel.aspx??,???????,???????,?????????。 +??????SelectedNodeIDArray???????????????。 -????:tree/checkbox/tree_checkall.aspx??,?????,?????,????????????。 -??TimerPicker???????(????、????ing)。 -??????????????????????(???)。 -?????????????,??type=text/css(??~`)。 -MsgTarget???MessageTarget,???None。 -FormOffsetRight?????20px??5px。 -?Web.config?PageManager??FormLabelAlign???。 -ToolbarPosition??Left/Right。 -??Web.conf...CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.ResX Resource Manager: 1.0.0.26: 1.0.0.26 WI1141: Improve message when an XML file fails to load. Ignore projects that fail to load. 1.0.0.25 WI1133: Improve error message 1.0.0.24 WI1121: Improve error messages 1.0.0.23 WI1098: VS Crash after changing filter 1.0.0.22 WI1091: Context menu for copy resource key. WI1090: Enable multi-line paste. 1.0.0.21 WI1060: ResX Manager crashes after update. 1.0.0.20 WI958: Reference something in DGX, so we don't need to load the assembly dynamically. 
WI1055: Add the possibility to ma...VidCoder: 1.5.10 Beta: Broke out all the encoder-specific passthrough options into their own dropdown. This should make what they do a bit more clear and clean up the codec list a bit. Updated HandBrake core to SVN 5855.Indent Guides for Visual Studio: Indent Guides v14: ImportantThis release has a separate download for Visual Studio 2010. The first link is for VS 2012 and later. Version History Changed in v14 Improved performance when scrolling and editing Fixed potential crash when Resharper is installed Fixed highlight of guides split around pragmas in C++/C# Restored VS 2010 support as a separate download Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.3 (mvc5): version 3.5.3 - support for mvc5 version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 ========================== - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js...Media Companion: Media Companion MC3.585b: IMDB plot scraping Fixed. New* Movie - Rename Folder using Movie Set, option to move ignored articles to end of Movie Set, only for folder renaming. Fixed* Media Companion - Fixed if using profiles, config files would blown up in size due to some settings duplicating. * Ignore Article of An was cutting of last character of movie title. * If Rescraping title, sort title changed depending on 'Move article to end of Sort Title' setting. * Movie - If changing Poster source order, list would beco...MoreTerra (Terraria World Viewer): MoreTerra 1.11.4: Release 1.11.4 =========== = Compatibility = =========== Updated to add the new tiles/walls in 1.2.1PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.7: This is a bug fix release, containing some important fixes! Fixed issue where Session 0 was not detected correctly, resulting in issues when attempting to display a UI when none was allowed Fixed Installation Prompt and Installation Restart Prompt appearing when deploy mode was non-interactive or silent Fixed issue where defer prompt is displayed after force closing multiple applications Fixed issue executing blocked app execution dialog from UNC path (executed instead from local tempo...BlackJumboDog: Ver5.9.7: 2013.10.24 Ver5.9.7 (1)FTP???????、2?????????????shift-jis????????????? (2)????HTTP????、???????POST??????????????????CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.1.0.34322 Alpha 4: This experimental release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-1-0-34322-alpha-4 Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or sup...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 32 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. 
(#7940) This release has been tested with Visual Studio 2008, 2010, 2012 and 2013, using TortoiseSVN 1.6, 1.7 and 1.8. It should also still work with Visual Studio 2005, but I couldn't find anyone to test it in VS2005. Build 32 (beta) changelogNew: Added Visual Studio 2013 support New: Added Visual Studio 2012 support New: Added SVN 1.8 support New: Added 'Ch...ABCat: ABCat v.2.0.1a: ?????????? ???????? ? ?????????? ?????? ???? ??? Win7. ????????? ?????? ????????? ?? ???????. ????? ?????, ???? ????? ???????? ????????? ?????????? ????????? "?? ??????? ????? ???????????? ?????????? ??????...", ?? ?????????? ??????? ? ?????????? ?????? Microsoft SQL Ce ?? ????????? ??????: http://www.microsoft.com/en-us/download/details.aspx?id=17876. ???????? ?????? x64 ??? x86 ? ??????????? ?? ?????? ???????????? ???????. ??? ??????? ????????? ?? ?????????? ?????? Entity Framework, ? ???? ...patterns & practices: Data Access Guidance: Data Access Guidance 2013: This is the 2013 release of Data Access Guidance. The documentation for this RI is also available on MSDN: Data Access for Highly-Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence: http://msdn.microsoft.com/en-us/library/dn271399.aspxNew Projects7COM0152 - Web 2.0 Community Website: This is a project to create a User Content Cenerated community website for the Web Application Development module at the University of Hertfordshire.Airport System: Telerik Academy Teamwork - Airport SystemAppointments and ToDo Manager: Javascript Single Page ApplicationBlack NBT: A library to manage NBT (Named Binary Tag) files at format 19133. This project inclue a test client for the library. It's written en C# for .Net 4.5browser4gp: Applicazione che consente agli sviluppatori di pagine web di comunicare con il sistema operativo sottostante.BS Flow: It's a web project that using bootstrap to show instagram image flow.coLearning .NET Class - Team 3 v.2: .NET CRM tool. Grow your business one contact at a time with an expertly crafted API for generating leads and a simple web interface for managing contacts.Collections2: The Collection2 library allows you to create 2 way dictionaries. Ex: mydictionary[1] == "One", mydictionary["One"] == 1. Database Application: Load MySQL DB to SQL Server Unzip and import excel sales reports to SQL Server Generate PDF from SQL ServerDesign Pattern in C#: design pattern implementations in C#DropLogin&DropUser: Mass drop logins and users from many sql servers ??????? ??????? ?????? ????? SQL Server ? ????????????? ?? ???? ???? ??????.ElencySolutions.MultiProperty.CMS7: This project is simply a EPiServer CMS 7 re-write of the original code found here: https://episerveresmp.codeplex.com/File System Explorer: This application will drill into the Windows File System like Windows File Explorer but is more geared to working with multiple directories at a time. GriffinTeam: GriffinTeamjean1028jabbrchang: dfaLibDxp: Set of libraries for manipulating document-oriented databases for FSharp (F#)Library System: Telerik ASP.NET Web Forms Exam - Library SystemMaruko Toolbox: A Video-prosessing GUIMugen MVVM Toolkit ReSharper plugin: ReSharper plugin for Mugen MVVM Toolkit.RESX Utils: This is a tool to aid RESX resources management. 
It can detect and remove unused resources, compare RESX files and detect duplicates and discrepancies.Scorpicore2: follows banknotes and understanding their historySharePoint Logs Viewer: Develop & Maintain your SharePoint application quickly by using "SharePoint Logs Viewer"SPCloudMigrator: This project aims to simplify the migration from on-premise sharepoint version to sharepoint online.SSIS Connector for Sharepoint Online: SharePoint Online Source & Destination DataFlow Components to fetch data from SPOnline & push data in SPOnline.Transfer data from SP Online List to DBMS or FileTMS - Testing Management System: Una empresa de desarrollo de software, desea contar con una aplicación Web que le permita manejar los errores de sus productos. Esta aplicación permitirá a losWMI Inventory Client: WMI Inventory Client ?????????????? ?????????????? ?? WMIWPF MDI Metro: Metro Dark Theme for WPF MDI Project on codeplex.Zigbee playground: This is a project for experimenting with Zigbee wireless mesh networking.

  • Converting Encrypted Values

    - by Johnm
    Your database has been protecting sensitive data at rest using the cell-level encryption features of SQL Server for quite some time. The employees in the auditing department have been inviting you to their after-work gatherings and buying you drinks. Thousands of customers implicitly include you in their prayers of thanksgiving as their identities remain safe in your company's database. The cipher text resting snugly in a column of the varbinary data type is great for security, but it can create some interesting challenges when interacting with other data types such as the XML data type.

    The XML data type is one that is often used as a message type for the Service Broker feature of SQL Server. It can also be an interesting data type to capture for auditing or for integrating with external systems. The challenge that cipher text presents is that the need for decryption remains even after it has experienced its XML metamorphosis. Quite an interesting challenge nonetheless; but fear not. There is a solution.

    To simulate this scenario, we first want to create a plain text value for us to encrypt. We will do this by creating a variable to store our plain text value:

        -- set plain text value
        DECLARE @PlainText NVARCHAR(255);
        SET @PlainText = 'This is plain text to encrypt';

    The next step will be to create a variable that will store the cipher text that is generated from the encryption process. We will populate this variable by using a pre-defined symmetric key and certificate combination (a sketch of how such a key and certificate can be created appears at the end of this post):

        -- encrypt plain text value
        DECLARE @CipherText VARBINARY(MAX);
        OPEN SYMMETRIC KEY SymKey
            DECRYPTION BY CERTIFICATE SymCert
            WITH PASSWORD = 'mypassword2010';
        SET @CipherText = EncryptByKey
                             (
                               Key_GUID('SymKey'),
                               @PlainText
                             );
        CLOSE ALL SYMMETRIC KEYS;

    The value of our newly generated cipher text is 0x006E12933CBFB0469F79ABCC79A583--. This will be important as we reference our cipher text later in this post.

    Our final step in preparing our scenario is to create a table variable to simulate the existence of a table that contains a column used to hold encrypted values. Once this table variable has been created, populate the table variable with the newly generated cipher text:

        -- capture value in table variable
        DECLARE @tbl TABLE (EncVal VARBINARY(MAX));
        INSERT INTO @tbl (EncVal) VALUES (@CipherText);

    We are now ready to experience the challenge of capturing our encrypted column in an XML data type using the FOR XML clause:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    If you add a SELECT @xml statement at the end of this portion of the code you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="AG4Skzy/sEafeavMeaWDBwEAAACE--" />
        </root>

    Strangely, the value that is captured appears nothing like the value that was created through the encryption process. The result is that when this XML is converted into a readable data set, the encrypted value will not be able to be decrypted, even with access to the symmetric key and certificate used to perform the decryption.

    An immediate thought might be to convert the varbinary data type to either a varchar or nvarchar before creating the XML data. This approach makes good sense.
    The code for this might look something like the following:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      CONVERT(NVARCHAR(MAX), EncVal) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    However, this results in the following error:

        Msg 9420, Level 16, State 1, Line 26
        XML parsing: line 1, character 37, illegal xml character

    A quick query that returns CONVERT(NVARCHAR(MAX), EncVal) reveals that the value causing the error looks like something off of a genuine Chinese menu. While this situation does present us with one of those spine-tingling, expletive-generating challenges, rest assured that this approach is on the right track. With the addition of the "style" argument to the CONVERT method, our solution is at hand. When converting varbinary data types we have three styles available to us:

    - The first is to not include the style parameter, or to use the value of "0". As we have seen, this style will not work for us.
    - The second option is to use the value of "1", which will keep our varbinary value including the "0x" prefix. In our case, the value will be 0x006E12933CBFB0469F79ABCC79A583--
    - The third option is to use the value of "2", which will chop the "0x" prefix off of our varbinary value. In our case, the value will be 006E12933CBFB0469F79ABCC79A583--

    Since we will want to convert this back to varbinary when reading this value from the XML data we will want the "0x" prefix, so we will change our code as follows:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      CONVERT(NVARCHAR(MAX), EncVal, 1) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    Once again, with the inclusion of the SELECT @xml statement at the end of this portion of the code you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="0x006E12933CBFB0469F79ABCC79A583--" />
        </root>

    Nice! We are now cooking with gas. To continue our scenario, we will want to parse the XML data into a data set so that we can glean our freshly captured cipher text. Once we have our cipher text snagged we will capture it into a variable so that it can be used during decryption:

        -- read back xml
        DECLARE @hdoc INT;
        DECLARE @EncVal NVARCHAR(MAX);
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml;
        SELECT @EncVal = EncVal
        FROM OPENXML (@hdoc, '/root/MYTABLE')
        WITH ([EncVal] VARBINARY(MAX) '@EncVal');
        EXEC sp_xml_removedocument @hdoc;

    Finally, the decryption of our cipher text using the DECRYPTBYKEYAUTOCERT method and the certificate utilized to perform the encryption earlier in our exercise:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert
                           (
                             CERT_ID('SymCert'),
                             N'mypassword2010',
                             @EncVal
                           )
                      ) AS EncVal;

    Ah yes, another hurdle presents itself! The decryption produced the value of NULL, which in cryptography means that either you don't have permission to decrypt the cipher text or something went wrong during the decryption process (ok, sometimes the value is actually NULL; but not in this case). As we see, the @EncVal variable is an nvarchar data type. The third parameter of the DECRYPTBYKEYAUTOCERT method requires a varbinary value.
    Therefore we will need to utilize our handy-dandy CONVERT method:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert
                           (
                             CERT_ID('SymCert'),
                             N'mypassword2010',
                             CONVERT(VARBINARY(MAX), @EncVal)
                           )
                      ) AS EncVal;

    Oh, almost. The result remains NULL despite our conversion to the varbinary data type. This is due to the creation of a varbinary value that does not reflect the actual value of our @EncVal variable, but rather a varbinary conversion of the variable itself. In this case, something like 0x3000780030003000360045003--.

    Considering that the "style" parameter got us past the XML challenge, we will want to consider its power for this challenge as well. Knowing that the value of "1" will provide us with the actual value including the "0x", we will opt to utilize that value in this case:

        SELECT CONVERT(NVARCHAR(MAX),
                       DecryptByKeyAutoCert
                           (
                             CERT_ID('SymCert'),
                             N'mypassword2010',
                             CONVERT(VARBINARY(MAX), @EncVal, 1)
                           )
                      ) AS EncVal;

    Bingo, we have success! We have discovered what happens with varbinary data when it is captured as XML data. We have figured out how to make this data useful post-XML-ification. Best of all, we now have a choice of after-work parties, now that our very happy client who depends on our XML-based interface invites us for dinner in celebration. All thanks to the effective use of the style parameter.
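    The walkthrough above assumes that the symmetric key SymKey and the certificate SymCert already exist in the database. As a reference, a minimal setup sketch follows; only the object names and the password come from the examples above, while the subject text and the AES_256 algorithm are assumptions chosen for illustration:

        -- one-time setup assumed by the examples above (sketch, adjust to your environment)
        -- the certificate's private key is protected by the same password used later with
        -- OPEN SYMMETRIC KEY ... DECRYPTION BY CERTIFICATE ... WITH PASSWORD
        CREATE CERTIFICATE SymCert
            ENCRYPTION BY PASSWORD = 'mypassword2010'
            WITH SUBJECT = 'Cell-level encryption demo';  -- subject text is illustrative

        CREATE SYMMETRIC KEY SymKey
            WITH ALGORITHM = AES_256                      -- algorithm choice is an assumption
            ENCRYPTION BY CERTIFICATE SymCert;

    With these two objects in place, the OPEN SYMMETRIC KEY, EncryptByKey and DecryptByKeyAutoCert calls used throughout the post line up with one another.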

  • Adjust timezone of an AVM Fritz!Box 7390

    It's been a while since I purchased an AVM Fritz!Box 7390, but since I'm using this 'PABX' here in Mauritius, I'm not really happy about the wrong time in the logs or on the connected handsets. Lately, I had some spare time to address this issue, and the following article describes how to adjust the timezone settings in general. The original idea came from an FAQ found in c't 21/11 (written in German, for a 7270), but I added a couple of things based on other resources online. The following tutorial may be valid for other models, too. Use your common sense and think before you act.

    Brief introduction to AVM Fritz!Box devices

    The Fritz!Box series from AVM has been around for more than a decade, and those little 'red boxes' have a high level of versatility for your small office or home. High-speed connections, secure WLAN and convenient telephony make a home network out of any network. Whether it's a computer, tablet or smartphone, any device can be connected to the FRITZ!Box. And best of all, installation is so simple that users will be online in a matter of minutes. If you want to have peace of mind in your small network, then a Fritz!Box is the easiest way to achieve that. I'm using my box primarily as a WiFi access point, VoIP gateway and media server, but only because it came in second after my Linux system.

    Limitations in the administrative Web UI

    Unfortunately, there is no way to adjust the timezone settings in the Web UI at all - not even in Expert mode. I assume that this is part of the 'simplification' provided by AVM's design team. That's okay, as long as you reside in Central Europe and the implicit time handling is correct for your location.

    Adjusting the timezone

    I got my device through an order at Amazon Germany some time ago, and honestly I wasn't bothered too much about the pre-configured (fixed) timezone setting - CET or CEST depending on daylight saving. But you know, it's that kind of splinter at the back of your head that keeps nagging and bothering you indirectly. So, finally, I sat down yesterday evening and did some quick research on how to change the timezone. Even though there are a number of results, I read the FAQ from the c't magazine first, as I consider it a trusted and safe source of information. Of course, it is most important to avoid 'bricking' your device.

    You've been warned - No support

    Tinkering with the configuration of any AVM device seems to be a violation of their official support channels. So, be warned and continue only if you're sure about what you are going to do. The following solutions are 'as-is'; they worked on my box flawlessly but may cause an issue in your case. Don't blame me...

    Solution 1 - Backup, modify and restore

    That's the way described in the c't article and a couple of other forum postings I found online, mainly from Australia. Log in to the administrative Web UI, navigate to 'System => Einstellungen sichern' (System => Backup configuration) and store your current configuration to a local file on your machine. Despite some online postings, it is not necessary to specify a password in order to secure or encrypt your backup. IMHO, this only adds another unnecessary layer of complexity to the process. Anyway, next you should create another copy of your settings and keep it unmodified. That's our safety net to restore the current settings in case we have to issue a factory reset on the box.
    Now, open the configuration file with an advanced text editor which is capable of dealing with Unix line endings properly - Windows Notepad doesn't do the job, but WordPad or Notepad++ do. Personally, I don't care and simply use geany, gedit or nano on Linux. In total there are 3 modifications that we have to apply to the configuration file - one new line and two adjustments. First, we have to add an instruction near the top of the file that overrides the device-internal checksum validation. Without this line, your settings won't be accepted. Caution: The directives are case-sensitive and your outcome should read something like this:

    **** FRITZ!Box Fon WLAN 7390 CONFIGURATION EXPORT
    Password=$$$$<ignore>
    FirmwareVersion=84.05.52
    CONFIG_INSTALL_TYPE=iks_16MB_xilinx_4eth_2ab_isdn_nt_te_pots_wlan_usb_host_dect_64415
    OEM=avm
    Country=049
    Language=de
    NoChecks=yes
    **** CFGFILE:ar7.cfg
    /*
     * /var/flash/ar7.cfg
     * Mon Jul 29 10:49:18 2013
     */
    ar7cfg {...

    Then search for the expression 'timezone' and you should find a section like this one (~ line 1113):

    timezone_manual {
            enabled = no;
            offset = 0;
            dst_enabled = no;
            TZ_string = "";
            name = "";
    }

    We would like to handle the timezone setting manually on our device, and therefore we have to enable it and set the proper value for Mauritius. The configuration block should look like this afterwards:

    timezone_manual {
            enabled = yes;
            offset = 0;
            dst_enabled = no;
            TZ_string = "MUT-4";
            name = "";
    }

    We specify the designation and the offset in hours of the timezone we would like to have. Caution: The offset indicates the value one has to add to the local time to arrive at UTC. More details are described in the Explanation of TZ strings. Mauritius has GMT+4, which means that we have to subtract 4 hours from the local time to get UTC. Finally, we restore the modified configuration file via the administrative Web UI under 'System => Einstellungen sichern => Wiederherstellen' (System => Backup configuration => Restore). This triggers a reboot of the device, so please be patient and wait until the Web UI displays the login dialog again. Good luck!

    Solution 2 - Telnet
    A more elegant, read: technically more interesting, way to adjust configuration settings in your Fritz!Box is to access it directly through Telnet. By default AVM disables that protocol channel, and you have to enable it with a connected telephone. In order to activate the telnet service dial the following combination:

    #96*7*   (to enable telnet)
    #96*8*   (to disable telnet again after work has been completed)

    If you're using an AVM handset like the Fritz!Fon then you will receive a confirmation message on the display like so: telnetd ein

    Next, depending on your favourite operating system, you either launch a Command Prompt in Windows or a terminal in Linux, get your admin password ready, and connect to your box like so:

    $ telnet fritz.box
    Trying 192.168.1.1...
    Connected to fritz.box.
    Escape character is '^]'.
    password:

    BusyBox v1.19.3 (2012-10-12 14:52:09 CEST) built-in shell (ash)
    Enter 'help' for a list of built-in commands.
    ermittle die aktuelle TTY
    tty is "/dev/pts/0"
    Console Ausgaben auf dieses Terminal umgelenkt
    #

    That's it, you are connected and we can continue to change the configuration manually. In order to adjust the timezone setting we have to open the ar7.cfg file. As we are now operating in a specialised environment, we only have limited capabilities at hand. One of those is a reduced version of vi - nvi.
    Let's open a second browser window with the fine manual page of nvi and start to edit our configuration file:

    # nvi /var/flash/ar7.cfg

    In our configuration file, we have to navigate to the timezone directives. The easiest way is to search for the expression 'timezone' by typing the following:

    /timezone    (press Enter/Return)

    Now, we should see the exact lines of code as in the backed-up version:

    timezone_manual {
            enabled = no;
            offset = 0;
            dst_enabled = no;
            TZ_string = "";
            name = "";
    }

    And of course, we apply the same changes as described in the previous section:

    timezone_manual {
            enabled = yes;
            offset = 0;
            dst_enabled = no;
            TZ_string = "MUT-4";
            name = "";
    }

    Finally, we have to write our changes back to the file and apply the new settings:

    :wq    (press Enter/Return)
    # ar7cfgchanged

    That's it! Finally, close the telnet session by pressing Ctrl+] and entering 'quit'.

    Additional ideas...
    There are a couple more possibilities to enhance and extend the usability of a Fritz!Box. There are lots of resources available on the net, but I'd like to name a few here. Especially for Linux users it is essential to be able to connect to any device remotely in a safe and secure way, and the installation of an SSH server on the box would be a first step to improve this situation - and also to avoid running telnet at all. Sometimes there might be problems with your VoIP connections; feel free to adjust the settings of codecs and connection handling, too. I guess you'll get the idea... The only frontiers are in your mind.

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by added missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve add a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you also use a global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of if the test succeeds or fails (both property). 
An example of using load() with these properties is show next: Modernizr.load({ test: Modernizr.canvas, yep: 'html5CanvasAvailable.js’, nope: 'excanvas.js’, both: 'myCustomScript.js' }); In this example Modernizr is used to not only load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded. Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:   Modernizr.load([ { load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js', complete: function () { if (!window.jQuery) { Modernizr.load('js/libs/jquery-1.6.4.min.js'); } } }, { // This will wait for the fallback to load and // execute if it needs to. load: 'needs-jQuery.js' } ]); This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if it fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available and if it’s not Modernizr is used to load a local jQuery script. After all of that occurs a script named needs-jQuery.js will be loaded. Conclusion If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality. 

    Read the article

  • CodePlex Daily Summary for Thursday, March 10, 2011

    CodePlex Daily Summary for Thursday, March 10, 2011Popular ReleasesTweetSharp: TweetSharp v2.0.0: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Beta ChangesAdded user streams support Serialization is not attempted for Twitter 5xx errors Fixes based on feedback Third Party Library VersionsHammock v1.2.0: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comSharePoint Field Groups To Users: GroupsToUsers Release 1.0: SharePoint "Groups to Users" is a custom field that displays two separate drop-down lists. The first drop-down populates with all SharePoint Groups from the current web. By selecting a particular group the second drop-down list gets populated with all Users within the selected SharePoint group.DirectQ: Release 1.8.7 (RC2): More fixes and improvements. Note for multiplayer - you may need to set r_waterwarp to 0 or 2 before connecting to a server, otherwise you will get a "Mod_PointInLeaf: bad model" error and not be able to connect. You can set it back to 1 after you connect, of course. This only came to light after releasing, and will be fixed in the next one.Microsoft All-In-One Code Framework: Visual Studio 2008 Code Samples 2011-03-09: Code samples for Visual Studio 2008Office Web.UI: Version 2.4: After having lost all modifications done for 2.3. I finally did it again... Have a look at http://www.officewebui.com/change-log Also, the documentation continues to grow... http://www.officewebui.com/category/kb ThanksmyCollections: Version 1.3: New in version 1.3 : Added Editor management for Books Added Amazon API for Books Us, Fr, De Added Amazon Us, Fr, De for Movies Added The MovieDB for Fr and De Added Author for Books Added Editor and Platform for Games Added Amazon Us, De for Games Added Studio for XXX Added Background for XXX Bug fixing with Softonic API Bug fixing with IMDB UI improvement Removed GraceNote Added Amazon Us,Fr, De for Series Added TVDB Fr and De for Series Added Tracks for Musi...Facebook Graph Toolkit: Facebook Graph Toolkit 1.1: Version 1.1 (8 Mar 2011)new Dialog class for redirecting users to Facebook dialogs new Async publishing methods new Check for Extended Permissions option fixed bug: inappropiate condition of redirecting to login in Api class fixed bug: IframeRedirect method not workingpatterns & practices : Composite Services: Composite Services Guidance - CTP2: This is the second CTP of the p&p Composite Service Guidance.Python Tools for Visual Studio: 1.0 Beta 1: Beta 1You can't install IronPython Tools for Visual Studio side-by-side with Python Tools for Visual Studio. A race condition sometimes causes local MPI debugging to miss breakpoints. When MPI jobs on a cluster fail they don’t get cleaned up correctly, which can cause debugging to stall because the associated MPI job is stuck in the queue. The "Threads" view has a race condition which can cause it not to display properly at times. VS2010 shortcuts that are pinned to the taskbar are so...DotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0 ? Completely update DJME to DJME2, enhance user experience ,more beautiful and more interactively visit DJME project home to lean more about DJME http://www.dotnetage.com/sites/home/djme.html A new widget engine has came! Faster and easiler. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. 
BlogML supports added that allows you import/export your blog data to/from dotnetage publishi...Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB and MongoDB. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder t...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.2: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added fullscreen for the popup and popupformIronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The releases contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support new biome format.Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...AutoLoL: AutoLoL v1.6.4: It is now possible to run the clicker anyway when it can't detect the Masteries Window Fixed a critical bug in the open file dialog Removed the resize button Some UI changes 3D camera movement is now more intuitive (Trackball rotation) When an error occurs on the clicker it will attempt to focus AutoLoLYAF.NET (aka Yet Another Forum.NET): v1.9.5.5 RTW: YAF v1.9.5.5 RTM (Date: 3/4/2011 Rev: 4742) Official Discussion Thread here: http://forum.yetanotherforum.net/yaf_postsm47149_v1-9-5-5-RTW--Date-3-4-2011-Rev-4742.aspx Changes in v1.9.5.5 Rev. #4661 - Added "Copy" function to forum administration -- Now instead of having to manually re-enter all the access masks, etc, you can just duplicate an existing forum and modify after the fact. Rev. #4642 - New Setting to Enable/Disable Last Unread posts links Rev. #4641 - Added Arabic Language t...Snippet Designer: Snippet Designer 1.3.1: Snippet Designer 1.3.1 for Visual Studio 2010This is a bug fix release. 
Change logFixed bug where Snippet Designer would fail if you had the most recent Productivity Power Tools installed Fixed bug where "Export as Snippet" was failing in non-english locales Fixed bug where opening a new .snippet file would fail in non-english localesChiave File Encryption: Chiave 1.0: Final Relase for Chave 1.0 Stable: Application for file encryption and decryption using 512 Bit rijndael encyrption algorithm with simple to use UI. Its written in C# and compiled in .Net version 3.5. It incorporates features of Windows 7 like Jumplists, Taskbar progress and Aero Glass. Now with added support to Windows XP! Change Log from 0.9.2 to 1.0: ==================== Added: > Added Icon Overlay for Windows 7 Taskbar Icon. >Added Thumbnail Toolbar buttons to make the navigation easier...New ProjectsAll2Iso: Convert any disk image format to ISO. Actually it can only convert from BIN. Any help is appreciated.Asset Management by Joko for GENCPROS: This is my initial test project using codeplex storageAxvius: Axvius' core C# API library. Includes, Now, a class for overriding the system clock; especially useful for unit tests.Collision Avoidance Simulator: The vertex buff er, in conjunction with the spatial con guration of human models, a particle system and a reduced set of rules are processed in order to obtain a dynamic knowledge base for collision avoidance calculations. Developed in C++.Configuring role link in Biztalk 2009: Configuring role link in Biztalk 2009CrazySnake: Crazy Snake é o famoso jogo da cobra, esse em sua versão tanto para windows, quanto para windows phone 7dIRca WP7 IRC Client: IRC client with possible SL/WPF ports. Utilizes native tcp sockets until the communication layer from MS solidifies. Basically put, this project would not exist without the work of some wp7 hackers. Pip pip old boys.DoanVienProject: Ðây là porject qu?n lý doàn viênDynamics AX Build Scripts: Sharing build scripts for Dynamics AX integration with source control, focused on Team Foundation Server (TFS)EPiServer CMS ElencySolutions.MultipleProperty: The MultipleProperty classes are for use in EPiServer CMS 6 and provide an easy way for developers to build complex custom properties that comprise of other EPiServer custom properties. Feriados Móveis Brasil: This project aim to calculate the holidays in Brazil who is based on catholic dates. The main holiday is the Easter Sunday and the other holidays are calculated based on that date. Cálculo de feriados móveis para o Brasil baseados nas datas festivas católicas.flyskynet: myselft projectGriffTom: GriffTomImage Resizer (????????): ??????????????????,????????????????。?????jpg??。 ??????,????????????。iPray: Islamic Prayer Software.Japanese Learners & Enthusiasts Kanji Project: Help create games, puzzles, and exercises to assist learners of Japanese to master Kanji comprehension. Games/etc. will be written in C#, jQuery and/or Silverlight for ASP.NET MVC 3 Razor. Initial goal is for a user of the site to master grade level 1 Kanji (first 80 Kanji).MailChimp Amazon Simple Email Service .NET Wrapper: A .NET 4 wrapper for MailChimp's Amazon Simple Email Service. 
It's developed in C# using Hammock.NGuice: .NET????Guice???????????。???.NET?????????,?Guice?.NET??????????。???????:http://code.google.com/p/google-guice/Rubrica Persone: Libreria che contiene gli oggetti e le form per la gestione di una rubrica di persone, facilmente integrabile in altre applicazioni.SCCM Client Center Integration Pack for Opalis: "SCCM Client Center Integration Pack for Opalis" is an System Center Opalis Integration Pack to manage and orchestrate System Center Configuration Manager (SCCM) 2007 Agents from Opalis workflows.Sugar-free programming: I like to think about breadth or depth developer, or Mort, Elvis, or Einstein developer stereotypes, as roles we could play accordingly to the task at hand…see more: http://blogs.msdn.com/b/marcod/archive/2011/03/01/sugarfreecs1.aspxTest Project 1: This is test project siteTestZoner: TestZoneTextBookReader: ???????????????????????,??????????,??????! ??.net 2.0uSiteBuilder: uSiteBuilder is a framework made for .NET developers to simplify, speedup and take Umbraco development to next level. Aim of this framework is to reduce developer interaction with Umbraco back-end (browser based development), to create Umbraco websites in a more .NET way...VinculacionMicrosoft: Vinculacion Microsoft is a project for distributing Dreamsparks and Faculty Connection codes to students and professors. It is developed in ASP .Net and designed for Universities in Mexico interested in the different benefits that Microsoft has for them. Vio: Vio is an application for Sharetronix Based websites. Allowing users to connect to their community via their Windows Desktop.whatsnew.exe a command line utility to find new files: whatsnew.exe is a command line utility that lists the files created (new files) in a given number of days. whatsnew.exe 's syntax is very simple: whatsnew path numberofdays Also whatsnew supports other options like HTML or XML output, hyperlinked outputs and more.

    Read the article

  • Top 50 ASP.Net Interview Questions & Answers

    - by Samir R. Bhogayta
    1. What is ASP.NET? It is a framework developed by Microsoft on which we can develop new generation web sites using Web Forms (aspx), MVC, HTML, JavaScript, CSS etc. It is the successor of Microsoft Active Server Pages (ASP). Currently there is ASP.NET 4.0, which is used to develop web sites. There are various page extensions provided by Microsoft that are used for web site development, e.g. aspx, asmx, ascx, ashx, cs, vb, html, xml etc.
    2. What's the use of Response.Output.Write()? We can write formatted output using Response.Output.Write().
    3. In which event of the page cycle is the ViewState available? After Init() and before Page_Load().
    4. What is the difference between Server.Transfer and Response.Redirect? In Server.Transfer, page processing transfers from one page to the other page without making a round-trip back to the client's browser. This provides a faster response with a little less overhead on the server. The client's URL history list and current URL are not updated in case of Server.Transfer. Response.Redirect is used to redirect the user's browser to another page or site. It performs a trip back to the client, where the client's browser is redirected to the new page. The user's browser history list is updated to reflect the new address.
    5. From which base class are all Web Forms inherited? The Page class.
    6. What are the different validators in ASP.NET? RequiredFieldValidator, RangeValidator, CompareValidator, CustomValidator, RegularExpressionValidator, ValidationSummary.
    7. Which validator control would you use if you need to make sure the values in two different controls match? The CompareValidator control.
    8. What is ViewState? ViewState is used to retain the state of server-side objects between page postbacks.
    9. Where is the ViewState stored after the page postback? ViewState is stored in a hidden field on the page at the client side. ViewState is transported to the client and back to the server, and is not stored on the server or any other external source.
    10. How long do the items in ViewState exist? They exist for the life of the current page.
    11. What are the different Session state management options available in ASP.NET? In-Process and Out-of-Process. In-Process stores the session in memory on the web server. Out-of-Process session state management stores data in an external server. The external server may be either a SQL Server or a State Server. All objects stored in session are required to be serializable for Out-of-Process state management.
    12. How can you add an event handler? Using the Attributes property of a server-side control, e.g. [csharp] btnSubmit.Attributes.Add("onMouseOver","JavascriptCode();") [/csharp]
    13. What is caching? Caching is a technique used to increase performance by keeping frequently accessed data or files in memory. A request for a cached file or data will be served from the cache instead of the actual location of that file.
    14. What are the different types of caching? ASP.NET has 3 kinds of caching: Output Caching, Fragment Caching and Data Caching.
    15. Which type of caching will be used if we want to cache a portion of a page instead of the whole page? Fragment Caching: it caches the portion of the page generated by the request. For that, we can create user controls with the code below: [xml] <%@ OutputCache Duration="120" VaryByParam="CategoryID;SelectedID"%> [/xml]
    16. List the events in the page life cycle. 1) Page_PreInit 2) Page_Init 3) Page_InitComplete 4) Page_PreLoad 5) Page_Load 6) Page_LoadComplete 7) Page_PreRender 8) Render
    17.
    Can we have a web application running without a web.config file? Yes.
    18. Is it possible to create a web application with both Web Forms and MVC? Yes. We have to include the MVC assembly references below in the Web Forms application to create a hybrid application. [csharp] System.Web.Mvc System.Web.Razor System.ComponentModel.DataAnnotations [/csharp]
    19. Can we add code files of different languages in the App_Code folder? No. The code files must be in the same language to be kept in the App_Code folder.
    20. What is Protected Configuration? It is a feature used to secure connection string information.
    21. Write code to send e-mail from an ASP.NET application. [csharp] MailMessage mailMess = new MailMessage (); mailMess.From = "[email protected]"; mailMess.To = "[email protected]"; mailMess.Subject = "Test email"; mailMess.Body = "Hi This is a test mail."; SmtpMail.SmtpServer = "localhost"; SmtpMail.Send (mailMess); [/csharp] MailMessage and SmtpMail are classes defined in the System.Web.Mail namespace.
    22. How can we prevent the browser from caching an ASPX page? We can call SetNoStore on the HttpCachePolicy object exposed by the Response object's Cache property: [csharp] Response.Cache.SetNoStore (); Response.Write (DateTime.Now.ToLongTimeString ()); [/csharp]
    23. What is the good practice to implement validations in an aspx page? Client-side validation is the best way to validate data of a web page. It reduces the network traffic and saves server resources.
    24. What are the event handlers that we can have in the Global.asax file? Application events: Application_Start, Application_End, Application_AcquireRequestState, Application_AuthenticateRequest, Application_AuthorizeRequest, Application_BeginRequest, Application_Disposed, Application_EndRequest, Application_Error, Application_PostRequestHandlerExecute, Application_PreRequestHandlerExecute, Application_PreSendRequestContent, Application_PreSendRequestHeaders, Application_ReleaseRequestState, Application_ResolveRequestCache, Application_UpdateRequestCache. Session events: Session_Start, Session_End.
    25. Which protocol is used to call a Web service? The HTTP protocol.
    26. Can we have multiple web.config files for an ASP.NET application? Yes.
    27. What is the difference between web.config and machine.config? A web.config file is specific to a web application, whereas machine.config is specific to a machine or server. There can be multiple web.config files in an application, whereas we can have only one machine.config file on a server.
    28. Explain role-based security. Role-based security is used to implement security based on roles assigned to user groups in the organization. We can then allow or deny users based on their role in the organization. Windows defines several built-in groups, including Administrators, Users, and Guests. [xml] <authorization> <allow roles="Domain_Name\Administrators" />   <!-- Allow Administrators in domain. --> <deny users="*" />   <!-- Deny anyone else. --> </authorization> [/xml]
    29. What is Cross Page Posting? When we click the submit button on a web page, the page posts the data to the same page. The technique in which we post the data to a different page is called cross page posting. This can be achieved by setting the PostBackUrl property of the button that causes the postback. The FindControl method of PreviousPage can be used to get the posted values on the page to which the data has been posted.
    30. How can we apply Themes to an ASP.NET application? We can specify the theme in the web.config file.
    Below is the code example to apply a theme: [xml] <configuration> <system.web> <pages theme="Windows7" /> </system.web> </configuration> [/xml]
    31. What is RedirectPermanent in ASP.NET? RedirectPermanent performs a permanent redirection from the requested URL to the specified URL. Once the redirection is done, it also returns a 301 Moved Permanently response.
    32. What is MVC? MVC is a framework used to create web applications. The web application is built on the Model-View-Controller pattern, which separates the application logic from the UI; the input and events from the user are handled by the Controller.
    33. Explain the working of Passport authentication. First of all, it checks for the Passport authentication cookie. If the cookie is not available, the application redirects the user to the Passport sign-on page. The Passport service authenticates the user details on the sign-on page and, if they are valid, stores the authentication cookie on the client machine and then redirects the user to the requested page.
    34. What are the advantages of Passport authentication? All the websites can be accessed using a single set of login credentials, so there is no need to remember login credentials for each web site. Users can maintain their information in a single location.
    35. What are the ASP.NET security controls? <asp:Login>: provides a standard login capability that allows users to enter their credentials. <asp:LoginName>: allows you to display the name of the logged-in user. <asp:LoginStatus>: displays whether the user is authenticated or not. <asp:LoginView>: provides various login views depending on the selected template. <asp:PasswordRecovery>: e-mails users their lost password.
    36. How do you register JavaScript for web controls? We can register JavaScript for controls using the <CONTROL-name>.Attributes.Add(scriptname, scripttext) method.
    37. In which event are the controls fully loaded? The Page_Load event.
    38. What is boxing and unboxing? Boxing is assigning a value type to a reference type variable. Unboxing is the reverse of boxing, i.e. assigning a reference type variable to a value type variable.
    39. Differentiate strong typing and weak typing. In strong typing, the data types of variables are checked at compile time. On the other hand, in case of weak typing the variable data types are checked at runtime. With strong typing, type errors are caught at compilation time; scripts use weak typing and hence issues arise at runtime.
    40. How can we force all the validation controls to run? The Page.Validate() method is used to force all the validation controls to run and to perform validation.
    41. List all templates of the Repeater control. ItemTemplate, AlternatingItemTemplate, SeparatorTemplate, HeaderTemplate, FooterTemplate.
    42. List the major built-in objects in ASP.NET. Application, Request, Response, Server, Session, Context, Trace.
    43. What is the appSettings section in the web.config file? The appSettings block in the web.config file sets user-defined values for the whole application. For example, in the following code snippet, the specified ConnectionString entry is used throughout the project for the database connection: [xml] <configuration> <appSettings> <add key="ConnectionString" value="server=local; pwd=password; database=default" /> </appSettings> </configuration> [/xml]
    44. Which data types does the RangeValidator control support? The data types supported by the RangeValidator control are Integer, Double, String, Currency, and Date.
    45. What is the difference between an HtmlInputCheckBox control and an HtmlInputRadioButton control?
    In the HtmlInputCheckBox control, multiple item selection is possible, whereas in HtmlInputRadioButton controls we can select only a single item from the group of items.
    46. Which namespaces are necessary to create a localized application? System.Globalization and System.Resources.
    47. What are the different types of cookies in ASP.NET? Session Cookie - resides on the client machine for a single session, until the user logs out. Persistent Cookie - resides on the user's machine for a period specified for its expiry, such as 10 days, one month, or never.
    48. What is the file extension of a web service? Web services have the file extension .asmx.
    49. What are the components of ADO.NET? The components of ADO.NET are DataSet, DataReader, DataAdapter, Command, and Connection.
    50. What is the difference between ExecuteScalar and ExecuteNonQuery? ExecuteScalar returns an output value, whereas ExecuteNonQuery does not return a value but the number of rows affected by the query. ExecuteScalar is used for fetching a single value, while ExecuteNonQuery is used to execute Insert and Update statements. A minimal sketch of both calls follows.
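    To make that last answer concrete, here is a minimal, hedged C# sketch of the two calls. The connection string, the Orders table and its Shipped/OrderId columns are hypothetical and only illustrate the call patterns, not a real schema.
    [csharp]
    using System;
    using System.Data.SqlClient;

    class AdoNetSketch
    {
        static void Main()
        {
            // Hypothetical connection string, for illustration only.
            const string connectionString = "server=local; database=default; Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // ExecuteScalar: fetches a single value (first column of the first row).
                using (var countCommand = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
                {
                    int orderCount = (int)countCommand.ExecuteScalar();
                    Console.WriteLine("Orders: {0}", orderCount);
                }

                // ExecuteNonQuery: runs an INSERT/UPDATE/DELETE and returns the number of affected rows.
                using (var updateCommand = new SqlCommand(
                    "UPDATE Orders SET Shipped = 1 WHERE OrderId = @id", connection))
                {
                    updateCommand.Parameters.AddWithValue("@id", 42);
                    int rowsAffected = updateCommand.ExecuteNonQuery();
                    Console.WriteLine("Rows affected: {0}", rowsAffected);
                }
            }
        }
    }
    [/csharp]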

    Read the article

  • Enabling DNS for IPv6 infrastructure

    After successful automatic distribution of IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually, we would use host names in order to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolution. This is the third article in a series on IPv6 configuration:
    - Configure IPv6 on your Linux system
    - DHCPv6: Provide IPv6 information in your local network
    - Enabling DNS for IPv6 infrastructure
    - Accessing your web server via IPv6
    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    What's your name and your IPv6 address?

    $ sudo service bind9 status
     * bind9 is running

    If the service is not recognised, you have to install it first on your system. This is done very easily and quickly like so:

    $ sudo apt-get install bind9

    Once again, there is no specialised package for IPv6; the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file.

    $ sudo nano /etc/bind/named.conf.options
    acl iosnet {
            127.0.0.1;
            192.168.1.0/24;
            ::1/128;
            2001:db8:bad:a55::/64;
    };
    listen-on { iosnet; };
    listen-on-v6 { any; };
    allow-query { iosnet; };
    allow-transfer { iosnet; };

    The most important directive is listen-on-v6. This will enable your named to bind to the IPv6 addresses specified on your system. The easiest is to specify any as the value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and which IP and IPv6 addresses it is bound to, like so:

    $ sudo service bind9 restart
    $ sudo netstat -lnptu | grep "named\W*$"
    tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
    tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
    tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
    udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
    udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
    udp6       0      0 :::53                 :::*                                1734/named

    Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server.

    $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
    Using domain server:
    Name: 2001:db8:bad:a55::2
    Address: 2001:db8:bad:a55::2#53
    Aliases:

    www.6bone.net is an alias for 6bone.net.
    6bone.net has IPv6 address 2001:5c0:1000:10::2

    Alright, our newly configured BIND named is fully operational. Eventually, you might be more familiar with the dig command. Here is the same kind of IPv6 host name resolution, but it will provide more details about that particular host as well as the domain in general.

    $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

    More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
    Setting up your own DNS zone
    Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolution. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local.

    $ sudo nano /etc/bind/named.conf.local
    //
    // Do any local configuration here
    //
    zone "ios.mu" {
            type master;
            file "/etc/bind/zones/db.ios.mu";
    };

    Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses.

    $ sudo nano /etc/bind/zones/db.ios.mu
    $ORIGIN .
    $TTL 259200     ; 3 days
    ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                    2014031101 ; serial
                                    28800      ; refresh (8 hours)
                                    7200       ; retry (2 hours)
                                    604800     ; expire (1 week)
                                    86400      ; minimum (1 day)
                                    )
                            NS      server.ios.mu.
    $ORIGIN ios.mu.
    server                  A       192.168.1.2
    server                  AAAA    2001:db8:bad:a55::2
    client1                 A       192.168.1.3
    client1                 AAAA    2001:db8:bad:a55::3
    client2                 A       192.168.1.4
    client2                 AAAA    2001:db8:bad:a55::4

    With a couple of machines in place, it's time to reload that new configuration. Note: Each time you change your zone databases you have to modify the serial information, too. Named loads the plain text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, then named will not use the new records from the text file but the indexed ones, or you have to flush the index and force a reload of the zone. This can be done easily by either restarting named:

    $ sudo service bind9 restart

    or by reloading the configuration file using the name server control utility - rndc:

    $ sudo rndc reconfig

    Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name, trying to get its IPv6 address like so:

    $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
    Using domain server:
    Name: 2001:db8:bad:a55::2
    Address: 2001:db8:bad:a55::2#53
    Aliases:

    server.ios.mu has IPv6 address 2001:db8:bad:a55::2

    Looks good (a short .NET sketch of the same kind of lookup follows at the end of this article). Alternatively, you could have just ping'd the system as well, using the ping6 command instead of the regular ping:

    $ ping6 server
    PING server(2001:db8:bad:a55::2) 56 data bytes
    64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
    64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
    ^C
    --- ios1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1001ms
    rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

    That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
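    For .NET-minded readers, here is a minimal C# sketch of the same AAAA check, mentioned above. It is not part of the original setup: it uses whatever resolver the operating system is configured with (so the machine must point at the name server set up above), and the host name server.ios.mu is simply taken from the example zone.

    using System;
    using System.Net;
    using System.Net.Sockets;

    class AaaaCheck
    {
        static void Main()
        {
            // Resolve all addresses for the host and keep only the IPv6 (AAAA) results.
            IPAddress[] addresses = Dns.GetHostAddresses("server.ios.mu");

            foreach (IPAddress address in addresses)
            {
                if (address.AddressFamily == AddressFamily.InterNetworkV6)
                {
                    Console.WriteLine("IPv6 address: {0}", address);
                }
            }
        }
    }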

    Read the article

  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part II

    - by mbridge
    In the previous post, I discussed how to work with ASP.NET MVC 3 and EF Code First for developing web apps. In this post, I will demonstrate working with a domain entity that has a deeper object graph, a Service Layer and View Models, and will also complete the rest of the demo application. In the previous post we performed CRUD operations against the Category entity; this post will focus on the Expense entity, which has an association with the Category entity.

    Domain Model

    Category Entity

    public class Category
    {
        public int CategoryId { get; set; }
        [Required(ErrorMessage = "Name Required")]
        [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
        public string Name { get; set; }
        public string Description { get; set; }
        public virtual ICollection<Expense> Expenses { get; set; }
    }

    Expense Entity

    public class Expense
    {
        public int ExpenseId { get; set; }
        public string Transaction { get; set; }
        public DateTime Date { get; set; }
        public double Amount { get; set; }
        public int CategoryId { get; set; }
        public virtual Category Category { get; set; }
    }

    We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category.

    Repository class for Expense transactions

    Let's create a repository class for handling CRUD operations for the Expense entity:

    public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository
    {
        public ExpenseRepository(IDatabaseFactory databaseFactory)
            : base(databaseFactory)
        {
        }
    }

    public interface IExpenseRepository : IRepository<Expense>
    {
    }

    Service Layer

    If you are new to the Service Layer pattern, check out Martin Fowler's article Service Layer. According to Martin Fowler, a Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight, so do not put much business logic into them. We can use the service layer as the business logic layer and encapsulate the rules of the application there.
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } } View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. - jquery-ui.js - jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

At my day job I do look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses (reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I do care about code quality quite a lot, which makes me a (perceived) slow worker who does write many tests and refines the code quite a lot because of the design deficiencies. Most of the deficiencies I do find by putting my design under stress while checking for invariants. It does also help a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I do get a much better understanding of what my code really does and not what I think it should be doing.

This time I do want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers did switch over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework did arrive which did bring many new things like generics and new data structures. The “old” fashioned Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object to type conversion (aka boxing) was no longer necessary.

I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often followed route is to convert the string to upper case before putting it into the Hashtable. Hashtable Table = new Hashtable(); void Add(string key, string value) { Table.Add(key.ToUpper(), value); } This is valid and working code but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary. Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase); void AddDict(string key, string value) { DictTable.Add(key, value); } Many people do not use the other ctors of Dictionary because they do shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly.

Today in the many-core era we do use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual core machine. Threading does make it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
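Before the threading example below, a brief aside on the Hashtable code above: the non-generic Hashtable constructor also accepts an IEqualityComparer, so the same predefined comparer works there too and the ToUpper call is unnecessary even in the legacy version. This one-liner is my own illustration, not from the original post:

    // StringComparer implements the non-generic IEqualityComparer as well
    Hashtable Table = new Hashtable(StringComparer.OrdinalIgnoreCase);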
Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this: int Counter; void IJustLearnedToUseThreads() { var t1 = new Thread(ThreadWorkMethod); t1.Start(); var t2 = new Thread(ThreadWorkMethod); t2.Start(); t1.Join(); t2.Join(); if (Counter != 2 * Increments) throw new Exception("Hmm " + Counter + " != " + 2 * Increments); } const int Increments = 10 * 1000 * 1000; void ThreadWorkMethod() { for (int i = 0; i < Increments; i++) { Counter++; } }

It does throw an exception with a message like “Hmm 10.222.287 != 20.000.000” and never finishes successfully. The code does fail because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for Counter = Counter + 1. This does involve reading the counter from a memory location into the CPU, incrementing the value on the CPU and writing the new value back to the memory location. When we do look at the generated assembly code we will see only inc dword ptr [ecx+10h] which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU does read a value from main memory into the cache, modifies it and writes it back to the main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU does make changes to values which did change in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we did overwrite the already changed value from the other thread. This is a very common case and people do learn to protect their data with proper locking.

void Intermediate() { var time = Stopwatch.StartNew(); Action acc = ThreadWorkMethod_Intermediate; var ar1 = acc.BeginInvoke(null, null); var ar2 = acc.BeginInvoke(null, null); ar1.AsyncWaitHandle.WaitOne(); ar2.AsyncWaitHandle.WaitOne(); if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Intermediate() { for (int i = 0; i < Increments; i++) { lock (this) { Counter++; } } }

This is better and does use the .NET Threadpool to get rid of manual thread management. It does give the expected result, but it can result in deadlocks because you do lock on this. This is in general a bad idea since it can lead to deadlocks when other threads use your class instance as lock object. It is therefore recommended to create a private object as lock object to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It does make quite weak guarantees in general, but it can still work because your CPU architecture does give you more invariants than the CLR memory model. For a simple counter there is an easy lock free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock free algos since most likely you will fail to get it right on all CPU architectures.
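A minimal sketch of the "private lock object" recommendation from the paragraph above (the field and method names are my own, not from the original post); the post then moves on to the lock free, Task-based version:

    // Lives next to the Counter and Increments members from the sample above.
    private readonly object _counterLock = new object();

    void ThreadWorkMethod_PrivateLock()
    {
        for (int i = 0; i < Increments; i++)
        {
            // Nothing outside this class can take _counterLock, so callers cannot
            // deadlock us by locking our instance from their own code.
            lock (_counterLock)
            {
                Counter++;
            }
        }
    }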
void Experienced() { var time = Stopwatch.StartNew(); Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced); Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced); t1.Wait(); t2.Wait(); if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Experienced() { for (int i = 0; i < Increments; i++) { Interlocked.Increment(ref Counter); } }

Since time does move forward we do not use threads explicitly anymore but the much nicer Task abstraction which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see: lock inc dword ptr [eax] The first thing to note is that there is no method call at all. Why? Because the JIT compiler does know very well about CPU intrinsic functions. Atomic operations which do lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange and a combined compare and exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency you will fail. Only the generated machine code counts.

Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how good we are doing currently.

Level                      Time in s
IJustLearnedToUseThreads   Flawed Code
Intermediate               1,5 (lock)
Experienced                0,3 (Interlocked.Increment)
Master                     0,1 (1,0 for int[2])

That lock free thing is really a nice thing. But if you read more about CPU cache, cache coherency and false sharing you can do even better. int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8 way associative cache; try for yourself, e.g. 64 on more modern CPUs void Master() { var time = Stopwatch.StartNew(); Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0); Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1); t1.Wait(); t2.Wait(); Counter = Counters[0] + Counters[Counters.Length - 1]; if (Counter != 2 * Increments) throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments)); Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds); } void ThreadWorkMethod_Master(object number) { int index = (int) number; for (int i = 0; i < Increments; i++) { Counters[index]++; } }

The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock free version (factor 3). Each CPU core has its own cache line size which is something in the range of 16-256 bytes. When you do access a value from one location the CPU does not only fetch one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non obvious code which is faster although it does have many more memory reads than another algorithm. So what have we done here?
We have started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we did try to get fancy, used threads for the first time and failed. Our next try was better but it still had non obvious issues (the lock object was exposed to the outside). Knowledge has increased further and we have found a lock free version of our counter, which is nice and clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your used data structures and CPU cache coherency. Although we are working in a virtual execution environment, in a high level language with automatic memory management, it does pay off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which does help with the digging deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that the rest must be easy ;-). Although I am no longer working in the Science field I take pride in discovering non obvious things. This can be a very hard to find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….
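For reference, the add, exchange and compare-and-exchange variants of Interlocked mentioned earlier look like the sketch below; this is my own illustration and not part of the original post:

    using System.Threading;

    static class InterlockedVariants
    {
        static void Demo()
        {
            int total = 0;
            Interlocked.Add(ref total, 5);                       // atomically total += 5
            int previous = Interlocked.Exchange(ref total, 42);  // atomic swap, returns the old value
            // writes 100 only if total still equals 42 at that moment, and returns the value it actually saw
            int seen = Interlocked.CompareExchange(ref total, 100, 42);
        }
    }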

    Read the article

  • CSS issue with elements spanning columns

    - by bigFoot
    Hi folks. Overview: I'm trying to create a relatively simple page layout detailed below and running into problems no matter how I try to approach it. Concept: - A standard-size-block layout. I'll quote unit widths: each content block is 240px square with 5px of margin around it. - A left column of fixed width of 1 unit (245px - 1 block + margin to left). No problems here. - A right column of variable width to fill the remaining space. No problems here either. - In the left column, a number of 1unit x 1unit blocks fixed down the column. Also some blank space at the top - again, not a problem. - In the right column: a number of free-floating blocks of standard unit-sizes which float around and fill the space given to them by the browser window. No problems here. - Lastly, a single element, 2 units wide, which sits half in the left column and half in the right column, and which the blocks in the right column still float around. Here be dragons. Please see here for a diagram: http://is.gd/bPUGI Problem: No matter how I approach this, it goes wrong. Below is code for my existing attempt at a solution. My current problem is that the 1x1 blocks on the right do not respect the 2x1 block, and as a result half of the 2x1 block is overwritten by a 1x1 block in the right-hand column. I'm aware that this is almost certainly an issue with position: absolute taking things out of flow. However, can't really find a way round that which doesn't just throw up another problem instead. Code: <html> <head> <title>wat</title> <style type="text/css"> body { background: #ccc; color: #000; padding: 0px 5px 5px 0px; margin: 0px; } #leftcol { width: 245px; margin-top: 490px; position: absolute; } #rightcol { left: 245px; position: absolute; } #bigblock { float: left; position: relative; margin-top: -240px; background: red; } .cblock { margin: 5px 0px 0px 5px; float: left; overflow: hidden; display: block; background: #fff; } .w1 { width: 240px; } .w2 { width: 485px; } .l1 { height: 240px; } </head> <body> <div class="cblock w2 l1" id="bigblock"> <h1>DRAGONS</h1> <p>Here be they</p> </div> <div id="leftcol"> <div class="cblock w1 l1"> <h1>Left 1</h1> <p>1x1 block</p> </div> </div> <div id="rightcol"> <div class="cblock w1 l1"> <h1>Right 1</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 2</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 3</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 4</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 5</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 6</h1> <p>1x1 block</p> </div> <div class="cblock w1 l1"> <h1>Right 7</h1> <p>1x1 block</p> </div> </div> </body> </html> Constraints: One final note that I need cross-browser compatibility, though I'm more than happy to enforce this with JS if necessary. That said, if a CSS-only solution exists, I'd be extremely happy. Thanks in advance!

    Read the article

  • Windows Vista/Win7 Privilege Problem: SeDebugPrivilege & OpenProcess

    - by KevenK
    Everything I've been able to find about escalating to the appropriate privileges for my needs has agreed with my current methods, but the problem exists. I'm hoping maybe someone has some Windows Vista/Win7 internals experience that might shine some light where there is only darkness. I'm sure this will get long, but please bare with me. Context: I'm working on an app that requires accessing the memory of other processes on the current machine. This, obviously, requires administrator rights. It also requires SeDebugPrivilege, which I believe myself to be acquiring correctly, although I question if more privileges aren't necessary and thus the cause of my problems. Code has so far worked successfully on all versions of Windows XP, and on my test Vista32 and Win7x64 environments. Process: Program will Always be run with Administrator Rights. This can be assumed throughout this post. Escalating the current process's Access Token to include SeDebugPrivilege rights. Using EnumProcesses to create a list of current PIDs on the system Opening a handle using OpenProcess with PROCESS_ALL_ACCESS access rights Using ReadProcessMemory to read the memory of the other process. Problem: Everything has been working fine during development and my personal testing (including Windows XP 32 & 64, Windows Vista 32, and Windows 7 x64). However, during a test deployment onto both Windows Vista(32-bit) and Windows 7(64-bit) machines of a colleague, there seems to be a privilege/rights problem with OpenProcess failing with a generic Access Denied error. This occurs both when running as a limited User (as would be expected) and also when run explicitly as Administrator (Right-click Run as Administrator and when run from an Administrator level command prompt). However, this problem has been unreproducible for myself in my test environment. I have witnessed the problem first hand, so I trust that the problem exists. The only difference that I can discern between the actual environment and my test environment is that the actual error is occurring when using a Domain Administrator account at the UAC prompt, whereas my tests (which work with no errors) use a local administrator account at the UAC prompt. It appears that although the credentials being used allow UAC to 'run as administrator', the process is still not obtaining the correct rights to be able to OpenProcess on another process. I am not familiar enough with the internals of Vista/Win7 to know what this might be, and I am hoping someone has an idea of what could be the cause. The Kicker: The person who has reported this error, and who's environment can regularly reproduce this bug, has a small application named along the lines of RunWithDebugEnabled which is a small bootstrap program which appears to escalate its own privileges and then launch the executable passed to it (thus inheriting the escalated privileges). When run with this program, using the same Domain Administrator credentials at UAC prompt, the program works correctly and is able to successfully call OpenProcess and operates as intended. So this is definitely a problem with acquiring the correct privileges, and it is known that the Domain Administrator account is an administrator account that should be able to access the correct rights. (Obviously obtaining this source code would be great, but I wouldn't be here if that were possible). Notes: As noted, the errors reported by the failed OpenProcess attempts are Access Denied. 
According to MSDN documentation of OpenProcess: If the caller has enabled the SeDebugPrivilege privilege, the requested access is granted regardless of the contents of the security descriptor. This leads me to believe that perhaps there is a problem under these conditions either with (1) Obtaining SeDebugPrivileges or (2) Requiring other privileges which have not been mentioned in any MSDN documentation, and which might differ between a Domain Administrator account and a Local Administrator account Sample Code: void sample() { ///////////////////////////////////////////////////////// // Note: Enabling SeDebugPrivilege adapted from sample // MSDN @ http://msdn.microsoft.com/en-us/library/aa446619%28VS.85%29.aspx // Enable SeDebugPrivilege HANDLE hToken = NULL; TOKEN_PRIVILEGES tokenPriv; LUID luidDebug; if(OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken) != FALSE) { if(LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luidDebug) != FALSE) { tokenPriv.PrivilegeCount = 1; tokenPriv.Privileges[0].Luid = luidDebug; tokenPriv.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; if(AdjustTokenPrivileges(hToken, FALSE, &tokenPriv, 0, NULL, NULL) != FALSE) { // Always successful, even in the cases which lead to OpenProcess failure cout << "SUCCESSFULLY CHANGED TOKEN PRIVILEGES" << endl; } else { cout << "FAILED TO CHANGE TOKEN PRIVILEGES, CODE: " << GetLastError() << endl; } } } CloseHandle(hToken); // Enable SeDebugPrivilege ///////////////////////////////////////////////////////// vector<DWORD> pidList = getPIDs(); // Method that simply enumerates all current process IDs ///////////////////////////////////////////////////////// // Attempt to open processes for(int i = 0; i < pidList.size(); ++i) { HANDLE hProcess = NULL; hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pidList[i]); if(hProcess == NULL) { // Error is occurring here under the given conditions cout << "Error opening process PID(" << pidList[i] << "): " << GetLastError() << endl; } CloseHandle(hProcess); } // Attempt to open processes ///////////////////////////////////////////////////////// } Thanks! If anyone has some insight into what possible permissions/privileges/rights/etc that I may be missing to correctly open another process (Assuming the executable has been properly "Run as Administrator"ed) on Windows Vista and Windows 7 under the above conditions, it would be most greatly appreciated. I wouldn't be here if I weren't absolutely stumped, but I'm hopeful that once again the experience and knowledge of the group shines bright. I thank you for taking the time to read this wall of text. The good intentions alone are appreciated, thanks for being the type of person that makes SO so useful to all!

    Read the article

  • WPF Custom Buttons below ListBox Items

    - by Ryan
    WPF Experts - I am trying to add buttons below my custom listbox and also have the scroll bar go to the bottom of the control. Only the items should move and not the buttons. I was hoping for some guidance on the best way to achieve this. I was thinking the ItemsPanelTemplate needed to be modified but was not certain. Thanks My code is below <!-- List Item Selected --> <LinearGradientBrush x:Key="GotFocusStyle" EndPoint="0.5,1" StartPoint="0.5,0"> <LinearGradientBrush.GradientStops> <GradientStop Color="Black" Offset="0.501"/> <GradientStop Color="#FF091F34"/> <GradientStop Color="#FF002F5C" Offset="0.5"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> <!-- List Item Hover --> <LinearGradientBrush x:Key="MouseOverFocusStyle" StartPoint="0,0" EndPoint="0,1"> <LinearGradientBrush.GradientStops> <GradientStop Color="#FF013B73" Offset="0.501"/> <GradientStop Color="#FF091F34"/> <GradientStop Color="#FF014A8F" Offset="0.5"/> <GradientStop Color="#FF003363" Offset="1"/> </LinearGradientBrush.GradientStops> </LinearGradientBrush> <!-- List Item Selected --> <LinearGradientBrush x:Key="LostFocusStyle" EndPoint="0.5,1" StartPoint="0.5,0"> <LinearGradientBrush.RelativeTransform> <TransformGroup> <ScaleTransform CenterX="0.5" CenterY="0.5"/> <SkewTransform CenterX="0.5" CenterY="0.5"/> <RotateTransform CenterX="0.5" CenterY="0.5"/> <TranslateTransform/> </TransformGroup> </LinearGradientBrush.RelativeTransform> <GradientStop Color="#FF091F34" Offset="1"/> <GradientStop Color="#FF002F5C" Offset="0.4"/> </LinearGradientBrush> <!-- List Item Highlight --> <SolidColorBrush x:Key="ListItemHighlight" Color="#FFE38E27" /> <!-- List Item UnHighlight --> <SolidColorBrush x:Key="ListItemUnHighlight" Color="#FF6FB8FD" /> <Style TargetType="ListBoxItem"> <EventSetter Event="GotFocus" Handler="ListItem_GotFocus"></EventSetter> <EventSetter Event="LostFocus" Handler="ListItem_LostFocus"></EventSetter> </Style> <DataTemplate x:Key="CustomListData" DataType="{x:Type ListBoxItem}"> <Border BorderBrush="Black" BorderThickness="1" Margin="-2,0,0,-1"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ListBoxItem}}, Path=ActualWidth}" /> </Grid.ColumnDefinitions> <Label VerticalContentAlignment="Center" BorderThickness="0" BorderBrush="Transparent" Foreground="{StaticResource ListItemUnHighlight}" FontSize="24" Tag="{Binding .}" Grid.Column="0" MinHeight="55" Cursor="Hand" FontFamily="Arial" FocusVisualStyle="{x:Null}" KeyboardNavigation.TabNavigation="None" Background="{StaticResource LostFocusStyle}" MouseMove="ListItem_MouseOver" > <Label.ContextMenu> <ContextMenu Name="editMenu"> <MenuItem Header="Edit"/> </ContextMenu> </Label.ContextMenu> <TextBlock Text="{Binding .}" Margin="15,0,40,0" TextWrapping="Wrap"></TextBlock> </Label> <Image Tag="{Binding .}" Source="{Binding}" Margin="260,0,0,0" Grid.Column="1" Stretch="None" Width="16" Height="22" HorizontalAlignment="Center" VerticalAlignment="Center" /> </Grid> </Border> </DataTemplate> </Window.Resources> <Window.DataContext> <ObjectDataProvider ObjectType="{x:Type local:ImageLoader}" MethodName="LoadImages" /> </Window.DataContext> <ListBox ItemsSource="{Binding}" Width="320" Background="#FF021422" BorderBrush="#FF1C4B79" > <ListBox.Resources> <SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}">Transparent</SolidColorBrush> <Style TargetType="{x:Type ListBox}"> <Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Disabled" /> <Setter 
Property="ItemTemplate" Value="{StaticResource CustomListData }" /> </Style> </ListBox.Resources> </ListBox>

    Read the article

  • Django doesn't refresh my request object when reloading the current page.

    - by Boris Rusev
    I have a Django web site which I want ot be viewable in different languages. Until this morning everything was working fine. Here is the deal. I go to my say About Us page and it is in English. Below it there is the change language button and when I press it everything "magically" translates to Bulgarian just the way I want it. On the other hand I have a JS menu from which the user is able to browse through the products. I click on 'T-Shirt' then a sub-menu opens bellow the previously pressed containing different categories - Men, Women, Children. The link guides me to a page where the exact clothes I have requested are listed. BUT... When I try to change the language THEN, nothing happens. I go to the Abouts Page, change the language from there, return to the clothes catalog and the language is changed... I will no paste some code. This is my change button code: function changeLanguage() { if (getCookie('language') == 'EN') { setCookie("language", 'BG'); } else { setCookie("language", 'EN'); } window.location.reload(); } These are my URL patterns: urlpatterns = patterns('', # Example: # (r'^enter_clothing/', include('enter_clothing.foo.urls')), # Uncomment the admin/doc line below and add 'django.contrib.admindocs' # to INSTALLED_APPS to enable admin documentation: # (r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: (r'^site_media/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/home/boris/Projects/enter_clothing/templates/media', 'show_indexes': True}), (r'^$', 'enter_clothing.clothes_app.views.index'), (r'^home', 'enter_clothing.clothes_app.views.home'), (r'^products', 'enter_clothing.clothes_app.views.products'), (r'^orders', 'enter_clothing.clothes_app.views.orders'), (r'^aboutUs', 'enter_clothing.clothes_app.views.aboutUs'), (r'^contactUs', 'enter_clothing.clothes_app.views.contactUs'), (r'^admin/', include(admin.site.urls)), (r'^(\w+)/(\w+)/page=(\d+)', 'enter_clothing.clothes_app.views.displayClothes'), ) My About Us page: @base def aboutUs(request): return """<b>%s</b>""" % getTranslation("About Us Text", request.COOKIES['language']) The @base method: def base(myfunc): def inner_func(*args, **kwargs): try: args[0].COOKIES['language'] except: args[0].COOKIES['language'] = 'BG' resetGlobalVariables() initCollections(args[0]) categoriesByCollection = dict((collection, getCategoriesFromCollection(args[0], collection)) for collection in collections) if args[0].COOKIES['language'] == 'BG': for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (translateCategory(args[0], x), translateCollection(args[0], k), str(x)), v), "") else: for k, v in categoriesByCollection.iteritems(): categoriesByCollection[k] = reduce(lambda a,b: a+b, map(lambda x: """<li><a href="/%s/%s/page=1">%s</a></li>""" % (str(x), str(k), str(x)), v), "") contents = myfunc(*args, **kwargs) return render_to_response('index.html', {'title': title, 'categoriesByCollection': categoriesByCollection.iteritems(), 'keys': enumerate(keys), 'values': enumerate(values), 'contents': contents, 'btnHome':getTranslation("Home Button", args[0].COOKIES['language']), 'btnProducts':getTranslation("Products Button", args[0].COOKIES['language']), 'btnOrders':getTranslation("Orders Button", args[0].COOKIES['language']), 'btnAboutUs':getTranslation("About Us Button", args[0].COOKIES['language']), 'btnContacts':getTranslation("Contact Us Button", 
args[0].COOKIES['language']), 'btnChangeLanguage':getTranslation("Button Change Language", args[0].COOKIES['language'])}) return inner_func And the catalog page: @base def displayClothes(request, category, collection, page): clothesToDisplay = getClothesFromCollectionAndCategory(request, category, collection) contents = "" pageCount = len(clothesToDisplay) / ( rowCount * columnCount) + 1 matrixSize = rowCount * columnCount currentPage = str(page).replace("page=", "") currentPage = int(currentPage) - 1 #raise Exception(request) # this is for the clothes layout for x in range(currentPage * matrixSize, matrixSize * (currentPage + 1)): if x < len(clothesToDisplay): if request.COOKIES['language'] == 'EN': contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getEnglishHTML() else: contents += """<div class="clothes">%s</div>""" % clothesToDisplay[x].getBulgarianHTML() if (x + 1) % columnCount == 0: contents += """<div class="clear"></div>""" contents += """<div class="clear"></div>""" # this is for the page links if pageCount > 1: for x in range(0, pageCount): if x == currentPage: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: black;">%s</span></a>""" % (category, collection, x + 1, x + 1) else: contents += """<a href="/%s/%s/page=%s"><span style="font-size: 20pt; color: blue;">%s</span></a>""" % (category, collection, x + 1, x + 1) return """%s""" % (contents) Let me explain that you needn't be alarmed by the large quantities of code I have posted. You don't have to understand it or even look at all of it. I've published it just in case because I really can't understand the origins of the bug. Now this is how I have narrowed the problem. I am debuging with "raise Exception(request)" every time I want to know what's inside my request object. When I place this in my aboutUs method, the language cookie value changes every time I press the language button. But NOT when I am in the displayClothes method. There the language stays the same. Also I tried putting the exception line in the beginning of the @base method. It turns out the situation there is exactly the same. When I am in my About Us page and click on the button, the language in my request object changes, but when I press the button while in the catalog page it remains unchanged. That is all I could find, and I have no idea as to how Django distinguishes my pages and in what way. P.S. The JavaScript I think works perfectly, I have tested it in multiple ways. Thank you, I hope some of you will read this enormous post, and don't hesitate to ask for more code excerpts.

    Read the article

  • When adding WCF service reference, configuration details are not added to web.config

    - by Mikey Cee
    Hi, I am trying to add a WCF service reference to my web application using VS2010. It seems to add OK, but the web.config is not updated, meaning I get a runtime exception: Could not find default endpoint element that references contract 'CoolService.CoolService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element. Obviously, because the service is not defined in my web.config. Steps to reproduce: Right click solution Add New Project ASP.NET Empty Web Application. Right click Service References in the new web app Add Service Reference. Enter address of my service and click Go. My service is visible in the left-hand Services section, and I can see all its operations. Type a namespace for my service. Click OK. The service reference is generated correctly, and I can open the Reference.cs file, and it all looks OK. Open the web.config file. It is still empty! <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings /> <client /> </system.serviceModel> Why is this happening? It also happens with a console application, or any other project type I try. Any help? Here is the app.config from my WCF service: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <services> <service name="CoolSQL.Server.WCF.CoolService"> <endpoint address="" binding="webHttpBinding" contract="CoolSQL.Server.WCF.CoolService" behaviorConfiguration="SilverlightFaultBehavior"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/CoolSQL.Server.WCF/CoolService/" /> </baseAddresses> </host> </service> </services> <behaviors> <endpointBehaviors> <behavior name="webBehavior"> <webHttp /> </behavior> <behavior name="SilverlightFaultBehavior"> <silverlightFaults /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <webHttpBinding> <binding name="DefaultBinding" bypassProxyOnLocal="true" useDefaultWebProxy="false" hostNameComparisonMode="WeakWildcard" sendTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:00:10" maxReceivedMessageSize="2147483647" transferMode="Streamed"> <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" /> </binding> </webHttpBinding> </bindings> <extensions> <behaviorExtensions> <add name="silverlightFaults" type="CoolSQL.Server.WCF.SilverlightFaultBehavior, CoolSQL.Server.WCF" /> </behaviorExtensions> </extensions> <diagnostics> <messageLogging logEntireMessage="true" logMalformedMessages="false" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="false" maxMessagesToLog="3000" maxSizeOfMessageToLog="2000" /> </diagnostics> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" /> </startup> <system.diagnostics> <sources> <source name="System.ServiceModel.MessageLogging" switchValue="Information, 
ActivityTracing"> <listeners> <add name="messages" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\messages.e2e" /> </listeners> </source> </sources> </system.diagnostics> </configuration>

    Read the article

  • Why isn't TextBox.Text in WPF animatable?

    - by cplotts
    Ok, I have just run into something that is really catching me off-guard. I was helping a fellow developer with a couple of unrelated questions and in his project he was animating text into some TextBlock(s). So, I went back to my desk and recreated the project (in order to answer his questions), but I accidentally used TextBox instead of TextBlock. My text wasn't animating at all! (A lot of help, I was!) Eventually, I figured out that his xaml was using TextBlock and mine was using TextBox. What is interesting, is that Blend wasn't creating key frames when I was using TextBox. So, I got it to work in Blend using TextBlock(s) and then modified the xaml by hand, converting the TextBlock(s) into TextBox(es). When I ran the project, I got the following error: InvalidOperationException: '(0)' Storyboard.TargetProperty path contains nonanimatable property 'Text'. Well, it seems as if Blend was smart enough to know that ... and not generate the key frames in the animation (it would just modify the value directly on the TextBox). +1 for Blend. So, the question became: why isn't TextBox.Text animatable? The usual answer is that the particular property you are animating isn't a DependencyProperty. But, this isn't the case, TextBox.Text is a DependencyProperty. So, now I am bewildered! Why can't you animate TextBox.Text? Let me include some xaml to illustrate the problem. The following xaml works ... but uses TextBlock(s). <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="TextBoxTextQuestion.MainWindow" x:Name="Window" Title="MainWindow" Width="640" Height="480" > <Window.Resources> <Storyboard x:Key="animateTextStoryboard"> <StringAnimationUsingKeyFrames Storyboard.TargetProperty="(TextBlock.Text)" Storyboard.TargetName="textControl"> <DiscreteStringKeyFrame KeyTime="0:0:1" Value="Goodbye"/> </StringAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Window.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource animateTextStoryboard}"/> </EventTrigger> </Window.Triggers> <Grid x:Name="LayoutRoot"> <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center"> <TextBlock x:Name="textControl" Text="Hello" FontFamily="Calibri" FontSize="32"/> <TextBlock Text="World!" Margin="0,25,0,0" FontFamily="Calibri" FontSize="32"/> </StackPanel> </Grid> </Window> The following xaml does not work and uses TextBox.Text: <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="TextBoxTextQuestion.MainWindow" x:Name="Window" Title="MainWindow" Width="640" Height="480" > <Window.Resources> <Storyboard x:Key="animateTextStoryboard"> <StringAnimationUsingKeyFrames Storyboard.TargetProperty="(TextBox.Text)" Storyboard.TargetName="textControl"> <DiscreteStringKeyFrame KeyTime="0:0:1" Value="Goodbye"/> </StringAnimationUsingKeyFrames> </Storyboard> </Window.Resources> <Window.Triggers> <EventTrigger RoutedEvent="FrameworkElement.Loaded"> <BeginStoryboard Storyboard="{StaticResource animateTextStoryboard}"/> </EventTrigger> </Window.Triggers> <Grid x:Name="LayoutRoot"> <StackPanel Orientation="Vertical" HorizontalAlignment="Center" VerticalAlignment="Center"> <TextBox x:Name="textControl" Text="Hello" FontFamily="Calibri" FontSize="32"/> <TextBox Text="World!" Margin="0,25,0,0" FontFamily="Calibri" FontSize="32"/> </StackPanel> </Grid> </Window>
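One way to poke at this from code: WPF refuses to animate a dependency property whose metadata prohibits animation, and my assumption is that this flag is what differs between TextBox.Text and TextBlock.Text, which would line up with the behaviour described above. The snippet is only my own diagnostic sketch, not part of the question:

    using System;
    using System.Windows;
    using System.Windows.Controls;

    static void CheckAnimatable()
    {
        // If the animation-prohibited flag really is the difference, this should print
        // True for TextBox.Text and False for TextBlock.Text.
        var textBoxMeta = TextBox.TextProperty.GetMetadata(typeof(TextBox)) as UIPropertyMetadata;
        var textBlockMeta = TextBlock.TextProperty.GetMetadata(typeof(TextBlock)) as UIPropertyMetadata;
        Console.WriteLine(textBoxMeta != null && textBoxMeta.IsAnimationProhibited);
        Console.WriteLine(textBlockMeta != null && textBlockMeta.IsAnimationProhibited);
    }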

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this is a big deal. It may be from a lack of experience, but I have to imagine that the amount of times you swap out technologies pales in comparison to the amount of times you go in and edit vertical feature slices within your system. So I guess the question then would go out to you, how do you name your packages and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this with mostly academic experience. However, I can't spot the holes in my reasoning so I'm hoping you all can so that I can move on. Thanks in advance!

    Read the article

  • Please help me correct the small bugs in this image editor

    - by Alex
    Hi, I'm working on a website that will sell hand made jewelry and I'm finishing the image editor, but it's not behaving quite right. Basically, the user uploads an image which will be saved as a source and then it will be resized to fit the user's screen and saved as a temp. The user will then go to a screen that will allow them to crop the image and then save it to it's final versions. All of that works fine, except, the final versions have 3 bugs. First is some black horizontal line on the very bottom of the image. Second is an outline of sorts that follows the edges. I thought it was because I was reducing the quality, but even at 100% it still shows up... And lastly, I've noticed that the cropped image is always a couple of pixels lower than what I'm specifying... Anyway, I'm hoping someone whose got experience in editing images with C# can maybe take a look at the code and see where I might be going off the right path. Oh, by the way, this in an ASP.NET MVC application. Here's the code: using System; using System.Drawing; using System.Drawing.Drawing2D; using System.Drawing.Imaging; using System.IO; using System.Linq; using System.Web; namespace Website.Models.Providers { public class ImageProvider { private readonly ProductProvider ProductProvider = null; private readonly EncoderParameters HighQualityEncoder = new EncoderParameters(); private readonly ImageCodecInfo JpegCodecInfo = ImageCodecInfo.GetImageEncoders().Single( c => (c.MimeType == "image/jpeg")); private readonly string Path = HttpContext.Current.Server.MapPath("~/Resources/Images/Products"); private readonly short[][] Dimensions = new short[3][] { new short[2] { 640, 480 }, new short[2] { 240, 0 }, new short[2] { 80, 60 } }; ////////////////////////////////////////////////////////// // Constructor ////////////////////////////////////////// ////////////////////////////////////////////////////////// public ImageProvider( ProductProvider ProductProvider) { this.ProductProvider = ProductProvider; HighQualityEncoder.Param[0] = new EncoderParameter(Encoder.Quality, 100L); } ////////////////////////////////////////////////////////// // Crop ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void Crop( string FileName, Image Image, Crop Crop) { using (Bitmap Source = new Bitmap(Image)) { using (Bitmap Target = new Bitmap(Crop.Width, Crop.Height)) { using (Graphics Graphics = Graphics.FromImage(Target)) { Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; Graphics.SmoothingMode = SmoothingMode.HighQuality; Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; Graphics.CompositingQuality = CompositingQuality.HighQuality; Graphics.DrawImage(Source, new Rectangle(0, 0, Target.Width, Target.Height), new Rectangle(Crop.Left, Crop.Top, Crop.Width, Crop.Height), GraphicsUnit.Pixel); }; Target.Save(FileName, JpegCodecInfo, HighQualityEncoder); }; }; } ////////////////////////////////////////////////////////// // Crop & Resize ////////////////////////////////////// ////////////////////////////////////////////////////////// public void CropAndResize( Product Product, Crop Crop) { using (Image Source = Image.FromFile(String.Format("{0}/{1}.source", Path, Product.ProductId))) { using (Image Temp = Image.FromFile(String.Format("{0}/{1}.temp", Path, Product.ProductId))) { float Percent = ((float)Source.Width / (float)Temp.Width); short Width = (short)(Temp.Width * Percent); short Height = (short)(Temp.Height * Percent); Crop.Height = (short)(Crop.Height * 
Percent); Crop.Left = (short)(Crop.Left * Percent); Crop.Top = (short)(Crop.Top * Percent); Crop.Width = (short)(Crop.Width * Percent); Img Img = new Img(); this.ProductProvider.AddImageAndSave(Product, Img); this.Crop(String.Format("{0}/{1}.cropped", Path, Img.ImageId), Source, Crop); using (Image Cropped = Image.FromFile(String.Format("{0}/{1}.cropped", Path, Img.ImageId))) { this.Resize(this.Dimensions[0], String.Format("{0}/{1}-L.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); this.Resize(this.Dimensions[1], String.Format("{0}/{1}-T.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); this.Resize(this.Dimensions[2], String.Format("{0}/{1}-S.jpg", Path, Img.ImageId), Cropped, HighQualityEncoder); }; }; }; this.Purge(Product); } ////////////////////////////////////////////////////////// // Queue ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void QueueFor( Product Product, HttpPostedFileBase PostedFile) { using (Image Image = Image.FromStream(PostedFile.InputStream)) { this.Resize(new short[2] { 1152, 0 }, String.Format("{0}/{1}.temp", Path, Product.ProductId), Image, HighQualityEncoder); }; PostedFile.SaveAs(String.Format("{0}/{1}.source", Path, Product.ProductId)); } ////////////////////////////////////////////////////////// // Purge ////////////////////////////////////////////// ////////////////////////////////////////////////////////// private void Purge( Product Product) { string Source = String.Format("{0}/{1}.source", Path, Product.ProductId); string Temp = String.Format("{0}/{1}.temp", Path, Product.ProductId); if (File.Exists(Source)) { File.Delete(Source); }; if (File.Exists(Temp)) { File.Delete(Temp); }; foreach (Img Img in Product.Imgs) { string Cropped = String.Format("{0}/{1}.cropped", Path, Img.ImageId); if (File.Exists(Cropped)) { File.Delete(Cropped); }; }; } ////////////////////////////////////////////////////////// // Resize ////////////////////////////////////////////// ////////////////////////////////////////////////////////// public void Resize( short[] Dimensions, string FileName, Image Image, EncoderParameters EncoderParameters) { if (Dimensions[1] == 0) { Dimensions[1] = (short)(Image.Height / ((float)Image.Width / (float)Dimensions[0])); }; using (Bitmap Bitmap = new Bitmap(Dimensions[0], Dimensions[1])) { using (Graphics Graphics = Graphics.FromImage(Bitmap)) { Graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; Graphics.SmoothingMode = SmoothingMode.HighQuality; Graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; Graphics.CompositingQuality = CompositingQuality.HighQuality; Graphics.DrawImage(Image, 0, 0, Dimensions[0], Dimensions[1]); }; Bitmap.Save(FileName, JpegCodecInfo, EncoderParameters); }; } } } Here's one of the images this produces:

    Read the article

  • T-SQL Dynamic SQL and Temp Tables

    - by George
    It looks like #temptables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQLs in the same stored procedure. However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL, but it seems that a stored procedure does not return a query result to a calling client unless the SQL is fixed.

A simple 2-table scenario: I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key to identify the parent Order. An Order can have 1 to n Items.

I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select what Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Order table. If an Item meets the filter condition, including any condition on the parent Order if one exists, the Item should be returned in the query as well as the parent Order. Usually, I suppose, most people would construct a join between the Item table and the parent Order table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold and you may or may not agree. The first reason is that I need to query all of the columns in the parent Order table, and if I did a single query to join the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Item client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I could reuse the client code to populate my custom Order and Item client objects from the 2 datatables returned.

What I was hoping to do was this: construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table as specified by the custom filter created on the Winform fat-client app. The SQL built on the client would have looked something like this: TempSQL = " INSERT INTO #ItemsToQuery (OrderId, ItemId) SELECT Orders.OrderId, Items.ItemId FROM Orders, Items WHERE Orders.OrderID = Items.OrderId AND /* Some unpredictable Order filters go here */ AND /* Some unpredictable Items filters go here */ " Then, I would call a stored procedure: CREATE PROCEDURE GetItemsAndOrders(@tempSql as text) Execute (@tempSQL) --to create the #ItemsToQuery table SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery) SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery) The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQLs, and if I change the static SQLs to dynamic, no results are passed back to the fat client.
3 workarounds come to mind, but I'm looking for a better one: 1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which, because it was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new Table type param instead of XML.) However, I would like to avoid passing up potentially large lists to a stored procedure. 2) I could perform all the queries from the client. The first would be something like this: SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter) SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter) This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables. I have a feeling, too, that I might have some options using a Table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon feeding on that one. If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
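On the Table type parameter option mentioned above: from the client side it is just a structured SqlParameter. The sketch below is only my own illustration of that idea; the table type name, column names and procedure signature are made up, and a matching CREATE TYPE and procedure parameter on the server are assumed.

    using System.Data;
    using System.Data.SqlClient;

    static void SendItemKeys(string connectionString, DataTable itemsToQuery)
    {
        // itemsToQuery has columns (OrderId int, ItemId int), matching a server-side
        // table type such as: CREATE TYPE dbo.ItemsToQueryType AS TABLE (OrderId int, ItemId int)
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetItemsAndOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            var parameter = command.Parameters.AddWithValue("@ItemsToQuery", itemsToQuery);
            parameter.SqlDbType = SqlDbType.Structured;   // table-valued parameter
            parameter.TypeName = "dbo.ItemsToQueryType";
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // first result set: Items; reader.NextResult() moves to the Orders result set
            }
        }
    }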

    Read the article

  • treeview binding wpf cannot bind nested property in a class

    - by devnet247
    Hi all. I'm new to WPF and therefore struggling a bit. I am putting together a quick demo before we go for the full implementation. I have a TreeView on the left with a Continent > Country > City structure; when a user selects a city, it should populate some textboxes in a TabControl on the right-hand side. I made it sort of work, but I cannot make it work with composite objects. In a nutshell, can you spot what is wrong with my XAML or code? Why is it not binding to my CityDetails.ClubsCount or CityDetails.PubsCount? What I am building is based on http://www.codeproject.com/KB/WPF/TreeViewWithViewModel.aspx Thanks a lot for any suggestions or replies.

    DataModel:

        public class City
        {
            public City(string cityName)
            {
                CityName = cityName;
            }
            public string CityName { get; set; }
            public string Population { get; set; }
            public string Area { get; set; }
            public CityDetails CityDetailsInfo { get; set; }
        }

        public class CityDetails
        {
            public CityDetails(int pubsCount, int clubsCount)
            {
                PubsCount = pubsCount;
                ClubsCount = clubsCount;
            }
            public int ClubsCount { get; set; }
            public int PubsCount { get; set; }
        }

    ViewModel:

        public class CityViewModel : TreeViewItemViewModel
        {
            private City _city;
            private RelayCommand _testCommand;

            public CityViewModel(City city, CountryViewModel countryViewModel)
                : base(countryViewModel, false)
            {
                _city = city;
            }

            public string CityName { get { return _city.CityName; } }
            public string Area { get { return _city.Area; } }
            public string Population { get { return _city.Population; } }

            public City City
            {
                get { return _city; }
                set { _city = value; }
            }

            public CityDetails CityDetailsInfo
            {
                get { return _city.CityDetailsInfo; }
                set { _city.CityDetailsInfo = value; }
            }
        }

    XAML:

        <DockPanel>
          <DockPanel LastChildFill="True">
            <Label DockPanel.Dock="Top" Content="Title " HorizontalAlignment="Center"></Label>
            <StatusBar DockPanel.Dock="Bottom">
              <StatusBarItem Content="Status Bar"></StatusBarItem>
            </StatusBar>
            <Grid DockPanel.Dock="Top">
              <Grid.ColumnDefinitions>
                <ColumnDefinition/>
                <ColumnDefinition Width="Auto"/>
                <ColumnDefinition Width="2*"/>
              </Grid.ColumnDefinitions>
              <TreeView Name="tree" ItemsSource="{Binding Continents}">
                <TreeView.ItemContainerStyle>
                  <Style TargetType="{x:Type TreeViewItem}">
                    <Setter Property="IsExpanded" Value="{Binding IsExpanded, Mode=TwoWay}"/>
                    <Setter Property="IsSelected" Value="{Binding IsSelected, Mode=TwoWay}"/>
                    <Setter Property="FontWeight" Value="Normal"/>
                    <Style.Triggers>
                      <Trigger Property="IsSelected" Value="True">
                        <Setter Property="FontWeight" Value="Bold"></Setter>
                      </Trigger>
                    </Style.Triggers>
                  </Style>
                </TreeView.ItemContainerStyle>
                <TreeView.Resources>
                  <HierarchicalDataTemplate DataType="{x:Type ViewModels:ContinentViewModel}" ItemsSource="{Binding Children}">
                    <StackPanel Orientation="Horizontal">
                      <Image Width="16" Height="16" Margin="3,0" Source="Images\Continent.png"/>
                      <TextBlock Text="{Binding ContinentName}"/>
                    </StackPanel>
                  </HierarchicalDataTemplate>
                  <HierarchicalDataTemplate DataType="{x:Type ViewModels:CountryViewModel}" ItemsSource="{Binding Children}">
                    <StackPanel Orientation="Horizontal">
                      <Image Width="16" Height="16" Margin="3,0" Source="Images\Country.png"/>
                      <TextBlock Text="{Binding CountryName}"/>
                    </StackPanel>
                  </HierarchicalDataTemplate>
                  <DataTemplate DataType="{x:Type ViewModels:CityViewModel}">
                    <StackPanel Orientation="Horizontal">
                      <Image Width="16" Height="16" Margin="3,0" Source="Images\City.png"/>
                      <TextBlock Text="{Binding CityName}"/>
                    </StackPanel>
                  </DataTemplate>
                </TreeView.Resources>
              </TreeView>
              <GridSplitter Grid.Row="0" Grid.Column="1" Background="LightGray"
                            Width="5" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/>
              <Grid Grid.Column="2" Margin="5">
                <TabControl>
                  <TabItem Header="Details" DataContext="{Binding Path=SelectedItem.City, ElementName=tree, Mode=OneWay}">
                    <StackPanel>
                      <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding CityName}"/>
                      <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding Area}"/>
                      <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding Population}"/>
                      <!-- DOESN'T WORK - WHY? -->
                      <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding SelectedItem.CityDetailsInfo.ClubsCount}"/>
                      <TextBlock VerticalAlignment="Center" FontSize="12" Text="{Binding SelectedItem.CityDetailsInfo.PubsCount}"/>
                    </StackPanel>
                  </TabItem>
                </TabControl>
              </Grid>
            </Grid>
          </DockPanel>
        </DockPanel>

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some info:

    - The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
    - Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
    - I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when a step is done, not Send() them.
    - A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message.
    - A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good; otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages.

    - NodeA workers contain an IHandleMessages processor and publish EventA.
    - NodeB workers contain an IHandleMessages processor and publish EventB.
    - NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control"
                          DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control"
                          DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control"
                          DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen:

    1) Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    2) Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it at all.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies.

    So, my questions:

    - Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    - Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength.
    - And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish() that enables later extensibility on published events?
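    For orientation only, a minimal sketch of what a NodeB worker's handler might contain under the setup described above. EventA, EventB, and the Messages assembly come from the question; the WorkItemId property, the handler class name, and the NServiceBus 2.x-style IMessage/IBus usage are assumptions.

        using System;
        using NServiceBus;

        // Assumed message definitions in the shared Messages assembly (sketch only).
        namespace Messages
        {
            public class EventA : IMessage { public Guid WorkItemId { get; set; } }
            public class EventB : IMessage { public Guid WorkItemId { get; set; } }
        }

        // A NodeB worker: handles EventA published by NodeA, does its (expensive) step,
        // then publishes EventB so NodeC - and any later subscribers - can react.
        public class StepBHandler : IHandleMessages<Messages.EventA>
        {
            public IBus Bus { get; set; }   // injected by NServiceBus

            public void Handle(Messages.EventA message)
            {
                // ... computationally expensive work for Step B goes here ...

                Bus.Publish(new Messages.EventB { WorkItemId = message.WorkItemId });
            }
        }

    Note that the handler code itself is the easy part; which input queue the subscription message lands in is still governed by the UnicastBusConfig mappings above, which is where the behavior in case 2 appears to diverge.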

    Read the article

  • website creation - for non-web programmers?

    - by Tim
    I thought I would find decent questions and answers for this, but none really caught my eye... I am a C++ developer and I own a few domains. I'd like to start off with simple websites for each, with a minimum of time, fuss, and learning. I have too many projects going and don't have the time to learn how to build websites.

    One is for a company that currently has only a single product, with custom development as well. I hacked together some really bad HTML with PayPal links on it. It is just one simple product. I want to add UserVoice to it and maybe some other stuff like a FAQ, forums, etc. Right now I just link to a Google group I created. Another is a startup in the development phase, but we want to provide simple content like whitepapers and press releases and a section for investors - mostly an "about us" type of thing. We will also be providing details about our product. Then there is a blog site - currently using GoDaddy's Quick Blogcast. Not a bad start, but I suspect I want to move to something else.

    The question is: is there a framework that I can use that will make decent, if not outstanding, sites? Again, I have my hands full with three projects in addition to my day job and don't have time to learn web programming. I also don't want to just pay a web person and then be out in the cold for upgrades, changes, etc. I have been burned before. I am happy with a web-based app or a desktop app that builds HTML or whatever, which I can then FTP up to the hosting servers.

    To summarize:
    - simple to get started
    - low time to get a web page going
    - ability to integrate with a few hand-done pages
    - PayPal integration
    - UserVoice integration
    - ability to put it under my SVN would be nice too

    EDIT: Thanks to the responders. I understand now why my original searches failed: I was not searching for "CMS". I'll go back and do that. I would expect that this is a many-times-duplicate...

    EDIT: I am considering using WordPress and Drupal - one for each of the sites. I did one Drupal site quickly just so I could qualify for one of the Microsoft programs for discounted dev tools - anyway, it was a quick and dirty homepage and I am still on the learning curve. I look forward to playing with it. So far it has been OK. I am not sure about doing a taste-test between the two - it might be a waste of time when I could just become that much better at Drupal faster than spending time on WordPress... Will keep this updated.

    EDIT: Selecting the Drupal answer by slim for now. That is what I am going with. I don't have time to check them all out. WordPress sounds like a good option too, but such limited time...

    Results: I have tried WordPress and Drupal so far. WordPress is great for blogging or for a site that you want to run ads from, but I disagree that it is ready for a corporate site, unless you want to spend lots of time making your own theme, etc. But if you spend that time, why not work with Drupal? Drupal was a little intimidating at first, but spending about 4 hours reading the overview and the step-by-step guide online was a HUGE step. I got a simple site up and running easily after that. Trying to make a website just by going to the admin panel without reading anything is a waste of time. You really need to read the docs. The site is great. Start here: http://drupal.org/getting-started I'd suggest Drupal to anyone. It has amazing capabilities, lots of support, and lots of users. Just doing blogging? WordPress is really great for that.
    So now I've got two sites running with a lot of the functionality I wanted - and they look good.

    ONE MORE EDIT: Well, I have switched back to WordPress after buying a theme and then getting help from web developers. I guess either one will work - it is just a matter of getting comfortable with the basics, using the right tools, and trying things out.

    Read the article

  • Stack overflow error after creating an instance using 'new'

    - by Justin
    EDIT - The code looks strange here, so I suggest viewing the files directly in the link given. While working on my engine, I came across an issue that I'm unable to resolve. I'm hoping to fix this without any heavy modification; the code is below.

        void Block::DoCollision(GameObject* obj){
            obj->DoCollision(this);
        }

    That is where the stack overflow occurs. This application works perfectly fine until I create two instances of the class using the new keyword. If I only had one instance of the class, it worked fine.

        Block* a = new Block(0, 0, 0, 5);
        AddGameObject(a);
        a = new Block(30, 0, 0, 5);
        AddGameObject(a);

    Those parameters are just x, y, z and size. The code is checked beforehand. Only an object with a matching collision flag and collision type will trigger the DoCollision() function: ((*list1)->m_collisionFlag & (*list2)->m_type). Maybe my check is messed up, though. I attached the files concerned here http://celestialcoding.com/index.php?topic=1465.msg9913;topicseen#new. You can download them without having to sign up. I also pasted the code for the main suspects below.

    From GameManager.cpp:

        void GameManager::Update(float dt){
            GameList::iterator list1;
            for(list1 = m_gameObjectList.begin(); list1 != m_gameObjectList.end(); ++list1){
                GameObject* temp = *list1;
                // Update logic and positions
                if((*list1)->m_active){
                    (*list1)->Update(dt);
                    // Clip((*list1)->m_position); // Modify for bounce affect
                }
                else continue;
                // Check for collisions
                if((*list1)->m_collisionFlag != GameObject::TYPE_NONE){
                    GameList::iterator list2;
                    for(list2 = m_gameObjectList.begin(); list2 != m_gameObjectList.end(); ++list2){
                        if(!(*list2)->m_active) continue;
                        if(list1 == list2) continue;
                        if( (*list2)->m_active &&
                            ((*list1)->m_collisionFlag & (*list2)->m_type) &&
                            (*list1)->IsColliding(*list2)){
                            (*list1)->DoCollision((*list2));
                        }
                    }
                }
                if(list1 == m_gameObjectList.end()) break;
            }
            GameList::iterator end    = m_gameObjectList.end();
            GameList::iterator newEnd = remove_if(m_gameObjectList.begin(), m_gameObjectList.end(), RemoveNotActive);
            if(newEnd != end)
                m_gameObjectList.erase(newEnd, end);
        }

        void GameManager::LoadAllFiles(){
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Top.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Right.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Back.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Left.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Front.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Models/Skybox/Images/Bottom.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Terrain/Textures/Terrain1.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Terrain/Textures/Terrain2.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Terrain/Details/TerrainDetails.bmp", GetNextFreeID());
            LoadSkin(m_gameTextureList, "Terrain/Textures/Water1.bmp", GetNextFreeID());

            Block* a = new Block(0, 0, 0, 5);
            AddGameObject(a);
            a = new Block(30, 0, 0, 5);
            AddGameObject(a);

            Player* d = new Player(0, 100, 0);
            AddGameObject(d);
        }

        void Block::Draw(){
            glPushMatrix();
            glTranslatef(m_position.x(), m_position.y(), m_position.z());
            glRotatef(m_facingAngle, 0, 1, 0);
            glScalef(m_size, m_size, m_size);
            glBegin(GL_LINES);
            glColor3f(255, 255, 255);
            glVertex3f(m_boundingRect.left,  m_boundingRect.top,    m_position.z());
            glVertex3f(m_boundingRect.right, m_boundingRect.top,    m_position.z());
            glVertex3f(m_boundingRect.left,  m_boundingRect.bottom, m_position.z());
            glVertex3f(m_boundingRect.right, m_boundingRect.bottom, m_position.z());
            glVertex3f(m_boundingRect.left,  m_boundingRect.top,    m_position.z());
            glVertex3f(m_boundingRect.left,  m_boundingRect.bottom, m_position.z());
            glVertex3f(m_boundingRect.right, m_boundingRect.top,    m_position.z());
            glVertex3f(m_boundingRect.right, m_boundingRect.bottom, m_position.z());
            glEnd();
            // DrawBox(m_position.x(), m_position.y(), m_position.z(), m_size, m_size, m_size, 8);
            glPopMatrix();
        }

        void Block::DoCollision(GameObject* obj){
            GameObject* t = this;   // I modified this to see for sure that it was causing the mistake.
            // obj->DoCollision(NULL);
            // Just revert it back to
            /*
            void Block::DoCollision(GameObject* obj){
                obj->DoCollision(this);
            }
            */
        }
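    For what it's worth, the original obj->DoCollision(this) is consistent with the overflow: with two Blocks colliding, each call forwards to the other Block's DoCollision(GameObject*), which forwards straight back, so the recursion never terminates. Below is a minimal, self-contained sketch (independent of the engine code above, with made-up details) of that pattern and of the classic double-dispatch overload that ends the recursion.

        #include <iostream>

        struct Block; // forward declaration for the second-dispatch overload

        struct GameObject {
            virtual ~GameObject() {}
            virtual void DoCollision(GameObject* obj) = 0; // first dispatch
            virtual void DoCollision(Block* block) {}      // second dispatch; default: ignore Blocks
        };

        struct Block : GameObject {
            // First dispatch: 'this' is a Block*, so overload resolution on the other
            // object picks the Block* overload instead of bouncing back to this one.
            void DoCollision(GameObject* obj) { obj->DoCollision(this); }

            // Second dispatch: both concrete types are now known, so the collision is
            // resolved here with no further virtual calls - and no infinite recursion.
            void DoCollision(Block* other) {
                std::cout << "Block vs Block collision resolved once\n";
            }
        };

        int main() {
            Block a, b;
            GameObject* first  = &a;
            GameObject* second = &b;
            first->DoCollision(second); // prints once; with only the GameObject* overload,
                                        // this pattern recurses until the stack overflows
            return 0;
        }

    Whether this fits the engine depends on how many collidable types there are; the sketch only illustrates why two Blocks created with new would blow the stack while a single instance never triggers the mutual call.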

    Read the article

< Previous Page | 189 190 191 192 193 194 195 196 197 198  | Next Page >