Search Results

Search found 10388 results on 416 pages for 'hardware selection'.


  • CodePlex Daily Summary for Tuesday, March 09, 2010

    CodePlex Daily Summary for Tuesday, March 09, 2010

    New Projects

    - .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: .NET Excel Wrapper encapsulates the complexity of working with multiple Excel objects, giving you one central point to do all your processing. It h...
    - Advancement Voyage: Advancement Voyage is a high-quality RPG experience that provides all the advancement and voyaging that a player could hope for.
    - ASP.Net Routing configuration: ASP.NET routing configuration enables you to configure the routes in the web.config.
    - bbinjest: bbinjest
    - BuildUp: BuildUp is a build number increment tool for C# .NET projects. It is run as a post-build step in Visual Studio.
    - Controlled Vocabulary: This project is devoted to creating tools to assist with controlling vocabulary in communication. The initial delivery is an Outlook 2010 Add-in w...
    - CycleList: A replacement for the WPF ListBox control. Displays only a single item and allows the user to change the selected item by clicking on it once. Very...
    - Forensic Suite: A suite of security software.
    - FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat is a live chat solution, and its DotNetNuke Chat Module helps to embed a live chat room into a website with DotNetNuke (DNN) integrated ...
    - HouseFly experimental controls: Experimental controls for use in HouseFly.
    - ICatalogAll: junk
    - MidiStylus: MidiStylus allows you to control MIDI-enabled hardware or software using your pressure-sensitive pen tablet. The program maps the X position, Y po...
    - myTunes: Search for your favorite artists.
    - NColony - Pluggable Socialism?: NColony will maximize the use of MEF to create flexible application architectures through a suite of plug-in solutions. If MEF is an outlet for plu...
    - Network Monitor Decryption Expert: NmDecrypt is a Network Monitor Expert which, when given a trace with encrypted frames, a security certificate, and a passkey, will create a new trace...
    - occulo: occulo is a free steganography program, meant to embed files within images with optional encryption.
    - Open Ant: An implementation of an open-source Ant, created to show what is possible in the serious game AntMe! The first implementation of that project.
    - Programming Patterns by example: Design patterns provide solutions to common software design problems. This project will contain samples, written in C# and Ruby, of each design pat...
    - project4k: Developing a bulk mail system storing email information.
    - Quail - Selenium Remote Control Made Easy: Quail makes it easy for Quality Assurance departments to write automated tests against web applications. Both HTML and Silverlight applications can b...
    - RedBulb for XNA Framework: RedBulb is a collection of utility functions and classes that make writing games with XNA a lot easier. Key features: Console, GUI (Labels, Buttons,...
    - RegExpress: RegExpress is a WPF application that combines interactive demos of regular expressions with slide content. This was designed for a user group prese...
    - RemoveFolder: Small utility program to remove empty folders.
    - ScrumTFS: ScrumTFS
    - SharePoint - Open internal link in new window list definition: A simple SharePoint list definition to render SharePoint internal links with the option to open them in a new window.
    - SqlSiteMap4MVC: SqlSiteMapProvider for ASP.Net MVC.
    - T Sina .NET Client: t.sina.com.cn API (the Sina Weibo API).
    - Test-Lint-Extensions: Test Lint is a free Typemock VS 2010 extension that finds common problems in your unit tests as you type them. This project will host extensions ...
    - ThinkGearNET: ThinkGearNET is a library for easy usage of the Neurosky Mindset headset from .NET.
    - Wiki to Maml: This project enables you to write wiki syntax and have it converted into MAML syntax for Sandcastle documentation projects.
    - WPF Undo/Redo Framework: This project attempts to solve the age-old programmer problem of supporting unlimited undo/redo in an application, in an easily reusable manner. Th...
    - WPFValidators: WPF Validators - field validations for WPF.
    - WSP Listener: The WSP Listener is a Windows service application which waits for new WSC and WSP files in a specific folder. If a new WSC and WSP file are added, ...

    New Releases

    - .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: First Release: This is the first release, which includes the main library.
    - .NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: Updated Version: New features: SetRangeValue using a multidimensional array; print the current worksheet; print all worksheets; format ranges' background, color, alig...
    - ArkSwitch: ArkSwitch v1.1.2: This release removes all memory reporting information and is focused on stability.
    - BattLineSvc: V2.1: Fixed a bug where, on system start-up, it would pop up a notification box to let you know the service started. Annoying! And fixed! Also fixed the ...
    - BuildUp: BuildUp 1.0 Alpha 1: Use at your own risk! Not yet feature complete. Basic build incrementing and attribute overriding work. Still working on cascading build incremen...
    - Controlled Vocabulary: 1.0.0.1: Initial alpha release. System requirements: Outlook 2010, .NET Framework 3.5. Installation: 1. Close Outlook (use Task Manager to ensure no running i...
    - CycleList: CycleList: The binaries contain the .NET 3.5 DLL ONLY. Please download the source for usage examples.
    - FluentNHibernate.Search: 0.3 Beta: 0.3 Beta contains the following changes. Mappings: field mapping without specifying "Name"; Id mapping without specifying "Field"; built-in anal...
    - FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat DNN Chat Module: With the free DotNetNuke Chat Module of 123 Flash Chat, webmasters can add a chat room to DotNetNuke instantly, helping to attract more ...
    - GameStore League Manager: League Manager 1.0 release 3: This release includes a full installer so that you can get your league running faster and generate interest quicker.
    - iExporter - iTunes playlist exporting: iExporter gui v2.3.1.0 - console v1.2.1.0: Paypal donate! Solved a big bug for iExporter (GUI & console): when a track isn't located under the main iTunes library, iExporter would crash! ...
    - jQuery.cssLess: jQuery.cssLess 0.3: New: removed the dependency on XRegExp; added comment support (both CSS style and C style); optimised it for speed; added a speed test. TOD...
    - jQuery.cssLess: jQuery.cssLess 0.4: NEW: @import directive; preserving of comments in the resulting CSS; code refactoring; more class-oriented approach. TODO: implement operation...
    - MapWindow GIS: MapWindow 6.0 msi (March 8): Rewrote the shapefile saving code in the indexed case so that it uses the shape indices rather than trying to create features. This should allow s...
    - MidiStylus: MidiStylus 0.5.1: MidiStylus Beta 0.5.1. This release contains basic functionality for transmitting MIDI data based on X position, Y position, and pressure value rea...
    - MiniTwitter: 1.09.1: MiniTwitter 1.09.1 update notes: fixed a bug where the shortened URL was mangled when the URL contained an &.
    - Mosaictor: first executable: .exe file of the app in its current state. Mind you that this will likely be highly unstable due to heaps of uncaught errors.
    - MvcContrib a Codeplex Foundation project: T4MVC: T4MVC is a T4 template that generates strongly typed helpers for ASP.NET MVC. You can download it below, and check out the documentation here.
    - N2 CMS: 2.0 beta: Major changes: ASP.NET MVC 2 templates; refreshed management UI; LINQ support; performance improvements; auto image resize; upgrade; make a comp...
    - NotesForGallery: ASP.NET AJAX Photo Gallery Control: NotesForGallery 2.0: Presentation: NotesForGallery is an open-source control on top of the Microsoft ASP.NET AJAX framework for easily displaying image galleries in the as...
    - occulo: occulo 0.1 binaries: Windows binaries. Tested on Windows XP SP2.
    - occulo: occulo 0.1 source: Initial source release.
    - Open NFe: DANFE 1.9.5: Layout adjustments and fixes to the ISS fields.
    - patterns & practices Web Client Developer Guidance: Web Application Guidance -- March 8th Drop: This iteration we focused on documentation and bug fixes.
    - PoshConsole: PoshConsole 2.0 Beta: With this release, I am refocusing PoshConsole... It will be a PowerShell 2 host, without support for PowerShell 1.0. I have used some of the new P...
    - Quick Performance Monitor: QPerfmon 1.1: Now you can specify different updating frequencies.
    - RedBulb for XNA Framework: Cipher Puzzle (Sample) Creators Club Package: RedBulb sample game: Cipher Puzzle. http://bayimg.com/image/galgfaacb.jpg
    - RedBulb for XNA Framework: RedBulbStarter (Base Code): This is the code you need to start with. Quick start guide: download the latest version of RedBulb: http://redbulb.codeplex.com/releases/view/415...
    - RoTwee: RoTwee 7.0.0.0 (Alpha): This version is undergoing improvement of its code structure and may be buggy. However, the rotation movement is quite good in this version thanks to clea...
    - SCSI Interface for Multimedia and Block Devices: Release 9 - Improvements and Bug Fixes: Changes I have made in this version: fixed an INQUIRY command timeout problem; lowered ISOBurn's memory usage significantly by not explicitly setting...
    - SharePoint - Open internal link in new window list definition: Open link in new window list definition: First release, with English and Italian localization support.
    - SharePoint Outlook Connector: Version 1.2.3.2: A few bug fixes and some UI enhancements.
    - SysI: sysi, release build: Better than ever -- now allows for escalation to admin.
    - The Silverlight Hyper Video Player [http://slhvp.com]: Beta 1: Beta (1.1). The code is ready for intensive testing. I will update the code at least every second day until we are ready to freeze for V1, which wi...
    - Truecrafting: Truecrafting 0.52: Fixed several trinkets that broke just before 0.51 was released, sorry. Fixed the water elemental not doing anything while summoned if not using glyph o...
    - Truecrafting: Truecrafting 0.53: Fixed mp5 calculations when gear contained mp5 and made the formulas more efficient. No need to rebuild profiles with this release if placed in th...
    - umbracoSamplePackageCreator (beta): Working Beta: For Visual Studio 2008, creating packages for Umbraco 4.0.3.
    - VCC: Latest build, v2.1.30307.0: Automatic drop of latest build.
    - VCC: Latest build, v2.1.30308.0: Automatic drop of latest build.
    - VOB2MKV: vob2mkv-1.0.3: This is a maintenance update of the VOB2MKV utility. The MKVMUX filter now describes the cluster locations using a separate SeekHead element at th...
    - WPFValidators: WPFValidators 1.0 Beta: First version of the component, still in beta; it can be used in production since it is working well, and future changes will not suffer much im...
    - WSDLGenerator: WSDLGenerator 0.0.06: Added an option to generate a SharePoint-compatible *disco.aspx file; changed command-line options.
    - WSP Listener: WSP Listener version 1.0.0.0: The first version of the WSP Listener includes: easy copy/paste installation of WSP solutions; extended logging; e-mail when installation is finish...
    - Yet another pali text reader: Pali Text Reader App v1.1: New features/updates: search history is now a tab; format codes in the dictionary; add/edit terms in the dictionary; pali keyboard inserts symbols...

    Most Popular Projects

    - MetaSharp
    - i4o - Indexed LINQ
    - ResEx
    - Braintree Client Library
    - Geek's Library
    - Sharepoint Feature Manager
    - Configuration Management
    - Oragon Architecture SqlBuilder
    - Terrain Independant Navigating Automaton v2.0
    - WBFS Manager

    Most Active Projects

    - Umbraco CMS
    - Rawr
    - SDS: Scientific DataSet library and tools
    - BlogEngine.NET
    - jQuery Library for SharePoint Web Services
    - Fasterflect - A Fast and Simple Reflection API
    - Farseer Physics Engine
    - patterns & practices – Enterprise Library
    - Team FTW - Software Project
    - Ionics Isapi Rewrite Filter


  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

    Part 2: SQL Server Memory Management

    As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB on 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the working set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

    1. Conventional Memory Model

    The Conventional model is the default SQL Server memory model and has the following properties:
    - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
    - OS uses 4K pages (not to be confused with SQL Server "pages", which are 8K regions of committed memory).
    - Pageable - can be paged out to disk by the operating system.

    2. Locked Page Model

    The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
    - Dynamic - can grow or shrink its working set in the same way as the Conventional model.
    - OS uses 4K pages.
    - Non-pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

    A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory - "locked" memory works perfectly well with "dynamic" memory.

    * Note: in "Denali" (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

    3. Large Page Model

    The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
    - Static - memory is allocated at startup and does not change.
    - OS uses large (>2MB) pages.
    - Non-pageable.

    The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

    When does SQL Server grow?

    For "dynamic" configurations (Conventional and Locked memory models), the sqlservr.exe process grows - allocates and commits memory from the OS - in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

    - The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single page allocations; in "Denali" it represents any size page allocations and also managed CLR procedure allocations.

    - Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory, a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free, the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.

    To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

    When does SQL Server shrink caches?

    SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into "internal" and "external".

    - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory, SQL Server does the following:
      - Frees unused memory.
      - Notifies Memory Manager clients to release memory:
        - Caches - free unreferenced cache objects.
        - Buffer pool - based on oldest access times.
      The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.

    - Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, this causes SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory, you will generate internal memory pressure that will cause SQL to release memory back to the OS.

    Memory pressure handling has not changed much since SQL 2005, and it was described in detail in a blog post by Slava Oks.

    Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system, but tries to be more "desktop" friendly: it will empty its working set after a period of inactivity.

    How does SQL Server respond to changing OS memory?

    SQL Server 2005 introduced support for Hot-Add memory. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that is added after SQL Server starts. Being able to add physical memory while the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine it looks like hot-added physical memory to the guest. This means that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more "physical" memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its memory "target" (based on available OS memory and max server memory) accordingly. In "Denali", Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

    How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

    When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

    If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events, it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

    This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable, it may be possible to have a job that varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.
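    For illustration, here is a minimal C# sketch of such a job (the connection string, the target value, and the scheduling around it are hypothetical; sp_configure followed by RECONFIGURE is the standard way to change this setting):

        using System.Data.SqlClient;

        class MaxServerMemoryJob
        {
            // Hypothetical connection string - adjust for your environment.
            const string ConnectionString = "Server=.;Integrated Security=true";

            static void SetMaxServerMemory(int megabytes)
            {
                using (var conn = new SqlConnection(ConnectionString))
                {
                    conn.Open();
                    // 'max server memory' is an advanced option, so expose it first.
                    using (var cmd = new SqlCommand(
                        "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                        "EXEC sp_configure 'max server memory', @mb; RECONFIGURE;", conn))
                    {
                        cmd.Parameters.AddWithValue("@mb", megabytes);
                        cmd.ExecuteNonQuery();
                    }
                }
            }

            static void Main()
            {
                // Lower the ceiling during a quiet period, raise it before peak load.
                SetMaxServerMemory(2048);
            }
        }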
    What Memory Management changes are there in SQL Server "Denali"?

    In SQL Server "Denali" (aka SQL11) the Memory Manager has been rewritten to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.

    Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali, you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

    In part 3 we'll develop some best practices for configuring and using SQL Server with Dynamic Memory.

    Originally posted at http://blogs.msdn.com/b/sqlosteam/


  • CodePlex Daily Summary for Friday, March 05, 2010

    CodePlex Daily Summary for Friday, March 05, 2010

    New Projects

    - .svn Folders Cleanup Tool: dotSVN Cleanup is a tool that allows you to remove the .svn folders. Just click, browse, say abracadabra ...and the magic is done. Have fun with...
    - Accord: The Accord framework creates an easy way to integrate any Dependency Injection framework into your project, while abstracting the details of your im...
    - Asp.net MVC Lab: Try ASP.NET MVC out.
    - ASP.NET Themes management with Webforms: The provided source is an example of how to use themes in ASP.NET WebForms. This source is the "up to date" support for the article I wrote.
    - B&W Port Scanner: B&W Port Scanner (formerly Net Inspector) is a fast TCP port scan utility. The main idea is support of customizable operations to be performed f...
    - BizTalk SWAT - Simple Web Activity Tracker: This is a web-based version of BizTalk HAT. The concept is designed to enable easy sharing of orchestration info. Some of th...
    - C# Linear Hash Table: A C# dictionary-like implementation of a linear hash table. It is more memory efficient than the .NET dictionary, and also almost as fast. NOTE: On...
    - DBF Import Export Wizard: DBF Import Export Wizard is a tool for anyone needing to import DBF files into SQL Server or to export SQL Server tables to a DBF file. This proje...
    - Domain as XML - Driven Development: Visual Studio Code Samples: Domain as XML - Driven Development: Visual Studio code samples.
    - EasyDownload: This application allows you to manage downloads, handling a stack of files and several useful configurations.
    - Eos2: .
    - FlightTickets: This application allows you to buy flight tickets.
    - Fotofly PhotoViewer: A Silverlight control that uses the Fotofly metadata library to show the people in a photo (using Windows Live Photo Gallery People metadata) and a...
    - Fujiy source code: Source code examples.
    - GameSet: This application allows you to play games with distributed users.
    - Injectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on ease of configuration and performance. Having been writt...
    - Inventory: Keep track of inputs, materials and sales.
    - LoanTin.Com Source Code: LoanTin.Com - a social networking website similar to Tumblr.com, based on the source code of the Loantiner Project, allowing anyone to share anything with anyo...
    - mysln: my solutions.
    - NumTextBox: A rewrite of the TextBox control, NumTextBox. Its main feature is to allow only numeric input: String, Numeric, Currency, Decimal, Float, Double, Short, Int, Long. Adapted from: http://www.codeproject.com/KB/edit/num...
    - Quick Performance Monitor: This small utility helps to monitor performance counters without using the full-blown perfmon tool from Windows. It supports a number of command li...
    - Runo: Runo Research.
    - Sales: This application allows you to manage a hardware store.
    - ScrewWiki Form Auth Provider: Enables your ASP.NET site to use Forms Authentication to integrate with your ScrewWiki. User management is performed on a parent site, and cookie i...
    - SDS: Scientific DataSet library and tools: The SDS library makes it easy for .NET developers to read, write and share scalars, vectors, matrices and multidimensional grids, which are very com...
    - ShapeSweeper: Minesweeper-like game for the Zune HD. Each hidden object has three properties to discover--location, color, and shape--and all three must be corre...
    - SilverlightExcel: an Excel file viewer in Silverlight 4: SilverlightExcel is a Silverlight application allowing you to open and view Excel files and also create graphs.
    - sPWadmin: pwAdmin is a web interface based on JSP that uses the PW-Java API to control a PW-Server.
    - Video Player control in Silverlight: A control for playing video in Silverlight 4 with chapters on the timeline control. This player will be easily skinnable and customizable. More featur...
    - XNA Light Pre Pass Renderer: A demo/sample that shows how to write a light pre-pass renderer in XNA.
    - Zimms: Collaboration site for friends, a code depot, and scratch pad.

    New Releases

    - .svn Folders Cleanup Tool: dotSVN Cleanup Tool: dotSVN Cleanup Tool executable.
    - Accord: Alpha: Initial build of the Accord framework.
    - AcPrac: AcPrac Ver 0.1: The first version of AcPrac. It is not fully functional, but rather a version to get the bugs out. Please report all bugs.
    - ASP.NET: ASP.NET Browser Definition Files: This download contains ASP.NET 4 browser definition files -- you can use the new ASP.NET 4 browser definition files with earlier versions of ASP....
    - B&W Port Scanner: Black`n`White Port Scanner 1.0: B&W Port Scanner 1.0 Final. Release date: 03.03.2010. Black`n`White.
    - BizTalk SWAT - Simple Web Activity Tracker: BizTalk SWAT: This is a web-based version of BizTalk HAT, designed to enable easy sharing of orchestration info. It uses th...
    - BTP Tools: CSB+CUV+HCSB dict files 2010-03-04: Item 5 is now missing a space between the Strong's number and the count: >CSB Translation: 圣所 7, 至圣所G39+G394 it should be: CSB Translation: 圣所 7, 至圣所G...
    - C# Linear Hash Table: Linear Hash Table: First working version of the Linear Hash Table.
    - Cassiopeia: WinTools 1.0 beta: First release.
    - Composure: Caliburn-44007-trunk-vs2010.net40: This is a very simple conversion of the Caliburn trunk (rev 44007) for use in Visual Studio 2010 RC1, built against .NET 4.0. Because the conversion w...
    - Cover Creator: CoverCreator 1.3.0: English and Polish version. Functionality to add an image to the front page. Load/save covers.
    - DBF Import Export Wizard: DBF Import Export Wizard Source Code: Version 0.1.0.3.
    - DBF Import Export Wizard: DBF_Import_Export_Wizard Setup 0.1.0.3: Zip file contains Setup.exe.
    - ESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions v0.2: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    - Fotofly PhotoViewer: Fotofly Photoview v0.1: The first public release. Based on a Silverlight application I have been using for over a year at www.tassography.com. This version uses Fotofly v0...
    - HPC with GPUs applied to CG: Cuda Soft Bodies simulation: CUDA source for soft bodies.
    - HPC with GPUs applied to CG: Full Soft Bodies src: Full source code for the soft bodies simulation.
    - Injectivity (Dependency Injection): 2.8.166.2135: Release 2.8.166.2135 of the Injectivity dependency injection framework.
    - Line Counter: 1.5 (Code Outline Preview): This version contains a preview of the code outline feature; you can now view the C# code outline within Line Counter. Note that the code outline now onl...
    - Micajah Mindtouch Deki Wiki Copier: MicajahWikiCopier: You should use the following command line arguments: WikiCopier.exe "http://oldwikiwithdata.wik.is/@api/deki" "login" "password" "http://newwiki.somename.l...
    - ncontrols: Alpha 0.4.0.1: Added some examples to the Console project.
    - NumTextBox: NumTextBox initial version: A rewrite of the TextBox control that allows only numeric input: String, Numeric, Currency, Decimal, Float, Double, Short, Int, Long. This is the initial version.
    - PSCodeplex: PS CodePlex 0.2: PS CodePlex 0.2 has some breaking changes to the parameters. A few of the parameters are renamed and a few are made switch parameters. Add-Rele...
    - Quick Performance Monitor: QPerfMon First release - Version 1.0.0: The first release of the utility.
    - RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.2: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...
    - ScrewWiki Form Auth Provider: ScrewWiki Forms Authentication: Initial release.
    - See.Sharper: See.Sharper.Docs-1.10.3.4: HTML documentation (including Doxygen project).
    - See.Sharper: See.Sharper-1.10.3.4: Solution (source files, debug and release binaries).
    - Solar.Generic: Solar.Generic 0.8.0.0 Beta (Revised, Renamed): Solar.Generic 0.8.0.0 (revised & renamed). Renamed project from Solar.Commons to Solar.Generic. The project solution file is now in the format of Visual ...
    - Solar.Security: Solar.Security 1.1.0.0: Performed several major refactorings of the code base. Stripped the in-memory implementation of the IConfiguration interface of transactional behavior due to...
    - sPWadmin: pwAdmin v0.7: -
    - Star System Simulator: Star System Simulator 2.3: Changes in this release: fixed several localisation issues. Features in this release: model star systems in 3D; Euler-Cromer method; improved...
    - SysI: sysi, stable and ready: This time for sure.
    - TheWhiteAmbit: TheWhiteAmbit - Demo: Two little demos demonstrating fast realtime raytracing and generating bent normals for shading (CUDA-capable GPU needed = nVidia GeForce >8x00).
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 22 Beta: Build 22 (beta). New: Visual Studio 2010 RC support (VsTortoise for Visual Studio 2010 RC screenshots). New: VsTortoise integrates into Solution E...
    - WinMergeFS: WinMergeFS 0.1.42128alpha: WinMergeFS provides AuFS functionality for Windows. With WinMergeFS users can mount multiple directories into a virtual drive. Plugin-based root se...
    - WSDLGenerator: WSDLGenerator 0.0.0.2: Bugs fixed; code refactored; added support for custom types.
    - XNA Light Pre Pass Renderer: LightPrePassRendererXNA: Zipped source code for the light pre-pass renderer made with XNA.

    Most Popular Projects

    - MetaSharp
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - LiveUpload to Facebook
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects

    - Umbraco CMS
    - Rawr
    - BlogEngine.NET
    - SDS: Scientific DataSet library and tools
    - MapWindow GIS
    - patterns & practices – Enterprise Library
    - jQuery Library for SharePoint Web Services
    - Ionics Isapi Rewrite Filter
    - MDT Web FrontEnd
    - DiffPlex - a .NET Diff Generator


  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
    Introduction

    Preamble

    I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced developer to help me along, but all I've gotten are unqualified candidates who don't have the experience necessary and won't take care of the client or project like I will. I'd rather have the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they've not turned up answers to my question. I'm an experienced developer, and I've written many applications that are -- very abstractly -- close to what I'm doing, but my experience from those applications isn't giving me enough leverage to solve this particular problem. I just hope that posting this article isn't a mistake.

    Project Description

    I have a project I'm working on for a client that is a rewrite of an application, originally written in FoxPro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose as per my employment contract) on financial data. One day, after a long talk between the client and me -- where he intimately described his frustrations with all the bugs I've been hacking out of this code for 6 months now -- he told me to just rewrite it and gave me a month to write a good 1/8 of this 65k-LOC FoxPro monstrosity. It'll take me a good 3 - 6 months to rewrite this software (I know things the original programmer did not, like inheritance) going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about FoxPro, and so I've had to learn FoxPro on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in FoxPro if it were still a supported language, because over the past few months I've come to like the features of FoxPro that make it so easy to develop data-driven applications. I once performed an experiment comparing C# to FoxPro: what took me 45 minutes in C# took me two in FoxPro, and I knew C# prior to FoxPro. I was hoping to leverage the power of C#, but it intimidates me that in FoxPro, you can have one line of code and be using a database.

    Prior to this, I have never written any serious database development from scratch. All the applications that I've written are in a different league. They are either completely data-naive or data-naive enough that I can get away with not using a database, through serialization or by designing algorithms that work with the data in a manner that is stateless, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my efficacy with data structures have been my crutch all these years; they've kept me from venturing into databases, and have consequently hindered my success in real-world programming.

    Sure, I've written some database stuff in Perl and Python, and I've done forms and worked with relational databases and tables. I'm a wizard in Access and Excel (seriously) and can do just about anything, but it just feels unnatural writing SQL code in another language... I don't mind writing SQL; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following:

    string sSql = "SELECT * from tablename"

    There's really no reason for that kind of unchecked language binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software, and there's no database independence, which means that different versions of different databases can break code. This isn't very nice. The nicest thing about FoxPro is the cohesiveness between the programming language and the database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have built a career with this language, because it lowered the barrier of entry to the data-driven applications that so many businesses need. It was wonderful. For my purposes today, though, with the demands and need for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks).

    And OH the queries. Which one, how, and why? There is a plethora of different ways that a database can be set up; I think I counted 5 or 6 different kinds of database projects alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin when it comes to writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.
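    Just to make that concrete, here's the kind of hand-rolled bridge I'm talking about in ADO.NET (the table, columns, and class are made up purely for illustration):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Quote
        {
            public string Symbol { get; set; }
            public decimal Price { get; set; }
        }

        public static class QuoteRepository
        {
            // The SQL still lives in a string -- this is the unchecked binding
            // between the two languages that bothers me.
            public static List<Quote> GetQuotes(string connectionString, string symbol)
            {
                var quotes = new List<Quote>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Symbol, Price FROM Quotes WHERE Symbol = @symbol", conn))
                {
                    cmd.Parameters.AddWithValue("@symbol", symbol);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // Map each row into an object by hand.
                        while (reader.Read())
                        {
                            quotes.Add(new Quote
                            {
                                Symbol = reader.GetString(0),
                                Price = reader.GetDecimal(1)
                            });
                        }
                    }
                }
                return quotes;
            }
        }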
    Reports

    The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, so this is a bit of a problem. From the data source comes Crystal Reports, and so there's a dependency on the database, from what I understand.

    Code reuse

    A large part of the design decisions I've gone through so far is to break the task of writing a piece of this software into routines and modular DLLs and so forth, such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up just where I left off. The quicker I develop these applications, the better off I am.

    Tasks & Goals

    In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data. Additionally, I need to have a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

    Special Considerations

    The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write I'm married to -- I'm responsible for the things that I write, because my reputation and livelihood depend on it. Do I really need a database? What about performance? Performance was such a big issue that I hand-wrote a data structure capable of performing 2 billion operations, using a total of 4 gigs of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task. What setup do I use in terms of database? What about reporting? I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, just produces that data.

    About Me

    I'm a lone developer for a small business in this area. This is the first time I've done this, and I've never had the breadth and depth of my knowledge tested. I'm incredibly frustrated with this project because I feel incredibly overwhelmed by the task at hand. I'm looking for that entry-level point where I can draw a line and say "this is what I need to do".

    Conclusion

    I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leached so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. Some things that would be useful include a pro/con list for the different kinds of database projects available in VS2010. I've tried, but generating that list has been as hard as solving the problem itself... If you could walk a developer through writing a data-driven application for the first time in C#, how would you do that? Where would you point them?


  • Meet the New Windows Azure

    - by Leniel Macaferi
    Today we are releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of them:

    New Admin Portal and Command-Line Tools

    Today's release comes with a new Windows Azure portal, which lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works in all browsers, and offers a lot of great new features - including built-in support for VMs (virtual machines), Web Sites, Storage, and monitoring of Cloud Services.

    The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically by accessing this Web API. Today we are also releasing command-line tools (which, like the portal, call the REST management APIs) to make it even easier to script and automate your administration tasks. We are offering a downloadable set of tools for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the source code of these tools is hosted on GitHub under an Apache 2 license.

    Virtual Machines (VMs)

    Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure portal or, alternatively, you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots), and you can run any operating system on them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS and SUSE distributions). Once you create a VM instance, you can easily use Terminal Services or SSH to access it in order to configure and customize the virtual machine however you want (and, optionally, capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run practically any workload within the Windows Azure platform.

    The new Windows Azure portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and track resource utilization within them. Our new Virtual Machine support also enables the ability to easily attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which will cause Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers (400 miles) away from your primary data center.
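    Since everything in the portal sits on that REST management API, the same operations can be scripted from any language. For example, here is a minimal C# sketch that lists the hosted services in a subscription (the subscription ID and certificate thumbprint are placeholders, and the x-ms-version header value is the API version current around the time of this release):

        using System;
        using System.IO;
        using System.Net;
        using System.Security.Cryptography.X509Certificates;

        class ListHostedServices
        {
            static void Main()
            {
                // Placeholders - substitute your own subscription ID and the
                // thumbprint of a management certificate installed locally.
                const string subscriptionId = "<subscription-id>";
                const string thumbprint = "<management-cert-thumbprint>";

                var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
                store.Open(OpenFlags.ReadOnly);
                var cert = store.Certificates
                    .Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
                store.Close();

                var request = (HttpWebRequest)WebRequest.Create(
                    "https://management.core.windows.net/" + subscriptionId +
                    "/services/hostedservices");
                request.Headers.Add("x-ms-version", "2012-03-01");
                request.ClientCertificates.Add(cert); // authenticates the call

                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd()); // XML list of hosted services
                }
            }
        }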
    We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), enabling you to easily migrate existing workloads that you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export step is required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site on Azure and have it ready to deploy in less than 10 seconds.

    The new Windows Azure portal offers built-in support for administering Web Sites, including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing updates to the Visual Studio and WebMatrix tools that enable developers to easily deploy ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the web site deployment - as well as the ability to perform incremental database schema updates with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links present on a web site's dashboard. Doing so enables integration with our new online TFS service (which allows a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you perform a deployment using TFS or Git, the deployments tab will track the deployments you make and let you select an older (or newer) deployment, so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

    Windows Azure now lets you deploy up to 10 web sites into a free, shared multi-tenant hosting environment (where a site you deploy will be one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost.
    You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use - letting you, for example, increase the capacity of your reserved/private instance as your traffic grows. Windows Azure automatically handles load balancing the traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity by the hour - which lets you scale your resources up and down to match only what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are renaming the capability to "cloud services". We are also enabling a whole bunch of new features with them.

    Distributed Caching

    One of the really cool new features being enabled with cloud services is a new distributed caching capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications and has no throttling limits. The cache can dynamically and elastically grow and shrink (without you having to redeploy your application or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to supporting the AppFabric Cache Server API, this new caching capability can also support the Memcached protocol - which lets you point code written for Memcached at the distributed cache (with no code changes required).

    The new distributed cache can be configured to run in one of two ways:

    1) Using a co-located cache approach. With this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored in that role or another one. The big benefit of the "co-located" cache option is that it is free (you don't have to pay anything to enable it), and it lets you use what might otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These will also be joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.
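    As a small illustration, here is a minimal C# sketch of a role using the cache through the AppFabric Cache client API (it assumes the <dataCacheClient> section is already configured in your app.config/web.config, and the key and value are made up):

        using System;
        using Microsoft.ApplicationServer.Caching;

        class CacheExample
        {
            static void Main()
            {
                // Reads the <dataCacheClient> section from configuration,
                // which points at the co-located or dedicated cache roles.
                var factory = new DataCacheFactory();
                DataCache cache = factory.GetDefaultCache();

                // Any role instance in the application can now read this entry,
                // regardless of which role's memory physically holds it.
                cache.Put("session:42", "some expensive-to-compute value");
                var value = (string)cache.Get("session:42");
                Console.WriteLine(value);
            }
        }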
    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - enabling developers to build Windows Azure applications using any operating system during development.

    Much, Much More

    The summary above is just a short list of some of the improvements being delivered in either preview or final form today - there is a lot more included in today's release. Among these improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significant upgrades to storage and networking hardware, SQL Reporting Services, new Identity features, support for more than 40 new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving at 1pm PDT today, June 7th, where I will walk through all of the new features. We will be opening up the new features I mentioned above for public use a few hours after the presentation ends.

    We are really excited to see the great applications you will build with these new features.

    Hope this helps,

    - Scott

    Translated from the original post by Leniel Macaferi.


  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview ·        With HPC 2008 R2 SP1 You can add Azure worker roles as compute nodes in a local Windows HPC Server cluster. ·        The subscription for Windows Azure like any other Azure Service - charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes. ·        Win-Win ? - Azure charges the computer hour cost (according to vm size) amortized over a month – so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud. ·        Blob storage is used to hold input & output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues. ·        this is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and encapsulating exposing HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t  provide head node as a service, someone else will! Process ·        requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure + an availability policy for the worker nodes. ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs. ·        A Windows Azure worker role instance runs a HPC compatible Azure guest operating system which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - All role instances defined by your service will run on the guest operating system version that you specify. see Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549). ·        use the hpcpack command to upload file packages and install files to run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Requirements ·        assuming you have an azure subscription account and the HPC head node installed and configured. ·        Install HPC Pack 2008 R2 SP 1 -  see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812). ·        Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported). ·        Configure the firewall - allow outbound TCP traffic on the following ports: 80,       443, 5901, 5902, 7998, 7999 ·        Note: HPC Server  uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes. ·        Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate -a valid X.509 certificate with a key size of at least 2048 bits. 
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
    o Configure Azure Availability Policy page - how Windows Azure worker nodes start/stop (bring the worker role instances online/offline - add/remove them) - manually or automatically
    o For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start/stop.
    · To validate the Windows Azure connection information: on the template's Connection Information tab > Validate connection information.
    · You can upload a file package to the storage account that is specified in the template - e.g. upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    Add Azure Worker Nodes to the HPC Cluster
    · Use the Add Node Wizard - specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
    o In HPC Cluster Manager: Node Management > Actions pane > Add Node → Add Node Wizard
    o Select Deployment Method page - Add Azure Worker nodes
    o Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes
    · After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is not configurable.
    Deploying Windows Azure Worker Nodes
    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure availability policy) to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
    o In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes.
    o Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
    o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details Pane > Provisioning Log tab.
    o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
    o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
    o Check the node template.
    o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
    o Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
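    · A rough sketch of the hpcpack round trip described above (the package name, folder and template name are placeholders of mine; see the hpcpack link above for the authoritative syntax):

        # Run on the head node: package the application files, then push them to the
        # blob storage account tied to the worker node template.
        hpcpack create MyApp.zip C:\dev\MyApp
        hpcpack upload MyApp.zip /nodetemplate:"Azure Worker Template" /relativepath:MyApp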

    Read the article

  • Video Recording Not Working in ICS

    - by Nirav Ranpara
    I have implemented video recording code for an Android phone. This code works on 2.2 and 2.3, but not on ICS - when I test on ICS, recording does not work. Here are the code and the XML layout file.

    videorecord.java:

        import java.io.File;
        import java.io.IOException;

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.Context;
        import android.content.DialogInterface;
        import android.content.Intent;
        import android.content.SharedPreferences;
        import android.hardware.Camera;
        import android.media.CamcorderProfile;
        import android.media.MediaRecorder;
        import android.os.Bundle;
        import android.os.CountDownTimer;
        import android.os.Environment;
        import android.util.Log;
        import android.view.Display;
        import android.view.KeyEvent;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;
        import android.view.View;
        import android.widget.EditText;
        import android.widget.FrameLayout;
        import android.widget.ImageView;
        import android.widget.LinearLayout;
        import android.widget.TextView;
        import android.widget.Toast;

        public class videorecord extends Activity {
            SharedPreferences.Editor pre;
            String filename;
            CountDownTimer t;
            private Camera myCamera;
            private MyCameraSurfaceView myCameraSurfaceView;
            private MediaRecorder mediaRecorder;
            Integer cnt = 0;
            LinearLayout myButton;
            TextView myButton1;
            SurfaceHolder surfaceHolder;
            boolean recording;
            private TextView txtcount;
            private ImageView btnplay;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                recording = false;
                setContentView(R.layout.videorecord);
                init();
                myCamera = getCameraInstance();
                if (myCamera == null) {
                }
                // Size the camera preview to (almost) the full display.
                myCameraSurfaceView = new MyCameraSurfaceView(this, myCamera);
                FrameLayout myCameraPreview = (FrameLayout) findViewById(R.id.videoview);
                Display display = getWindowManager().getDefaultDisplay();
                int width = display.getWidth();
                int height = display.getHeight();
                myCameraSurfaceView.setLayoutParams(new LinearLayout.LayoutParams(width, height - 60));
                myCameraPreview.addView(myCameraSurfaceView);
                myButton = (LinearLayout) findViewById(R.id.mybutton);
                btnplay.setOnClickListener(myButtonOnClickListener);
            }

            private void init() {
                txtcount = (TextView) findViewById(R.id.txtcounter);
                //myButton1 = (TextView) findViewById(R.id.mybutton1);
                btnplay = (ImageView) findViewById(R.id.btnplay);
                // Ticks once per second and updates the on-screen counter.
                t = new CountDownTimer(Long.MAX_VALUE, 1000) {
                    @Override
                    public void onTick(long millisUntilFinished) {
                        cnt++;
                        String time = new Integer(cnt).toString();
                        long millis = cnt;
                        int seconds = (int) (millis / 60);
                        int minutes = seconds / 60;
                        seconds = seconds % 60;
                        txtcount.setText(String.format("%d:%02d:%02d", minutes, seconds, millis));
                    }

                    @Override
                    public void onFinish() {
                    }
                };
            }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) {
                if (keyCode == KeyEvent.KEYCODE_BACK) {
                    if (recording) {
                        new AlertDialog.Builder(videorecord.this)
                            .setTitle("Do you want to save Video ?")
                            .setPositiveButton("OK", new DialogInterface.OnClickListener() {
                                public void onClick(DialogInterface dialog, int which) {
                                    filename();
                                    //finish();
                                }
                            })
                            .setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                                public void onClick(DialogInterface dialog, int which) {
                                    // TODO Auto-generated method stub
                                }
                            }).show();
                    } else {
                        if (keyCode == KeyEvent.KEYCODE_BACK) {
                            //Intent homeIntent = new Intent(Intent.ACTION_MAIN);
                            //homeIntent.addCategory(Intent.CATEGORY_HOME);
                            //homeIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                            //startActivity(homeIntent);
                            //this.finishActivity(1);
                            finish();
                        }
                        //moveTaskToBack(true);
                        // finish();
                        return super.onKeyDown(keyCode, event);
                    }
                } else {
                    // Toast.makeText(getApplicationContext(), "asd", Toast.LENGTH_LONG).show();
                    android.os.Process.killProcess(android.os.Process.myPid());
                }
                return super.onKeyDown(keyCode, event);
            }

            ImageView.OnClickListener myButtonOnClickListener = new ImageView.OnClickListener() {
                public void onClick(View v) {
                    if (recording) {
                        // Stop recording, stop the timer and prompt for a file name.
                        Log.e("Record error", "error in recording .");
                        mediaRecorder.stop();
                        t.cancel();
                        filename();
                        releaseMediaRecorder();
                    } else {
                        releaseCamera();
                        Log.e("Record Stop error", "error in recording .");
                        if (!prepareMediaRecorder()) {
                            finish();
                        }
                        mediaRecorder.start();
                        recording = true;
                        // myButton1.setText("STOP Recording");
                        // btnplay.setImageResource(android.R.drawable.ic_media_pause);
                        btnplay.setImageResource(R.drawable.stoprec);
                        t.start();
                    }
                }
            };

            private Camera getCameraInstance() {
                Camera c = null;
                try {
                    c = Camera.open();
                } catch (Exception e) {
                    // Camera is unavailable or in use.
                }
                return c;
            }

            private void filename() {
                AlertDialog.Builder alert = new AlertDialog.Builder(this);
                alert.setTitle("Save Video");
                alert.setMessage("Enter File Name");
                final EditText input = new EditText(this);
                alert.setView(input);
                alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int whichButton) {
                        if (input.getText().length() >= 1) {
                            // The clip is recorded as null.mp4 and renamed to the name entered here.
                            filename = input.getText().toString();
                            File sdcard = new File(Environment.getExternalStorageDirectory() + "/VideoRecord");
                            File from = new File(sdcard, "null.mp4");
                            File to = new File(sdcard, filename + ".mp4");
                            from.renameTo(to);
                            SharedPreferences sp = videorecord.this.getSharedPreferences("data", MODE_WORLD_WRITEABLE);
                            pre = sp.edit();
                            pre.clear();
                            pre.commit();
                            pre.putString("lastvideo", filename + ".mp4");
                            pre.commit();
                            //btnplay.setImageResource(android.R.drawable.ic_media_play);
                            btnplay.setImageResource(R.drawable.startrec);
                            // Intent intent = new Intent(videorecord.this, StopVidoWatch_Activity.class);
                            // startActivity(intent);
                            Intent myIntent = new Intent(getApplicationContext(), StopVidoWatch_Activity.class)
                                    .setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                            startActivity(myIntent);
                        } else {
                            filename();
                        }
                    }
                });
                alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int whichButton) {
                        // Discard the unnamed recording.
                        File file = new File(Environment.getExternalStorageDirectory() + "/VideoRecord/null.mp4");
                        file.delete();
                        finish();
                    }
                });
                alert.show();
            }

            private boolean prepareMediaRecorder() {
                myCamera = getCameraInstance();
                mediaRecorder = new MediaRecorder();
                myCamera.unlock();
                mediaRecorder.setCamera(myCamera);
                mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
                mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
                mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
                File folder = new File(Environment.getExternalStorageDirectory() + "/VideoRecord");
                if (!folder.exists()) {
                    folder.mkdir();
                }
                mediaRecorder.setOutputFile("/sdcard/VideoRecord/" + filename + ".mp4");
                mediaRecorder.setMaxDuration(60000);
                mediaRecorder.setMaxFileSize(5000000);
                Display display = getWindowManager().getDefaultDisplay();
                int width = display.getHeight();
                int height = display.getWidth();
                String s = String.valueOf(width);
                String s1 = String.valueOf(height);
                // Toast.makeText(videorecord.this, "Width : " + s, Toast.LENGTH_LONG).show();
                // Toast.makeText(videorecord.this, "Height : " + s1, Toast.LENGTH_LONG).show();
                mediaRecorder.setVideoSize(height, width);
                mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface());
                try {
                    mediaRecorder.prepare();
                } catch (IllegalStateException e) {
                    releaseMediaRecorder();
                    return false;
                } catch (IOException e) {
                    releaseMediaRecorder();
                    return false;
                }
                return true;
            }

            @Override
            protected void onPause() {
                super.onPause();
                releaseMediaRecorder();
                releaseCamera();
            }

            private void releaseMediaRecorder() {
                if (mediaRecorder != null) {
                    mediaRecorder.reset();
                    mediaRecorder.release();
                    mediaRecorder = null;
                    myCamera.lock();
                }
            }

            private void releaseCamera() {
                if (myCamera != null) {
                    myCamera.release();
                    myCamera = null;
                }
            }

            public class MyCameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
                private SurfaceHolder mHolder;
                private Camera mCamera;

                public MyCameraSurfaceView(Context context, Camera camera) {
                    super(context);
                    mCamera = camera;
                    mHolder = getHolder();
                    mHolder.addCallback(this);
                    mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
                }

                public void surfaceChanged(SurfaceHolder holder, int format, int weight, int height) {
                    if (mHolder.getSurface() == null) {
                        return;
                    }
                    try {
                        mCamera.stopPreview();
                    } catch (Exception e) {
                    }
                    try {
                        mCamera.setPreviewDisplay(mHolder);
                        mCamera.startPreview();
                    } catch (Exception e) {
                    }
                }

                public void surfaceCreated(SurfaceHolder holder) {
                    try {
                        mCamera.setPreviewDisplay(holder);
                        mCamera.startPreview();
                    } catch (IOException e) {
                    }
                }

                public void surfaceDestroyed(SurfaceHolder holder) {
                }
            }
        }

    videorecord.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <FrameLayout
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" >
                <FrameLayout
                    android:id="@+id/videoview"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent" />
                <LinearLayout
                    android:id="@+id/mybutton"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:layout_marginBottom="0dip"
                    android:layout_weight="0"
                    android:orientation="horizontal" >
                    <!-- <TextView android:text="START Recording" android:id="@+id/mybutton1"
                         android:layout_height="wrap_content" android:layout_width="wrap_content"
                         style="@style/savestyle" android:layout_weight="1" android:gravity="left" /> -->
                    <ImageView
                        android:id="@+id/btnplay"
                        android:layout_width="wrap_content"
                        android:layout_height="wrap_content"
                        android:padding="5dip"
                        android:background="#A0000000"
                        android:textColor="#ffffffff"
                        android:src="@drawable/startrec" />
                </LinearLayout>
                <TextView
                    android:id="@+id/txtcounter"
                    android:text="00:00:00"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_gravity="right|bottom"
                    android:padding="5dip"
                    android:background="#A0000000"
                    android:textColor="#ffffffff" />
            </FrameLayout>
            <RelativeLayout
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:background="@color/bgcolor" >
                <LinearLayout
                    android:layout_above="@+id/mybutton"
                    android:orientation="horizontal"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent" />
            </RelativeLayout>
        </LinearLayout>
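    One thing worth checking (my suggestion, not a confirmed fix - the post includes no logcat output): prepareMediaRecorder() overrides the CamcorderProfile dimensions by calling setVideoSize(height, width) with the display size, and display resolutions are usually not sizes the ICS camera/encoder stack accepts, so prepare() or start() can fail on 4.0 while the more forgiving 2.2/2.3 stacks let it slide. A sketch of a more conservative version - prepareMediaRecorderIcs is a hypothetical name:

        // Untested sketch: let QUALITY_HIGH pick a legal recording size instead of
        // forcing the display dimensions, and log the exception so the real ICS
        // error shows up in logcat.
        private boolean prepareMediaRecorderIcs() {
            myCamera = getCameraInstance();
            mediaRecorder = new MediaRecorder();
            myCamera.unlock();
            mediaRecorder.setCamera(myCamera);
            mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            // The profile already carries a width/height the device encoder supports;
            // do NOT call setVideoSize() with display dimensions afterwards.
            mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
            mediaRecorder.setOutputFile(Environment.getExternalStorageDirectory()
                    + "/VideoRecord/" + filename + ".mp4");
            mediaRecorder.setMaxDuration(60000);
            mediaRecorder.setMaxFileSize(5000000);
            mediaRecorder.setPreviewDisplay(myCameraSurfaceView.getHolder().getSurface());
            try {
                mediaRecorder.prepare();
            } catch (Exception e) {
                Log.e("videorecord", "prepare() failed", e);
                releaseMediaRecorder();
                return false;
            }
            return true;
        }

    SurfaceHolder.setType(SURFACE_TYPE_PUSH_BUFFERS) is deprecated from API 11 onwards but is ignored rather than fatal, so it is unlikely to be the culprit on its own.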

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, a web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about TDS (tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server – and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80 KB query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80 KB query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
    On the other side, our SQL Server network card will receive the packets and pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80 KB query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15,000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the ‘GO’ command, then there will be three different roundtrips. TDS packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS packets sent from the client is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as the name of the procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
    Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However, a long ‘client processing time’ does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about a similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500 KB each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface: Output Queue Length, Redirector: Network Errors/sec, TCPv4: Segments Retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example :) ) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
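    For reference, the ASYNC_NETWORK_IO accumulation described above can be checked with a minimal DMV query (my addition; values are cumulative since the last service restart):

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'ASYNC_NETWORK_IO';

    A steadily climbing figure here, with the server otherwise healthy, points at the client or the network rather than at SQL Server itself.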
    As further reading I would still highly recommend the Wait Stats series on this blog; I would also recommend you have the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However, there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a URL to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0020_Reports”, which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead, when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the URL- and code-friendly one without spaces, while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view, include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document, or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of, well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours, finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered, so feel free to create a view for each status, or one for each operating system, or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view.
    The discoverability of the Views menu isn’t overly obvious, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great, so we’re building nice, rich views for the user. Awesomesauce. However, sort order is not very discoverable by the user. For example, when you’re looking at a view, how do you know if it’s ascending or descending, and what is it sorted on? Maybe it’s sorted using two fields, so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy, just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms, and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is its own page, so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure, it indicates there’s an attachment on the list item, but in the grand scheme of things that’s not overly informative. Maybe it is, and by all means, if it makes sense to you, include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without being functional, though, it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful, so feel free to head over and check out this blog post by Paul Grenier on the task (Warning, code ahead! Danger Will Robinson!). In any case, I would suggest you remove it from your views. Again, if it’s important then include it, but consider the jQuery solution above to make it functional. It’s added by default to views and is one of the things that people forget to clean up. Horizontal Scrolling Screen real estate is at a premium, so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically, let alone horizontally, so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again, views are your friend. Consider splitting up the data into views where one view contains 10 columns and the other view contains the other 10. Okay, maybe your information doesn’t work that way, but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information?”.
    The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views, “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views are great, as you can group data together. For example, create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However, if you group by author, do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?” - that’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again, it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns, but basically remember not to include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble, wasn’t it?) to create separate views of your data by creating “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc., then please, for the love of all that is holy, do not include the Month and Product columns in your view. Similarly, if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!
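    As a footnote to the naming trick at the top of the post, here is one way to script it (a sketch for the SharePoint 2010 Management Shell; the URL and list names are placeholders of mine):

        # Create the list with the scrunched-up name so the URL stays clean...
        $web = Get-SPWeb http://intranet
        $listId = $web.Lists.Add("ExpenseReports2011", "",
            [Microsoft.SharePoint.SPListTemplateType]::GenericList)
        $list = $web.Lists[$listId]
        # ...then retitle it; the URL keeps /Lists/ExpenseReports2011.
        $list.Title = "Silly Expense Reports That Nobody Reads"
        $list.Update()
        $web.Dispose()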

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about Transaction Logs and recovery over my blogs, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support etc. – there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and implements the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:
    Concurrency
    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the optimistic and the pessimistic concurrency model.
    Concurrency Models
    Pessimistic Concurrency
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data. Also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.
    Optimistic Concurrency
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.
    Transaction Processing
    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN, with the end marked by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.
    ACID Properties
    - Transaction processing must guarantee the consistency and recoverability of SQL Server databases. It ensures all transactions are performed as a single unit of work regardless of hardware or system failure: A – Atomicity, C – Consistency, I – Isolation, D – Durability.
    - Atomicity: each transaction is treated as all or nothing – it either commits or aborts.
    - Consistency: ensures that a transaction won’t allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.
    Transaction Dependencies
    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems).
    - Lost Updates: occur when two processes read the same data, both manipulate the data changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty Reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.
    Isolation Levels
    SQL Server supports 5 isolation levels that control the behavior of read operations.
    Read Uncommitted
    - All behaviors except for lost updates are possible.
    - Implemented by allowing the read operations to not take any locks; because of this, a read won’t be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn’t been read yet is updated and moved to a lower page number that has already been read.
    Read Committed
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn’t committed.
    - If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn’t prevent others from reading, but prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.
    Repeatable Read
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn’t change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user’s transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.
    Snapshot
    - Snapshot Isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed (snapshot) has to do with how old the older versions have to be.
    - It’s possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.
    Serializable
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn’t exist. For example: if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values. If there is no index on the column, serializable isolation requires a table lock.
    - Serializable gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.
    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me – let us continue learning with SQL Server Locking Basics.
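    To make the isolation levels above concrete, here is a minimal T-SQL illustration (my example; the database and table names are made up):

        -- Enable the optimistic (row versioning) variants on a sample database.
        ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;  -- needs exclusive DB access

        -- Pessimistic: repeatable read holds shared locks until COMMIT, so the
        -- second SELECT sees the same values (phantom inserts aside).
        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
        BEGIN TRAN;
            SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
            SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
        COMMIT;

        -- Optimistic: snapshot readers see the last committed row version
        -- instead of blocking behind writers.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
        COMMIT;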
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed. And a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. And quite a few people have written blogs about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. Precisely the reason why you are reading this post now. Most of the posts that I read are outdated or need updating, use the wrong OSes or virtualization solutions (again, opinions vary), or use them the wrong way. Here’s a developer’s view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it’s time to close this window.
    Confusion 1: Which Host Operating System and Virtualization Solution to use?
    This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any driver issues or application compatibility issues. In my experience all the Windows 7 drivers work fine with WIN2008 R2 as well. You can enable Aero for eye candy (and the Windows 7 look and feel), and except for a few things like hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best workstation OS that I have used to date. But frankly, the answer to this question of which OS to use depends primarily on one question - are you willing to change your primary OS? If the answer to that is ‘Yes’, then Windows 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox - both are equally good. Those coming from a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64-bit guest machines on 32-bit hosts if the underlying hardware is truly 64-bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.
    Confusion 2: Should I use a multi-(virtual) server setup?
    A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting Active Directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment in the best possible way, but as somebody who has fallen for this setup earlier, I can tell you that you don’t really get anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple server-class machine instances in parallel, a lot of processor cycles are wasted to no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to:
    1. Make the host Windows Server 2008 R2 machine your Domain Controller (AD server).
    2. Make all your virtual guest OSes join this domain.
    3. Have each individual guest OS image host its own local SQL Server instance.
    The advantages are that you can reuse the users and groups in each of the guest operating systems and manage the users in one place; AD is lightweight and doesn’t take too many resources on your host machine; and having separate SQL instances for each of the development images gives you maximum flexibility in terms of configuration - for example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.
    Confusion 3: Which Operating System should I use to run SharePoint 2010?
    Now that’s a no-brainer. Use Windows 2008 R2 as your guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your guest OS instead, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here.
    Okay, now that we have made our choices, let’s get to the interesting part of building the rig.
    Step 1: Prepare the host machine – Install Windows Server 2008 R2
    Install Windows Server 2008 R2 on your best desktop/laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 workstation that feels and behaves like a Windows 7 machine; follow one, and once you are done, head to Step 2.
    Step 2: Configure the host machine as a Domain Controller
    Before we begin this, let me tell you, this step is completely optional; you don’t really need to do this, and you can simply use the local users on the guest machines instead, but this is a much cleaner approach to managing users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.
    Step 3: Prepare the guest machine – Install Windows Server 2008 R2
    1. Open Hyper-V Manager.
    2. Choose to create a new guest operating system.
    3. Allocate at least 2 GB of memory to the guest OS.
    4. Choose the Windows 2008 R2 installation media.
    5. Start the virtual machine to commence installation.
    Once the installation is done, activate the OS.
    Step 4: Make the guest operating systems join the domain
    This step is quite simple; just follow the steps below. (A scripted alternative follows this list.)
    1. Fire up Hyper-V Manager and open your guest OS.
    2. Click on Start, right-click on ‘Computer’ and choose ‘Properties’.
    3. On the window that pops up, click on ‘Change settings’.
    4. On the ‘System Properties’ window that comes up, click on the ‘Change’ button.
    5. A window named ‘Computer Name/Domain Changes’ opens up. In the text box titled Domain, type in the domain name from Step 2.
    6. Click OK; Windows will show you the welcome-to-domain message and ask you to restart the machine. Click OK to restart.
    If the addition to the domain fails, that means that you have not set up networking in Hyper-V for the guest OS to communicate with the host. To enable it, follow the steps I had mentioned in this post earlier.
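    If you prefer to script Step 4 instead of clicking through System Properties, PowerShell 2.0 (in the box on Windows Server 2008 R2) can do the join - a sketch, with a made-up domain name:

        # Run inside the guest OS; prompts for domain credentials, then reboots.
        Add-Computer -DomainName "DEVRIG.local" -Credential (Get-Credential)
        Restart-Computer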
    Step 5: Install SQL Server 2008 R2 on the guest machine
    SQL Server 2008 R2 installs without hassle on Windows Server 2008 R2. (SQL Server 2008 needs SP2 to work properly on WIN2008 R2.) SQL Server 2008 R2 also allows you to directly add PowerPivot support to SharePoint. Choose to install in SharePoint Integrated Mode in the Reporting Services configuration.
    Step 6: Install KB971831 and the SharePoint 2010 prerequisites
    Now install the WCF hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 prerequisites from the SP2010 installation media.
    Step 7: Install and configure SharePoint 2010
    Install SharePoint 2010 from the installation media. After the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install the Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following: install SQL Server 2008 KB 970315 x64 and, after the installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed-installation dialog box; install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe. The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller.
    Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK
    Install Visual Studio 2010, then download and install the Microsoft SharePoint 2010 SDK.
    Step 9: Install PowerPivot for SharePoint and configure Reporting Services
    Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here; if you need to get down to the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.
    Step 10: Download and install the sample databases for Microsoft SQL Server 2008 R2
    SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS; if you need to try these out, you need to have data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the Sample Databases for Microsoft SQL Server 2008 R2 from CodePlex. And you are done! Fire up your Visual Studio 2010 and start coding away!!

    Read the article

  • CodePlex Daily Summary for Monday, December 20, 2010

    CodePlex Daily Summary for Monday, December 20, 2010Popular ReleasesLINQ to Twitter: LINQ to Twitter Beta v2.0.18: Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4.3: Helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: Improvements for confirm, popup, popup form RenderView controller extension the user experience for crud in live demo has been substantially improved + added search all the features are shown in the live demoSQL Monitor: SQL Monitor 3.0 alpha 6: 1. add alert target with regexp support 2. fix actitivites not getting current server 3. allow to change connection in query 4. fix problem with open data tableGanttPlanner: GanttPlanner V1.0: GanttPlanner V1.0 include GanttPlanner.dll and also a Demo application.EnhSim: EnhSim 2.2.5 ALPHA: 2.2.5 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the new...HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.2: A Couple of small changes required to add support for XBMC which requires -trailer be appended to trailer filenames 2. Changed Version Number to 1.2. Leaving it a beta just in case. 3. added assembly.directives.dll (needed that to successfully compile the application. Not sure if it has to be in the distribution package. 4. Leaving Version 1.1N2 CMS: 2.1 release candidate 3: * Web platform installer support available N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) A bunch of bugs were fixed File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the templ...TweetSharp: TweetSharp v2.0.0.0 - Preview 6: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numer...Team Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.Hacker Passwords: HackerPasswords.zip: Source code, executable and documentationSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. 
- Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. This is a minor release which contains fixe...LiveChat Starter Kit: LCSK v1.0: This is a working version of the LCSK for Visual Studio 2010, ASP.NET MVC 3 (using Razor View Engine). this is still provider based (with 1 provider Sql) and this is still using WebService and Windows Forms operator console. The solution is cleaner, with an installer to create tables etc. You can also install it via nuget (Install-Package lcsk) Let me know your feedbackOrchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New Projects.NET Micro Framework Binary Parsing and Processing Extensions: Prototype library and test cases for parsing binary data in the .NET Micro Framework. The goal is to improve processing speed, reduce GC work and simplify code.CarolLib Framework: A common library by Lance Zhang and Carol Xiong Include Helpers, Extensions, Configurations and Windows Service...Copy Document URL to Clipboard: This is a SharePoint 2010 Feature that adds a button in the Ribbon, allowing users to copy document urls (links) to clipboard.exTWEET download: <exTWEET> Download link for application to be downloaded & used from here unless it becomes publicly available. Then, this link will be removed. HandleSpy: Get Object information by its Handle.ibuy: helloLoA: PL: Podstawa gry bez tekstur, modeli, map i skryptów. EN: ---MP3Tunes Locker Windows Phone API: A simple REST client for connecting to your mp3tunes.com locker on your windows mobile phone. You can browse by artist and album and shuffle your songs.Netduino Library: Various useful classes to help develop for netduino.NReports: NReports is a reporting library using RDL file format, aiming at interoperatibility with SQL Reporting Services. It has began as a fork of the latest version 4.1 of fyiReporting (now defunct). Supported platforms are .NET 3.5 (Windows) and Mono 2.6 (Linux).qlink Framework: Qlink FrameWork development application that auto-generates a data layer object model based on your database schema and ...Rapid Image Editor: This is image editor which allows to edit images very rapidly and comfortably. Therefore it's named as Rapid.Serial-test with SHLCN: ?????????????????????????SharePoint Selbstbedienung: This is a project for all Microsoft SharePoint users. Whether you are a developer, administrator or end user, I want to provide you, complete NO-Deployment-solutions to create nice application for SharePoint 2010 or Office 365 plattform easily. TestRepo: test projectUmbraco Live At Education Calendar: Umbraco Live At Education CalendarUSS: Urban Statistical SystemVLBA Web Services: This is our seminar project to demonstrate the implementation and use of web services.Windows Phone 7 Isolated Storage Explorer: Isolated Storage Explorer for Windows Phone 7 emulator or device.Windows Phone 7 Logging Framework: Logging Framework for Windows Phone 7

    Read the article

  • Begun the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk. In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image, and it has served me well for some proofs of concept and also for some testing of different JVMs.  However, I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image. What are my Options? There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include: Create new virtual disk images for each new VM. Clone the existing disk images and point the new VM at the cloned images. Point the new VM at the existing snapshots. #1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again! Life is too short! #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However, this of course duplicates the disk and means that it is now occupying three times the space: once for the original disk and twice more for the two clones I would need. #3 is the most space-efficient way of doing things.  It does mean, however, that I can only run the new “cloned” images if I have access to the original image, because that is where the base snapshots reside.  This is not a problem for me as long as I remember to keep all three images together, so this is the approach we will follow. Snapshot, What Snapshot? As we are going to create new virtual machines based on existing snapshots, we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW-installed snapshot, because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters, as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked. Network or NotWork? Because we want the two new machines to communicate with each other when hosted on different physical machines, we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox-provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card, and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.
Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images, and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network, as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that range. Creating the New VMs. Now that I had collected the data I needed, I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines, because that snapshot had the database with the RCU completed that I wanted for my DB machine, and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine, go into the network settings section and enable a second adapter, which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux. Reconfiguring Linux. Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings. Changing the Hostname. I renamed both hosts by running the hostname command as root: hostname vboxfmw.oracle.com I also edited the /etc/sysconfig/network file and set the correct hostname in there: HOSTNAME=vboxfmw.oracle.com Changing the Network Settings. I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went into the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters, I then set the bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways, because we expect the two machines to be on the same LAN segment. Updating the Hosts File. Having renamed the machines and reconfigured the network, I then updated the /etc/hosts file to: refer to the new machine name; add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address); and add a new line for the fixed IP address of the other virtual machine. 10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager 127.0.0.1       localhost.localdomain   localhost ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6 To make sure everything took effect, I restarted the server.
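The VM creation and adapter wiring described above can also be scripted from the host instead of done through the GUI. A minimal sketch with VBoxManage, not from the original post; the VM name, memory size, disk path and host interface (eth0) are hypothetical placeholders:

    # create and register the VM shell
    VBoxManage createvm --name vboxfmw --ostype Oracle_64 --register
    VBoxManage modifyvm vboxfmw --memory 4096

    # attach the existing snapshot disk instead of creating a new one
    VBoxManage storagectl vboxfmw --name "SATA" --add sata
    VBoxManage storageattach vboxfmw --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium /path/to/snapshots/{snapshot-uuid}.vdi

    # adapter 1: NAT for outside access; adapter 2: bridged for VM-to-VM traffic
    VBoxManage modifyvm vboxfmw --nic1 nat
    VBoxManage modifyvm vboxfmw --nic2 bridged --bridgeadapter2 eth0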
Reconfiguring the Database on the DB Machine. Because we changed the hostname, the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname, and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:       (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521)) After changing the listener.ora I was able to start the listener using: lsnrctl start I also had to reconfigure the EM database control.  I first deconfigured it using the command: emca -deconfig dbcontrol db -repos drop This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command: emca -config dbcontrol db -repos create This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready, so I can close it down and take a snapshot. Disabling the Database on the FMW Machine. I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running, so I can prevent it auto-starting by running the following command: chkconfig --del dbora Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot. Creating a New Domain. The FMW machine is now ready for creating a new domain.  When creating the domain I can point it at the second machine, which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines. Gotchas in Snapshotting. VirtualBox does not support the concept of linked machines in a network like some virtualization technologies do, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine, because that would hold all the state and configuration. The Sky’s the Limit. We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember which machine holds state and what the consequences of taking a snapshot are.
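Since keeping the database and middleware snapshots in sync matters here, the paired shutdown-and-snapshot could itself be scripted from the host. A sketch, again using the hypothetical VM names from above:

    # ask both guests to shut down cleanly
    for vm in vboxdb vboxfmw; do
        VBoxManage controlvm "$vm" acpipowerbutton
    done

    # once both have powered off, snapshot them as a matched pair
    for vm in vboxdb vboxfmw; do
        VBoxManage snapshot "$vm" take "db-and-fmw-in-sync"
    done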

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction. This post shows how to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. In this document I will demonstrate how to set up the Automated Install server in order to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout. Prerequisite: set up the Automated Install server (AI) using the following instructions: “How to Set Up Automated Installation Services for Oracle Solaris 11”. The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files. The Solaris Zones configuration files should be in the format produced by the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1 # cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information; you should change the zonepath, for example: set zonepath=/rpool/zones/zone2 Step 2: Copy and share the Zones configuration files. Create the NFS directory for the Zones configuration files: # mkdir /export/zone_config Share the directory for the Zones configuration files: # share -o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory: # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command: # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as a client to the Install Service. Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command: # installadm list -c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386). Step 4: Global Zone manifest setup. First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2; you should replace server_ip with the IP address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service: # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command: # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non-Global Zone manifest setup. The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and in this example we use it. The default AI manifest for Zones is shown below: # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup. We are going to create a Global Zone configuration profile which includes the host information, for example: host name, IP address, name services, and so on. # sysconfig create-profile -o /var/tmp/gz_profile.xml You need to provide the host information, for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu. You can validate the profile using the following command: # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this: # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify the profile creation using the following command: # installadm list -n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profile has been created and associated with the s11x86service install service. Step 7: Set up the Solaris Zones configuration profiles. This step is similar to the Global Zone profile creation in Step 6 (a consolidated sketch follows this step). # sysconfig create-profile -o /var/tmp/zone1_profile.xml # sysconfig create-profile -o /var/tmp/zone2_profile.xml You can validate the profiles using the following command: # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile: # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile: # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profile creation using the following command: # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service install service:     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile.
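Because the validate/create-profile pairs in Step 7 are identical except for the zone name, they could be collapsed into a small loop; a sketch using only the installadm invocations already shown above:

    # validate and register one configuration profile per zone
    for z in zone1 zone2; do
        installadm validate -n s11x86service -P /var/tmp/${z}_profile.xml && \
        installadm create-profile -n s11x86service -f /var/tmp/${z}_profile.xml \
            -p ${z}_profile -c zonename=$z
    done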
Step 8: Global Zone setup. Associate the Global Zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (Global Zone), where: gzmanifest is the name of the manifest; gz_profile is the name of the configuration profile; mac="0:14:4f:2:a:19" is the client (Global Zone) MAC address; s11x86service is the install service name. # installadm set-criteria -m  gzmanifest  -p  gz_profile  -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command: # installadm list -n s11x86service -p  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones. The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot. Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted.  Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu. What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list the status of all the Solaris Zones using the following command: # zoneadm list -civ Once the Zones are in the running state you can log in to a Zone using the following command: # zlogin zone1 Troubleshooting Automated Installations. If an installation to a client system fails, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.     Required datasets are not configured in the global zone. For more troubleshooting information see “Installing Oracle Solaris 11 Systems”. Conclusion. This paper demonstrated the benefits of using the Automated Install server to simplify the Non-Global Zones setup, including the creation and configuration of the Global Zone manifest and the Solaris Zones profiles.
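After the post-install reboot, a quick smoke test can confirm that both zones came up; a sketch built from the commands quoted in the article (the tail line count is an arbitrary choice):

    # confirm both zones reached the running state
    zoneadm list -civ

    # run a trivial command in each zone
    for z in zone1 zone2; do
        zlogin "$z" hostname
    done

    # on failure, inspect the client-side install log
    tail -50 /system/volatile/install_log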

    Read the article

  • Special thanks to everyone that helped me in 2010.

    - by mbcrump
    2010 has been a very good year for me, and I wanted to create a list and thank everyone for what they have done for me.  I also wanted to thank everyone for reading and subscribing to my blog. It is hard to believe that people actually want to read what I write. I feel like I owe a huge thanks to everyone listed below. Looking back upon 2010, I feel that I’ve grown as a developer, and you are part of that reason. Sometimes we get caught up in day-to-day work and forget to give thanks to those that helped us along the way. The list below is mine; it includes people and companies. This list is obviously not going to include everyone that has helped, just those that have stood out in my mind. When I think back upon 2010, their names keep popping up in my head. So here goes, in no particular order.  People Dave Campbell – For everything he has done for the Silverlight community with his Silverlight Cream blog. I can’t think of a better person to get recognition at the Silverlight FireStarter event. I also wanted to thank him for spending several hours of his time helping me track down a bug in my FeedBurner account. Victor Gaudioso – For his large collection of video tutorials on his blog and the passion and enthusiasm he has for Silverlight. We have talked on the phone and I’ve never met anyone so fired up about Silverlight. Kunal Chowdhury – Kunal has always been available for me to bounce ideas off of. Kunal has also answered a lot of questions that stumped me. His blog and CodeProject articles have been a great help to me and the Silverlight community. Glen Gordon – I was looking frantically for a Windows Phone 7 several months before release, and Glen found one for me. This allowed me to start a blog series on the Windows Phone 7 hardware and on developing an application from start to finish, which Scott Guthrie retweeted.  Jeff Blankenburg – For listening to my complaints in the early stages of Windows Phone 7. Jeff was always very polite and gave me his cell phone number to talk it over. He also walked me through several problems that I was having early on. Pete Brown – For writing Silverlight 4 in Action. This book is definitely a labor of love. I followed Pete on Twitter as he was writing it, and he spent a lot of late nights and weekends working on it. I felt a lot smarter after reading it the first time. The second time was even better. John Papa – For all of his work on the Silverlight FireStarter and the Silverlight community in general. He has also helped me on a personal level with several things. Daniel Heisler – For putting up with me the past year while we worked on many .NET projects together in 2010. Alvin Ashcraft – For publishing a daily blog post with the best of .NET links. He has linked to my site many times, and I really appreciate what he does for the community. Chris Alcock – For publishing the Morning Brew every weekday. I remember when I first appeared on his site; I started getting hundreds of hits and wondered if I was under a DoS attack or something. It was great to find out that Chris had linked to one of my articles. Joel Cochran – For spending a week teaching “Blend-O-Rama”. This was one of my favorite sessions of the year. I learned a lot about Expression Blend from it, and the best part was that it was free and during lunchtime. Jeremy Likness – Jeremy is smart – very smart. I have learned a lot from Jeremy over the past year.
He is also involved in the Silverlight community in every way possible, from forums to blog posts to screencasts to open source. It goes on and on. The people that I met at VSLive Orlando 2010 – I had a great time chatting with Walt Ritscher, Wallace McClure, Tim Huckabee and David Platt. Also a special thanks to all of my friends on Twitter like @wilhil, @DBVaughan, @DataArtist, @wbm, @DirkStrauss and @rsringeri, and many, many more. Software Companies / Events / Many of them gave me FREE stuff. =) Microsoft (3) – I was sent a free coupon code by Microsoft to take the Silverlight 4 Beta Exam. I jumped on the offer and took the exam. It was great being selected to try out the exam before it went public, even though Microsoft eventually published a universal coupon code for everyone. I am still waiting to find out if I passed the exam. My fingers are crossed. Microsoft also reached out to me with some questions regarding the .NET community; I’ve never had a company contact me with such interest in the community. And Microsoft held a contest where 75 people could win a $100 gift certificate and a T-shirt for submitting a Windows Phone 7 app; I submitted my app and won. Add to that all of the free launch events this year (Windows Phone 7, Visual Studio 2010, ASP.NET MVC). Wintellect – For providing an awesome day of free technical training called T.E.N. Where else can you get free training from some of the best programmers in the world? I also won a contest from them that included a NETAdvantage Ultimate License from Infragistics. VSLive – I attended the Orlando 2010 conference, and it was the best developer conference that I have ever attended. I got to know a lot of people at this conference and hung out with many wonderful speakers. I live-tweeted the event, and while it may have annoyed some, the organizers of VSLive loved it. I won the contest on Twitter, and they invited me back to the 2011 session of my choice. This is a very nice gift and I really appreciate the generosity. BarcodeLib.com – For providing free barcode-generating tools for a non-profit ASP.NET project that I was working on. Their third-party controls really made this a breeze compared to my existing solution. NDepend – It is absolutely the best tool to improve code quality. The product is extremely deep, and I would recommend heading over to their site to check it out. Silverlight Spy – I was writing a blog post on Silverlight Spy, and Koen Zwikstra provided a FREE license to me. If you ever wanted to peek inside a Silverlight application, then this is the tool for you. He is also working on a version that will support OOB and Windows Phone 7. I would recommend checking out his site. Birmingham .NET Users Group / Silverlight Nights User Group – It takes a lot of time to put together a user group meeting every month, yet it always seems to happen. I don’t want to name names for fear of leaving someone out, but both of these user groups are excellent if you live in the Birmingham, Alabama area. Publishing Companies Manning Publishing – For giving me early access to Silverlight 4 in Action by Pete Brown. It was really nice to be able to read this awesome book while Pete was writing it. I was also one of the first people to publish a review of the book. Sams Publishing and DZone – For providing a copy of Silverlight 4 Unleashed by Laurent Bugnion for me to review for their site. The review is coming in January 2011.
Special Shoutout to the following 3rd Party Silverlight Controls It has been a great pleasure to work with the following companies on 3rd Party Control Giveaways every month. It always amazes me how every 3rd Party Control company is so eager to help out the community. I’ve never been turned down by any of these companies! These giveaways have sparked a lot of interest in Silverlight and hopefully I can continue giving away a new set every month. If you are a 3rd Party Control company and are interested in participating in these giveaways then please email me at mbcrump29[at]gmail[d0t].com. The companies below have already participated in my giveaways: Infragistics (December 2010) - Win a set of Infragistics Silverlight Controls with Data Visualization!  Mindscape (November 2010) - Mindscape Silverlight Controls + Free Mega Pack Contest Telerik (October 2010) - Win Telerik RadControls for Silverlight! ($799 Value) Again, I just wanted to say Thanks to everyone for helping me grow as a developer.  Subscribe to my feed

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technologies, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, play a predominant role in our lives. More and more technologies are used to analyze and track our behavior, to try to detect patterns, and to offer us “the best/right user experience”, from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat, analyze and understand the trends and offer new services to the users. Some of these “data feeds” are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the “collocation of compute nodes with the data”, which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let’s say with Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a “lab-like” implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL Connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server. Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the Java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1. Modify your .bash_profile: export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile). Set up ssh trust for the Hadoop process; this is a mandatory step. In our case we have to establish a “local trust”, as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a “pseudo-Hadoop cluster” in what is called “local standalone mode”; all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
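For reference, the single-node settings that the next paragraph asks you to make under $HADOOP_HOME/conf could look roughly like this. This is a sketch, not the author's exact files: the hdfs://localhost:19000 name-node address is inferred from the location files shown later in this article, the data directory from the /u01/hadoop-1.2.1/data layout mentioned below, and the property names are the standard Hadoop 1.x ones.

    # core-site.xml: name-node address and working directory
    cat > $HADOOP_HOME/conf/core-site.xml <<'EOF'
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:19000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/u01/hadoop-1.2.1/data</value>
      </property>
    </configuration>
    EOF

    # hdfs-site.xml: a single node cannot replicate blocks
    cat > $HADOOP_HOME/conf/hdfs-site.xml <<'EOF'
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>
    EOF

    # format the HDFS mount point, then start the HDFS daemons
    hadoop namenode -format
    start-dfs.sh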
We need to “fine tune” some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the hadoop binaries are referenced correctly from the command line by executing: hadoop version As Hadoop is managing our “clustered HDFS” file system, we have to create “the mount point” and format it; the mount point is declared in core-site.xml. The layout under /u01/hadoop-1.2.1/data will be created and used by the other hadoop components (MapReduce = /mapred/...); HDFS uses the /dfs/... layout structure. Format the HDFS hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration. Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You could create and use a Hive DB, but in our case we will make a simple integration of “raw data”, through the creation of an external table on a local Oracle instance (on the same Linux box we run the Hadoop HDFS one-node cluster and one Oracle DB). Download some public “big data”; I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below). Modify your environment in order to access the connector libraries, and make the following test: [oracle@dg1 bin]$ ./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI. Start the Oracle database and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable: hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute Then test the creation of the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password: The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from SQL Developer: and finally the number of lines in the Oracle table, imported from our Hadoop HDFS cluster: SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle GoldenGate 11g: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He worked in the banking sector and for ATT and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA, Identity/Security, GoldenGate, Virtualization and the Unified Communications Suite throughout the EMEA region.

    Read the article

  • CodePlex Daily Summary for Sunday, December 19, 2010

    CodePlex Daily Summary for Sunday, December 19, 2010Popular ReleasesTeam Foundation Server Administration Tool: 2.1: TFS Administration Tool 2.1, is the first version of the TFS Administration Tool which is built on top of the Team Foundation Server 2010 object model. TFS Administration Tool 2.1 can be installed on machines that are running either Team Explorer 2010, or Team Foundation Server 2010.SQL Monitor: SQL Monitor 3.0 alpha 5: 1. fix a problem with not expanding nodes in object explorer, sorryHacker Passwords: HackerPasswords.zip: Source code, executable and documentationWatchersNET.SiteMap: WatchersNET.SiteMap 01.03.03: Whats NewSkin Object: You can now filter by Terms for Example use: <object id="dnnSITEMAPSL" codetype="dotnetnuke/server" codebase="SITEMAPSL"> <param name="TaxMode" value="terms" /> <param name="TaxTerms" value="TermName1,TermName2" /> </object> changes Tax Term Filter should work correct nowSubtitleTools: SubtitleTools 1.3: - Added .srt FileAssociation & Win7 ShowRecentCategory feature. - Applied UnifiedYeKe to fix Persian search problems. - Reduced file size of Persian subtitles for uploading @OSDB.EnhSim: EnhSim 2.2.3 ALPHA: 2.2.3 ALPHAThis release adds in the changes for 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in th...Facebook C# SDK: 4.1.0: - Lots of bug fixes - Removed Dynamic Runtime Language dependencies from non-dynamic platforms. - Samples included in release for ASP.NET, MVC, Silverlight, Windows Phone 7, WPF, WinForms, and one Visual Basic Sample - Changed internal serialization to use Json.net - BREAKING CHANGE: Canvas Session is no longer support. Use Signed Request instead. Canvas Session has been deprecated by Facebook. - BREAKING CHANGE: Some renames and changes with Authorizer, CanvasAuthorizer, and Authorization ac...NuGet (formerly NuPack): NuGet 1.0 build 11217.102: Note: this release is slightly newer than RC1, and fixes a couple issues relating to updating packages to newer versions. NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (h...WCF Community Site: WCF Web APIs 10.12.17: Welcome to the second release of WCF Web APIs on codeplex Here is what is new in this release. WCF Support for jQuery - create WCF web services that are easy to consume from JavaScript clients, in particular jQuery. Better support for using JsonValue as dynamic Support for JsonValue change notification events for databinding and other purposes Support for going between JsonValue and CLR types WCF HTTP - create HTTP / REST based web services. 
This is a minor release which contains fixe...Orchard Project: Orchard 0.9: Orchard Release Notes Build: 0.9.253 Published: 12/16/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard-Using-Web-PI.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.0.9.253.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orch...Pyxis 2: Beta 2.2: An issue stopping the Checkbox from working properly has been resolved (gradient background has also been added) A TabDialog control has been added NumericUpDown control added A driver for the VS1053 has been added (not yet in use) PyxisAPI.OpenFile now works with all its features PyxisAPI.SaveFile now works with all features Settings window is now available from the Pyxis menu You can now connect using Static IP or DHCP You can now set your system time from the Settings windo...sgMotion Animation Library: sgMotion v1.3 for SunBurn 2.0.9 [Includes Sample]: sgMotion 1.3 release for use with SunBurn 2.0.9 (all editions). Includes example project with assets, and full Windows and Xbox support. (tested on all platforms)DotSpatial: DotSpatial 12-15-2010: This release contains a few minor bug fixes and hopefully the GDAL libraries for the 3.5 x86 build actually built to the correct directory this time.DotNetNuke® Community Edition: 05.06.01 Beta: This is the initial Beta of DotNetNuke 5.6.1. See the DotNetNuke Roadmap a full list of changes in this release.MSBuild Extension Pack: December 2010: Release Blog Post The MSBuild Extension Pack December 2010 release provides a collection of over 380 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...TweetSharp: TweetSharp v2.0.0.0 - Preview 5: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 5 ChangesMaintenance release with user reported fixes Preview 4 ChangesReintroduced fluent interface support via satellite assembly Added entities support, entity segmentation, and ITweetable/ITweeter interfaces for client development Numerous fixes reported by preview users Preview 3 ChangesNumerous ...Silverlight Contrib: Silverlight Contrib 2010.1.0: 2010.1.0 New FeaturesCompatibility Release for Silverlight 4 and Visual Studio 2010FlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. 
To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010New ProjectsANLP - Another .NET Lexer Parser: This project aims to have a lexer/parser working in Silverlight and help people to write their own grammar and make the lexer/parser available in Silverlight.Better App Tab Shortcuts Firefox Extension: An extension for Firefox 4.0 which improves the shortcuts for selecting tabs.BizTalk Map Converter: Converts BizTalk Maps into ExcelCampo Minado: Campo minado em SilverlightCSharpHASH: Hash project is an college project conserning business software development. Written in C#.DS_HW1: hw1DynamicViewModel: MVVM using POCOs with .NET 4.0: This project aims to provide a way to implement the Model View ViewModel (MVVM) architectural pattern using Plain Old CLR Objects (POCOs) while taking full advantage of .NET 4.0 DynamicObject Class.Evolution Game: Evolution game that uses genetic algorithms to visualize life of different species.Fluent Parser: Fluent Parser is a library allowing to build a syntaxic analyser in C#. The grammar is built using only .NET objects into the form of a Parsing Expression Grammar (PEG).GoogleMusic Player: Choose and play music from newly launched google Music http://www.google.co.in/music create and save playlists Using C#Gracefully update SharePoint 2010 document metadata: Users want to update the metadata for a document in a document library, for which they don’t have access or if they wanted to update some read-only fields. Hacker Passwords: Hacker Passwords generator - free open sourceHTML5 Canvas FlowView: FlowView is a simple to use API for creating/implementing a flowview style element in HTML5/Canvas. The FlowView looks much like something from Apple's cover view.L#: A F# program to provide similar functionalities offered by Prolog for F# oriented applications.MapInfo Tools: Utilities for working with MapInfo files.Notepad X: Notepad X is an alternative open source text editor for Microsoft Windows, with a lot of customization options created to help users managing text documents, featuring tab navigation.OpenFlyClient: An educational open source of a client written in C# for FlyFF. This is for educational and evaluation purposes only. Using this for private servers is not allowed.Orkut API Library: Orkut API Library (more of a wrapper) for .NET makes it easier for .NET developers creating web and/or desktop applications to leverage Orkut OpenSocial API from within the familiar Visual Studio environment. PasteHtml.NET: PasteHtml.NET is a .NET API for using the PasteHtml.com service.Photon: Silverlight MVVM Framework RSATasync: Running analyses in RiskSpectrum PSA Professional software is a time-consuming task. RSATasync is a middleware, which uses .NET4 Parallel Extensions to parallelize analyses sent by the editor to the analyzer in the RiskSpectrum software.The Killie01 Project: all killie01 projects (opensource)United Nations News for Windows Phone 7: Open source project for United Nations News for Windows Phone 7. WCF SIP Stack: WCF SIPWordpress .NET: WPDotnet is a library to ease the Wordpress integration with ASP.NET. Some Server controls are implemented. If you don't want to use them, you can just use the classes.

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group’s C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one had thought to second-guess or re-test them in a long time.  It’s yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase: Prefer StringBuilder or string.Format() to string concatenation. Prefer string.Equals() with case-insensitive option to string.ToUpper().Equals(). Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it’s always worth repeating and trying these yourself.  So let’s dig in. The first test was a pretty standard one.  When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()? So before we begin, I read in a number of iterations from the console and a length of each string to generate.  Then I generate that many random strings of the given length and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don’t want garbage collection to collect the strings, hence the array to keep hold of them.  I also didn’t want the random strings to be part of the allocation, so I pre-allocate them and the array up front before the snapshot.  So in the code snippets below: num – Number of iterations. strings – Array of randomly generated strings. results – Array to hold the results of the concatenation tests. timer – A System.Diagnostics.Stopwatch() instance to time code execution. start – Beginning memory size. stop – Ending memory size. after – Memory size after final GC. So first, let’s look at the concatenation loop: 1: // build num strings using concatenation. 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = "This is test #" + i + " with a result of " + strings[i]; 5: } Pretty standard, right?  Next for string.Format(): 1: // build strings using string.Format() 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]); 5: }   Finally, StringBuilder: 1: // build strings using StringBuilder 2: for (int i = 0; i < num; i++) 3: { 4: var builder = new StringBuilder(); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: } So I take each of these loops and time them using a block like this: 1: // get the total amount of memory used, true tells it to run GC first. 2: start = System.GC.GetTotalMemory(true); 3:  4: // restart the timer 5: timer.Reset(); 6: timer.Start(); 7:  8: // *** code to time and measure goes here. *** 9:  10: // get the current amount of memory, stop the timer, then get memory after GC. 11: stop = System.GC.GetTotalMemory(false); 12: timer.Stop(); 13: other = System.GC.GetTotalMemory(true); So let’s look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations: 1: Operator + - Time: 547, Memory: 56104540/55595960 - 500000 2: string.Format() - Time: 749, Memory: 57295812/55595960 - 500000 3: StringBuilder - Time: 608, Memory: 55312888/55595960 – 500000   Egad!  
string.Format brings up the rear and + triumphs, well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things: StringBuilder is not always the panacea many think it is. The difference between any of the three is minuscule!  The second point is extremely important!  You will often hear people who will grasp at results and say, “look, operator + is 10% faster than StringBuilder so always use +.”  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:   1: for (int i = 0; i < num; i++) 2: { 3: // pre-declare StringBuilder to have 100 char buffer. 4: var builder = new StringBuilder(100); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: }   Now let’s look at the times: 1: Operator + - Time: 551, Memory: 56104412/55595960 - 500000 2: string.Format() - Time: 753, Memory: 57296484/55595960 - 500000 3: StringBuilder - Time: 525, Memory: 59779156/55595960 - 500000   Whoa!  All of a sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes.  Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That’s why I feel you should always take into account both readability and performance.  After all, consider all these timings are over 500,000 iterations.   That’s at best a 0.0004 ms difference per call, which is negligible.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom: Concatenation (+) is best at concatenating.  StringBuilder is best when you need to build. Format is best at formatting. Totally Earth-shattering, right!  But if you consider it carefully, it actually has a lot of beauty in its simplicity.  Remember, there is no magic bullet.  If one of these always beat the others we’d only have one and not three choices. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length.  Use it in those times when you are looping till you hit a stop condition and building a result and it won’t steer you wrong. String.Format seems to be the loser from the stats, but consider which of these is more readable.  Yes, ignore the fact that you could do this with ToString() on a DateTime.  1: // build a date via concatenation 2: var date1 = (month < 10 ? "0" : string.Empty) + month + '/' 3: + (day < 10 ? "0" : string.Empty) + day + '/' + year; 4:  5: // build a date via string builder 6: var builder = new StringBuilder(10); 7: if (month < 10) builder.Append('0'); 8: builder.Append(month); 9: builder.Append('/'); 10: if (day < 10) builder.Append('0'); 11: builder.Append(day); 12: builder.Append('/'); 13: builder.Append(year); 14: var date2 = builder.ToString(); 15:  16: // build a date via string.Format 17: var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year); 18:  So the strength of string.Format is that it makes constructing a formatted string easy to read.  Yes, it’s slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don’t look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end because you’re sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization.  They’ve been stated before in many forms, but here’s how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It’s so true.  Most of the time on today’s modern hardware, a micro-second optimization at the expense of readability will net you nothing because it won’t be your bottleneck.  Code for readability, choose the best tool for the job which will usually be the most readable and maintainable as well.  Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
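As a footnote, the second standard from the top of this post (prefer string.Equals() with a case-insensitive option to string.ToUpper().Equals()) deserves its own tiny example. A minimal sketch, with variable names of my own choosing:

    string first = "Hello", second = "HELLO";

    // the discouraged form: allocates two temporary uppercase strings
    // just to compare them and throw them away
    bool same1 = first.ToUpper().Equals(second.ToUpper());

    // the preferred form: no intermediate allocations, and the comparison
    // rules are explicit instead of implied by the current culture
    bool same2 = string.Equals(first, second, StringComparison.OrdinalIgnoreCase);

Same readability argument as above: the second form also states your intent directly.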

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid!  This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you’ve done it a few times.  Login to the e-delivery site, and accept the license terms and restrictions.  Then, you’ll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you’ll see a list of applicable products, and you’ll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4): Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I’d like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in  ~/docupresentment/docserv.  Forging onward, the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf) which goes into some detail of the rubric of the configuration file and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following that is <entry name=”StartArguments”> and as you might imagine these are the arguments passed to the executable.  A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these acts independently of the others, so if one crashes, it does not affect the rest.  In the case of a crashed process, the watchdog will start up another instance so the configured number of instances is always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!  
Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits to check these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds and a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###
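To recap the settings covered in this post, here is a skeletal sketch of the DocumentServer section. The StartCommand value and the 60-second purge interval are illustrative placeholders (the post doesn't name the executable or a recommended purge time), not authoritative defaults, so check your own docserv.xml:

    <configuration>
      <section name="DocumentServer">
        <!-- executable launched when Docupresentment starts (placeholder value) -->
        <entry name="StartCommand">java</entry>
        <!-- arguments passed to that executable -->
        <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
        <!-- number of Docupresentment processes the watchdog keeps alive -->
        <entry name="Instances">2</entry>
        <!-- how often (in ms) config files are checked for changes; 0 disables the check -->
        <entry name="FileWatchTimeMillis">12000</entry>
        <!-- how often (in seconds) the file cache is inspected for expired temporary files -->
        <entry name="FilePurgeTimeSeconds">60</entry>
        <!-- root name of the per-instance file cache -->
        <entry name="FilePurgeList">filecache.properties</entry>
      </section>
    </configuration>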

    Read the article

  • CodePlex Daily Summary for Monday, June 20, 2011

    CodePlex Daily Summary for Monday, June 20, 2011Popular ReleasesBlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. Also, please note at this time the Mercurial source code repository on the Source Code tab is having issues, but will be fixed shortly. I...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.Media Companion: MC 3.408b weekly: Some minor fixes..... Fixed messagebox coming up during batch scrape Added <originaltitle></originaltitle> to movie nfo's - when a new movie is scraped, the original title will be set as the returned title. The end user can of course change the title in MC, however the original title will not change. The original title can be seen by hovering over the movie title in the right pane of the main movie tab. To update all of your current nfo's to add the original title the same as your current ...NLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesGendering Add-In for Microsoft Office Word 2010: Gendering Add-In: This is the first stable Version of the Gendering Add-In. Unzip the package and start "setup.exe". The .reg file shows how to config an alternate path for suggestion table.TerrariViewer: TerrariViewer v3.1 [Terraria Inventory Editor]: This version adds tool tips. Almost every picture box you mouse over will tell you what item is in that box. I have also cleaned up the GUI a little more to make things easier on my end. There are various bug fixes including ones associated with opening different characters in the same instance of the program. As always, please bring any bugs you find to my attention.Kinect Paint: KinectPaint V1.0: This is version 1.0 of Kinect Paint. To run it, follow the steps: Install the Kinect SDK for Windows (available at http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/download.aspx) Connect your Kinect device to the computer and to the power. Download the Zip file. 
Unblock the Zip file by right clicking on it, and pressing the Unblock button in the file properties (if available). Extract the content of the Zip file. Run KinectPaint.exe.CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Beta: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples CommonLibrary.NET 0.9.7Documentation 6738 6503 New 6535 Enhancements 6583 6737DropBox Linker: DropBox Linker 1.2: Public sub-folders are now monitored for changes as well (thanks to mcm69) Automatic public sync folder detection (thanks to mcm69) Non-Latin and special characters encoded correctly in URLs Pop-ups are now slot-based (use first free slot and will never be overlapped — test it while previewing timeout) Public sync folder setting is hidden when auto-detected Timeout interval is displayed in popup previews A lot of major and minor code refactoring performed .NET Framework 4.0 Client...Terraria World Viewer: Version 1.3.1: Update June 20th Removed "Draw Markers" checkbox from main window because of redundancy/confusing. (Select all or no items from the Settings tab for the same effect.) Fixed Marker preferences not being saved. It is now possible to render more than one map without having to restart the application. World file will not be locked while the world is being rendered. Note: The World Viewer might render an inaccurate map or even crash if Terraria decides to modify the World file during the pro...MVC Controls Toolkit: Mvc Controls Toolkit 1.1.5 RC: Added Extended Dropdown allows a prompt item to be inserted as first element. RequiredAttribute, if present, trggers if no element is chosen Client side javascript function to set/get the values of DateTimeInput, TypedTextBox, TypedEditDisplay, and to bind/unbind a "change" handler The selected page in the pager is applied the attribute selected-page="selected" that can be used in the definition of CSS rules to style the selected page items controls now interpret a null value as an empr...Umbraco CMS: Umbraco CMS 5.0 CTP 1: Umbraco 5 Community Technology Preview Umbraco 5 will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out our first CTP of version 5 today! If you're new to Umbraco and would like to get a quick low-down on our popular and easy-to-learn approach to content management, check out our intro video here. What's in the v5 CTP box? This is a preview version of version 5 and includes support for the following familiar Umbr...Ribbon Browser for Microsoft Dynamics CRM 2011: Ribbon Browser (1.0.514.30): Initial releaseCoding4Fun Kinect Toolkit: Coding4Fun.Kinect Toolkit: Version 1.0Kinect Mouse Cursor: Kinect Mouse Cursor v1.0: The initial release of the Kinect Mouse Cursor project!patterns & practices: Project Silk: Project Silk Community Drop 11 - June 14, 2011: Changes from previous drop: Many code changes: please see the readme.mht for details. New "Client Data Management and Caching" chapter. Updated "Application Notifications" chapter. Updated "Architecture" chapter. Updated "jQuery UI Widget" chapter. Updated "Widget QuickStart" appendix and code. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. 
The PDF is provided as a separat...Orchard Project: Orchard 1.2: Build: 1.2.41 Published: 6/14/2010 How to Install Orchard To install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx. Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.2.41.zip file. http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx The zip contents are pre-built and ready-to-run. Simply extract the contents o...Snippet Designer: Snippet Designer 1.4.0: Snippet Designer 1.4.0 for Visual Studio 2010 Change logSnippet Explorer ChangesReworked language filter UI to work better in the side bar. Added result count drop down which lets you choose how many results to see. Language filter and result count choices are persisted after Visual Studio is closed. Added file name to search criteria. Search is now case insensitive. Snippet Editor Changes Snippet Editor ChangesAdded menu option for the $end$ symbol which indicates where the c...New Projects.NET Dev Tools Dashboard: .NET Dev Tools Dashboard is a WPF application that implements the Prism 4 standards. Modules can be added at runtime and after install of the base Dashboard shell. The first module is a WPF Gui for NAnt based on Colin Svingen's Gui for NAnt open source app for Windows Forms.Chutzpah - A JavaScript Test Runner: Chutzpah helps you integrate JavaScript unit testing into your website. It enables you to run JavaScript unit tests from the command line and from inside of Visual Studio. It also supports running in the TeamCity continuous integration server.Cow connect: The goal of the Cow connect project is to write a tool that keeps different databases of various herd-management tools (e.g. Helm Multikuh, Westfalia C21, Holdi, …) in sync, or generates a new database.Cqrs CWDNUG Demo: A very simple CQRS demo for a presentation at the Canary Wharf Dot Net User Group. Uses Ncqrs, ASP.NET MVC 3. Not best practices - just something to get your teeth into if you're new to CQRS and Event Sourcing.Down123: down123EssionCalendar: EssionCal EssionCalGuardian: Protects reference type parameters and return values from NULL values.Honest Flex Time Management System: A Japanese time management system to handle worker hourlies. Will include some basic reporting functions as well.iStatServer.NET: Server implementation for iPhone iStat app. Windows analogue to iStat Server on OS x or iStatd on linux/solaris. Keolis Service Wrapper: Keolis Service WrapperKinectContrib: A set of extensions and helpers that will facilitate the use of the Microsoft Kinect motion sensor with the Kinect SDK.kingdomwp7: kingdomwp7Laboratório de Engenharia de Software - Projeto: Created to study and apply new web technologies.Multi Touch Digit OCR With Matlab Neural Network Wpf Project: Multi Touch Digit OCR Project is a WPF project that works on multi-touch devices but also works well on normal devices; it uses a MATLAB core that creates 4 feed-forward neural networks and trains them with the back-propagation algorithm to detect the digits you draw.PhoneGap Extensions - Alpha Version: This project was created to solve some issues about PhoneGap. The first issue to solve is how to reuse HTML code through include files. 
SABnzbd UI: This application is a Windows 7 enabled UI for SABnzbd, so that you don't have to navigate to the website, but instead you can view a slick UI on your desktop.SP:Social: A community project for providing integration points between SharePoint and other social networks like Facebook, LinkedIn, Live etc.Whisk: An extremely simplistic but cross-platform network rendering environment for Blender.Windows phone 7, Game framewok: A game framework for Windows Phone 7.WP7 - Multi Select Wrapped ListBox: It's a WP 7.1 project which contains a list box that can wrap ListBoxItems and can scroll vertically.

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally published in October 2012. One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer: "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and the introduction of systems like the current SPARC servers, including the T4 and T5 systems, the Oracle SuperCluster T5-8 and the Oracle SuperCluster M6-32, provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below will discuss the criteria for choosing between domain types. Review: division of labor and types of domain Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with its own role: Control domain - the management control point for the server; it runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason not to leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones. I/O domain - a domain that has been assigned physical I/O devices. The devices may be: one or more PCIe root complexes (in which case the domain is also called a root complex domain); the domain has native access to all the devices on the assigned PCIe buses, and the devices can be any device type supported by Solaris on the hardware platform. an SR-IOV (Single-Root I/O Virtualization) function; SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices. direct I/O ownership of one or more PCI devices residing in a PCIe bus slot; the domain has direct access to the individual devices. An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices. Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out. 
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Device considerations Consider the following when choosing between virtual devices and physical devices: Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them, up to a per-domain device limit. Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains. On the other hand: Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster. Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity. Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices. In some cases the lines are now blurred: virtual devices have better performance than previously; starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O. You can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to be a service domain too, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure. Guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or of an I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption, as sketched below. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. 
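As a concrete illustration of that multiple-service-domain pattern (a sketch only; the domain, volume, and device names are invented), a guest can be given a single virtual disk whose backend is exported by two different service domains through an mpgroup, so the failure or planned reboot of either service domain does not take the disk away:

    # export the same shared LUN from two service domains into one multipathing group
    ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c9t0d0s2 vol1@primary-vds0
    ldm add-vdsdev mpgroup=mpgrp1 /dev/dsk/c9t0d0s2 vol1@secondary-vds0

    # the guest sees one disk; failover between the two paths is automatic
    ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1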
Changing application deployment patterns The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high-performance applications with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly many I/O domains that are not service domains. They use their I/O connectivity for the performance of their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC. To recap: Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain. I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains, each owning its own bus. Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage. The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications, optimized for performance. These are recommended "best practices" based on conversations with a number of Oracle architects. 
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements. Summary Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.
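For reference, the default configuration recommended above, a guest domain with purely virtual I/O served from the control domain, takes only a handful of ldm commands (a sketch; the names and sizes are invented):

    ldm add-domain ldg1                                  # define the guest domain
    ldm set-vcpu 8 ldg1                                  # assign 8 CPU threads
    ldm set-memory 8G ldg1                               # assign 8 GB of memory
    ldm add-vnet vnet1 primary-vsw0 ldg1                 # virtual NIC on the control domain's switch
    ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0   # export a disk backend from the service domain
    ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1          # attach it to the guest as a virtual disk
    ldm bind ldg1                                        # bind resources to the domain
    ldm start ldg1                                       # boot the guest

Because every device in this domain is virtual, it remains eligible for live migration.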

    Read the article

  • How to mount a blu-ray drive?

    - by Stephan Schielke
    Maybe it is for the best to close this question. This has nothing to do with a bluray drive in general anymore. Probably a hardware defect. I will try to test it with a windows system and different cables again... Thx so far. I have a bluray/dvd/cdrom drive with SATA. Ubuntu wont find it under /dev/sd wodim --devices wodim: Overview of accessible drives (1 found) : ------------------------------------------------------------------------- 0 dev='/dev/sg2' rwrw-- : 'HL-DT-ST' 'BDDVDRW CH08LS10' ------------------------------------------------------------------------- cdrecord -scanbus scsibus2: 2,0,0 200) 'HL-DT-ST' 'BDDVDRW CH08LS10' '2.00' Removable CD-ROM fdisk dont even lists it. Ubuntu only automounts blank DVDs, but neither CDROM nor Blurays. I also changed the sata slot, sata cable and the power cable. The drive works with a windows system. This happens when I try to mount: sudo mount -t auto /dev/scd0 /media/bluray mount: you must specify the filesystem type I tried all filesystems there are. I also installed makemkv. It finds the drive but not the disc. Here is my /dev ls -al /dev total 12 drwxr-xr-x 17 root root 4420 2011-11-25 19:36 . drwxr-xr-x 28 root root 4096 2011-11-25 07:12 .. crw------- 1 root root 10, 235 2011-11-25 19:28 autofs -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab -rw-r--r-- 1 root root 630 2011-11-25 19:28 .blkid.tab.old drwxr-xr-x 2 root root 700 2011-11-25 19:27 block drwxr-xr-x 2 root root 100 2011-11-25 19:27 bsg crw------- 1 root root 10, 234 2011-11-25 19:28 btrfs-control drwxr-xr-x 3 root root 60 2011-11-25 19:27 bus drwxr-xr-x 2 root root 3820 2011-11-25 19:28 char crw------- 1 root root 5, 1 2011-11-25 19:28 console lrwxrwxrwx 1 root root 11 2011-11-25 19:28 core -> /proc/kcore drwxr-xr-x 2 root root 60 2011-11-25 19:28 cpu crw------- 1 root root 10, 60 2011-11-25 19:28 cpu_dma_latency drwxr-xr-x 7 root root 140 2011-11-25 19:27 disk crw------- 1 root root 10, 61 2011-11-25 19:28 ecryptfs crw-rw---- 1 root video 29, 0 2011-11-25 19:28 fb0 lrwxrwxrwx 1 root root 13 2011-11-25 19:28 fd -> /proc/self/fd crw-rw-rw- 1 root root 1, 7 2011-11-25 19:28 full crw-rw-rw- 1 root fuse 10, 229 2011-11-25 19:28 fuse crw------- 1 root root 251, 0 2011-11-25 19:28 hidraw0 crw------- 1 root root 251, 1 2011-11-25 19:28 hidraw1 crw------- 1 root root 10, 228 2011-11-25 19:28 hpet lrwxrwxrwx 1 root root 14 2011-11-25 19:27 .initramfs -> /run/initramfs drwxr-xr-x 4 root root 220 2011-11-25 19:28 input crw------- 1 root root 1, 11 2011-11-25 19:28 kmsg srw-rw-rw- 1 root root 0 2011-11-25 19:28 log brw-rw---- 1 root disk 7, 0 2011-11-25 19:28 loop0 brw-rw---- 1 root disk 7, 1 2011-11-25 19:28 loop1 brw-rw---- 1 root disk 7, 2 2011-11-25 19:28 loop2 brw-rw---- 1 root disk 7, 3 2011-11-25 19:28 loop3 brw-rw---- 1 root disk 7, 4 2011-11-25 19:28 loop4 brw-rw---- 1 root disk 7, 5 2011-11-25 19:28 loop5 brw-rw---- 1 root disk 7, 6 2011-11-25 19:28 loop6 brw-rw---- 1 root disk 7, 7 2011-11-25 19:28 loop7 drwxr-xr-x 2 root root 60 2011-11-25 19:27 mapper crw------- 1 root root 10, 227 2011-11-25 19:28 mcelog crw-r----- 1 root kmem 1, 1 2011-11-25 19:28 mem drwxr-xr-x 2 root root 60 2011-11-25 19:27 net crw------- 1 root root 10, 59 2011-11-25 19:28 network_latency crw------- 1 root root 10, 58 2011-11-25 19:28 network_throughput crw-rw-rw- 1 root root 1, 3 2011-11-25 19:28 null crw-rw-rw- 1 root root 195, 0 2011-11-25 19:28 nvidia0 crw-rw-rw- 1 root root 195, 255 2011-11-25 19:28 nvidiactl crw------- 1 root root 1, 12 2011-11-25 19:28 oldmem crw-r----- 1 root kmem 1, 4 
2011-11-25 19:28 port crw------- 1 root root 108, 0 2011-11-25 19:28 ppp crw------- 1 root root 10, 1 2011-11-25 19:28 psaux crw-rw-rw- 1 root tty 5, 2 2011-11-25 20:00 ptmx drwxr-xr-x 2 root root 0 2011-11-25 19:27 pts brw-rw---- 1 root disk 1, 0 2011-11-25 19:28 ram0 brw-rw---- 1 root disk 1, 1 2011-11-25 19:28 ram1 brw-rw---- 1 root disk 1, 10 2011-11-25 19:28 ram10 brw-rw---- 1 root disk 1, 11 2011-11-25 19:28 ram11 brw-rw---- 1 root disk 1, 12 2011-11-25 19:28 ram12 brw-rw---- 1 root disk 1, 13 2011-11-25 19:28 ram13 brw-rw---- 1 root disk 1, 14 2011-11-25 19:28 ram14 brw-rw---- 1 root disk 1, 15 2011-11-25 19:28 ram15 brw-rw---- 1 root disk 1, 2 2011-11-25 19:28 ram2 brw-rw---- 1 root disk 1, 3 2011-11-25 19:28 ram3 brw-rw---- 1 root disk 1, 4 2011-11-25 19:28 ram4 brw-rw---- 1 root disk 1, 5 2011-11-25 19:28 ram5 brw-rw---- 1 root disk 1, 6 2011-11-25 19:28 ram6 brw-rw---- 1 root disk 1, 7 2011-11-25 19:28 ram7 brw-rw---- 1 root disk 1, 8 2011-11-25 19:28 ram8 brw-rw---- 1 root disk 1, 9 2011-11-25 19:28 ram9 crw-rw-rw- 1 root root 1, 8 2011-11-25 19:28 random crw-rw-r--+ 1 root root 10, 62 2011-11-25 19:28 rfkill lrwxrwxrwx 1 root root 4 2011-11-25 19:28 rtc -> rtc0 crw------- 1 root root 254, 0 2011-11-25 19:28 rtc0 lrwxrwxrwx 1 root root 3 2011-11-25 19:38 scd0 -> sr0 brw-rw---- 1 root disk 8, 0 2011-11-25 19:28 sda brw-rw---- 1 root disk 8, 1 2011-11-25 19:28 sda1 brw-rw---- 1 root disk 8, 2 2011-11-25 19:28 sda2 brw-rw---- 1 root disk 8, 3 2011-11-25 19:28 sda3 brw-rw---- 1 root disk 8, 5 2011-11-25 19:28 sda5 brw-rw---- 1 root disk 8, 6 2011-11-25 19:28 sda6 brw-rw---- 1 root disk 8, 16 2011-11-25 19:28 sdb brw-rw---- 1 root disk 8, 17 2011-11-25 19:28 sdb1 crw-rw---- 1 root disk 21, 0 2011-11-25 19:28 sg0 crw-rw---- 1 root disk 21, 1 2011-11-25 19:28 sg1 crw-rw----+ 1 root cdrom 21, 2 2011-11-25 19:28 sg2 lrwxrwxrwx 1 root root 8 2011-11-25 19:28 shm -> /run/shm crw------- 1 root root 10, 231 2011-11-25 19:28 snapshot drwxr-xr-x 4 root root 280 2011-11-25 19:28 snd brw-rw----+ 1 root cdrom 11, 0 2011-11-25 19:38 sr0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stderr -> /proc/self/fd/2 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdin -> /proc/self/fd/0 lrwxrwxrwx 1 root root 15 2011-11-25 19:28 stdout -> /proc/self/fd/1 crw-rw-rw- 1 root tty 5, 0 2011-11-25 19:35 tty crw--w---- 1 root tty 4, 0 2011-11-25 19:28 tty0 crw------- 1 root root 4, 1 2011-11-25 19:28 tty1 crw--w---- 1 root tty 4, 10 2011-11-25 19:28 tty10 crw--w---- 1 root tty 4, 11 2011-11-25 19:28 tty11 crw--w---- 1 root tty 4, 12 2011-11-25 19:28 tty12 crw--w---- 1 root tty 4, 13 2011-11-25 19:28 tty13 crw--w---- 1 root tty 4, 14 2011-11-25 19:28 tty14 crw--w---- 1 root tty 4, 15 2011-11-25 19:28 tty15 crw--w---- 1 root tty 4, 16 2011-11-25 19:28 tty16 crw--w---- 1 root tty 4, 17 2011-11-25 19:28 tty17 crw--w---- 1 root tty 4, 18 2011-11-25 19:28 tty18 crw--w---- 1 root tty 4, 19 2011-11-25 19:28 tty19 crw------- 1 root root 4, 2 2011-11-25 19:28 tty2 crw--w---- 1 root tty 4, 20 2011-11-25 19:28 tty20 crw--w---- 1 root tty 4, 21 2011-11-25 19:28 tty21 crw--w---- 1 root tty 4, 22 2011-11-25 19:28 tty22 crw--w---- 1 root tty 4, 23 2011-11-25 19:28 tty23 crw--w---- 1 root tty 4, 24 2011-11-25 19:28 tty24 crw--w---- 1 root tty 4, 25 2011-11-25 19:28 tty25 crw--w---- 1 root tty 4, 26 2011-11-25 19:28 tty26 crw--w---- 1 root tty 4, 27 2011-11-25 19:28 tty27 crw--w---- 1 root tty 4, 28 2011-11-25 19:28 tty28 crw--w---- 1 root tty 4, 29 2011-11-25 19:28 tty29 crw------- 1 root root 4, 3 2011-11-25 19:28 tty3 crw--w---- 1 
root tty 4, 30 2011-11-25 19:28 tty30 crw--w---- 1 root tty 4, 31 2011-11-25 19:28 tty31 crw--w---- 1 root tty 4, 32 2011-11-25 19:28 tty32 crw--w---- 1 root tty 4, 33 2011-11-25 19:28 tty33 crw--w---- 1 root tty 4, 34 2011-11-25 19:28 tty34 crw--w---- 1 root tty 4, 35 2011-11-25 19:28 tty35 crw--w---- 1 root tty 4, 36 2011-11-25 19:28 tty36 crw--w---- 1 root tty 4, 37 2011-11-25 19:28 tty37 crw--w---- 1 root tty 4, 38 2011-11-25 19:28 tty38 crw--w---- 1 root tty 4, 39 2011-11-25 19:28 tty39 crw------- 1 root root 4, 4 2011-11-25 19:28 tty4 crw--w---- 1 root tty 4, 40 2011-11-25 19:28 tty40 crw--w---- 1 root tty 4, 41 2011-11-25 19:28 tty41 crw--w---- 1 root tty 4, 42 2011-11-25 19:28 tty42 crw--w---- 1 root tty 4, 43 2011-11-25 19:28 tty43 crw--w---- 1 root tty 4, 44 2011-11-25 19:28 tty44 crw--w---- 1 root tty 4, 45 2011-11-25 19:28 tty45 crw--w---- 1 root tty 4, 46 2011-11-25 19:28 tty46 crw--w---- 1 root tty 4, 47 2011-11-25 19:28 tty47 crw--w---- 1 root tty 4, 48 2011-11-25 19:28 tty48 crw--w---- 1 root tty 4, 49 2011-11-25 19:28 tty49 crw------- 1 root root 4, 5 2011-11-25 19:28 tty5 crw--w---- 1 root tty 4, 50 2011-11-25 19:28 tty50 crw--w---- 1 root tty 4, 51 2011-11-25 19:28 tty51 crw--w---- 1 root tty 4, 52 2011-11-25 19:28 tty52 crw--w---- 1 root tty 4, 53 2011-11-25 19:28 tty53 crw--w---- 1 root tty 4, 54 2011-11-25 19:28 tty54 crw--w---- 1 root tty 4, 55 2011-11-25 19:28 tty55 crw--w---- 1 root tty 4, 56 2011-11-25 19:28 tty56 crw--w---- 1 root tty 4, 57 2011-11-25 19:28 tty57 crw--w---- 1 root tty 4, 58 2011-11-25 19:28 tty58 crw--w---- 1 root tty 4, 59 2011-11-25 19:28 tty59 crw------- 1 root root 4, 6 2011-11-25 19:28 tty6 crw--w---- 1 root tty 4, 60 2011-11-25 19:28 tty60 crw--w---- 1 root tty 4, 61 2011-11-25 19:28 tty61 crw--w---- 1 root tty 4, 62 2011-11-25 19:28 tty62 crw--w---- 1 root tty 4, 63 2011-11-25 19:28 tty63 crw--w---- 1 root tty 4, 7 2011-11-25 19:28 tty7 crw--w---- 1 root tty 4, 8 2011-11-25 19:28 tty8 crw--w---- 1 root tty 4, 9 2011-11-25 19:28 tty9 crw------- 1 root root 5, 3 2011-11-25 19:28 ttyprintk crw-rw---- 1 root dialout 4, 64 2011-11-25 19:28 ttyS0 crw-rw---- 1 root dialout 4, 65 2011-11-25 19:28 ttyS1 crw-rw---- 1 root dialout 4, 74 2011-11-25 19:28 ttyS10 crw-rw---- 1 root dialout 4, 75 2011-11-25 19:28 ttyS11 crw-rw---- 1 root dialout 4, 76 2011-11-25 19:28 ttyS12 crw-rw---- 1 root dialout 4, 77 2011-11-25 19:28 ttyS13 crw-rw---- 1 root dialout 4, 78 2011-11-25 19:28 ttyS14 crw-rw---- 1 root dialout 4, 79 2011-11-25 19:28 ttyS15 crw-rw---- 1 root dialout 4, 80 2011-11-25 19:28 ttyS16 crw-rw---- 1 root dialout 4, 81 2011-11-25 19:28 ttyS17 crw-rw---- 1 root dialout 4, 82 2011-11-25 19:28 ttyS18 crw-rw---- 1 root dialout 4, 83 2011-11-25 19:28 ttyS19 crw-rw---- 1 root dialout 4, 66 2011-11-25 19:28 ttyS2 crw-rw---- 1 root dialout 4, 84 2011-11-25 19:28 ttyS20 crw-rw---- 1 root dialout 4, 85 2011-11-25 19:28 ttyS21 crw-rw---- 1 root dialout 4, 86 2011-11-25 19:28 ttyS22 crw-rw---- 1 root dialout 4, 87 2011-11-25 19:28 ttyS23 crw-rw---- 1 root dialout 4, 88 2011-11-25 19:28 ttyS24 crw-rw---- 1 root dialout 4, 89 2011-11-25 19:28 ttyS25 crw-rw---- 1 root dialout 4, 90 2011-11-25 19:28 ttyS26 crw-rw---- 1 root dialout 4, 91 2011-11-25 19:28 ttyS27 crw-rw---- 1 root dialout 4, 92 2011-11-25 19:28 ttyS28 crw-rw---- 1 root dialout 4, 93 2011-11-25 19:28 ttyS29 crw-rw---- 1 root dialout 4, 67 2011-11-25 19:28 ttyS3 crw-rw---- 1 root dialout 4, 94 2011-11-25 19:28 ttyS30 crw-rw---- 1 root dialout 4, 95 2011-11-25 19:28 ttyS31 crw-rw---- 1 root dialout 4, 
crw-rw---- 1 root dialout 4, 68 2011-11-25 19:28 ttyS4
crw-rw---- 1 root dialout 4, 69 2011-11-25 19:28 ttyS5
crw-rw---- 1 root dialout 4, 70 2011-11-25 19:28 ttyS6
crw-rw---- 1 root dialout 4, 71 2011-11-25 19:28 ttyS7
crw-rw---- 1 root dialout 4, 72 2011-11-25 19:28 ttyS8
crw-rw---- 1 root dialout 4, 73 2011-11-25 19:28 ttyS9
drwxr-xr-x 3 root root 60 2011-11-25 19:28 .udev
crw-r----- 1 root root 10, 223 2011-11-25 19:28 uinput
crw-rw-rw- 1 root root 1, 9 2011-11-25 19:28 urandom
drwxr-xr-x 2 root root 60 2011-11-25 19:27 usb
crw------- 1 root root 252, 0 2011-11-25 19:28 usbmon0
crw------- 1 root root 252, 1 2011-11-25 19:28 usbmon1
crw------- 1 root root 252, 2 2011-11-25 19:28 usbmon2
crw------- 1 root root 252, 3 2011-11-25 19:28 usbmon3
crw------- 1 root root 252, 4 2011-11-25 19:28 usbmon4
crw------- 1 root root 252, 5 2011-11-25 19:28 usbmon5
crw------- 1 root root 252, 6 2011-11-25 19:28 usbmon6
crw------- 1 root root 252, 7 2011-11-25 19:28 usbmon7
crw------- 1 root root 252, 8 2011-11-25 19:28 usbmon8
drwxr-xr-x 4 root root 80 2011-11-25 19:28 v4l
crw------- 1 root root 10, 57 2011-11-25 19:28 vboxdrv
crw------- 1 root root 10, 56 2011-11-25 19:28 vboxnetctl
drwxr-x--- 4 root vboxusers 80 2011-11-25 19:28 vboxusb
crw-rw---- 1 root tty 7, 0 2011-11-25 19:28 vcs
crw-rw---- 1 root tty 7, 1 2011-11-25 19:28 vcs1
crw-rw---- 1 root tty 7, 2 2011-11-25 19:28 vcs2
crw-rw---- 1 root tty 7, 3 2011-11-25 19:28 vcs3
crw-rw---- 1 root tty 7, 4 2011-11-25 19:28 vcs4
crw-rw---- 1 root tty 7, 5 2011-11-25 19:28 vcs5
crw-rw---- 1 root tty 7, 6 2011-11-25 19:28 vcs6
crw-rw---- 1 root tty 7, 128 2011-11-25 19:28 vcsa
crw-rw---- 1 root tty 7, 129 2011-11-25 19:28 vcsa1
crw-rw---- 1 root tty 7, 130 2011-11-25 19:28 vcsa2
crw-rw---- 1 root tty 7, 131 2011-11-25 19:28 vcsa3
crw-rw---- 1 root tty 7, 132 2011-11-25 19:28 vcsa4
crw-rw---- 1 root tty 7, 133 2011-11-25 19:28 vcsa5
crw-rw---- 1 root tty 7, 134 2011-11-25 19:28 vcsa6
crw------- 1 root root 10, 63 2011-11-25 19:28 vga_arbiter
crw-rw----+ 1 root video 81, 0 2011-11-25 19:28 video0
crw-rw-rw- 1 root root 1, 5 2011-11-25 19:28 zero

sg_scan -i gives me:

sudo sg_scan -i
/dev/sg0: scsi0 channel=0 id=0 lun=0 [em]
    ATA ST31000524NS SN12 [rmb=0 cmdq=0 pqual=0 pdev=0x0]
/dev/sg1: scsi0 channel=0 id=1 lun=0 [em]
    ATA WDC WD15EADS-00S 01.0 [rmb=0 cmdq=0 pqual=0 pdev=0x0]
/dev/sg2: scsi2 channel=0 id=0 lun=0 [em]
    HL-DT-ST BDDVDRW CH08LS10 2.00 [rmb=1 cmdq=0 pqual=0 pdev=0x5]

sg_map gives me:

/dev/sg0 /dev/sda
/dev/sg1 /dev/sdb
/dev/sg2 /dev/scd0

lsscsi -l gives me:

[0:0:0:0] disk ATA ST31000524NS SN12 /dev/sda
    state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
[0:0:1:0] disk ATA WDC WD15EADS-00S 01.0 /dev/sdb
    state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
[2:0:0:0] cd/dvd HL-DT-ST BDDVDRW CH08LS10 2.00 /dev/sr0
    state=running queue_depth=1 scsi_level=6 type=5 device_blocked=0 timeout=30

My udf module is:

filename: /lib/modules/3.0.0-14-generic/kernel/fs/udf/udf.ko
license: GPL
description: Universal Disk Format Filesystem
author: Ben Fennema
srcversion: 6ABDE012374D96B9685B8E5
depends: crc-itu-t
vermagic: 3.0.0-14-generic SMP mod_unload modversions

Do I need special drivers or kernel modules enabled? Do I need to change some BIOS settings?

Edit: Somehow I am now able to run the mount command without any filesystem errors, but now I get:

mount: no medium found on /dev/sr0
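One way to cross-check a "no medium found" error is to ask the drive itself whether it detects a disc, bypassing mount and the udf module entirely. Below is a minimal C sketch (an illustration, not from the original post) using the Linux CDROM_DRIVE_STATUS ioctl from <linux/cdrom.h>; only the device path /dev/sr0 is taken from the question above:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/cdrom.h>

    int main(void) {
        /* O_NONBLOCK lets us open the drive even when no disc is loaded. */
        int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);
        if (fd < 0) { perror("open /dev/sr0"); return 1; }

        int status = ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT);
        switch (status) {
        case CDS_DISC_OK:         puts("disc present and readable"); break;
        case CDS_NO_DISC:         puts("no disc in drive");          break;
        case CDS_TRAY_OPEN:       puts("tray open");                 break;
        case CDS_DRIVE_NOT_READY: puts("drive not ready");           break;
        default:                  puts("no information (or ioctl error)"); break;
        }
        close(fd);
        return 0;
    }

If this reports CDS_NO_DISC while a disc is loaded, the problem would appear to be below the filesystem layer (drive, cable, or SCSI/ATA driver) rather than in the udf module.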

    Read the article

  • The Internet of Things & Commerce: Part 2 -- Interview with Brian Celenza, Commerce Innovation Strategist

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 2 (of 3) Welcome back to the second installation of my three part series on the Internet of Things & Commerce. A few weeks ago, I wrote “The Next 7,000 Days” about how we’ve become embedded in a digital architecture in the last 7,000 days since the birth of the internet – an architecture that everyday ties the massive expanse of the internet evermore closely with our physical lives. This blog series explores how this new blend of virtual and material will change how we shop and how businesses sell. Now enjoy reading my interview with Brian Celenza, one of the chief strategists in our Oracle Commerce innovation group. He comments on the past, present, and future of the how the growing Internet of Things relates and will relate to the buying and selling of goods on and offline. -------------------------------------------- QUESTION: You probably have one of the coolest jobs on our team, Brian – and frankly, one of the coolest jobs in our industry. As part of the innovation team for Oracle Commerce, you’re regularly working on bold features and groundbreaking commerce-focused experiences for our vision demos. As you look back over the past couple of years, what is the biggest trend (or trends) you’ve seen in digital commerce that started to bring us closer to this idea of what people are calling an “Internet of Things”? Brian: Well as you look back over the last couple of years, the speed at which change in our industry has moved looks like one of those blurred movement photos – you know the ones where the landscape blurs because the observer is moving so quickly your eye focus can’t keep up. But one thing that is absolutely clear is that the biggest catalyst for that speed of change – especially over the last three years – has been mobile. Mobile technology changed everything. Over the last three years the entire thought process of how to sell on (and offline) has shifted because of mobile technology advances. Particularly for eCommerce professionals who have started to move past the notion of “channels” for selling goods to this notion of “Mobile First”… then the Web site. Or more accurately, that everything – smartphones, web, store, tablet – is just one channel or has to act like one singular access point to the same product catalog, information and content. The most innovative eCommerce professionals realized some time ago that it’s not ideal to build an eCommerce Web site and then build everything on top of or off of it. Rather, they want to build an eCommerce API and then integrate it will all other systems. To accomplish this, they are leveraging all the latest mobile technologies or possibilities mobile technology has opened up: 4G and LTE, GPS, bluetooth, touch screens, apps, html5… How has this all started to come together for shopping experiences on and offline? Well to give you a personal example, I remember visiting an Apple store a few years ago and being amazed that I didn’t have to wait in line because a store associate knew everything about me from my ID – right there on the sales floor – and could check me out anywhere. Then just a few months later (when like any good addict) I went back to get the latest and greatest new gadget, I felt like I was stealing it because I could check myself out with my smartphone. I didn’t even need to see a sales associate OR go to a cash register. Amazing. 
And since then, all sorts of companies across all different types of industries – from food service to apparel – are starting to see mobile payments in the billions of dollars, thanks not only to the convenience factor but to smart loyalty rewards programs as well. These are just some really simple current examples that come to mind. So many different things have happened in the last couple of years, it’s hard to absorb all of it quickly – because as soon as you do, everything changes again! Just like that blurry speed-photo image.

For eCommerce, however, this type of new environment underscores the importance of building an eCommerce API – a platform that has services you can tap into and build on as the landscape changes at a fever pitch. It’s a mobile-first perspective. A web-service perspective – particularly if you are thinking of how to engage customers across digital and physical spaces.

——

QUESTION: Thanks for bringing us into the present – some really great examples you gave there to put things into perspective. So what do you see as the biggest trend right now around the “Internet of Things” – and what’s coming in the next few years?

Brian: Honestly, even sitting where I am in the innovation group, it’s hard to look out even 12 months because, well, I don’t even think we’ve fully caught up with what is possible now. But I can definitely say that in the last 12 months and in the coming 12 months, in the technology and eCommerce world it’s all about iBeacons.

iBeacons are awesome tools we have right now to tie together physical and digital shopping experiences. They know exactly where you are as a shopper and can communicate that to businesses. Currently there seem to be two camps of thought around iBeacons.

First, many people are thinking of them like an “indoor GPS”, which, to be fair, they literally are. The use case this first camp envisions for iBeacons is primarily advertising and marketing: they use iBeacons to push location-based promotions to customers who are close to a store or in a store. You may have seen these types of mobile promotions start to pop up occasionally on your smartphone as you pass by a store you’ve bought from in the past. That’s the work of iBeacons. But in my humble opinion, these promotions probably come too early in the customer journey, and although they may be well timed and work to “convert” in some cases, I imagine in most they just erode customer trust, because they are a “one-size-fits-all” solution rather than one that takes into account what exactly the customer might be looking for in that particular moment. Maybe they just want more information, and a promotion is way too soon for that type of customer.

The second camp is more in line with where my thinking falls. In this case, businesses take a more sensitive approach to customers’ needs with iBeacons. Instead of throwing out a “one-size-fits-all” promotion to any passer-by, the use case is more about treating the physical proximity of a customer as an opportunity to provide a service: show expert reviews on a product they may be looking at in a particular aisle of a store, offer the opportunity to compare prices (and then offer a promotion), or signal an in-store associate if a customer has been in one place in the store for more than 10 minutes. These are all less intrusive, more value-driven uses of iBeacons. And they are more about building customer trust through service.
To take this example a bit further into the future realm of “Big Data” and the “Internet of Things”: businesses could actually use the Oracle Commerce Platform and iBeacons to “silently” track customer movement within the store to provide higher quality service. And this doesn’t have to be creepy or intrusive. Simply put, if a customer has been in a particular department or aisle for more than 5 or 10 minutes, an in-store associate could come over and offer some assistance, already knowing the customer’s preferences from their online profile and maybe even seeing the items in a shopping cart they started at home. None of this has to be revealed to the customer, but it certainly could boost the level of service an in-store sales associate could provide.

Or, in another futuristic example, stores could use the digital footprint of the physical store transmitted by iBeacons to generate heat maps of the store that could be tracked over time. Imagine how much you could find out about which parts of the store are busier during certain parts of the day or seasons. This could completely revolutionize how physical merchandising is deployed or where certain high-value or new items are placed. And this use of iBeacons could also help businesses figure out whether customers are getting held up in certain parts of the store during busy days like Black Friday. If long lines are causing customers to bounce from a physical store and leave those holiday gifts behind, maybe having mobile checkout as an option could remove the cash-register bottleneck.

But going back to my original statement, it’s all still very early in the story for iBeacons. The hardware manufacturers are still very new and there is still not one clear standard. Honestly, it all goes back to building and maintaining an extensible and flexible platform for anywhere engagement. What you’re building today should allow you to rapidly take advantage of whatever unimaginable use cases wait around the corner.

------------------------------------------------------

I hope you enjoyed the brief interview with Brian. It’s really awesome to have such smart and innovation-minded individuals on our Oracle Commerce innovation team. Please join me again in a few weeks for Part 3 of this series, where I interview one of the product managers on our team about how the blending of digital and in-store selling is influencing our product development and vision.

    Read the article

  • CodePlex Daily Summary for Wednesday, October 03, 2012

    CodePlex Daily Summary for Wednesday, October 03, 2012

Popular Releases

SharePoint Column & View Permission: SharePoint Column and View Permission v1.5: Version 1.5 of this project. If you find any bugs, please let me know at enti@zoznam.sk or post your findings in the Issue Tracker.
Z3: Z3 4.1.1 source code: Snapshot corresponding to version 4.1.1.
DirectX Tool Kit: October 2012: October 2, 2012. Added ScreenGrab module. Added CreateGeoSphere for drawing a geodesic sphere. Put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace. Renamed project files for better naming consistency. Updated WICTextureLoader for Windows 8 96bpp floating-point formats. Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs. Tweaked SpriteBatch.cpp to work around an ARM NEON compiler codegen bug.
Home Access Plus+: v8.1: HAP+ Web v8.1.1003.0000. 79318 Fixed: issue with the Help Desk and updating a ticket as an admin. 79319 Fixed: formatting issue with the booking system admin header. 79321 Moved to using the arrow-with-a-circle symbol on the homepage instead of the > and <. 79541 Added: 480px-wide mobile theme to login page. 79541 Added: 480px-wide mobile theme to home page. 79541 Added: slide events for homepage. 79553 Fixed: Booking System multiple lesson bug. 79553 Fixed: IE error message. 79684 Fixed: jQuery issue ...
System.Net.FtpClient: System.Net.FtpClient 2012.10.02: This is the first release of the new code base. It is not compatible with the old API; I repeat, it is not a drop-in update for projects currently using System.Net.FtpClient. New users should download this release. The old code base (Branch: System.Net.FtpClient_1) will continue to be supported while the new code matures. This release is a complete re-write of System.Net.FtpClient. The API and code are simpler than ever before. There are some new features included as well as an attempt at be...
CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): Visual Ribbon Editor 1.3.1002.3. What's new: multi-language support for labels/tooltips for custom buttons and groups; support for a base language other than English (1033); the Connect dialog will not require an organization name for ADFS / IFD connections; automatic creation of missing labels for all provisioned languages; minor connection issues fixed. Notes: before saving the ribbon to the CRM server, the editor will check the Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...
YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.10: See the change log for the list of new features added and bugs fixed.
RenameApp: RenameApp 1.0: First release of RenameApp.
JsonToStaticTypeGenerator: JsonToStaticTypeGenerator 0.1: This is the first alpha release of JsonToStaticTypeGenerator.
XiaoKyun: XiaoKyun V1.00: https://xiaokyun.codeplex.com/
CatchThatException: Release 1.12: Wow, a very fast change and a much better and faster writing to the text file.
Naked Objects: Naked Objects Release 5.0.0: Corresponds to the packaged version 5.0.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consult the file HowToBuild.txt in the release.
Major enhancements: Naked Objects 5.0 is desi...
WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: Namespace changes DebugConsol...
D3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.
SubExtractor: Release 1029: Feature: added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options). Feature: allow switching to the Word Spacing dialog directly from the Spell Check dialog. Fix: added more default word spacings for accented characters. Fix: changed the Word Spacing dialog to show all OCR'd characters in the current sub. Fix: removed application focus grab during OCR. Fix: tightened HD subs fuzzy logic to reduce false matches in small characters. Fix: improved Arrow k...
MCEBuddy 2.x: MCEBuddy 2.2.18: Recommended download. Changelog for 2.2.18 (32-bit and 64-bit): 1. Added support for checking if ShowAnalyzer has hung and cancelling it. 2. New version of comskip, 0.81.48. 3. Speeding up comskip. 4. Fixed a build bug in 64-bit 2.2.17. 5. Added a new comskip.ini with better commercial detection for international channels and less aggressive behavior; the old one has been retained as comskip_old.ini. 6. Added support for Audio Offset on the Conversion Task page in the GUI (this overrides the profile's AudioDelay when specified).
Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: built against KeePass 2.20.
Windows 8 Toolkit - Charts and More: Beta 1.0: The first compiled version of my library.
PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 open source code; includes a sample Web application project.
Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. Contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.

New Projects

.Net Exception Reporter: A reusable and extensible exception reporter for Microsoft .NET projects.
Aesha Broker: A rich client Auction House Broker application. Built upon Blizzard's new REST API. Provides a client experience which caches historical auction data to provide...
ASP.NET Friendly URLs: A library that enables automatic resolving of extensionless URLs to ASP.NET file-based handlers, e.g. ASPX pages.
Astro Power CMS: Astro Power CMS is built on GraffitiCMS, a product of Telligent. GraffitiCMS stopped development, so I created this project under the name Astro Power CMS.
aTester: Here is a good place. And now, I can upload my source to it. It's very good.
Automacao Residencial: The Netduino is a platform where you use the C# language to control hardware. The goal is to create a communication framework for the Netduino.
Derbster: Explore and learn about modern C# architecture and programming by implementing software to support the modern game of roller derby.
Dot FPE - A free format-preserving encryption implementation for .NET: There aren't any widely available implementations of format-preserving encryption in .NET. Thus we aim to be the first!
DotNetEx: .NET Framework extended functionality for data access, working with Tasks and asynchronous programming, encryption algorithms such as SkipJack, and other stuff.
Elemental Development Toolchain (.NET version): A complete toolchain built around the Æthere language.
elFinder ASP.NET Connector: The one and only .NET connector for the amazing elFinder 2.x web-based file manager. Finally you can manage your files easily right from your browser!
Geosynkronisering: A project to draw up specifications for interfaces that enable synchronization of data stores with geographic content across different platforms.
GIII_P1: If everyone has doubted you, show them they were wrong!
IntroduceCompany: A website for introducing a business or company.
JsonToStaticTypeGenerator: This is the JsonToStaticTypeGenerator project, which makes it possible to generate C# classes out of JSON data.
kwerty: Coming soon.
Machine Learning: My machine learning project. Just to figure out things...
MicroManager: A MaNGOS web-based manager.
MvcContrib3: This is the version of MvcContrib which works with ASP.NET MVC 3.
Oracle Destination via ODP.Net (Custom Destination Component): SSIS 2008 R2 solution (custom destination component) to write to Oracle via ODP.Net.
Orchard Commerce History with PayPal: Project expands on the Nwazet.Commerce module (which is required for this module to work). Adds a purchase history, product role associations, and PayPal.
Phoenix Trans: The Phoenix Trans website for domestic and international freight transport.
PowerState: PowerState is a .NET application for sending Wake-On-LAN (WOL) requests to computers. It can also shut down, log off and reboot computers using WMI.
RenameApp: RenameApp is a free and very simple to use renaming software for Windows. RenameApp allows you to easily rename files based on the specified criteria and order.
Rose-Hulman User Experience Design: This project will contain labs intended for use in Rose-Hulman's Computer Science and Software Engineering department.
Server dự phòng: This is a backup server.
SharePoint BCS External Connector Caching Pattern Library: Library for enabling caching on SharePoint BCS external connectors. Enables BCS .NET assemblies to be written that are scalable and performant for search.
SharpDX.WPF: This project provides DirectX 9, DirectX 10 and DirectX 11 support for WPF. The assembly contains DXElement, an easy-to-use WPF FrameworkElement.
Simple Password Generator Library: The password generator library, written in C#, is a simple assembly which allows generation of passwords with length anywhere from 1 to 99.
SisEagle.NET: This system was developed for the presentation of the 2012 final course project (TCC) at UDF, Brasilia.
SWebshop: SWebshop is a PHP-based webshop system which allows you to insert, edit and delete data easily and is easy to use for customers.
Tabular Database Powershell Cmdlets: This project provides a sample of PowerShell cmdlets to manage Tabular models, from Analysis Services.
University timetable using java: The project uses the Java language to create timetables (full timetables with exam tables and lab tables) and it will be free for all users, with a SQL database.
URLShoter: This project is for shortening URLs for ASP.NET.
Web Input Form Control: This control allows developers to create input forms by configuring the control in HTML mode.
Weibo: rt
WorkoutMemo: Project description (first draft): memorise your workout. Keep archive records of your daily training, such as: series of exercises, quantity of each series, we...
WPF - Automate Acrobat Security Policy: This WPF tool was created to quickly password-protect batches of PDF documents, using a random generator to generate the passwords.
XiaoKyun: Hello Page for Web.
Z3: Z3 is a high-performance theorem prover being developed at Microsoft Research.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
Optimization: SSSE3

Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The four 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to AT&T assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

movdqa W_minus_04, W_TMP
pxor W_minus_28, W            // W equals W[i-32:i-29] before XOR;
                              // W = W[i-32:i-29] ^ W[i-28:i-25]
palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
                              // the W[i-4:i-1] and W[i-8:i-5] vectors
pxor W_minus_16, W            // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
pxor W_TMP, W                 // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
movdqa W, W_TMP               // 4 dwords in W are rotated left by 2:
psrld $30, W                  // rotate left by 2: W = (W >> 30) | (W << 2)
pslld $2, W_TMP
por W, W_TMP
movdqa W_TMP, W               // four new W values W[i:i+3] are now calculated
paddd (K_XMM), W_TMP          // add the current round's 4 values of K
movdqa W_TMP, (WK(i))         // store for downstream GPR instructions to read

A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds looks like this (each "R" represents 1 round of SHA-1 computation):

RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

With vectorization, 4 rounds can be computed in parallel:

RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR
RRRRRRRRRRRRRRRRRRRR

Optimization: AVX

The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
pxor W_TMP, W      // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

With AVX they can be combined into one instruction:

vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15 registers). Can %ymm registers be used to parallelize the code even more?

Optimization: Solaris-specific

In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:

Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time (see the sketch after this list).
Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8, which are the ones available in 32-bit mode. Can't optimize if you can't debug :-).
Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bit lengths).
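The encode speedup in the list above is easy to picture. Here is a minimal sketch (an illustration, not the actual Solaris encode function) of byte-swapping a whole word at a time rather than a byte at a time; with GCC or Clang, __builtin_bswap32 compiles to a single bswap instruction on x86:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helper: convert big-endian 32-bit words (SHA-1's
     * wire format) to host order, one 4-byte word per iteration
     * instead of one byte per iteration. */
    static void encode32(uint32_t *out, const uint8_t *in, size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            uint32_t w;
            memcpy(&w, in + 4 * i, sizeof w);  /* one 4-byte load        */
            out[i] = __builtin_bswap32(w);     /* one bswap, not 4 shifts */
        }
    }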
Performance

I measured performance on a Sun Ultra 27 (which has a Nehalem-class Intel Xeon W3570 microprocessor @ 3.2 GHz). Turbo mode is disabled for consistent performance measurement. Graphs are better than words and numbers, so here they are:

The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds.

The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.

Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-KByte buffer with 100 iterations. The results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second; for 8 kernel threads, throughput increased from 1540 to 1940 MBytes/second.

Availability

This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

nehalem $ isainfo -v
64-bit amd64 applications
sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

If the output also shows "avx", Solaris executes the even more optimized 3-operand AVX instructions for SHA-1 mentioned above:

sandybridge $ isainfo -v
64-bit amd64 applications
avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly-tuned code for that microprocessor.

Summary

The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

References

"Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010) - the source for the SHA-1 optimizations used in Solaris.
"SHA-1", Wikipedia - a good overview of SHA-1.
FIPS 180-1, the SHA-1 standard (FIPS, 1995).
NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).

    Read the article

< Previous Page | 406 407 408 409 410 411 412 413 414 415 416  | Next Page >