Search Results

Search found 7294 results on 292 pages for 'parameters'.

  • SQL SERVER – How to Set Variable and Use Variable in SQLCMD Mode

    - by Pinal Dave
    Here is a question I received the other day on the SQLAuthority Facebook page. Social media is a wonderful thing and I love the active conversation between blog readers and myself – actually I think social media adds a lot of human factor to any conversation. Here is the question - “I am using sqlcmd in SSMS – I am not sure how to declare a variable and pass it. For example, I have a database and it has a table; how can I make the table name dynamic and pass a different value every time?” Fantastic question, and here is its very simple answer. First of all, enable sqlcmd mode in SQL Server Management Studio as described in the following image. Now in the query editor type the following SQL. :SETVAR DatabaseName "AdventureWorks2012" :SETVAR SchemaName "Person" :SETVAR TableName "EmailAddress" USE $(DatabaseName); SELECT * FROM $(SchemaName).$(TableName); Note that I have set the values of the database, schema and table as sqlcmd variables and I am executing the query using the same parameters. Well, that was it – sqlcmd is a very simple language to master and it also aids in doing various tasks easily. If you have any other sqlcmd tips, please leave a comment and I will publish it with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd
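    For clarity, here is the same script laid out on separate lines, plus one extra :SETVAR showing how a value can be substituted inside a string literal (the EmailAddress column exists in AdventureWorks; the domain filter is just an illustration):

        :SETVAR DatabaseName "AdventureWorks2012"
        :SETVAR SchemaName "Person"
        :SETVAR TableName "EmailAddress"
        :SETVAR EmailDomain "adventure-works.com"

        USE $(DatabaseName);
        SELECT *
        FROM $(SchemaName).$(TableName)
        WHERE EmailAddress LIKE '%$(EmailDomain)';

    Because sqlcmd substitution is purely textual, $(EmailDomain) expands anywhere in the batch, including inside the quoted LIKE pattern.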

    Read the article

  • Metro: Namespaces and Modules

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can use the Windows JavaScript (WinJS) library to create namespaces. In particular, you learn how to use the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. You also learn how to hide private methods by using the module pattern. Why Do We Need Namespaces? Before we do anything else, we should start by answering the question: Why do we need namespaces? What function do they serve? Do they just add needless complexity to our Metro applications? After all, plenty of JavaScript libraries do just fine without introducing support for namespaces. For example, jQuery has no support for namespaces and jQuery is the most popular JavaScript library in the universe. If jQuery can do without namespaces, why do we need to worry about namespaces at all? Namespaces perform two functions in a programming language. First, namespaces prevent naming collisions. In other words, namespaces enable you to create more than one object with the same name without conflict. For example, imagine that two companies – company A and company B – both want to make a JavaScript shopping cart control and both companies want to name the control ShoppingCart. By creating a CompanyA namespace and a CompanyB namespace, both companies can create a ShoppingCart control: a CompanyA.ShoppingCart and a CompanyB.ShoppingCart control. The second function of a namespace is organization. Namespaces are used to group related functionality even when the functionality is defined in different physical files. For example, I know that all of the methods in the WinJS library related to working with classes can be found in the WinJS.Class namespace. Namespaces make it easier to understand the functionality available in a library. If you are building a simple JavaScript application then you won’t have much reason to care about namespaces. If you need to use multiple libraries written by different people then namespaces become very important. Using WinJS.Namespace.define() In the WinJS library, the most basic method of creating a namespace is to use the WinJS.Namespace.define() method. This method enables you to declare a namespace (of arbitrary depth). The WinJS.Namespace.define() method has the following parameters: · name – A string representing the name of the new namespace. You can add nested namespaces by using dot notation. · members – An optional collection of objects to add to the new namespace. For example, the following code sample declares two new namespaces named CompanyA and CompanyB.Controls. Both namespaces contain a ShoppingCart object which has a checkout() method: // Create CompanyA namespace with ShoppingCart WinJS.Namespace.define("CompanyA"); CompanyA.ShoppingCart = { checkout: function (){ return "Checking out from A"; } }; // Create CompanyB.Controls namespace with ShoppingCart WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); // Call CompanyA ShoppingCart checkout method console.log(CompanyA.ShoppingCart.checkout()); // Writes "Checking out from A" // Call CompanyB.Controls checkout method console.log(CompanyB.Controls.ShoppingCart.checkout()); // Writes "Checking out from B" In the code above, the CompanyA namespace is created by calling WinJS.Namespace.define(“CompanyA”). Next, the ShoppingCart is added to this namespace. The namespace is defined and an object is added to the namespace in separate lines of code. 
A different approach is taken in the case of the CompanyB.Controls namespace. The namespace is created and the ShoppingCart object is added to the namespace with the following single line of code: WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); Notice that CompanyB.Controls is a nested namespace. The top level namespace CompanyB contains the namespace Controls. You can declare a nested namespace using dot notation and the WinJS library handles the details of creating one namespace within the other. After the namespaces have been defined, you can use either of the two shopping cart controls. You can call CompanyA.ShoppingCart.checkout() or you can call CompanyB.Controls.ShoppingCart.checkout(). Using WinJS.Namespace.defineWithParent() The WinJS.Namespace.defineWithParent() method is similar to the WinJS.Namespace.define() method. Both methods enable you to define a new namespace. The difference is that the defineWithParent() method enables you to add a new namespace to an existing namespace. The WinJS.Namespace.defineWithParent() method has the following parameters: · parentNamespace – An object which represents a parent namespace. · name – A string representing the new namespace to add to the parent namespace. · members – An optional collection of objects to add to the new namespace. The following code sample demonstrates how you can create a root namespace named CompanyA and add a Controls child namespace to the CompanyA parent namespace: WinJS.Namespace.define("CompanyA"); WinJS.Namespace.defineWithParent(CompanyA, "Controls", { ShoppingCart: { checkout: function () { return "Checking out"; } } } ); console.log(CompanyA.Controls.ShoppingCart.checkout()); // Writes "Checking out" One significant advantage of using the defineWithParent() method over the define() method is that the defineWithParent() method is strongly-typed. In other words, you use an object to represent the base namespace instead of a string. If you misspell the name of the object (CompnyA) then you get a runtime error. Using the Module Pattern When you are building a JavaScript library, you want to be able to create both public and private methods. Some methods, the public methods, are intended to be used by consumers of your JavaScript library. The public methods act as your library’s public API. Other methods, the private methods, are not intended for public consumption. Instead, these methods are internal methods required to get the library to function. You don’t want people calling these internal methods because you might need to change them in the future. JavaScript does not support access modifiers. You can’t mark an object or method as public or private. Anyone gets to call any method and anyone gets to interact with any object. The only mechanism for encapsulating (hiding) methods and objects in JavaScript is to take advantage of functions. In JavaScript, a function determines variable scope. A JavaScript variable either has global scope – it is available everywhere – or it has function scope – it is available only within a function. If you want to hide an object or method then you need to place it within a function. 
For example, the following code contains a function named doSomething() which contains a nested function named doSomethingElse(): function doSomething() { console.log("doSomething"); function doSomethingElse() { console.log("doSomethingElse"); } } doSomething(); // Writes "doSomething" doSomethingElse(); // Throws ReferenceError You can call doSomethingElse() only within the doSomething() function. The doSomethingElse() function is encapsulated in the doSomething() function. The WinJS library takes advantage of function encapsulation to hide all of its internal methods. All of the WinJS methods are defined within self-executing anonymous functions. Everything is hidden by default. Public methods are exposed by explicitly adding the public methods to namespaces defined in the global scope. Imagine, for example, that I want a small library of utility methods. I want to create a method for calculating sales tax and a method for calculating the expected ship date of a product. The following library encapsulates the implementation of my library in a self-executing anonymous function: (function (global) { // Public method which calculates tax function calculateTax(price) { return calculateFederalTax(price) + calculateStateTax(price); } // Private method for calculating state tax function calculateStateTax(price) { return price * 0.08; } // Private method for calculating federal tax function calculateFederalTax(price) { return price * 0.02; } // Public method which returns the expected ship date function calculateShipDate(currentDate) { currentDate.setDate(currentDate.getDate() + 4); return currentDate; } // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); })(this); // Show expected ship date var shipDate = CompanyA.Utilities.calculateShipDate(new Date()); console.log(shipDate); // Show price + tax var price = 12.33; var tax = CompanyA.Utilities.calculateTax(price); console.log(price + tax); In the code above, the self-executing anonymous function contains four functions: calculateTax(), calculateStateTax(), calculateFederalTax(), and calculateShipDate(). The following statement is used to expose only the calculateTax() and the calculateShipDate() functions: // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); Because the calculateTax() and calculateShipDate() functions are added to the CompanyA.Utilities namespace, you can call these two methods outside of the self-executing function. These are the public methods of your library which form the public API. The calculateStateTax() and calculateFederalTax() methods, on the other hand, are forever hidden within the black hole of the self-executing function. These methods are encapsulated and can never be called outside of the scope of the self-executing function. These are the internal methods of your library. Summary The goal of this blog entry was to describe why and how you use namespaces with the WinJS library. You learned how to define namespaces using both the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. We also discussed how to hide private members and expose public members using the module pattern.
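    Because the sample code above reads more easily on separate lines, here is the module pattern again as a compact sketch using the same names (nothing new, just reformatted):

        (function (global) {

            // Private: trapped inside the function scope
            function calculateStateTax(price) { return price * 0.08; }
            function calculateFederalTax(price) { return price * 0.02; }

            // Public: exported below
            function calculateTax(price) {
                return calculateFederalTax(price) + calculateStateTax(price);
            }

            // Only the members listed here escape into the global namespace
            WinJS.Namespace.define("CompanyA.Utilities", {
                calculateTax: calculateTax
            });

        })(this);

        console.log(CompanyA.Utilities.calculateTax(12.33)); // Works
        // calculateStateTax(12.33) here would throw a ReferenceError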

    Read the article

  • Testing Routes in ASP.NET MVC with MvcContrib

    - by Guilherme Cardoso
    I've decided to write about unit testing in the next few weeks. If we decide to develop with the Test-Driven Development pattern, it's important not to forget the routes. This article shows how to test routes. I'm importing my routes from the RegisterRoutes method in the Global.asax of the Project.Web created by default (in SetUp). I'm using ShouldMapTo() from MvcContrib: http://mvccontrib.codeplex.com/ The controller is specified in the ShouldMapTo() signature, and we use lambda expressions for the action and parameters that are passed to that controller. [SetUp] public void Setup() { Project.Web.MvcApplication.RegisterRoutes(RouteTable.Routes); } [Test] public void Should_Route_HomeController() { "~/Home" .ShouldMapTo<HomeController>(action => action.Index()); } [Test] public void Should_Route_EventsController() { "~/Events" .ShouldMapTo<EventsController>(action => action.Index()); "~/Events/View/44/Concert-DevaMatri-22-January-" .ShouldMapTo<EventsController>(action => action.Read(44, "Concert-DevaMatri-22-January-")); // In this example, 44 is the Id for my Event and "Concert-DevaMatri-22-January" is the title for that Event } [TearDown] public void teardown() { RouteTable.Routes.Clear(); }
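    Note that these tests assume RegisterRoutes maps a URL pattern like {controller}/{action}/{id}/{title}. The post does not show that method, so the following registration is only a sketch of what would make the EventsController test pass:

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Hypothetical route: matches "~/Events/View/44/Concert-DevaMatri-22-January-"
            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}/{title}",
                new { controller = "Home", action = "Index",
                      id = UrlParameter.Optional, title = UrlParameter.Optional });
        }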

    Read the article

  • How to write PowerShell code part 1 (Using external xml configuration file)

    - by ybbest
    In this post, I will show you how to use an external xml file with PowerShell. The advantage of doing so is that you can avoid having other people open up your PowerShell code to make configuration changes; instead, all they need to do is change the xml file. I will refactor my site creation script as an example; you can download the script here and the refactored code here. 1. As you can see below, I hard code all the variables in the script itself. $url = "http://ybbest" $WebsiteName = "Ybbest" $WebsiteDesc = "Ybbest test site" $Template = "STS#0" $PrimaryLogin = "contoso\administrator" $PrimaryDisplay = "administrator" $PrimaryEmail = "[email protected]" $MembersGroup = "$WebsiteName Members" $ViewersGroup = "$WebsiteName Viewers" 2. Next, I will show you how to manipulate an xml file using PowerShell. You can use get-content to grab the content of the file. [xml] $xmlconfigurations=get-content .\SiteCollection.xml 3. Then you can assign it to a variable (the variable has to be typed [xml]); after that you can read the xml content, and PowerShell also gives you nice IntelliSense when you press the Tab key. [xml] $xmlconfigurations=get-content .\SiteCollection.xml $xmlconfigurations.SiteCollection $xmlconfigurations.SiteCollection.SiteName 4. After refactoring my code, I can set the variables using the xml file as below. #Set the parameters $siteInformation=$xmlconfigurations.SiteCollection $url = $siteInformation.URL $siteName = $siteInformation.SiteName $siteDesc = $siteInformation.SiteDescription $Template = $siteInformation.SiteTemplate $PrimaryLogin = $siteInformation.PrimaryLogin $PrimaryDisplay = $siteInformation.PrimaryDisplayName $PrimaryEmail = $siteInformation.PrimaryLoginEmail $MembersGroup = "$siteName Members" $ViewersGroup = "$siteName Viewers"
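    The post links to SiteCollection.xml rather than showing it; judging from the properties read in steps 3 and 4, a matching file could look like this (the element names are inferred from the script, not confirmed by the post):

        <?xml version="1.0" encoding="utf-8"?>
        <SiteCollection>
          <URL>http://ybbest</URL>
          <SiteName>Ybbest</SiteName>
          <SiteDescription>Ybbest test site</SiteDescription>
          <SiteTemplate>STS#0</SiteTemplate>
          <PrimaryLogin>contoso\administrator</PrimaryLogin>
          <PrimaryDisplayName>administrator</PrimaryDisplayName>
          <PrimaryLoginEmail>[email protected]</PrimaryLoginEmail>
        </SiteCollection>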

    Read the article

  • Including Overestimates in MSF Agile Burndown Report

    After using the MSF Agile Burndown report for a few weeks in our new TFS 2010 environment, I have to say I am a huge fan.  I especially find the assignment of the Work (hours) portion to be very useful in motivating the team to keep their tasks up to date every day.  Here is a view of the report that you get out of the box. However, I have one problem.  I'd like the top line to have some more meaning.  Specifically, when it changes, is that an indication of scope creep, mis-estimation, or a combination of the two?  So, today I decided to try to build in a view that would show overestimated time.  This would give me a more consistent top line.  My idea was to add another visual area on top of the graph whenever my originally estimated time was greater than the sum of completed and remaining.  This will effectively show me, at least when the top line goes down, whether it was scope change or over-estimation. Here is the final result. How did I do it?  Step 1: Add the Cumulative_Original_Estimate field to the dsBurndown My approach was to follow the pattern where the completed time is included in the burndown chart and add my overestimated hours.  First I added a field to the dsBurndown to hold the estimated time.         <Field Name="Cumulative_Original_Estimate">           <DataField><?xml version="1.0" encoding="utf-8"?><Field xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="Measure" UniqueName="[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]" /></DataField>           <rd:TypeName>System.Int32</rd:TypeName>         </Field> Step 2: Add a column to the query SELECT {     [Measures].[DateValue],     [Measures].[Work Item Count],     [Measures].[Microsoft_VSTS_Scheduling_RemainingWork],     [Measures].[Microsoft_VSTS_Scheduling_CompletedWork],     [Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate],     [Measures].[RemainingWorkLine],     [Measures].[CountLine] Step 3: Add a new Item to the QueryDefinition <Item> <ID xsi:type="Measure"> <MeasureName>Microsoft_VSTS_Scheduling_OriginalEstimate</MeasureName> <UniqueName>[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]</UniqueName> </ID> <ItemCaption>Cumulative Original Estimate</ItemCaption> <FormattedValue>true</FormattedValue> </Item> Step 4: Add a new ChartMember to DundasChartControl1 The burndown chart is called DundasChartControl1.  I need to add a ChartMember for the estimated time. 
<ChartMember>   <Label>Cumulative Original Estimate</Label> </ChartMember> Step 5: Add a ChartSeries to show the Overestimated Time <ChartSeries Name="OriginalEstimate">   <Hidden>=IIF(Parameters!YAxis.Value="count",True,False)</Hidden>   <ChartDataPoints>     <ChartDataPoint>       <ChartDataPointValues>         <Y>=IIF(Parameters!YAxis.Value = "hours", IIF(SUM(Fields!Cumulative_Original_Estimate.Value)>SUM(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value), SUM(Fields!Cumulative_Original_Estimate.Value-(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value)),Nothing),Nothing)</Y>       </ChartDataPointValues>       <ChartDataLabel>         <Style>           <FontFamily>Microsoft Sans Serif</FontFamily>           <FontSize>8pt</FontSize>         </Style>       </ChartDataLabel>       <Style>         <Border>           <Color>#9bdb00</Color>           <Width>0.75pt</Width>         </Border>         <Color>#666666</Color>         <BackgroundGradientEndColor>#666666</BackgroundGradientEndColor>       </Style>       <ChartMarker>         <Style />       </ChartMarker>       <CustomProperties>         <CustomProperty>           <Name>LabelStyle</Name>           <Value>Top</Value>         </CustomProperty>       </CustomProperties>     </ChartDataPoint>   </ChartDataPoints>   <Type>Area</Type>   <Subtype>Stacked</Subtype>   <Style />   <ChartEmptyPoints>     <Style>       <Color>#00ffffff</Color>     </Style>     <ChartMarker>       <Style />     </ChartMarker>     <ChartDataLabel>       <Style />     </ChartDataLabel>   </ChartEmptyPoints>   <LegendName>Default</LegendName>   <ChartItemInLegend>     <LegendText>Overestimated Hours</LegendText>   </ChartItemInLegend>   <ChartAreaName>Default</ChartAreaName>   <ValueAxisName>Primary</ValueAxisName>   <CategoryAxisName>Primary</CategoryAxisName>   <ChartSmartLabel>     <Disabled>true</Disabled>     <MaxMovingDistance>22.5pt</MaxMovingDistance>   </ChartSmartLabel> </ChartSeries> That's it.  I find the improved report adds some value over the out-of-the-box version.  You can download the updated rdl for the report here.

    Read the article

  • ADF Bounded Taskflow Activation

    - by Vijay Mohan
    Hey guys, it's really been a while since I last blogged. I just came across a hard-to-debug scenario, so I thought of sharing it for the benefit of ADF developers. I had a page fragment (jsff) wrapped inside a bounded taskflow, for which the activation was conditional and was based on a requestScope property (be it a requestScope variable or a property coming from a requestScope bean). As soon as the taskflow activates and the page renders, the requestScope parameter's life span ends. After that, when you raise an event inside the page (click of a commandLink, mouseHover, valueChange event, etc.) the event gets fired the first time but fails to effect the change in the page; moreover, for subsequent times the event itself doesn't get fired. Any guesses as to what could be the culprit..? I guess I already gave the reason in the initial paragraph. The first time the event gets fired, the fwk sees that the page is already lying in an inactive state, so it fails to effect the change, and for subsequent times it doesn't even fire the event because it already knew that the page/region is inactive. So, in such a scenario we must use either a pageFlowScope property or a transientVO property which can last for the page's life span.
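    To illustrate the fix, the activation condition in the bounded task flow definition just needs to point at a scope that outlives the request; the sketch below uses the conditional-activation metadata of a task flow with a made-up pageFlowScope flag, so treat it as an unverified outline rather than working configuration:

        <!-- Sketch: drive conditional activation from pageFlowScope instead of requestScope -->
        <task-flow-definition id="my-bounded-taskflow">
          <activation>conditional</activation>
          <active>#{pageFlowScope.showRegion}</active>
        </task-flow-definition>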

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    Do you use the standalone server, or BIP with OBIEE using OC4J as the web server? Have you ever taken a look at the console window (DOS box/xterm) that you use to start it? Ever turned on debugging to see masses of info flow by that window and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windoze it gets lost before you can see it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However you can specify command line options when starting OC4J to direct the stdout and stderr output directly to files. The -out and -err parameters tell OC4J which file to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just plugged the following into the file under the start section, modifying the line: set CMDARGS=-config "%SERVER_XML%" -userThreads to set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads Bounced the server and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check out the user docs to find out how to get the log files to write. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • CodePlex Daily Summary for Thursday, June 03, 2010

    New Projects
    Albatross: Albatross framework. We are still working on the documentation, more details will be available soon.
    ApiChange: ApiChange is the Swiss army knife for inspecting your assemblies from the command line. Now you can do basic operations like diff, who uses (method...
    BaseCalendar: BaseCalendar is a server-side ASP.NET web control (WebForms or MVC) that renders a calendar while giving you full control over the generated HTML. ...
    CESAVE: Projects for the Comité Estatal de Sanidad Vegetal.
    Closure Compiler w/ Annotations Visual Studio 2010 Snippets: This is an attempt to create reusable Visual Studio snippets to make working with closure compiler annotated JavaScript more productive. VS2010 ...
    Common Service Host: Common Service Host is a generic Windows Communication Service Host and factory that uses the Common Service Locator to create Service objects. ...
    DarkLight: DarkLight is a 2D Lighting Engine written in XNA, and allows developers to create 2D shadowing effects in their 2D games easily. It supports poi...
    Earn Burn Tracker: A tool to track earned value against a given release, initiative, feature set, and objects.
    eOfficeAACS: eOffice is an open source access control and attendance management system developed by e-bird Innovation (www.ebirdinfo.com). Its flexible design al...
    FLV Video conversion library for .Net 3.5: This is a component created to call the ffmpeg tool to convert various video formats to the Adobe Flash FLV output format. The component also takes...
    Google Moderator: .NET client library for the Google Moderator API.
    linq to jquery: provides support for linq to jquery objects
    Mobile Vikings Data: App to view your data usage
    RefBrowser: RefBrowser
    RESX Translator with Bing (from Microsoft Consulting Services, UK): A Windows Form application that automatically translates RESX files using Bing web services
    Rhyduino - Remote Arduino Control via Managed Code: Rhyduino makes it easy for Visual Studio / Windows devs to control the Arduino using a computer. It's like supercharging your Arduino with all the ...
    SharePoint 2010 CSV Bulk Term Set Importer: Allows for multiple import of *.csv files to a given term group in SharePoint 2010 Term Store. It will create new term group based on the name pr...
    SharePoint Feature - Export history version to Excel: Add a function to list the action button, the ability to export history version of the item sheet to Excel from the specified date. Features suppo...
    SwEntry: A system that allows people to open doors by using a Bluetooth enabled phone.
    Things to Do with the DLR: This project is about ideas and sample code around the Dynamic Language Runtime.
    Work Recorder - Hold on own time!: Work Recorder is an office aid software which can record the time used on PC for researchers, office workers and students. And it is also a good he...
    xuezhixu: xuezhixu found
    Yaget: Yet Another Game Engine Technology

    New Releases
    BackUpAnyWhere: backupanywhere RC1: this is the RC of our program
    BaseCalendar: BaseControls 1.0: BaseControls 1.0 contains the BaseCalendar ASP.NET control.
    BizTalk Server Pipeline Component Wizard: 2.20: Version suitable for 2010 release.
    CheckHeader: CheckHeader v0.8.6: The Microsoft .NET Framework 4.0 is needed to run this program.
    Chirpy - VS Add In For Handling Js, Css, and DotLess Files: Chirpy Installer for VS 2010 (Ver-1.0.0.2): VS 2010 Installer for the Chirpy AddIn. Version 1.0.0.2
    Christoc's DotNetNuke C# Module Development Template: 00.00.01: This is the initial release of Christoc's DotNetNuke C# Module Development Template. You can use the Template as-is, or you can customize the VSTem...
    Closure Compiler w/ Annotations Visual Studio 2010 Snippets: v1 release: The initial release of the project
    Community Forums NNTP bridge: Community Forums NNTP Bridge V22: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V23: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    Community Forums NNTP bridge: Community Forums NNTP Bridge V24: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
    DarkLight: DarkLight Engine v1.0: This is the first version of the DarkLight engine and currently supports point, spot and area lights with no upper limit on the number of lights. ...
    DotNetNuke® Skin Collaborate: Collaborate Package 1.1.0: Newer version of Collaborate included fixes: - removed conditional code to display control panel - changed background color to match with backgroun...
    dotSpatial: System.Spatial.Projection Zip June 2, 2010: This version tries to fix a problem with reprojecting to UTM zones. It is still being tested though.
    Entity Framework Repository & Unit of Work Template: 1.0.1: This version has more than just the T4 template. I have added a new template that has a RepositoryHelper class for use with StructureMap. Also th...
    FLV Video conversion library for .Net 3.5: Beta 1: This is the first release of this project. Improvements may be added if necessary.
    HERB.IQ: Alpha 0.1 Preview: Only clone tab works, just setting up the GUI and getting the XML data handling working correctly
    Jetfire - Workflow DSL: V1.2.0: The complete source code required for a Jetfire system (server and client nexus) is included in the release. Highlights of Changes Full programmat...
    linq to jquery: linq to jquery alpha: beta development project
    MapWindow6: MapWindow 6.0 June 2, 2010: This version fixes a problem with projecting to UTM zones. I'm not sure that this works perfectly yet. It seemed to require a zone adjustment by ...
    patterns & practices Web Client Developer Guidance: Developing Web Apps May 2010 Beta: This Release: This drop includes updated documentation, links, and graphics. We are still looking for feedback on this release. Plans going forward...
    patterns & practices: Composite WPF and Silverlight: Prism 4.0 Drop 1: Prism 4.0 Drop 1 Welcome to the first drop of Prism 4.0 (formerly known as the Composite Application Guidance for WPF and Silverlight). This drop i...
    Powershell4SQL: Version 1.3: Changes from version 1.2: Added support for -Confirm and -WhatIf parameters; Added support for -Verbose mode. Includes SQL Batch text, parameters ...
    RESX Translator with Bing (from Microsoft Consulting Services, UK): v1.0: This is the initial release of the tool
    Rhyduino - Remote Arduino Control via Managed Code: Beta Release (v0.80): Library: Auto-detects connected Arduino devices. Uses system resources intelligently to take advantage of multiple CPU cores when present. Firmata ...
    SharePoint Feature - Export history version to Excel: Export Item List Version: - multilanguage support Czech, English Install: "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe" -o addsol...
    Simulo: Simulo v2.5: That's the third release of Simulo (v2.5). For detailed info on what's new, read the changes log block at the project's home page. System requirem...
    Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.5: Please carefully follow the Installation Guide as there are additional actions that need to be undertaken in this release. As 1.4 with the followin...
    Spackle.NET: 4.1.0.0 Release: Added IEquatable<T> to Range<T>
    StreamInsight Samples: Microsoft StreamInsight Product Team Samples: This is the current snapshot of the samples created by the StreamInsight Product Team.
    Touch Mice: 0.1: Initial release of Touch Mice
    VCC: Latest build, v2.1.30602.0: Automatic drop of latest build
    VivoSocial: VivoSocial 7.2.0: Version 7.2.0 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.2.0 rel...
    Work Recorder - Hold on own time!: WorkRecorder 1.0: +Finished Version 1.0

    Most Popular Projects
    Community Forums NNTP bridge
    OutSync
    ASP.NET MVC Time Planner
    NeatUpload
    MoonyDesk (windows desktop widgets)
    Mute4
    eXpress Persistent Objects (XPO) Toolkit
    AgUnit - Silverlight unit testing with ReSharper
    ASP.NET MVC Extensions
    Aviva Solutions C# Coding Guidelines

    Most Active Projects
    Community Forums NNTP bridge
    GMap.NET - Great Maps for Windows Forms & Presentation
    Rawr
    Ionics Isapi Rewrite Filter
    N2 CMS
    patterns & practices – Enterprise Library
    BlogEngine.NET
    GameSet
    Farseer Physics Engine
    Mirror Testing System

    Read the article

  • TFS Build 2010: BuildNumber and DropLocation

    - by javarg
    Automated builds for application releases are common practice in every major development shop nowadays. Using Team Foundation Server Build 2010 to accomplish this offers many opportunities to improve the quality of your releases. The following approach allows us to generate build drop folders that include the BuildNumber and the Changeset or Label provided. Using this procedure we can quickly identify the generated binaries in the drop server with the corresponding version. Branch the DefaultTemplate.xaml and rename it CustomDefaultTemplate.xaml. Open it for edit (check it out). Go to the Set Drop Location activity and edit the DropLocation property. Write the following expression: BuildDetail.DropLocationRoot + "\" + BuildDetail.BuildDefinition.Name + "\" + If(String.IsNullOrWhiteSpace(GetVersion), BuildDetail.SourceGetVersion, GetVersion) + "_" + BuildDetail.BuildNumber Check in the branched template. Now create a build definition named TestBuildForDev using the new template. The previous expression sets the DropLocation with the following format: (ChangesetNumber|LabelName)_BuildName_BuildNumber The first part of the folder name will be the changeset number or the label name (if triggered using labels). Folder names will be generated as follows: C1850_TestBuildForDev_20111117.1 (changesets start with the letter C) LLabelname_TestBuildForDev_20111117.1 (labels start with the letter L) Try launching a build from a changeset and from a label. You can specify a label in the GetVersion parameter in the Queue new Build wizard, going to the Parameters tab (for labels, add the “L” prefix).

    Read the article

  • Using the JRockit Flight Recorder as an In-Flight Black Box

    - by Marcus Hirt
    The new JRockit Flight Recorder has some very interesting properties. It can be used like the black box of an airplane, allowing users to go back in time and check what was happening around the time when something went wrong. Here is how to enable the default continuous recording in JRockit to allow for that use case. The flight recorder is on by default in JRockit R28; the problem is that there is no recording running by default. To configure JRockit to start with the default recording running, add the parameter: -XX:FlightRecorderOptions=defaultrecording=true That will enable a recording with recording ID 0. You can see that it has been started properly by choosing Show Recordings from the context menu in JRockit Mission Control. You should see something similar to the picture below. Simply right click on the recording and select Dump to dump the information available in the flight recorder. You can select to dump data for a specific period of time or all data. For more information about the command line parameters available to control the Flight Recorder, see the JRockit documentation.
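    For example, launching a standalone application with the default recording enabled looks roughly like this (the jar name is a placeholder):

        java -XX:FlightRecorderOptions=defaultrecording=true -jar MyApp.jar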

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30% ?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here. Similar Articles Productive Geek Tips Using VLOOKUP in ExcelImport Microsoft Access Data Into ExcelImport an Access Database into ExcelCopy a Group of Cells in Excel 2007 to the Clipboard as an ImageShare Access Data with Excel in Office 2010 TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Quickly Schedule Meetings With NeedtoMeet Share Flickr Photos On Facebook Automatically Are You Blocked On Gtalk? Find out Discover Latest Android Apps On AppBrain The Ultimate Guide For YouTube Lovers Will it Blend? iPad Edition

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, ones that retrieve values from session, and others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form that the ControlParameter can use. Moreover, if you are using the selected CheckBoxList items to query a database you'll quickly find that SQL does not offer out-of-the-box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more!
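    As an aside, on SQL Server 2016 and later (well after this article was written) the comma-delimited-list problem can be handled with the built-in STRING_SPLIT function; the table and column names below are illustrative only:

        -- @GenreIds arrives as a comma-delimited list built from the CheckBoxList, e.g. '1,4,7'
        DECLARE @GenreIds varchar(100) = '1,4,7';

        SELECT *
        FROM Books
        WHERE GenreId IN (SELECT value FROM STRING_SPLIT(@GenreIds, ','));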

    Read the article

  • Security exception in Twitterizer

    - by Raghu
    Hi, we are using Twitterizer for Twitter integration to get Tweet details. When making a call to the method OAuthUtility.GetRequestToken, the following exception is thrown: System.Security.SecurityException: Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. The application works fine when hosted on IIS 5; the above error occurs only when the application is hosted in IIS 7 on Windows 2008 R2, where the method OAuthUtility.GetRequestToken throws the exception. It seems the issue is something with code access security. Please suggest what kind of permissions should be given to fix the security exception. The application has Full Trust, and I have even tried registering the Twitterizer DLL in the GAC, and still the same error occurs. I am not sure what makes the difference between IIS 5 and IIS 7 with regard to code access security to cause that exception. Following is the stack trace of the exception. [SecurityException: Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.] System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0 System.Security.CodeAccessPermission.Demand() +54 Twitterizer.OAuthUtility.ExecuteRequest(String baseUrl, Dictionary`2 parameters, HTTPVerb verb, String consumerKey, String consumerSecret, String token, String tokenSecret, WebProxy proxy) +224 Twitterizer.OAuthUtility.GetRequestToken(String consumerKey, String consumerSecret, String callbackAddress, WebProxy proxy) +238 Twitter._Default.btnSubmit_Click(Object sender, EventArgs e) +94 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11045655 System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11045194 System.Web.UI.Page.ProcessRequest() +91 System.Web.UI.Page.ProcessRequest(HttpContext context) +240 ASP.authorization_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\twitter\c2fd5853\dcb96ae9\App_Web_y_ada-ix.0.cs:0 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +599 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171 Any help would be greatly appreciated. Thanks in advance. Regards, Raghu

    Read the article

  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
    Datatypes are very important concepts in SQL Server, and there is quite often a need to convert them from one datatype to another. I have seen that developers often get confused when they have to convert a datatype. There are two important concepts when it comes to datatype conversion. Implicit Conversion: Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function. Explicit Conversions: Explicit conversions are those conversions that require the CAST or CONVERT function to be specified. What it means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will automatically convert (implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to explicitly convert them using either the CAST or CONVERT function with the appropriate parameters. Let us see a quick example of Implicit Conversion and Explicit Conversion. Implicit Conversion: Explicit Conversion: You can see from the above example how we need both types of conversion in different situations. There are so many different datatypes that it is humanly impossible to know which datatypes require implicit and which require explicit conversion. Additionally there are cases when the conversion is not possible at all. Microsoft has published a chart where the grid displays various conversion possibilities as well as a quick guide. Download SQL Server Data Type Conversion Chart Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
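    A quick sketch of both kinds of conversion (the exact values returned depend on the current date, so treat the results as illustrative):

        -- Implicit: tinyint widens to int without CAST or CONVERT
        DECLARE @small tinyint = 42;
        DECLARE @big int;
        SET @big = @small;

        -- Explicit: datetime to int (or to a formatted string) requires CAST/CONVERT
        SELECT CAST(GETDATE() AS int)               AS DaysSince1900,
               CONVERT(varchar(10), GETDATE(), 120) AS IsoDate;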

    Read the article

  • Trying to use HUAWEI E173 on Ubuntu 12.04

    - by Scott Warren
    I have a HUAWEI E173 usb stick to access the internet. It works normally on Windows but I need to use it on my Ubuntu system. I plug it in, and after 20-25 seconds the status light turns green and blinks twice every 3 seconds. I see no changes in my system whatsoever, nothing gets installed, I don't get prompted for my PIN. I tried to create a connection using Edit Connections. I entered the following parameters: Turkey, Turkcell, My Plan is not listed, Internet. Nothing happens. I tried the lsusb command and got the following: tosh2000@tosh:~$ lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 058f:a001 Alcor Micro Corp. Bus 001 Device 004: ID 12d1:14ba Huawei Technologies Co., Ltd. I tried the usb-devices command and can see the device: T: Bus=01 Lev=01 Prnt=01 Port=01 Cnt=01 Dev#= 4 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=00(ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=12d1 ProdID=14ba Rev=00.00 S: Manufacturer=HUAWEI Technology S: Product=HUAWEI Mobile C: #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage I: If#= 1 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage Based on other advice I found, I tried the following command: sudo modprobe usbserial vendor=0x12d1 product=0x14ba Again, nothing happened. My question is why the "Enable Mobile Broadband" option does not populate automatically in the dropdown networking menu, and how can I start using the device? Thank you.
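    For what it's worth, the usb-devices output above shows the stick still bound to the usb-storage driver, which usually means it has not yet switched out of its flash-drive mode; a common approach (unverified for this exact model and firmware, so treat it as a sketch) is to trigger the switch manually with usb_modeswitch:

        sudo apt-get install usb-modeswitch
        # -v/-p are the vendor/product IDs reported by lsusb; -J sends the Huawei mode-switch message
        sudo usb_modeswitch -v 12d1 -p 14ba -J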

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 18 (sys.dm_io_virtual_file_stats)

    - by Tamarick Hill
    The sys.dm_io_virtual_file_stats Dynamic Management Function is used to return IO statistic information about each of your database files on your server. As input parameters, this function takes a database_id and a file_id. If you want to return IO statistic information for all files, you can simply pass in NULL values for both of these. Let’s have a look at this function and examine its results: SELECT db_name(database_id) DatabaseName, * FROM sys.dm_io_virtual_file_stats(NULL, NULL) The first column in the result set is the DatabaseName, which is just a column I created using the db_name() system function and the database_id column from this function. Next we have a file_id which represents the ID for the file, whether it be a data file or transaction log file. The ‘sample_ms’ column represents the total time in milliseconds that the instance has been up and running. Next we have the ‘num_of_reads’, ‘num_of_bytes_read’, and later ‘num_of_writes’, and ‘num_of_bytes_written’. These columns represent the number of reads or writes and number of bytes read or written against a particular file. These columns are beneficial when determining how often a particular file is being accessed. The ‘io_stall_read_ms’ and ‘io_stall_write_ms’ columns each represent the total time in milliseconds that users have had to wait for reads or writes against a file respectively. The ‘io_stall’ column is the sum of both read and write io stalls. The ‘size_on_disk_bytes’ column represents the size of the respective file on your disk subsystem. Lastly the ‘file_handle’ column is simply the Windows File handle. This Dynamic Management Function is useful when you need to analyze your database files for the purposes of segregating high IO databases. This DMF gives you a good view of which of your database files are being accessed the most and which ones may be generating the largest IO stalls. These could be your best candidates for moving into separate IO channels. For more information about this DMF, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190326.aspx Follow me on Twitter @PrimeTimeDBA
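    Building on those columns, a common use of this DMF is to derive the average stall per operation for each file; a minimal sketch (NULLIF guards against dividing by zero on idle files):

        SELECT db_name(database_id) AS DatabaseName,
               file_id,
               io_stall_read_ms  / NULLIF(num_of_reads, 0)  AS AvgReadStallMs,
               io_stall_write_ms / NULLIF(num_of_writes, 0) AS AvgWriteStallMs
        FROM sys.dm_io_virtual_file_stats(NULL, NULL)
        ORDER BY AvgReadStallMs DESC;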

    Read the article

  • DevDays ‘10 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010 and again 5 interesting sessions at the World Forum in The Hague. The first session of the day in the big World Forum theater was from Scott Hanselman, who gave a lap around .NET 4.0. In his way of presenting he talked about all kinds of new features of .NET 4.0 like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF and all the new language features of .NET 4.0. After a small break it was time for session 2 from Scott Allen about Tips, Tricks and Optimizations of LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable and the two flavors of LINQ syntax. The lunch was again very well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was no normal session but more like a workshop: we decided what kind of subjects we discussed, and the subjects were OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0, from Alex van Beek. He talked about Optional and Named Parameters, Generic Co- and Contravariance, the Dynamic keyword and COM Interop features. He showed how to use them but also when not to use them. The last session of the day and also the last session of DevDays 2010 was about WCF Best Practices, from Gerben van Loon. He talked about 7 best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days and next year I will surely be attending again.

    Read the article

  • Have SSIS' differing type systems ever caused you problems?

    - by jamiet
    One thing that has always infuriated me about SSIS is the fact that every package has three different type systems; to give you an idea of what I am talking about, consider the following: (1) the SSIS dataflow's type system is made up of types called DT_* (e.g. DT_STR, DT_I4); (2) the SSIS variable type system is based on .Net datatypes (e.g. String, Int32); (3) the types available for Execute SQL Task's parameters are based on something else - I don't exactly know what (e.g. VARCHAR, LONG). Speaking euphemistically ... this is not an optimum situation (were I not speaking euphemistically I would be a lot ruder) and hence I have submitted a suggestion to Connect at [SSIS] Consolidate three type systems into one, requesting that it be remedied. This accompanying blog post is not, however, a request for votes (though that would be nice); the reason is actually subtler than that. Let me explain. I have been submitting bugs and suggestions pertaining to SSIS for years and have, so far, submitted over 200 Connect items. If that experience has taught me anything it is this - Connect items are not generally actioned because they are considered "nice to have". No, SSIS Connect items get actioned because they cause customers grief, and if I am perfectly honest I must admit that, other than being a bit gnarly, SSIS' three type system architecture has never knowingly caused me any significant problems. The reason for this blog post is to ask if any reader out there has ever encountered any problems on account of SSIS' three type systems, or have you, like me, never found them to be a problem? Errors or performance degradation caused by implicit type conversions would, I believe, present a strong case for getting this situation remedied in a future version of SSIS, so if you HAVE encountered such problems I would encourage you to leave a comment on the Connect submission accordingly. Let me know in the comments too - I would be interested to hear others' opinions on this. @Jamiet
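
    To make the three systems concrete, here is a minimal, hypothetical Script Task fragment; the variable name is made up, and the DT_I4 and LONG spellings in the comments are how the dataflow and an Execute SQL Task parameter would each name the same value:

    // Sketch: one logical 32-bit integer, three names.
    // Dataflow column type:            DT_I4
    // Execute SQL Task parameter type: LONG
    // Here, in the variable type system: a .Net Int32.
    // (Assumes User::RowCount is listed in the task's ReadWriteVariables.)
    public void Main()
    {
        int rowCount = (int)Dts.Variables["User::RowCount"].Value;
        Dts.Variables["User::RowCount"].Value = rowCount + 1;
        Dts.TaskResult = (int)ScriptResults.Success;
    }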

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, used predominantly in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The differences between EDIFACT and TRADACOMS are slight: 1. TRADACOMS is a simpler standard than EDIFACT. 2. There is no functional acknowledgment in TRADACOMS. 3. Since it is just a business message to be sent to the trading partner, the various reference numbers at the STX, BAT and MHD levels need not be persisted in B2B, as no business logic is derived from them. Given this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. The STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined for them in B2B. These would include identifiers such as SenderCode, SenderName, RecipientCode, RecipientName, and reference numbers. Additionally, batching in this case can be achieved by sending all the messages of a batch in a single xml from the back-end application, with the total number of messages in the batch carried in the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start and end positions in the incoming document. However, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.
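
    For orientation, here is a hedged structural sketch of a TRADACOMS transmission; the STX, BAT, MHD and EOB segment names come from the description above, while the MTR and END trailers and the exact nesting are assumptions based on the standard's EDIFACT-like layout:

    STX  -- start of transmission (sender/recipient codes, reference numbers)
      BAT  -- batch header
        MHD  -- message header
          ...business detail segments...
        MTR  -- message trailer (assumed)
      EOB  -- end of batch (carries the total number of messages in the batch)
    END  -- end of transmission (assumed)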

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the “deploying from Visual Studio” specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx   For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions. It is also more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries): vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>   To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):   C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>   And from within a PowerShell script:   & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>   The command will consume a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio. If not, please refer to the above-mentioned article by Jakob Ehn.   It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx
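
    As a minimal sketch of that command-line form (server, database and paths below are placeholders, and the property shown is only an example), a profile-less publish from PowerShell could look like this:

    # Sketch: publish a dacpac without a .publish.xml profile (placeholder names)
    $sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe"
    & $sqlPackage /Action:Publish `
        /SourceFile:"C:\Drops\Database.dacpac" `
        /TargetServerName:"MYSERVER" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=True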

    Read the article

  • Stupid Geek Tricks: Change Your IP Address From the Command Line in Linux

    - by Taylor Gibb
    Almost everybody can figure out how to change their IP address using an interface, but did you know you can set your network card’s IP address using a simple command from the command line? Changing Your IP From the Command Line in Linux Note: This will work on all Debian-based Linux distros. To get started, type ifconfig into the terminal and hit Enter, then take note of the name of the interface that you want to change the settings for. To change the settings, you also use the ifconfig command, this time with a few parameters: sudo ifconfig eth0 192.168.0.1 netmask 255.255.255.0 That’s about all you need to do to change your IP; of course, the above command assumes a few things: The interface that you want to change the IP for is eth0 The IP you want to give the interface is 192.168.0.1 The subnet mask you want to set for the interface is 255.255.255.0 If you run ifconfig again you will see that your interface has now taken on the new settings you assigned to it. If you are wondering how to change the default gateway, you can use the route command. sudo route add default gw 192.168.0.253 eth0 will set your default gateway on the eth0 interface to 192.168.0.253. To see your new setting, you will need to display the routing table: route -n That’s all there is to it.
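
    One caveat worth adding: settings applied with ifconfig and route do not survive a reboot. As a minimal sketch for Debian-based systems (reusing the placeholder addresses above), the same configuration can be made persistent in /etc/network/interfaces:

    # /etc/network/interfaces -- sketch of a static eth0 stanza (Debian-based)
    auto eth0
    iface eth0 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        gateway 192.168.0.253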

    Read the article

  • Why is TDD not working here?

    - by TobiMcNamobi
    I want to write a class A that has a method calculate(<params>). That method should calculate a value using database data. So I wrote a class Test_A for unit testing (TDD). The database access is done using another class, which I have mocked with a class we can call Accessor_Mockup. Now, the TDD cycle requires me to add a test that fails and make the simplest changes to A so that the test passes. So I add data to Accessor_Mockup and call A.calculate with appropriate parameters. But why should A use the accessor class at all? It would be simpler (!) if the class just "knew" the values it could retrieve from the database. For every test I write I could introduce such a new value (or an if-branch or whatever). But wait ... TDD is more than that. There is the refactoring part. But that sounds to me like "OK, I can do this all with a big if-elseif construct. I could refactor it using a new class ... but instead I make use of the DB accessor and do this in a totally different way. The code will not necessarily look better afterwards, but I know I WANT to use the database".
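
    To make the injection point concrete, here is a minimal sketch of how Test_A and Accessor_Mockup could fit together; the class names come from the question, while the IDbAccessor interface, the summing logic and the use of NUnit are all assumptions:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Sketch: a hand-rolled mock lets the test drive A without a real database.
    public interface IDbAccessor { IEnumerable<int> FetchValues(); }

    public class Accessor_Mockup : IDbAccessor
    {
        private readonly int[] rows;
        public Accessor_Mockup(params int[] rows) { this.rows = rows; }
        public IEnumerable<int> FetchValues() { return rows; } // canned data, no database round-trip
    }

    public class A
    {
        private readonly IDbAccessor accessor;
        public A(IDbAccessor accessor) { this.accessor = accessor; } // injected, so tests can substitute the mock
        public int Calculate() { return accessor.FetchValues().Sum(); }
    }

    [TestFixture]
    public class Test_A
    {
        [Test]
        public void Calculate_SumsWhatTheAccessorReturns()
        {
            var a = new A(new Accessor_Mockup(1, 2, 3));
            Assert.AreEqual(6, a.Calculate());
        }
    }

    The point of the refactoring step then becomes visible: the test exercises the real accessor seam in production code, rather than a growing if-elseif table of memorized values.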

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users: it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database, with native "NoSQL" access to the storage layer without going first through SQL transformations and parsing. Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation
    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and get started.

    Connecting to the database
    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function: var nosql = require("mysql-js"); var dbProperties = { "implementation" : "ndb", "database" : "test" }; nosql.openSession(dbProperties, null, onSession); The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data
    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema: create table employee ( id int not null primary key, name varchar(32), salary float ) ENGINE=ndbcluster; Since the primary key is a number, you can provide the key as a number to the find function. var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find('employee', 0, onData); }; var onData = function(err, data) { if (err) { console.log(err); ... error handling } console.log('Found: ', JSON.stringify(data)); ... use data in application }; If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses, by specifying an annotation, and pass your domain model to the find function. var annotations = new nosql.Annotations(); var Employee = function(id, name, salary) { this.id = id; this.name = name; this.salary = salary; this.giveRaise = function(percent) { this.salary *= (1 + percent); } }; annotations.mapClass(Employee, {'table' : 'employee'}); var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find(Employee, 0, onData); };

    Updating data
    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function. var onData = function(err, emp) { if (err) { console.log(err); ... error handling } console.log('Found: ', JSON.stringify(emp)); emp.giveRaise(0.12); // gee, thanks! session.update(emp); // oops, session is out of scope here }; Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function: var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find(Employee, 0, onData, session); }; var onData = function(err, emp, session) { if (err) { console.log(err); ... error handling } console.log('Found: ', JSON.stringify(emp)); emp.giveRaise(0.12); // gee, thanks! session.update(emp, onUpdate); // session is now in scope }; var onUpdate = function(err, emp) { if (err) { console.log(err); ... error handling } };

    Inserting data
    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it: var onSession = function(err, session) { var data = new Employee(999, 'Mat Keep', 20000000); session.persist(data, onInsert); };

    Deleting data
    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant. var onSession = function(err, session) { var key = new Employee(999); session.remove(key, onDelete); };

    More extensive queries
    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate
    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here

    Read the article

  • My experience working with Teradata SQL Assistant

    - by Kevin Shyr
    Originally posted on: http://geekswithblogs.net/LifeLongTechie/archive/2014/05/28/my-experience-working-with-teradata-sql-assistant.aspx To this date, I still haven't figured out how to "toggle" between my query windows. It seems like unless I click on that "new" button on top, whatever SQL I generate from right-click just overwrites the current SQL in the window. I'm probably missing a "generate new sql in new window" setting. The default Teradata SQL Assistant doesn't execute just the SQL query I highlighted; there is a setting I have to change first. I'm not really happy that the SQL Assistant and SQL Admin are different apps. I'm still trying to get used to the fact that I can't quickly look up a table's keys/relationships while writing a query; I have to switch between windows. I LOVE the execution plan / explanation. I think that part is better done than in MS SQL in some ways. The error messages could be better. I feel that the Teradata .NET provider sends a smaller query command over than other providers do, though I don't have any hard data to support my claim. One of my queries in SSRS was passing multi-valued parameters to another query and got the error "Teradata 3577 row size or sort key size overflow". The search on this error says the solution is to cast the result column to a smaller data type, but I found that the problem was that the parameter passed into the where clause could not be too large. I wish Teradata SQL Assistant would remember the window size I just adjusted to. Every time I execute a query, the result set, query, and exec log auto re-adjust back to the default size. In SSMS, if I adjust the result set area to be smaller, it stays like that if I execute a query in the same window.
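
    As a hedged illustration of the cast workaround that the search suggested (table and column names are placeholders), the idea is simply to shrink the offending column in the select list:

    -- Sketch: cast a wide column down to reduce the row/sort key size
    SELECT emp_id,
           CAST(long_notes AS VARCHAR(500)) AS long_notes
    FROM   employee_notes
    ORDER  BY emp_id;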

    Read the article
