Search Results

Search found 37844 results on 1514 pages for 'function composition'.


  • Platform Builder: Cloning – the Linker is your Friend

    - by Bruce Eitman
    I was tasked this week with making a minor change to NetMsgBox() behavior. NetMsgBox() is a little function in NETUI that handles MessageBox() for the Network User Interface. The obvious solution is to clone the entire NETUI directory from Public\Common\Oak\Drivers (see Platform Builder: Clone Public Code for more on cloning). If you haven't already, take a minute to look in that folder. There are a lot of files in it, but I only needed to modify one function in one of those files. There must be a better way. Enter the linker. Instead of cloning the entire folder, here is what I did:
    1. Create a new folder in my Platform named NETUI (the name isn't important)
    2. Copy the C file that I needed to modify to the new folder, in this case netui.c
    3. Copy a makefile from one of the other folders (really they are all the same)
    4. Run Sysgen_capture:
       - Open a build window (see Platform Builder: Build Tools, Opening a Build Window)
       - Change directories to the new folder
       - Run "Sysgen_capture netui"
    5. Rename sources.netui to sources
    6. Add the C file to sources as SOURCES=netui.c (see the sketch at the end of this post)
    7. Modify the code
    8. Build the code
    Done. That is it: the functions from my new folder now replace the functions from the Public code and link with the rest to create NETUI.dll. There is a catch. If you remove any of the functions from the C file, linking will fail because the remaining functions will be found twice.
    Copyright © 2010 – Bruce Eitman All Rights Reserved
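    For illustration, the sources file from step 6 ends up looking something like this. This is only a sketch: Sysgen_capture generates the bulk of the file, and the SOURCES line is the only part added by hand.

      # sources: generated by "Sysgen_capture netui", renamed, then edited
      # (the generated directives above this line will vary by tree)
      SOURCES=netui.c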

  • WPF ListView as a DataGrid – Part 2

    - by psheriff
    In my last blog post I showed you how to create GridViewColumn objects on the fly from the metadata in a DataTable. By doing this you can create columns for a ListView at runtime instead of having to pre-define each ListView for each different DataTable. Many of us use collections of our own classes, and it would be nice to be able to do the same thing for those collection classes as well. This blog post will show you one approach for using collection classes as the source of the data for your ListView.

    Figure 1: A List of Data using a ListView

    Load Property Names
    You could use reflection to gather the property names in your class, however there are two things wrong with this approach. First, reflection is too slow, and second, you may not want to display all of the properties from your class in the ListView. Instead of reflection you could just create your own custom collection class of PropertyHeader objects. Each PropertyHeader object will contain a property name and a header text value at a minimum. You could add a width property if you wanted as well. All you need to do is create a collection of PropertyHeader objects where each object represents one column in your ListView. Below is a simple example:

      PropertyHeaders coll = new PropertyHeaders();
      coll.Add(new PropertyHeader("ProductId", "Product ID"));
      coll.Add(new PropertyHeader("ProductName", "Product Name"));
      coll.Add(new PropertyHeader("Price", "Price"));

    Once you have this collection created, you can pass it to a method that creates the GridViewColumn objects based on the information in the collection. Below is the full code for the PropertyHeader class. Besides the PropertyName and HeaderText properties, there is a constructor that allows you to set both properties when the object is created.

    C#
      public class PropertyHeader
      {
        public PropertyHeader()
        {
        }

        public PropertyHeader(string propertyName, string headerText)
        {
          PropertyName = propertyName;
          HeaderText = headerText;
        }

        public string PropertyName { get; set; }
        public string HeaderText { get; set; }
      }

    VB.NET
      Public Class PropertyHeader
        Public Sub New()
        End Sub

        Public Sub New(ByVal propName As String, ByVal header As String)
          PropertyName = propName
          HeaderText = header
        End Sub

        Private mPropertyName As String
        Private mHeaderText As String

        Public Property PropertyName() As String
          Get
            Return mPropertyName
          End Get
          Set(ByVal value As String)
            mPropertyName = value
          End Set
        End Property

        Public Property HeaderText() As String
          Get
            Return mHeaderText
          End Get
          Set(ByVal value As String)
            mHeaderText = value
          End Set
        End Property
      End Class

    You can use a generic List class to create a collection of PropertyHeader objects as shown in the following code.

    C#
      public class PropertyHeaders : List<PropertyHeader>
      {
      }

    VB.NET
      Public Class PropertyHeaders
        Inherits List(Of PropertyHeader)
      End Class

    Create Property Header Objects
    You need to create a method somewhere that will create and return a collection of PropertyHeader objects representing the columns you wish to add to your ListView, prior to binding your collection class to that ListView. Below is a sample method called GetProperties that builds a list of PropertyHeader objects with properties and headers for a Product object.

    C#
      public PropertyHeaders GetProperties()
      {
        PropertyHeaders coll = new PropertyHeaders();

        coll.Add(new PropertyHeader("ProductId", "Product ID"));
        coll.Add(new PropertyHeader("ProductName", "Product Name"));
        coll.Add(new PropertyHeader("Price", "Price"));

        return coll;
      }

    VB.NET
      Public Function GetProperties() As PropertyHeaders
        Dim coll As New PropertyHeaders()

        coll.Add(New PropertyHeader("ProductId", "Product ID"))
        coll.Add(New PropertyHeader("ProductName", "Product Name"))
        coll.Add(New PropertyHeader("Price", "Price"))

        Return coll
      End Function

    WPFListViewCommon Class
    Now that you have a collection of PropertyHeader objects, you need a method that will create a GridView and a collection of GridViewColumn objects based on this PropertyHeader collection. Below is a static/Shared method that you might put into a class called WPFListViewCommon.

    C#
      public static GridView CreateGridViewColumns(PropertyHeaders properties)
      {
        GridView gv;
        GridViewColumn gvc;

        // Create the GridView
        gv = new GridView();
        gv.AllowsColumnReorder = true;

        // Create the GridView Columns
        foreach (PropertyHeader item in properties)
        {
          gvc = new GridViewColumn();
          gvc.DisplayMemberBinding = new Binding(item.PropertyName);
          gvc.Header = item.HeaderText;
          gvc.Width = Double.NaN;
          gv.Columns.Add(gvc);
        }

        return gv;
      }

    VB.NET
      Public Shared Function CreateGridViewColumns( _
          ByVal properties As PropertyHeaders) As GridView
        Dim gv As GridView
        Dim gvc As GridViewColumn

        ' Create the GridView
        gv = New GridView()
        gv.AllowsColumnReorder = True

        ' Create the GridView Columns
        For Each item As PropertyHeader In properties
          gvc = New GridViewColumn()
          gvc.DisplayMemberBinding = New Binding(item.PropertyName)
          gvc.Header = item.HeaderText
          gvc.Width = [Double].NaN
          gv.Columns.Add(gvc)
        Next

        Return gv
      End Function

    Build the Product Screen
    To build the window shown in Figure 1, you might write code like the following:

    C#
      private void CollectionSample()
      {
        Product prod = new Product();

        // Setup the GridView Columns
        lstData.View = WPFListViewCommon.CreateGridViewColumns(
             prod.GetProperties());
        lstData.DataContext = prod.GetProducts();
      }

    VB.NET
      Private Sub CollectionSample()
        Dim prod As New Product()

        ' Setup the GridView Columns
        lstData.View = WPFListViewCommon.CreateGridViewColumns( _
             prod.GetProperties())
        lstData.DataContext = prod.GetProducts()
      End Sub

    The Product class contains a method called GetProperties that returns a PropertyHeaders collection. You pass this collection to WPFListViewCommon's CreateGridViewColumns method and it creates a GridView for the ListView. When you then feed the ListView's DataContext property the Product collection, the appropriate columns have already been created and data bound.

    Summary
    In this blog you learned how to create a ListView that acts like a DataGrid using a collection class. While it does take a little code, it is an alternative to creating each GridViewColumn in XAML, and it gives you a lot of flexibility. You could even read in the property names and header text from an XML file for a truly configurable ListView.

    NOTE: You can download the complete sample code (in both VB and C#) at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "WPF ListView as a DataGrid – Part 2" from the drop-down.

    Good Luck with your Coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
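    For completeness, here is a minimal sketch of the XAML for the ListView used above. Only the x:Name is dictated by the code-behind; the binding shown is one reasonable way to pick up the collection assigned to DataContext.

      <!-- ItemsSource binds to whatever the code-behind assigns to DataContext -->
      <ListView x:Name="lstData" ItemsSource="{Binding}" />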

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
    Microsoft SQL Server 2008, released in the first half of 2008, introduces new properties and capabilities to the SQL Server product family. These new and enhanced capabilities can be summed up in the classic words: secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. It is scalable and more manageable when compared to previous releases. Now we will look in detail at the features that make MS SQL Server 2008 more secure, more reliable, more scalable, etc. Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

    Table-Valued Parameters
    - Inserts into structures with 1-N cardinality are problematic: one order -> N order line items, where "N" is variable and can be large
    - You don't want to force a new order for every 20 line items, and one database round-trip per line item slows things down
    - There is no ARRAY data type in SQL Server; XML composition/decomposition has been used as an alternative
    - Table-valued parameters solve this problem
    - SQL Server has table variables: DECLARE @t TABLE (id int);
    - SQL Server 2008 adds strongly typed table variables: CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;
    - Parameters must use strongly typed table variables

    Table Variables are Input Only
    - Declare and initialize a TABLE variable:
        DECLARE @t mytab;
        INSERT @t VALUES (1), (2), (3);
        EXEC myproc @t;
    - The procedure must declare the variable READONLY:
        CREATE PROCEDURE usetable (@t mytab READONLY ...)
        AS
          INSERT INTO lineitems SELECT * FROM @t;
          UPDATE @t SET... -- no!

    T-SQL Syntax Enhancements
    - Single-statement declare and initialize: DECLARE @i int = 4;
    - Compound assignment operators: SET @i += 1;
    - Row constructors:
        DECLARE @t TABLE (id int, name varchar(20));
        INSERT INTO @t VALUES (1, 'Fred'), (2, 'Jim'), (3, 'Sue');

    Grouping Sets
    - Grouping Sets allow multiple GROUP BY clauses in a single SQL statement: multiple, arbitrary sets of subtotals
    - A single read pass for performance; nested subtotals provide even better performance
    - Grouping Sets are an ANSI standard; COMPUTE BY is deprecated

    GROUPING SETS, ROLLUP, CUBE
    - SQL Server 2008 adds ANSI-syntax ROLLUP and CUBE; the pre-2008 non-ANSI syntax is deprecated
    - WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in the GROUP BY
    - WITH CUBE produces 2^n different groupings, where n is the number of columns in the GROUP BY
    - GROUPING SETS provide a "halfway measure": just the number of different groupings you need
    - Grouping Sets are visible in the query plan

    GROUPING_ID and GROUPING
    - Grouping Sets can produce non-homogeneous sets: a grouping set includes NULL values for group members, so you need to distinguish grouping NULLs from data NULLs
    - GROUPING (column expression) returns 0 or 1: is this a group based on the column expression, or on a NULL value?
    - GROUPING_ID (a,b,c) is a bitmask: its bits are set based on column expressions a, b, and c

    MERGE Statement
    - Multiple set operations in a single SQL statement, using multiple sets as input: MERGE target USING source ON ...
    - Operations can be INSERT, UPDATE, DELETE
    - Operations are based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], and WHEN NOT MATCHED [BY SOURCE]

    More on MERGE
    - A MERGE statement can reference a $action column, used when MERGE is combined with the OUTPUT clause
    - Multiple WHEN clauses are possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause for NOT MATCHED BY TARGET
    - MERGE can be used with any table source
    - A MERGE statement causes triggers to be fired once; rows affected includes the total rows affected by all clauses

    MERGE Performance
    - The MERGE statement is transactional; no explicit transaction is required
    - One pass through the tables: at most a full outer join
    - Matching rows = when matched; left-outer join rows = when not matched by target; right-outer join rows = when not matched by source

    MERGE and Determinism
    - UPDATE using a JOIN is non-deterministic: if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE
    - MERGE is deterministic: if more than one row in the source matches the ON clause, it's an error

    Keeping Track of Dependencies
    - New dependency views replace sp_depends; the views are kept in sync as changes occur
    - sys.dm_sql_referenced_entities lists all named entities that an object references (for example: which objects does this stored procedure use?)
    - sys.dm_sql_referencing_entities
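    Since the MERGE bullets above are abstract, here is a minimal sketch of a complete statement; the table and column names are hypothetical.

      -- Hypothetical tables: dbo.Inventory (target), dbo.Shipment (source)
      MERGE dbo.Inventory AS t
      USING dbo.Shipment AS s
          ON t.ProductId = s.ProductId
      WHEN MATCHED THEN
          UPDATE SET t.Quantity = t.Quantity + s.Quantity
      WHEN NOT MATCHED BY TARGET THEN
          INSERT (ProductId, Quantity) VALUES (s.ProductId, s.Quantity)
      WHEN NOT MATCHED BY SOURCE THEN
          DELETE
      OUTPUT $action, inserted.ProductId, deleted.ProductId;  -- $action: INSERT/UPDATE/DELETE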

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
    When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services, even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times, and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>), is when assemblies actually load into a .NET process.

    There are a number of different ways that assemblies are loaded in .NET. When you create a typical project, assemblies usually come from:
    - The assembly reference list of the top level 'executable' project
    - The assembly references of referenced projects
    - Dynamic loading at runtime via AppDomain/Reflection loading

    In addition .NET automatically loads mscorlib (most of the System namespace) as part of the boot process that hosts the .NET runtime in EXE apps, or some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example, the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code).

    Assembly Loading
    The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether you have an assembly reference in a top level project or a dependent assembly, assemblies typically load on an as-needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies.

    To check this out I ran a simple test. I have a utility assembly, Westwind.Utilities, which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding, and a logging piece that allows logging Web content (a dependency on HttpContext.Current), this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library. The top level Console app has a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies, including System.Web. I then add a main program that accesses only a simple utility method in the Westwind.Utilities library, one that doesn't require any of the classes that access System.Web:

      static void Main(string[] args)
      {
          Console.WriteLine(StringUtils.NewStringId());
          Console.ReadLine();
      }

    StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command? I'll wait here while you think about it…

    So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list, we can see that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference, even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib), there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities loaded.

    If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled-in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that actually is referenced from your application, and any dependencies cascading down from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious, since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the compiler and linker figuring out what types and members are actually referenced, and including only those assemblies that are in fact referenced in your code or required by any of your dependencies.

    The good news here is: if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code, or any dependent assembly code that you are calling, that assembly is never loaded into memory!

    Some Hosting Environments pre-load Assemblies
    The load behavior can vary however. In Console and desktop applications we have full control over assembly loading, so we see the core CLR behavior. However other environments, like ASP.NET for example, will preload referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically, ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used.

    Understanding when Assemblies Load
    To clarify what I described in the first example and see it actually happen, let's look at a couple of other scenarios. To see assemblies loading at runtime in real time, let's create a utility function to print out loaded assemblies to the console:

      public static void PrintAssemblies()
      {
          var assemblies = AppDomain.CurrentDomain.GetAssemblies();
          foreach (var assembly in assemblies)
          {
              Console.WriteLine(assembly.GetName());
          }
      }

    Now let's look at the first scenario, where I have a class method that internally uses System.Web:

      static void Main(string[] args)
      {
          Console.WriteLine(StringUtils.NewStringId());
          Console.ReadLine();
          PrintAssemblies();
      }

      public static void WebLogEntry()
      {
          var entry = new WebLogEntry();
          entry.UpdateFromRequest();
          Console.WriteLine(entry.QueryString);
      }

    UpdateFromRequest() internally accesses HttpContext.Current to read some information off the ASP.NET Request object, so it clearly needs a reference to System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example?

    No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else), System.Web is not loaded. .NET dynamically loads assemblies as code that needs them is called. No code references the WebLogEntry() method, and so System.Web is never loaded.

    Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to:

      static void Main(string[] args)
      {
          Console.WriteLine(StringUtils.NewStringId());

          Console.WriteLine("--- Before:");
          PrintAssemblies();

          WebLogEntry();

          Console.WriteLine("--- After:");
          PrintAssemblies();

          Console.ReadLine();
      }

      public static void WebLogEntry()
      {
          var entry = new WebLogEntry();
          entry.UpdateFromRequest();
          Console.WriteLine(entry.QueryString);
      }

    Looking at the code now, when do you think System.Web will be loaded? Will the "before" list include it? Yup, System.Web gets loaded, but only after it's actually referenced. In fact, right up until the call to UpdateFromRequest() System.Web is not loaded - it only loads when the method is actually called and the executing code requires the reference.

    Moral of the Story
    So what have we learned - or maybe remembered again?
    - Dependent assembly references are not pre-loaded when an application starts (by default)
    - Dependent assemblies that are not referenced by executing code are never loaded
    - Dependent assemblies are just-in-time loaded when first referenced in code

    All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about every day, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I learned about it. And apparently I'm not the only one, as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen, or assumed incorrectly that just having a reference automatically loads the assembly.

    The moral of the story for me is: trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web-specific log entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process. IOW, System.Web only loads when I use that specific feature, and if I am, well, I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: pulling out the WebLogEntry class, sticking it into another assembly and breaking up the logging code. In this case - definitely not worth it.

    So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs, and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's subject for a whole other post…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, CSharp
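    If you want to watch these just-in-time loads happen as they occur, rather than polling GetAssemblies(), a minimal sketch using the standard AppDomain.AssemblyLoad event:

      static void Main(string[] args)
      {
          // Fires once for each assembly as the CLR loads it on first use
          AppDomain.CurrentDomain.AssemblyLoad += (sender, e) =>
              Console.WriteLine("Loaded: " + e.LoadedAssembly.GetName().Name);

          Console.WriteLine(StringUtils.NewStringId()); // triggers the Westwind.Utilities load
          Console.ReadLine();
      }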

  • C# Neural Networks with Encog

    - by JoshReuben
    Neural Networks
    · I recently read the book Introduction to Neural Networks for C#, by Jeff Heaton. http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell. Not the first ANN book I've perused, but a nice revision.
    · Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network, http://en.wikipedia.org/wiki/Category:Machine_learning
    · Problems not suited to a neural network solution: programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, and problems in which you must know exactly how the solution was derived.
    · Problems suited to a neural network: pattern recognition, classification, series prediction, and data mining. Pattern recognition – the network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification – take input samples and classify them into fuzzy groups.
    · As far as machine learning approaches go, I think SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine) – a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights, and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are a black box.
    · In this post, I'm not going to go into internals (believe me, I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized.
    · Under the hood, there is very little maths. In a nutshell, some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) and transposed from an input column vector into a row vector; these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a boolean match result. For backpropagation training, a derivative function is required. In learning, hill climbing mechanisms such as Genetic Algorithms and Simulated Annealing are used to escape local minima. For unsupervised training, such as found in Self Organizing Maps used for OCR, Hebb's rule is applied.
    · The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API: Encog.

    Encog
    · Encog is a neural network API.
    · Links to Encog: http://www.encog.org, http://www.heatonresearch.com/encog, http://www.heatonresearch.com/forum
    · Encog requires .NET 3.5 or higher – there is also a Silverlight version. Third-party libraries: log4net and nunit.
    · Encog supports feedforward, recurrent, self-organizing map, radial basis function and Hopfield neural networks.
    · Encog neural networks, and related data, can be stored in .EG XML files.
    · Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can generate code.

    Synapses and layers
    · Layers and synapses are the primary building blocks. Almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer.
    · To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of a neural network, and receive the solution through the output layer of a neural network.
    · The input layer: for each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData based class will be returned from the neural network's output layer.
    · Convert a double array into an INeuralData object:
        INeuralData data = new BasicNeuralData(new double[10]);
    · The output layer: the neural network outputs an array of doubles, wrapped in a class based on the INeuralData interface.
    · The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted.
    · Hidden layers are optional, sit between the input and output layers, and are very much a "black box". If the structure of the hidden layer is too simple, it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers: the input layer may be directly connected to the output layer. Further, some neural networks have only a single layer; a single layer neural network has the single layer self-connected.
    · Connections, called synapses, contain individual weight matrixes. These values are changed as the neural network learns.

    Constructing a Neural Network
    · The XOR operator is a frequent "first example": the "Hello World" application for neural networks.
    · The XOR operator only returns true when both inputs differ:
        0 XOR 0 = 0
        1 XOR 0 = 1
        0 XOR 1 = 1
        1 XOR 1 = 0
    · Structuring a neural network for XOR: two inputs to the XOR operator and one output.
    · Input: 0.0,0.0  1.0,0.0  0.0,1.0  1.0,1.0
    · Expected output: 0.0  1.0  1.0  0.0
    · A perceptron, a simple feedforward neural network, can learn the XOR operator.
    · Because the XOR operator has two inputs and one output, the neural network follows suit. Additionally, the neural network will have a single hidden layer, with two neurons to help process the data. The choice of 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error.
    · Neuron Diagram for the XOR Network
    · The Encog Workbench displays neural networks on a layer-by-layer basis.
    · Encog Layer Diagram for the XOR Network
    · Create a BasicNetwork. Three layers are added to this network. The FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers.
        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(1));
        network.Structure.FinalizeStructure();
        network.Reset();
    · Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off, and it may be necessary to reset the weights again if training is ineffective. These weights make up the long-term memory of the neural network. Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values.
    · Now that the neural network has been created, it must be trained.

    Training a Neural Network
    · Construct an INeuralDataSet object. It contains the input array and the expected output array (of corresponding range). Even though there is only one output value, we must still use a two-dimensional array to represent the output.
        public static double[][] XOR_INPUT = {
            new double[2] { 0.0, 0.0 },
            new double[2] { 1.0, 0.0 },
            new double[2] { 0.0, 1.0 },
            new double[2] { 1.0, 1.0 } };

        public static double[][] XOR_IDEAL = {
            new double[1] { 0.0 },
            new double[1] { 1.0 },
            new double[1] { 1.0 },
            new double[1] { 0.0 } };

        INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);
    · Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training. Resilient Propagation (RPROP) is a general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network below 1%. Once the neural network has been trained, it is ready for use.
        ITrain train = new ResilientPropagation(network, trainingSet);

        for (int epoch = 0; epoch < 10000; epoch++)
        {
            train.Iteration();
            Debug.Print("Epoch #" + epoch + " Error:" + train.Error);
            if (train.Error < 0.01) break;  // stop once the error drops below 1%
        }

    Executing a Neural Network
    · Call the Compute method on the BasicNetwork class.
        Console.WriteLine("Neural Network Results:");
        foreach (INeuralDataPair pair in trainingSet)
        {
            INeuralData output = network.Compute(pair.Input);
            Console.WriteLine(pair.Input[0] + "," + pair.Input[1]
                + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
        }
    · The Compute method accepts an INeuralData class and also returns an INeuralData object.
        Neural Network Results:
        0.0,0.0, actual=0.002782538818034049,ideal=0.0
        1.0,0.0, actual=0.9903741937121177,ideal=1.0
        0.0,1.0, actual=0.9836807956566187,ideal=1.0
        1.0,1.0, actual=0.0011646072586172778,ideal=0.0
    · The network has not been trained to give exact results. This is normal. Because the network was trained to 1% error, each of the results will generally be within 1% of the expected value.

  • How to use SharePoint modal dialog box to display Custom Page Part1

    - by ybbest
    In part 1 of this series, I will show you how to use the modal dialog box to display a custom page and close it. You can download the solution here.

    1. First, create a custom action on the list item ECB called Display Custom Page. To do so, create an element item in your SharePoint project and copy the following XML into the element file:

      <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
        <CustomAction Id="ReportConcern"
                      RegistrationType="ContentType"
                      RegistrationId="0x010100866B1423D33DDA4CA1A4639B54DD4642"
                      Location="EditControlBlock"
                      Sequence="107"
                      Title="Display Custom Page"
                      Description="To Display Custom Page in a modal dialog box on this item">
          <UrlAction Url="javascript:
            function CallDETCustomDialog(dialogResult, returnValue) {
              SP.UI.ModalDialog.RefreshPage(SP.UI.DialogResult.OK);
            }
            var options = {
              url: '{SiteUrl}' + '/_layouts/YBBEST/TitleRename.aspx?List={ListId}&amp;ID={ItemId}',
              title: 'Rename title',
              allowMaximize: false,
              showClose: true,
              width: 500,
              height: 300,
              dialogReturnValueCallback: CallDETCustomDialog
            };
            SP.UI.ModalDialog.showModalDialog(options);" />
        </CustomAction>
      </Elements>

    2. In your code-behind, implement a close dialog function as below. This will close the modal dialog box once the button is clicked.

      protected void CloseDialog()
      {
          if (HttpContext.Current.Request.QueryString["IsDlg"] == null)
              return;

          if (!ClientScript.IsStartupScriptRegistered("CloseDialogFunction"))
          {
              const string script = "<script type='text/javascript'>" +
                                    "SP.UI.ModalDialog.commonModalDialogClose(1, 1);" +
                                    "</script>";
              ClientScript.RegisterStartupScript(GetType(), "CloseDialogFunction", script);
          }
      }

  • AutoScroll panel working intermittently.

    - by Edward Boyle
    I spent hours last week trying to get AutoScroll to function properly on a derived/inherited panel control I have been writing. I found no answers on my own, so I posted to several forums and moved on to other code while I waited for a reply. Then, out of nowhere, it started working properly. Now, today (about a week later), I notice it is no longer working again! I go back to those old posts with hopes I will find an answer – no such luck. I Google for about two hours, reading everything I come across. I was just about to write a new custom control from the ground up, perhaps use a little unmanaged code to force things to function properly. All I knew was "options in front of me = delays". Just before I gave up, my head in my hands, Jordan Sirwin's appropriately titled blog post, "C#: Windows Panel AutoScroll Bug / Intended Suckyness", saved the day: In order for scroll bars to display, there must be at least one control in the Panel with AutoSize set to true. This is absurd… I'm not sure if this is a bug or intended, but it's stupid. – I feel your pain. How many others have spent hours on this, or worse, just plain given up? I want those hours back, damnit!
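    For anyone hitting the same wall, a minimal sketch of the workaround that quote implies; the control names and sizes are my assumptions, not from the original post:

      using System.Drawing;
      using System.Windows.Forms;

      // e.g. inside a Form's constructor
      var panel = new Panel { AutoScroll = true, Size = new Size(200, 200) };

      // AutoScroll only produces scrollbars once at least one child is AutoSize = true
      var anchor = new Label
      {
          AutoSize = true,
          Location = new Point(0, 600)  // placed past the visible area to force scrolling
      };
      panel.Controls.Add(anchor);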

  • My Optimized Adam & Eve

    - by MarkPearl
    Today I had a few minutes in the evening to go over my original Adam and Eve code… what I wanted to see tonight was whether I could optimize the code any further… which I was pretty sure could be done. Ultimately what I wanted to find from the experiment was a balance between optimized code and reusable code. On the one hand I could put everything into a single function and end up with a totally unusable function that is extremely compressed, which would have big comebacks when making modifications at a later stage. Alternatively I could have many single-line functions that are extremely loosely coupled but sparsely spaced, and so would almost be too fragmented to grok. Ultimately I found with my current iteration something that I consider readable, yet compressed. Code below…

      // Learn more about F# at http://fsharp.net
      open System

      let people = [ ("Adam", None); ("Eve", None); ("Cain", Some("Adam", "Eve")); ("Abel", Some("Adam", "Eve")) ]

      //
      // Prints the details
      //
      let showDetails(person : string * (string * string) option) =
          let ParentsName =
              let parents = snd(person)
              match parents with
              | Some(dad, mum) -> "Father " + dad + " and Mother " + mum
              | None -> "Has no parents!"
          let result = fst(person) + Environment.NewLine + ParentsName
          result

      //
      // Searches an array of people and looks for a match of names
      //
      let findPerson(name : string, people : (string * (string * string) option) list) =
          // Try and find a match of the name
          let o = Seq.tryFind(fun person ->
              match name with
              | firstName when firstName = fst(person) -> true
              | _ -> false) people

          // Show the details based on the match result
          match o with
          | Option.Some(x) -> showDetails(Option.get(o))
          | _ -> "Not Found"

      Console.WriteLine(findPerson("Cains", people))
      Console.ReadLine()

  • Google Analytics and Whos.amung.us in realtime visitors, why such an enormous discrepancy?

    - by jacouh
    For years I have used both Google Analytics and whos.amung.us on a site; the Google Analytics and whos.amung.us javascripts are inserted in the same pages in the tracked part of the site. For real-time visitors, why such an enormous discrepancy? For example, at the moment Google Analytics gives me 9 visitors while whos.amung.us indicates 59, a ratio of 6. Why is whos.amung.us six times more optimistic than Google Analytics in terms of real-time visitors?

    My questions are:
    - Does whos.amung.us not detect robots while Google does?
    - Does GA ignore visitors from some countries, but not whos.amung.us?
    - Do some robots/bots execute the whos.amung.us tracking javascript, while no robots/bots can execute the tracking javascript provided by Google Analytics?

    To facilitate your analysis, I copy the JS code used below.

    Google Analytics:

      <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'MyGaAccountNo']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
      </script>

    Whos.amung.us:

      <script>
        var _wau = _wau || [];
        _wau.push(["tab", "MyWAUAccountNo", "c6x", "right-upper"]);
        (function() {
          var s = document.createElement("script");
          s.async = true;
          s.src = "http://widgets.amung.us/tab.js";
          document.getElementsByTagName("head")[0].appendChild(s);
        })();
      </script>

    I've already signaled this to the WAU staff some time ago, NR; I've not done this to Google as they don't handle this kind of feedback. Thank you for your explanations.

  • Are `break` and `continue` bad programming practices?

    - by Mikhail
    My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration:

      function verify(object) {
          if (object->value < 0) return false;
          if (object->value > object->max_value) return false;
          if (object->name == "") return false;
          ...
      }

    The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops:

      while (primary_condition) {
          if (loop_count > 1000) break;
          if (time_exect > 3600) break;
          if (this->data == "undefined") continue;
          if (this->skip == true) continue;
          ...
      }

    I think this makes it easier to read and debug, but I also don't see a downside. Please comment.

  • Is duck typing a subset of polymorphism

    - by Raynos
    From Polymorphism on Wikipedia:

      In computer science, polymorphism is a programming language feature that allows values of different data types to be handled using a uniform interface.

    From Duck typing on Wikipedia:

      In computer programming with object-oriented programming languages, duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.

    My interpretation is that with duck typing, the object's methods/properties determine the valid semantics. Meaning that the object's current shape determines the interface it upholds. With polymorphism you can say a function is polymorphic if it accepts multiple different data types as long as they uphold an interface. So if a function can duck type, it can accept multiple different data types and operate on them, as long as those data types have the correct methods/properties and thus uphold the interface. (Usage of the term "interface" here is meant not as a code construct but as a descriptive, documenting construct.)

    What is the correct relationship between duck typing and polymorphism? If a language can duck type, does it mean it can do polymorphism?
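    To make the relationship concrete, here is a minimal sketch in C# (the class names are invented for illustration): the dynamic keyword gives duck typing, and the same function becomes polymorphic over any type exposing a Speak() method, with no shared base class or interface.

      using System;

      class Duck  { public string Speak() { return "Quack"; } }
      class Robot { public string Speak() { return "Beep"; } }  // unrelated to Duck

      class Program
      {
          // Duck-typed and polymorphic: any argument with a Speak() method is valid;
          // the check happens at runtime, not via a declared interface.
          static void Describe(dynamic thing)
          {
              Console.WriteLine(thing.Speak());
          }

          static void Main()
          {
              Describe(new Duck());   // Quack
              Describe(new Robot());  // Beep
          }
      }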

  • Why are marketing employees, product managers, etc. deserving of their own office, yet programmers are jammed in a room as many as possible?

    - by TheImirOfGroofunkistan
    I don't understand why many (many) companies treat software developers like they are assembly line workers making widgets. Joel Spolsky has a great example of the problems this creates:

      With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed. Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds). Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!

    Quote Link | More Spolsky on Offices

    Why don't managers and owners see this?

  • SQL SERVER – New SQL Server 2012 Functions – Webinar by Rick Morelan

    - by Pinal Dave
    My friend Rick Morelan is a wonderful speaker, and listening to him is very delightful. Rick is one of those speakers who can articulate a very complex subject in very simple words. Rick has attained over 30 Microsoft certifications in applications, networking, databases and .NET development, including MCDBA, MCTS, MCITP, MCAD, MOE, MCSE and MCSE+. Here is the chance for everyone who has not listened to Rick Morelan before, as he is presenting an online webinar on New SQL Server 2012 Functions.

    Whether you are a database developer or an administrator, you love the power of SQL functions. The functions in SQL Server give you the power to accelerate your applications and database performance. Each version of SQL Server adds new functionality, so come and see Rick Morelan explain what's new in SQL Server 2012! This webinar will focus on the new string, time and logical functions added to SQL Server 2012.

    Register for the webinar now to learn:
    - SQL Server 2012 function basics
    - String, time and logical function details
    - Tools to accelerate the SQL coding process

    Tuesday June 11, 2013
    7:00 AM PDT / 10:00 AM EDT
    11:00 AM PDT / 2:00 PM EDT

    Secret Hint: Here is something I would like to tell everyone: there is a quiz coming up on SQLAuthority.com, and those who attend the webinar will find it very easy to solve. Register for the webinar.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
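    For a taste of what the webinar covers, here is a small sketch of several functions new in SQL Server 2012; the sample values are arbitrary:

      SELECT
          CONCAT('SQL', ' Server ', 2012)      AS StringConcat,   -- string: NULL-safe, implicit conversion
          FORMAT(GETDATE(), 'yyyy-MM-dd')      AS FormattedDate,  -- string: .NET-style format strings
          EOMONTH('2013-06-11')                AS EndOfMonth,     -- date/time: last day of the month
          DATEFROMPARTS(2013, 6, 11)           AS BuiltDate,      -- date/time: build a date from parts
          IIF(1 > 0, 'yes', 'no')              AS LogicalIif,     -- logical: inline IF
          CHOOSE(2, 'one', 'two', 'three')     AS LogicalChoose,  -- logical: pick the Nth item
          TRY_CONVERT(int, 'abc')              AS SafeConvert;    -- conversion: NULL instead of an error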

  • Unity Plugin DLLNotFoundException

    - by Dewayne
    I am using a plugin DLL that I created in Visual C++ Express 2010 on Windows 7 64-bit Ultimate Edition. The DLL functions properly on the machine it was originally created on. The problem is that the DLL is not functioning in the Unity3d Editor on another machine, giving an error that basically states that the DLL is missing some of its dependencies. The target machine is running Windows 7 Home 64-bit (if this is relevant).

    Results from the error log of Dependency Walker:
    - Error: The Side-by-Side configuration information for "c:\users\dewayne\desktop\shared\vrpnplugin\unityplugin\build\release\OPTITRACKPLUGIN.DLL" contains errors. The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log or use the command-line sxstrace.exe tool for more detail (14001).
    - Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.
    - Error: Modules with different CPU types were found.
    - Warning: At least one delay-load dependency module was not found.
    - Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.

    The Visual C++ Express 2010 project and solution file can be found here: https://docs.google.com/leaf?id=0B1F4pP7mRSiYMGU2YTJiNTUtOWJiMS00YTYzLThhYWQtMzNiOWJhZDU5M2M0&hl=en&authkey=CJSXhqgH The zip is 79MB and also contains the DLL's dependencies. The DLL in question is OptiTrackPlugin.dll.
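    For context, a Unity script typically binds to a native plugin along these lines; the exported function below is hypothetical, and the real entry points depend on the plugin. A DllNotFoundException at this boundary usually means the DLL itself, or one of its native dependencies, could not be resolved on the target machine.

      using System.Runtime.InteropServices;
      using UnityEngine;

      public class OptiTrackBehaviour : MonoBehaviour
      {
          // Hypothetical export name, for illustration only
          [DllImport("OptiTrackPlugin")]
          private static extern int TT_Initialize();

          void Start()
          {
              // Throws DllNotFoundException if the DLL or its dependencies are missing
              Debug.Log("OptiTrack init: " + TT_Initialize());
          }
      }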

  • Getting NLog Running in Partial Trust

    - by grant.barrington
    To get things working you will need to:
    - Strong-name sign the assembly
    - Allow partially trusted callers

    In the AssemblyInfo.cs file you will need to add the "AllowPartiallyTrustedCallers" assembly attribute (see the snippet at the end of this post). You should now be able to get NLog working as part of a partial trust installation, except that the File target won't work. Other targets will still work (database, for example).

    Changing BaseFileAppender.cs to get file logging to work
    In the directory \Internal\FileAppenders there is a file called "BaseFileAppender.cs". Make a change to the function "TryCreateFileStream()", where the error occurs. Change the function to be:

      private FileStream TryCreateFileStream(bool allowConcurrentWrite)
      {
          FileShare fileShare = FileShare.Read;
          if (allowConcurrentWrite)
              fileShare = FileShare.ReadWrite;

      #if DOTNET_2_0
          if (_createParameters.EnableFileDelete && PlatformDetector.GetCurrentRuntimeOS() != RuntimeOS.Windows)
          {
              fileShare |= FileShare.Delete;
          }
      #endif

      #if !NETCF
          try
          {
              if (PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.WindowsNT) ||
                  PlatformDetector.IsCurrentOSCompatibleWith(RuntimeOS.Windows))
              {
                  return WindowsCreateFile(FileName, allowConcurrentWrite);
              }
          }
          catch (System.Security.SecurityException secExc)
          {
              InternalLogger.Error("Security Exception Caught in WindowsCreateFile. {0}", secExc.Message);
          }
      #endif

          return new FileStream(FileName, FileMode.Append, FileAccess.Write, fileShare, _createParameters.BufferSize);
      }

    Basically we wrap the call in a try..catch. If we catch a SecurityException when trying to create the FileStream using WindowsCreateFile(), we just swallow the exception and use the native System.IO.FileStream instead.
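    The AssemblyInfo.cs attribute mentioned at the top looks like this:

      // AssemblyInfo.cs
      using System.Security;

      [assembly: AllowPartiallyTrustedCallers]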

  • What Banks Can Learn From An English Teacher’s Advice

    - by Gaurav H
    The earliest definitions I learnt at school pertained to nouns and verbs. Nouns, my teacher said, indicated names of people, things and places. Verbs, the stern lady said, are "action words". They indicated motion.

    The idea for this blog filtered in when I applied these definitions to the entity I most often deal with for my personal financial needs, and think about or relate to from a professional standpoint: 'a bank'. Noun? It certainly is. At least that's how I'd had it figured in my head. It used to be a place I visited to get my financial business done. It is the name of an entity I have a business relationship with. But, taking a closer look at how 'the bank' has evolved recently makes me wonder. Is it not after all acquiring some shades of a verb? For one, it's in motion if I consider my mobile device with its financial apps. For another, it's in 'quasi-action' if I consider a highly interactive virtual bank.

    The point I'm driving at is not semantic. But the words we use and the way we use them are revealing, and can offer tremendous insights into our existing mindsets. I think the same applies to businesses. Banks that first began examining and deconstructing their cherished 'definitions' or business models (nouns) were the earliest to adapt, change, and reinvent (verbs). They were able to waltz past disintermediation threats. Though rooted in a 'brick and mortar' heritage, their thinking and infrastructure were flexible enough for the digital era. While their physical premises imposed restrictions—opening hours, transaction hours, appointments, waiting time, overcrowding, processing time, clearing time, etc.—their thinking did not. They innovated. Across traditional and new-era channels, they easily slipped in customer services of a differentiated kind: spot loans, deposits with idle account balances, convenient mortgages with multiple liens or collateral, and instant payment options.

    I believe the most successful banks are those that fit into the rhythm of their customers' lives rather than forcing their customers to fit into theirs. It was true for banks that existed before the Internet era; it's true for banks now. I look no further than UBANK, JIBUN and HBOS Germany to make my point. They are resounding successes because they are not trapped in their own definitions of 'a bank'. They walk with their customers, rather than waiting for their clients to walk in for services.

    Back to my English teacher. She once advised me to use more verbs in my composition. Readers relate better to "action", she said. Banks too can profit from her advice. To succeed, they need to interact more. And remain flexible enough to interact with their customers.

    Sonny Singh is Senior Vice President and General Manager of the Oracle Financial Services Global Business Unit. He can be reached at sonny.singh AT oracle.com or on twitter @sonnyhsingh

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I'll extend the "Customer Transaction Summary" report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I've worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I've been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let's take the "Customer Transaction Summary" report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    Start Date – beginning of the date range for which the report will summarize customer transactions
    End Date – end of the date range for which the report will summarize customer transactions
    Customer Ids – one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I'm using SSRS 2012, but there should be little to no difference with SSRS 2008). When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine the parameters and output fields for you automatically, and as part of this process a parameter-value dialog pops up. Obviously I can't use this dialog to specify a value for the '@customerIds' parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there's little clue given as to how to remedy the situation. I don't know for sure, but I think that behind-the-scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I'd be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the "Operand type clash" error message persisted.

    The Text Query Approach

    Just because we're using a stored procedure to create the dataset for this report doesn't mean that we can't use the 'Text' Query Type option and construct an EXEC statement that will invoke the stored procedure.
    In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let's take a look at what we'll have to do to make this work.

    Manually Define Parameters

    First, we'll need to manually define the parameters for the report by right-clicking on the 'Parameters' folder in the 'Report Data' window. We'll need to define '@startDate' and '@endDate' as simple date parameters. We'll also create a parameter called '@customerIds' that will be a multi-valued Integer parameter. In the 'Available Values' tab we'll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs.

    Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the 'rpt_CustomerTransactionSummary' stored procedure. This time we'll choose the 'Text' query type option and put the following into the 'Query' text area:

    1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
    2: ' EXEC rpt_CustomerTransactionSummary
    3:     @startDate=''' + @startDate + ''',
    4:     @endDate=''' + @endDate + ''',
    5:     @customerIds=@customerIdList')

    By using the 'Text' query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer's point of view this query defines three parameters:

    @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won't actually ever get passed into the stored procedure. I'll go into how this will work in a bit.
    @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query, and should even be able to detect the fields that the stored procedure will return without needing any values to be specified when prompted. Once the dataset has been correctly defined we'll have a @customerIdInserts parameter listed in the 'Parameters' tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the '@customerIds' parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we'll need to add some custom code to our report using the 'Report Properties' dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that's outside the scope of this post so we'll stick with the "quick and dirty" VB .NET approach for now. Here's the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

    Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
        Dim insertStatements As New System.Text.StringBuilder()
        For Each paramValue As Object In paramValues
            insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
        Next
        Return insertStatements.ToString()
    End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer's Parameters tab and update the expression for the '@customerIdInserts' parameter by clicking on the button with the "function" symbol that appears to the right of the parameter value. We'll set the expression to:

    =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

    In order to invoke our custom code method we simply need to invoke "Code.<method name>" and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the '@customerIds' parameter (this evaluates to an object array at run time). Finally, we'll need to edit the properties of the '@customerIdInserts' parameter on the report to mark it as a nullable internal parameter so that users aren't prompted to provide a value for it when running the report.

    Limitations And Final Thoughts

    When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query.

    If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven't found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there's a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true "join-multi-valued-parameter-to-CSV-and-split-in-the-query" approach for using multi-valued parameters in a stored procedure.
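    As an aside, here is a minimal C# sketch of the "special steps" mentioned earlier for calling the same stored procedure with a table-valued parameter from plain ADO.NET, outside of SSRS. The column name "Value" and the type name "dbo.IntegerListTableType" are assumptions; match them to however the user-defined table type was actually declared:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public static class TransactionSummaryRunner
    {
        public static void Run(string connectionString, DateTime startDate,
                               DateTime endDate, IEnumerable<int> customerIds)
        {
            // Build a DataTable whose shape matches the user-defined table type.
            var idTable = new DataTable();
            idTable.Columns.Add("Value", typeof(int)); // assumed column name

            foreach (int id in customerIds)
                idTable.Rows.Add(id);

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("rpt_CustomerTransactionSummary", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@startDate", startDate);
                command.Parameters.AddWithValue("@endDate", endDate);

                // A table-valued parameter must be marked Structured and given
                // the server-side name of the user-defined table type.
                SqlParameter tvp = command.Parameters.AddWithValue("@customerIds", idTable);
                tvp.SqlDbType = SqlDbType.Structured;
                tvp.TypeName = "dbo.IntegerListTableType"; // assumed type name

                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ... consume each customer's transaction summary row ...
                    }
                }
            }
        }
    }

    No string concatenation is needed here, which is exactly why the SSRS limitation is so frustrating.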

    Read the article

  • Isometric displaying two different images in different positions

    - by Canvas
    I'm creating a simple isometric game using HTML5 and JavaScript, but I can't seem to get the display to work. At the moment I have 9 tiles that have X and Y positions, and the player has an X and Y position. The player's X and Y properties are set to 100, and the tiles are as shown:

    tiles[0] = new Array(3);
    tiles[1] = new Array(3);
    tiles[2] = new Array(3);

    tiles[0][0] = new point2D( 100, 100);
    tiles[0][1] = new point2D( 160, 100);
    tiles[0][2] = new point2D( 220, 100);
    tiles[1][0] = new point2D( 100, 160);
    tiles[1][1] = new point2D( 160, 160);
    tiles[1][2] = new point2D( 220, 160);
    tiles[2][0] = new point2D( 100, 220);
    tiles[2][1] = new point2D( 160, 220);
    tiles[2][2] = new point2D( 220, 220);

    Now I use this method to work out the isometric position:

    function twoDToIso( point ) {
        var cords = new point2D( 0, 0 ); // create a fresh point for the converted coordinates
        cords.x = point.x - point.y;
        cords.y = (point.x + point.y) / 2;
        return cords;
    }

    point2D is:

    function point2D( x, y ) {
        this.x = x;
        this.y = y;
    }

    Now this, I'm sure, works out the correct positioning, but here is the output (screenshot: "Isometric view"). I just need to move my player position a tiny bit, but is that the best way to display my player position in the right position?

    Canvas

    P.S. The tile width is 120 and the height is 60, and the player is 30 wide by 15 high.

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a C/C++ project that was started in the 90s. As a result, it contains many old coding styles, such as K&R-style function declarations, obsolete functions, ... The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010), because we have other projects in Visual Studio 2010/2012 and I don't want to have too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I can fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too; the project should remain backwards compatible. My question is: how can I use the old code in Visual Studio 2010/2012 without changing it? Or, if changes are necessary, how do I fix just a few lines of code while making sure they won't cause errors if someone else opens the code in an older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags, or something like that?

    Read the article

  • Release Management as Orchestra

    - by ericajanine
    I read an excellent, concise article (http://www.buildmeister.com/articles/software_release_management_best_practices) on the basics of release management practices. In the article, it states "Release Management is often likened to the conductor of an orchestra, with the individual changes to be implemented the various instruments within it."

    I played in music ensembles for years, so this example is especially close to my heart. I learned most of my discipline from hours and hours of practice at the hand of a very skilled conductor and leader. I also learned that the true magic in symphonic performance comes when everyone involved is focused on one sound, one goal. In turn, that solid focus creates a sound and experience bigger than what mechanics alone accomplish.

    In symphony, a conductor's true purpose is to make you, a performer, better so the overall sound and end product is better. The big picture (the performance of the composition) is the end-game, and all musicians in the orchestra know without question that their part makes up an important but incomplete piece of that performance. A good conductor works with each section (e.g. group) to ensure their individual pieces are solid. Let's restate: the conductor leads and is responsible for ensuring those pieces are solid. While the performers themselves are doing the work, the conductor is the final authority on when the pieces are ready or not. If not, the conductor initiates the efforts to get them ready or makes the decision to scrap their parts altogether for the sake of the overall performance. Let it sink in, because it's clear: it is not the performer's call whether they play their part as agreed; it's the conductor's final call to allow it.

    In comparison, if a software release manager is a conductor, the only way for that manager to be effective is to drive the overarching process and execution of the individual pieces of a software development lifecycle. It does not mean the release manager performs each and every piece; it means the release manager has oversight and influence, because the end-game is a successful software enhancement in a usable environment. It means the release manager, not the developer or development manager, has the final call on whether something goes into a software release. Of course, this is not a process of autocracy or dictation of absolute rule; it's a cooperative effort. But the release manager must have the final authority to decide if something is ready to be added to the bigger piece, the overall symphony of software changes being considered for packaging and release.

    It also goes without saying that a release manager, like a conductor, must have full autonomy and isolation from other software groups. A conductor is the one on the podium waving a little stick at each section and cueing them for their parts, not yelling from the back of the room while also playing a tuba and taking direction from the horn section.

    I have personally seen release managers relegated to being considered little more than coordinators, red-tapers to "satisfy" the demands of an audit group, without being bothered to actually respect all that a release manager gives a group willing to employ them fully. In this dysfunctional scenario, development managers, project managers, business users, and other stakeholders have been given nearly full clearance to demand and push their agendas forward, causing a tail-wagging-the-dog scenario where an inherent conflict will ensue.
    Depending on their strength, determination for peace, and willingness to overlook a built-in expectation that is wrong, the release manager here must face the crafted conflict head-on and defuse it as quickly as possible. Then, the release manager must clearly make a case for why a change cannot be released without negative impact to all parties involved. If a political agenda is solely driving a software release, there IS no symphony, there is no "software lifecycle". It's just out-of-tune noise. More importantly, there is no real conductor. Sometimes, just wanting to make a beautiful sound is not enough.

    If you are a release manager, are you freed up enough to move, to conduct the sections of software creation to ensure a solid release performance is possible? If not, it's time to take stock of what your role actually is and see if that is what you truly want to achieve in your position. If you are, then you can successfully build your career and that of the people in your groups to create truly beautiful software (music) together.

    Read the article

  • Asynchrony in C# 5: Dataflow Async Logger Sample

    - by javarg
    Check out these (very simple) code examples for TPL Dataflow. Suppose you are developing an Async Logger to register application events to different sinks or log writers. The architecture composes a buffer of pending log entries with linked writer blocks; note how blocks can be composed to achieve the desired behavior. The BufferBlock<T> is the pool of log entries to be processed, whereas the linked ActionBlock<TInput> blocks represent the log writers or sinks. This composition allows only one ActionBlock to consume each entry at a time. Implementation code would be something similar to the following (add a reference to System.Threading.Tasks.Dataflow.dll in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

    TPL Dataflow Logger

    var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    Note the filter applied to each link (in this case, the Logging Level selects the writer used). We can specify message filters using Predicate functions on each link. Now, the previous sample is useless for a Logger, since the Logging Level is not exclusive (thus, several writers could be used to process a single message). Let's use a BroadcastBlock<T> instead of a BufferBlock<T>.

    Broadcast Logger

    var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
        e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> allLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("All: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
    bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the Identity function if passing the same reference to every output. Try both scenarios and compare the results.
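    One note on the samples: they assume a flags-style LogLevel enum that the excerpt never shows. A minimal sketch that would make them compile (the Warning member and the numeric values are assumptions):

    [Flags]
    public enum LogLevel
    {
        None = 0,
        Info = 1,
        Warning = 2,
        Error = 4,
        All = Info | Warning | Error
    }

    Also, in a short-lived console app the process can exit before the ActionBlocks drain their queues; in the released versions of the Dataflow library you would call Complete() on the source block and wait on the target blocks' Completion tasks (the exact API differs slightly in the CTP).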

    Read the article

  • What's the benefit of object-oriented programming over procedural programming?

    - by niko
    I'm trying to understand the difference between procedural languages like C and object-oriented languages like C++. I've never used C++, but I've been discussing with my friends how to differentiate the two. I've been told C++ has object-oriented concepts as well as public and private modes for defining variables: things C does not have. I've never had to use these while developing programs in Visual Basic.NET: what are the benefits of these? I've also been told that if a variable is public, it can be accessed anywhere, but it's not clear how that's different from a global variable in a language like C. It's also not clear how a private variable differs from a local variable. Another thing I've heard is that, for security reasons, if a function needs to be accessed it should be inherited first. The use-case is that an administrator should only have as many rights as they need and not everything, but it seems a conditional would work as well:

    if ( login == "admin") {
        // invoke the function
    }

    Why is this not ideal? Given that there seems to be a procedural way to do everything object-oriented, why should I care about object-oriented programming?

    Read the article

  • Building Interactive User Interfaces with Microsoft ASP.NET AJAX: Refreshing An UpdatePanel With JavaScript

    The ASP.NET AJAX UpdatePanel provides a quick and easy way to implement a snappier, AJAX-based user interface in an ASP.NET WebForm. In a nutshell, UpdatePanels allow page developers to refresh selected parts of the page (instead of refreshing the entire page). Typically, an UpdatePanel contains user interface elements that would normally trigger a full page postback - controls like Buttons or DropDownLists that have their AutoPostBack property set to True. Such controls, when placed inside an UpdatePanel, cause a partial page postback to occur. On a partial page postback only the contents of the UpdatePanel are refreshed, avoiding the "flash" of having the entire page reloaded. (For a more in-depth look at the UpdatePanel control, refer back to the Using the UpdatePanel installment in this article series.) Triggering a partial page postback refreshes the contents within an UpdatePanel, but what if you want to refresh an UpdatePanel's contents via JavaScript? Ideally, the UpdatePanel would have a client-side function named something like Refresh that could be called from script to perform a partial page postback and refresh the UpdatePanel. Unfortunately, no such function exists. Instead, you have to write script that triggers a partial page postback for the UpdatePanel you want to refresh. This article looks at how to accomplish this using just a single line of markup/script and includes a working demo you can download and try out for yourself. Read on to learn more!
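    As a hint at the technique the article builds up to: the usual one-liner is a call to ASP.NET's client-side __doPostBack function, passing the UpdatePanel's ClientID as the event target and an empty string as the event argument, for example __doPostBack('<%= myUpdatePanel.ClientID %>', ''), where myUpdatePanel is a placeholder for your panel's ID. Whether that alone refreshes the panel depends on its UpdateMode and ChildrenAsTriggers settings, and the article's exact markup may differ.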

    Read the article

< Previous Page | 559 560 561 562 563 564 565 566 567 568 569 570  | Next Page >