Search Results

Search found 88607 results on 3545 pages for 'code completion'.


  • Calculated Columns in Entity Framework Code First Migrations

    - by David Paquette
    I had a couple people ask me about calculated properties / columns in Entity Framework this week. The question was, is there a way to specify a property in my C# class that is the result of some calculation involving 2 properties of the same class. For example, in my database, I store a FirstName and a LastName column and I would like a FullName property that is computed from the FirstName and LastName columns. My initial answer was:

      public string FullName
      {
          get { return string.Format("{0} {1}", FirstName, LastName); }
      }

    Of course, this works fine, but this does not give us the ability to write queries using the FullName property. For example, this query:

      var users = context.Users.Where(u => u.FullName.Contains("anan"));

    would result in the following NotSupportedException: The specified type member 'FullName' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported. It turns out there is a way to support this type of behavior with Entity Framework Code First Migrations by making use of Computed Columns in SQL Server. While there is no native support for computed columns in Code First Migrations, we can manually configure our migration to use computed columns. Let's start by defining our C# classes and DbContext:

      public class UserProfile
      {
          public int Id { get; set; }

          public string FirstName { get; set; }
          public string LastName { get; set; }

          [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
          public string FullName { get; private set; }
      }

      public class UserContext : DbContext
      {
          public DbSet<UserProfile> Users { get; set; }
      }

    The DatabaseGenerated attribute is needed on our FullName property. This is a hint to let Entity Framework Code First know that the database will be computing this property for us. Next, we need to run 2 commands in the Package Manager Console. First, run Enable-Migrations to enable Code First Migrations for the UserContext. Next, run Add-Migration Initial to create an initial migration. This will create a migration that creates the UserProfile table with 3 columns: FirstName, LastName, and FullName. This is where we need to make a small change. Instead of allowing Code First Migrations to create the FullName column, we will manually add that column as a computed column.

      public partial class Initial : DbMigration
      {
          public override void Up()
          {
              CreateTable(
                  "dbo.UserProfiles",
                  c => new
                      {
                          Id = c.Int(nullable: false, identity: true),
                          FirstName = c.String(),
                          LastName = c.String(),
                          //FullName = c.String(),
                      })
                  .PrimaryKey(t => t.Id);

              Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName");
          }

          public override void Down()
          {
              DropTable("dbo.UserProfiles");
          }
      }

    Finally, run the Update-Database command. Now we can query for Users using the FullName property and that query will be executed on the database server. However, we encounter another potential problem. Since the FullName property is calculated by the database, it will get out of sync on the object side as soon as we make a change to the FirstName or LastName property.
    Luckily, we can have the best of both worlds here by also adding the calculation back to the getter on the FullName property:

      [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
      public string FullName
      {
          get { return FirstName + " " + LastName; }
          private set
          {
              //Just need this here to trick EF
          }
      }

    Now we can both query for Users using the FullName property and we also won't need to worry about the FullName property being out of sync with the FirstName and LastName properties. When we run this code:

      using (UserContext context = new UserContext())
      {
          UserProfile userProfile = new UserProfile { FirstName = "Chanandler", LastName = "Bong" };

          Console.WriteLine("Before saving: " + userProfile.FullName);

          context.Users.Add(userProfile);
          context.SaveChanges();

          Console.WriteLine("After saving: " + userProfile.FullName);

          UserProfile chanandler = context.Users.First(u => u.FullName == "Chanandler Bong");
          Console.WriteLine("After reading: " + chanandler.FullName);

          chanandler.FirstName = "Chandler";
          chanandler.LastName = "Bing";

          Console.WriteLine("After changing: " + chanandler.FullName);
      }

    We get this output: It took a bit of work, but finally Chandler's TV Guide can be delivered to the right person. The obvious downside to this implementation is that the FullName calculation is duplicated in the database and in the UserProfile class. This sample was written using Visual Studio 2012 and Entity Framework 5. Download the source code here.
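    One optional refinement (my addition, not part of the original post, and specific to SQL Server): if FullName will be searched often, the computed column can be persisted and indexed in the same migration. The maxLength values, the PERSISTED keyword and the index name below are assumptions for illustration; indexing requires the string columns to have a bounded length rather than nvarchar(max).

      // Hypothetical variation of the Up() method above: persist the computed column and index it.
      public override void Up()
      {
          CreateTable(
              "dbo.UserProfiles",
              c => new
                  {
                      Id = c.Int(nullable: false, identity: true),
                      FirstName = c.String(maxLength: 100),
                      LastName = c.String(maxLength: 100),
                  })
              .PrimaryKey(t => t.Id);

          // PERSISTED stores the computed value on disk, which allows it to be indexed.
          Sql("ALTER TABLE dbo.UserProfiles ADD FullName AS FirstName + ' ' + LastName PERSISTED");
          Sql("CREATE INDEX IX_UserProfiles_FullName ON dbo.UserProfiles (FullName)");
      }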

    Read the article

  • Who owns the IP rights to the software without a written employment contract? Employer or employee? [closed]

    - by P T
    I am a software engineer who got an idea and, over the past 2 years, developed an integrated ERP software solution on my own. I got the idea and coded much of the software in my personal time, using my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and the company would have "shop rights" to use a copy of the software without restrictions. Part of this agreement was that I was heavily underpaid in order to keep the rights. Recently things started to take a downturn in company A as the company grew fairly large, new head management was formed and new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP ownership into 50% co-ownership, completely disregarding the initial verbal agreements. As of now there is not a single written job description, agreement, contract or policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods. My core question is: "Who owns the code without a contract? Me or company A? (in FL, US)" Detailed questions:

    - I am familiar with "shop rights"; I don't have any problem leaving a copy of the code with the company for them to use/enhance to run their wholesale business. What worries me is: can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B?
    - Can registering a copyright for the code at http://www.copyright.gov in my name prevent any legal disputes in the future? Can I use it as evidence for my legal defense? Could adding a note specifying company A as exclusive license holder clarify the arrangements?
    - If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created/stored on my personal computer, with proper documentation including a copyright notice with my credentials (name/email/address/phone). It's also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student.
    - If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards ownership, what can I do to settle the matter legally? I would like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment.
    - Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?

    My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But looking online it may be completely backwards, and this really worries me. I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinions from those who were in a similar situation or are familiar with the topic. Thank you, PT

    Read the article

  • Converting OpenGL code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer to this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

      public Ray GetPickRay() {
          int mouseX = Mouse.getX();
          int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();

          float windowWidth = WORLD.Byte56Game.getWidth();
          float windowHeight = WORLD.Byte56Game.getHeight();

          //get the mouse position in screenSpace coords
          double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
          double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
          double viewRatio = Math.tan(((float) Math.PI / (180.f / ViewAngle) / 2.00f)) * zoomFactor;
          screenSpaceX = screenSpaceX * viewRatio;
          screenSpaceY = screenSpaceY * viewRatio;

          //Find the far and near camera spaces
          Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
          Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

          //Unproject the 2D window into 3D to see where in 3D we're actually clicking
          Matrix4f tmpView = new Matrix4f(view);
          Matrix4f invView = (Matrix4f) tmpView.invert();
          Vector4f worldSpaceNear = new Vector4f();
          Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
          Vector4f worldSpaceFar = new Vector4f();
          Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

          //calculate the ray position and direction
          Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
          Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
          rayDirection.normalise();
          return new Ray(rayPosition, rayDirection);
      }

    All rights reserved to him, of course. This is my DirectX 11 code:

      void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
      {
          float PRVecX, PRVecY;
          float nearPlane = 0.1f;
          float farPlane = 200.0f;
          float viewAngle = 0.4f * 3.14f;

          PRVecX = (((2.0f * mouseX) / ClientWidth) - 1) * tan((viewAngle) / 2);
          PRVecY = (1 - ((2.0f * mouseY) / ClientHeight)) * tan((viewAngle) / 2);

          XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
          XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

          // Transform 3D Ray from View space to 3D ray in World space
          XMMATRIX invMat;
          XMVECTOR matInvDeter;
          invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // Inverse of View Space matrix is World space matrix

          XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
          XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

          pickRayInWorldSpacePos = worldSpaceNear;
          pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
          pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
      }

    A couple of notes:

    - The mouse coordinates are already converted so that the top left corner of the client window is (0,0) and the bottom right is (800,600) (or whatever resolution you have).
    - I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter, as long as the object you are trying to pick is within the range of those numbers.
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed they were related to some specific functionality of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-view-space conversion (maybe we use different coordinate systems), or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance. Edit: One more note, my code is in C++.

    Read the article

  • How to write simple code using TDD [migrated]

    - by adeel41
    My colleagues and I do a small TDD kata practice every day for 30 minutes. For reference, this is the link for the exercise: http://osherove.com/tdd-kata-1/ The objective is to write better code using TDD. This is the code which I've written:

      public class Calculator
      {
          public int Add( string numbers )
          {
              const string commaSeparator = ",";
              int result = 0;

              if ( !String.IsNullOrEmpty( numbers ) )
                  result = numbers.Contains( commaSeparator )
                      ? AddMultipleNumbers( GetNumbers( commaSeparator, numbers ) )
                      : ConvertToNumber( numbers );

              return result;
          }

          private int AddMultipleNumbers( IEnumerable<int> getNumbers )
          {
              return getNumbers.Sum();
          }

          private IEnumerable<int> GetNumbers( string separator, string numbers )
          {
              var allNumbers = numbers
                  .Replace( "\n", separator )
                  .Split( new string[] { separator }, StringSplitOptions.RemoveEmptyEntries );

              return allNumbers.Select( ConvertToNumber );
          }

          private int ConvertToNumber( string number )
          {
              return Convert.ToInt32( number );
          }
      }

    and the tests for this class are:

      [TestFixture]
      public class CalculatorTests
      {
          private int ArrangeAct( string numbers )
          {
              var calculator = new Calculator();
              return calculator.Add( numbers );
          }

          [Test]
          public void Add_WhenEmptyString_Returns0()
          {
              Assert.AreEqual( 0, ArrangeAct( String.Empty ) );
          }

          [Test]
          [Sequential]
          public void Add_When1Number_ReturnNumber(
              [Values( "1", "56" )] string number,
              [Values( 1, 56 )] int expected )
          {
              Assert.AreEqual( expected, ArrangeAct( number ) );
          }

          [Test]
          public void Add_When2Numbers_AddThem()
          {
              Assert.AreEqual( 3, ArrangeAct( "1,2" ) );
          }

          [Test]
          public void Add_WhenMoreThan2Numbers_AddThemAll()
          {
              Assert.AreEqual( 6, ArrangeAct( "1,2,3" ) );
          }

          [Test]
          public void Add_SeparatorIsNewLine_AddThem()
          {
              Assert.AreEqual( 6, ArrangeAct( "1\n2,3" ) );
          }
      }

    Now I'll paste the code which they have written:

      public class StringCalculator
      {
          private const char Separator = ',';

          public int Add( string numbers )
          {
              const int defaultValue = 0;

              if ( ShouldReturnDefaultValue( numbers ) )
                  return defaultValue;

              return ConvertNumbers( numbers );
          }

          private int ConvertNumbers( string numbers )
          {
              var numberParts = GetNumberParts( numbers );
              return numberParts.Select( ConvertSingleNumber ).Sum();
          }

          private string[] GetNumberParts( string numbers )
          {
              return numbers.Split( Separator );
          }

          private int ConvertSingleNumber( string numbers )
          {
              return Convert.ToInt32( numbers );
          }

          private bool ShouldReturnDefaultValue( string numbers )
          {
              return String.IsNullOrEmpty( numbers );
          }
      }

    and the tests:

      [TestFixture]
      public class StringCalculatorTests
      {
          [Test]
          public void Add_EmptyString_Returns0()
          {
              ArrangeActAndAssert( String.Empty, 0 );
          }

          [Test]
          [TestCase( "1", 1 )]
          [TestCase( "2", 2 )]
          public void Add_WithOneNumber_ReturnsThatNumber( string numberText, int expected )
          {
              ArrangeActAndAssert( numberText, expected );
          }

          [Test]
          [TestCase( "1,2", 3 )]
          [TestCase( "3,4", 7 )]
          public void Add_WithTwoNumbers_ReturnsSum( string numbers, int expected )
          {
              ArrangeActAndAssert( numbers, expected );
          }

          [Test]
          public void Add_WithThreeNumbers_ReturnsSum()
          {
              ArrangeActAndAssert( "1,2,3", 6 );
          }

          private void ArrangeActAndAssert( string numbers, int expected )
          {
              var calculator = new StringCalculator();
              var result = calculator.Add( numbers );
              Assert.AreEqual( expected, result );
          }
      }

    Now the question is: which one is better? My point here is that we do not need so many small methods initially, because StringCalculator has no subclasses, and secondly the code itself is so simple that we don't need to break it up so much that it gets confusing after having so many small methods.
    Their point is that code should read like English, that it's better to break it up early rather than refactoring later, and third, that when they do refactor it will be much easier to move these methods into separate classes. My point of view against this is that we never decided that the code is difficult to understand, so why are we breaking it up so early? So I need a third person's opinion to understand which option is better.
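    As a side note on where the kata goes next, here is a small sketch (my addition, not from the exchange above) of how the colleagues' StringCalculator could pick up the newline-separator case that the first Calculator already handles, following the same test-first rhythm:

      // A failing test first...
      [Test]
      public void Add_SeparatorIsNewLine_ReturnsSum()
      {
          ArrangeActAndAssert( "1\n2,3", 6 );
      }

      // ...then the smallest change in StringCalculator that makes it pass:
      // split on both separators (replacing the single Separator constant).
      private static readonly char[] Separators = { ',', '\n' };

      private string[] GetNumberParts( string numbers )
      {
          return numbers.Split( Separators );
      }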

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem, as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick
    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

      using System;
      using System.Data;
      using System.IO;
      using Microsoft.AnalysisServices;

      // ... class code removed for clarity

      // connect to the OLAP server
      Server olapServer = new Server();
      olapServer.Connect(config.olapServerName);
      if (olapServer != null)
      {
          // connected to server ok, so obtain reference to the OLAP database
          Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
          if (olapDatabase != null)
          {
              Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                  config.olapDatabaseName, config.olapServerName));

              // export SQL from each data source view (usually only one, but can be many!)
              foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
              {
                  Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                  // for each table in the DSV, export the SQL in a file
                  foreach (DataTable dt in dsv.Schema.Tables)
                  {
                      Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                      // get name of the table in the DSV
                      // use the FriendlyName as the user inputs this and therefore has control of it
                      string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                      string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                      // delete the sql file if it exists... file deletion code removed for clarity

                      // write out the SQL to a file
                      if (dt.ExtendedProperties["TableType"].ToString() == "View")
                      {
                          File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                      }
                      if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                      {
                          File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                      }
                  }
              }
              Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
          }
      }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value, unless of course you are inheriting a project and want to check if someone did the implementation correctly.
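    The excerpt above refers to a config object and a targetDir whose definitions were removed for clarity; a minimal sketch of what that wiring might look like is below. The ExportConfig class and its property names are guesses for illustration only - the downloadable ExportCubeDsvSQL.zip contains the author's actual implementation.

      using System.IO;

      // Hypothetical wiring for the excerpt above.
      class ExportConfig
      {
          public string olapServerName { get; set; }
          public string olapDatabaseName { get; set; }
          public string targetDirectory { get; set; }
      }

      class Program
      {
          static void Main(string[] args)
          {
              var config = new ExportConfig
              {
                  olapServerName = args[0],    // e.g. "localhost"
                  olapDatabaseName = args[1],  // the OLAP database name
                  targetDirectory = args[2]    // folder that receives the .sql files
              };

              // targetDir is the DirectoryInfo used by the export loop above.
              DirectoryInfo targetDir = Directory.CreateDirectory(config.targetDirectory);

              // ... DSV export loop from the excerpt goes here ...
          }
      }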

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application and I'm thinking of scratching my itch by organizing code in various 'partial view model' classes. By partial-view-model, I'm referring to a class like a view model class in an analogous way to how partial views are like views, i.e. a way to encapsulate common info and behavior. To strengthen the 'analogy', and to aid in visually organizing the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel. As a slightly more concrete example of why I'm thinking along these lines, imagine that I have a domain-model-entity class ParentItem and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method in a _ParentItemViewModel class, for which I can then include an object or a collection of objects of that class in all the view model classes for all the views that need to include a reference to a parent item, e.g. ChildItemViewModel can have a ParentItem property of the _ParentItemViewModel class type, so that in my ChildItemView view, I can use @Model.ParentItem.UserFriendlyDescription as desired, like breadcrumbs, links, etc. Edited 2014-02-06 09:56 -05 As a second example, imagine that I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, and a view model class and at least one view for each of those entities. Also, the example application covers a lot more than just some-kind-of-batches, so that it wouldn't really be useful or sensible to include info about a specific some-kind-of-batch in all of the project view model classes. But, like the above example, I have some code, say for generating a string for identifying a some-kind-of-batch in a user-friendly way, and I'd like to be able to use that in several views, say as breadcrumb text or text for a link. As a third example, I'll describe another pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties, and at least a dozen references to other fat classes. However, a lot of view model classes need properties for referencing a specific contact and most of those need other properties for collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of all of the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems to be pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use. Maybe I'm just misunderstanding 'view model class' – I understand a view model class as always corresponding to a view. But maybe I'm assuming too much.
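    To make the idea concrete, here is a minimal sketch of the 'partial view model' approach using the names from the question; the specific properties and the factory method are invented for illustration:

      // Encapsulates the user-friendly description logic in one place.
      public class _ParentItemViewModel
      {
          public int ParentItemId { get; set; }
          public string UserFriendlyDescription { get; set; }

          // Hypothetical mapping helper; the real "complex" description rules would live here.
          public static _ParentItemViewModel FromEntity(ParentItem parent)
          {
              return new _ParentItemViewModel
              {
                  ParentItemId = parent.Id,
                  UserFriendlyDescription = string.Format("{0} ({1})", parent.Name, parent.Code)
              };
          }
      }

      // Any view model whose view needs breadcrumbs or links back to the parent reuses it.
      public class ChildItemViewModel
      {
          public int ChildItemId { get; set; }
          public string Name { get; set; }
          public _ParentItemViewModel ParentItem { get; set; }
      }

      // In the ChildItemView template: @Model.ParentItem.UserFriendlyDescription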

    Read the article

  • Rewriting code under BSD license

    - by Frank
    I am currently studying OpenGL with the OpenGL SuperBible, 5th edition. I've found some C++ code distributed with the book that is interesting to me (see also on Google Code). That code is under the New BSD License. I am writing my software in C# with the SharpGL wrapper and I'd like to know the following things: Can I rewrite that C++ code in C#? Edit: I'm interested in using things like GLBatch, GLShaderManager and some other things from GLTools. The problem is that the library is in C++, but I use C#. How do I have to mark my source code if I put it somewhere like my GitHub account? What should the disclaimer be? The original disclaimer looks like:

      /*
      GLShaderManager.h

      Copyright (c) 2009, Richard S. Wright Jr.
      All rights reserved.

      Redistribution and use in source and binary forms, with or without modification, are permitted
      provided that the following conditions are met:

      Redistributions of source code must retain the above copyright notice, this list of conditions
      and the following disclaimer.

      Redistributions in binary form must reproduce the above copyright notice, this list of conditions
      and the following disclaimer in the documentation and/or other materials provided with the
      distribution.

      Neither the name of Richard S. Wright Jr. nor the names of other contributors may be used to
      endorse or promote products derived from this software without specific prior written permission.

      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
      IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
      FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
      CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
      DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
      DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
      IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
      OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
      */

    Edit: After rewriting, should my copyright notice look something like this?

      Copyright (c) 2014, My Name
      Copyright (c) 2009, Richard S. Wright Jr.
      All rights reserved.
      Redistribution...................
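    For formatting purposes only, a sketch of one common way a ported file is marked: keep the original notice (including its full conditions and disclaimer) intact and add the porter's copyright line above it. The file name and "My Name" are placeholders, and this shows layout only, not what the license requires.

      // GLShaderManager.cs -- C# port (for use with SharpGL) of GLShaderManager.h
      // from the OpenGL SuperBible sample code.
      //
      // Copyright (c) 2014, My Name (C# port)
      // Copyright (c) 2009, Richard S. Wright Jr.
      // All rights reserved.
      //
      // Redistribution and use in source and binary forms, with or without modification,
      // are permitted provided that the conditions of the New BSD License are met.
      // [Reproduce the full list of conditions and the disclaimer from the original
      //  header here, unchanged.]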

    Read the article

  • 0xC0017011 and other error messages - what is the error message text?

    Recently there was a bug raised against BIDS Helper which originated in my Expression Editor control. Thankfully the person that raised it kindly included a screenshot, so I had the error code (HRESULT 0xC0017011) and a stack trace that pointed the finger firmly at my control, but no error message text. The code itself looked fine, so I searched on the error code but got no results. I'd expected to get a hit from Books Online with the Integration Services Error and Message Reference topic at the very least, but no joy. There is however a more accurate and definitive reference, namely the header file that defines all these codes, dtsmsg.h, which you can find at:

      C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h

    Looking the code up in the header file gave me a much more useful error message:

      ////////////////////////////////////////////////////////////////////////////
      // The parameter is sensitive
      //
      // MessageId: DTS_E_SENSITIVEPARAMVALUENOTALLOWED
      //
      // MessageText:
      //
      // Accessing value of the parameter variable for the sensitive parameter "%1!s!" is not allowed. Verify that the variable is used properly and that it protects the sensitive information.
      //
      #define DTS_E_SENSITIVEPARAMVALUENOTALLOWED ((HRESULT)0xC0017011L)

    Unfortunately I'd forgotten all about this. By the time I had remembered about it, the person who raised the issue had managed to narrow it down to something to do with having a sensitive parameter. Putting that together with the error message I'd finally found, a quick poke around in the code turned up the new GetSensitiveValue method, which seemed to do the trick. The HResult fields are also listed online, but that page only shows the short error message, and it doesn't include the all-important HRESULT value itself. So let this be a lesson to you (and me!): if you need to check an SSIS error, go straight to the horse's mouth - dtsmsg.h. This is particularly true when working with early builds or CTP releases, when we expect the documentation to be a bit behind. There is also a programmatic approach to getting better SSIS error messages. I should take another look at the error handling in the control, or the way it is hosted in BIDS Helper. I suspect that if I use an implementation of Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSInfoEvents100 I could catch the error itself and get the full error message text, which I could then report back. This would obviously be a better user experience and also make it easier to diagnose any issues like this in the future. See ExprssionEvaluator.cs for an example of this in use in the Expression Editor control.
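    Until then, a throwaway sketch (mine, not from the post) of looking an HRESULT up in dtsmsg.h without scanning the file by hand; the header path is the one quoted above and will differ between SQL Server versions:

      // Prints the comment block ending in the #define that mentions the given HRESULT.
      using System;
      using System.Collections.Generic;
      using System.IO;

      class DtsMsgLookup
      {
          static void Main()
          {
              const string headerPath = @"C:\Program Files (x86)\Microsoft SQL Server\110\SDK\Include\dtsmsg.h";
              const string hresult = "0xC0017011";

              // Each message is a short comment block followed by its #define line,
              // so keep a small rolling buffer and dump it when we hit a match.
              var buffer = new Queue<string>();
              foreach (string line in File.ReadLines(headerPath))
              {
                  buffer.Enqueue(line);
                  if (buffer.Count > 12)
                      buffer.Dequeue();

                  if (line.Contains("#define") &&
                      line.IndexOf(hresult, StringComparison.OrdinalIgnoreCase) >= 0)
                  {
                      Console.WriteLine(string.Join(Environment.NewLine, buffer));
                      break;
                  }
              }
          }
      }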

    Read the article

  • My co-worker has not been doing such a good job for the past decade. What do I do? [closed]

    - by stijn
    Possible Duplicate: How do I approach a coworker about his or her code quality?
    I started working with him almost a decade ago, and back then I had never really programmed before, being a young hardware engineer. Right now, however, I have made quite some progress in all areas that are part of software design, and I am much, much more skilled than my co-worker, who is 15 years older and has been programming more than twice as long. He is super nice and definitely smart enough, but lately his lack of skill and performance is starting to drag me down, because we're more and more working on the same codebase. And soon we are going to do a quite ambitious start from scratch, creating a whole new hard/software system. I feel it is time to address all the issues now, but I do not know how to start. Here are some of the things that I would like to see him improve on:

    - no consistent usage of style, spaces nor tabs (e.g. if(something ) a =b )
    - adds newlines around pieces of code to make them easier to read, then commits those with messages like 'no changes made'
    - overall commit messages are useless, and so are most of the comments, if there are any (e.g. 'remove solves for bug Rik' if Rik reported a bug); there is no function/class documentation
    - lots of spelling errors, in both English and our native language, which are sometimes mixed
    - nesting 6/7/8 levels deep is no exception; a lot of functions start with one level already, like if(ptr!=Null){ , even when ptr is the result of allocation via new in the constructor
    - numerous source files have over 10k lines; a major part of those lines is simply the result of copy-pasting functionality instead of using a function. This includes copying comments, so we end up with 50 occurrences of var=NULL; //TODO TEST this!!!!!!! Another part is hundreds of lines of dead code
    - knows what versioning does, yet comments out old code and places new code underneath it when making changes
    - coding skills are below par, especially for the type of rather high-precision applications we do. Yet somehow, after a lot of trying and testing, stuff starts to work. But then it breaks again some time later, because every change causes a waterfall effect
    - violates every single item in the C++ FAQ Lite and practices every bad practice I can think of
    - still doesn't know how to properly use the debugger, but spends hours inspecting messy logfiles in Notepad on a tiny laptop screen; does not make any adjustments to the settings of the software he uses; never uses keyboard shortcuts
    - does not seem to progress or learn new things at all; works rather slowly, mostly due to the lack of planning and incorrect usage of tools

    How does one deal with this? For starters, how do I make him aware of all these problems? Should I tell the staff about it? And the next step: how do I get him to learn new things and adopt another way of working?

    Read the article

  • Keep coding the wrong way to remain consistent? [closed]

    - by bwalk2895
    Possible Duplicate: Code maintenance: keeping a bad pattern when extending new code for being consistent, or not?
    To keep things simple, let's say I am responsible for maintaining two applications, AwesomeApp and BadApp (I am responsible for more, and no, those are not their actual names). AwesomeApp is a greenfield project I have been working on with other members of my team. It was coded using all the fancy buzzwords: multilayer, SOA, SOLID, TDD, and so on. It represents the direction we want to go as a team. BadApp is an application that has been around for a long time. The architecture suffers from many sins; namely, everything is tightly coupled together, it is not uncommon to get a circular dependency error from the compiler, it is almost impossible to unit test, and there are large classes, duplicate code, and so on. We have a plan to rewrite the application following the standards established by AwesomeApp, but that won't happen for a while. I have to go into BadApp and fix a bug, but after spending months coding what I consider correctly, I really don't want to continue perpetuating bad coding practices. However, the way AwesomeApp is coded is vastly different from the way BadApp is coded. I fear implementing the "correct" way would cause confusion for other developers who have to maintain the application. Question: Is it better to keep coding the wrong way to remain consistent with the rest of the code in the application (knowing it will be replaced), or is it better to code the right way, with the understanding that it could cause confusion because it is so much different? To give you an example: there is a large class (1000+ lines) with several functions. One of the functions calculates a date based on an enumerated value. Currently the function handles all the various calculations. The function relies on no other functionality within the class; it is self-contained. I want to break the function into smaller functions (at the very least), and put them into their own classes and hide those classes behind an interface (at the most), using the factory pattern to instantiate the date classes. If I just broke it out into smaller functions within the class, it would follow the existing coding standard. The extra steps are to start following some of the SOLID principles.
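    As a concrete illustration of the refactoring described above (all type names are invented; the real enumeration and date rules live in the existing 1000+ line class), the interface-plus-factory version might look like this:

      using System;

      public enum DateRule { EndOfMonth, NextBusinessDay }

      public interface IDateCalculator
      {
          DateTime Calculate(DateTime start);
      }

      internal class EndOfMonthCalculator : IDateCalculator
      {
          public DateTime Calculate(DateTime start)
          {
              // Last day of the month containing 'start'.
              return new DateTime(start.Year, start.Month, 1).AddMonths(1).AddDays(-1);
          }
      }

      internal class NextBusinessDayCalculator : IDateCalculator
      {
          public DateTime Calculate(DateTime start)
          {
              var next = start.AddDays(1);
              while (next.DayOfWeek == DayOfWeek.Saturday || next.DayOfWeek == DayOfWeek.Sunday)
                  next = next.AddDays(1);
              return next;
          }
      }

      // The factory hides the concrete classes behind the interface - the "extra step"
      // the question weighs against simply extracting smaller private methods.
      public static class DateCalculatorFactory
      {
          public static IDateCalculator Create(DateRule rule)
          {
              switch (rule)
              {
                  case DateRule.EndOfMonth: return new EndOfMonthCalculator();
                  case DateRule.NextBusinessDay: return new NextBusinessDayCalculator();
                  default: throw new ArgumentOutOfRangeException("rule");
              }
          }
      }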

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code, with its variety of small-to-tiny functions, generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce, compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to a codebase will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering if I should be concerned about this, and how much, if any at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
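    One way to turn this worry into data (my sketch, not from the question) is to measure the call overhead directly. NoInlining forces a real call per iteration so the difference becomes visible; in normal builds the JIT inlines small methods, which is exactly why the measured gap is usually tiny compared to disk or database work.

      using System;
      using System.Diagnostics;
      using System.Runtime.CompilerServices;

      static class CallOverheadDemo
      {
          // Prevent inlining so the second loop really pays for a call per iteration.
          [MethodImpl(MethodImplOptions.NoInlining)]
          static int AddSmall(int a, int b) { return a + b; }

          static void Main()
          {
              const int iterations = 100000000;

              var sw = Stopwatch.StartNew();
              int sum = 0;
              for (int i = 0; i < iterations; i++) sum += i + 1;               // "one big method" style
              sw.Stop();
              Console.WriteLine("inlined arithmetic: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);

              sw.Restart();
              sum = 0;
              for (int i = 0; i < iterations; i++) sum = AddSmall(sum, i + 1); // many tiny calls
              sw.Stop();
              Console.WriteLine("non-inlined calls:  {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);
          }
      }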

    Read the article

  • Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 2

    - by shiju
    In my previous post Developing web apps using ASP.NET MVC 3, Razor and EF Code First - Part 1, we have discussed on how to work with ASP.NET MVC 3 and EF Code First for developing web apps. We have created generic repository and unit of work with EF Code First for our ASP.NET MVC 3 application and did basic CRUD operations against a simple domain entity. In this post, I will demonstrate on working with domain entity with deep object graph, Service Layer and View Models and will also complete the rest of the demo application. In the previous post, we have done CRUD operations against Category entity and this post will be focus on Expense entity those have an association with Category entity. You can download the source code from http://efmvc.codeplex.com . The following frameworks will be used for this step by step tutorial.    1. ASP.NET MVC 3 RTM    2. EF Code First CTP 5    3. Unity 2.0 Domain Model Category Entity public class Category   {       public int CategoryId { get; set; }       [Required(ErrorMessage = "Name Required")]       [StringLength(25, ErrorMessage = "Must be less than 25 characters")]       public string Name { get; set;}       public string Description { get; set; }       public virtual ICollection<Expense> Expenses { get; set; }   } Expense Entity public class Expense     {                public int ExpenseId { get; set; }                public string  Transaction { get; set; }         public DateTime Date { get; set; }         public double Amount { get; set; }         public int CategoryId { get; set; }         public virtual Category Category { get; set; }     } We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category. Repository class for Expense Transaction Let’s create repository class for handling CRUD operations for Expense entity public class ExpenseRepository : RepositoryBase<Expense>, IExpenseRepository     {     public ExpenseRepository(IDatabaseFactory databaseFactory)         : base(databaseFactory)         {         }                } public interface IExpenseRepository : IRepository<Expense> { } Service Layer If you are new to Service Layer, checkout Martin Fowler's article Service Layer . According to Martin Fowler, Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations. Controller classes should be lightweight and do not put much of business logic onto it. We can use the service layer as the business logic layer and can encapsulate the rules of the application. 
Let’s create a Service class for coordinates the transaction for Expense public interface IExpenseService {     IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime ednDate);     Expense GetExpense(int id);             void CreateExpense(Expense expense);     void DeleteExpense(int id);     void SaveExpense(); } public class ExpenseService : IExpenseService {     private readonly IExpenseRepository expenseRepository;            private readonly IUnitOfWork unitOfWork;     public ExpenseService(IExpenseRepository expenseRepository, IUnitOfWork unitOfWork)     {                  this.expenseRepository = expenseRepository;         this.unitOfWork = unitOfWork;     }     public IEnumerable<Expense> GetExpenses(DateTime startDate, DateTime endDate)     {         var expenses = expenseRepository.GetMany(exp => exp.Date >= startDate && exp.Date <= endDate);         return expenses;     }     public void CreateExpense(Expense expense)     {         expenseRepository.Add(expense);         unitOfWork.Commit();     }     public Expense GetExpense(int id)     {         var expense = expenseRepository.GetById(id);         return expense;     }     public void DeleteExpense(int id)     {         var expense = expenseRepository.GetById(id);         expenseRepository.Delete(expense);         unitOfWork.Commit();     }     public void SaveExpense()     {         unitOfWork.Commit();     } }   View Model for Expense Transactions In real world ASP.NET MVC applications, we need to design model objects especially for our views. Our domain objects are mainly designed for the needs for domain model and it is representing the domain of our applications. On the other hand, View Model objects are designed for our needs for views. We have an Expense domain entity that has an association with Category. While we are creating a new Expense, we have to specify that in which Category belongs with the new Expense transaction. The user interface for Expense transaction will have form fields for representing the Expense entity and a CategoryId for representing the Category. So let's create view model for representing the need for Expense transactions. public class ExpenseViewModel {     public int ExpenseId { get; set; }       [Required(ErrorMessage = "Category Required")]     public int CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]     public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]     public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } The ExpenseViewModel is designed for the purpose of View template and contains the all validation rules. It has properties for mapping values to Expense entity and a property Category for binding values to a drop-down for list values of Category. 
Create Expense transaction Let’s create action methods in the ExpenseController for creating expense transactions public ActionResult Create() {     var expenseModel = new ExpenseViewModel();     var categories = categoryService.GetCategories();     expenseModel.Category = categories.ToSelectListItems(-1);     expenseModel.Date = DateTime.Today;     return View(expenseModel); } [HttpPost] public ActionResult Create(ExpenseViewModel expenseViewModel) {                      if (!ModelState.IsValid)         {             var categories = categoryService.GetCategories();             expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);             return View("Save", expenseViewModel);         }         Expense expense=new Expense();         ModelCopier.CopyModel(expenseViewModel,expense);         expenseService.CreateExpense(expense);         return RedirectToAction("Index");              } In the Create action method for HttpGet request, we have created an instance of our View Model ExpenseViewModel with Category information for the drop-down list and passing the Model object to View template. The extension method ToSelectListItems is shown below   public static IEnumerable<SelectListItem> ToSelectListItems(         this IEnumerable<Category> categories, int  selectedId) {     return           categories.OrderBy(category => category.Name)                 .Select(category =>                     new SelectListItem                     {                         Selected = (category.CategoryId == selectedId),                         Text = category.Name,                         Value = category.CategoryId.ToString()                     }); } In the Create action method for HttpPost, our view model object ExpenseViewModel will map with posted form input values. We need to create an instance of Expense for the persistence purpose. So we need to copy values from ExpenseViewModel object to Expense object. ASP.NET MVC futures assembly provides a static class ModelCopier that can use for copying values between Model objects. ModelCopier class has two static methods - CopyCollection and CopyModel.CopyCollection method will copy values between two collection objects and CopyModel will copy values between two model objects. We have used CopyModel method of ModelCopier class for copying values from expenseViewModel object to expense object. Finally we did a call to CreateExpense method of ExpenseService class for persisting new expense transaction. List Expense Transactions We want to list expense transactions based on a date range. So let’s create action method for filtering expense transactions with a specified date range. public ActionResult Index(DateTime? startDate, DateTime? 
endDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!startDate.HasValue)     {         startDate = new DateTime(dtNow.Year, dtNow.Month, 1);         endDate = startDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of start date's month, if end date is not passed     if (startDate.HasValue && !endDate.HasValue)     {         endDate = (new DateTime(startDate.Value.Year, startDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }     var expenses = expenseService.GetExpenses(startDate.Value ,endDate.Value);     //if request is Ajax will return partial view     if (Request.IsAjaxRequest())     {         return PartialView("ExpenseList", expenses);     }     //set start date and end date to ViewBag dictionary     ViewBag.StartDate = startDate.Value.ToShortDateString();     ViewBag.EndDate = endDate.Value.ToShortDateString();     //if request is not ajax     return View(expenses); } We are using the above Index Action method for both Ajax requests and normal requests. If there is a request for Ajax, we will call the PartialView ExpenseList. Razor Views for listing Expense information Let’s create view templates in Razor for showing list of Expense information ExpenseList.cshtml @model IEnumerable<MyFinance.Domain.Expense>   <table>         <tr>             <th>Actions</th>             <th>Category</th>             <th>                 Transaction             </th>             <th>                 Date             </th>             <th>                 Amount             </th>         </tr>       @foreach (var item in Model) {              <tr>             <td>                 @Html.ActionLink("Edit", "Edit",new { id = item.ExpenseId })                 @Ajax.ActionLink("Delete", "Delete", new { id = item.ExpenseId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divExpenseList" })             </td>              <td>                 @item.Category.Name             </td>             <td>                 @item.Transaction             </td>             <td>                 @String.Format("{0:d}", item.Date)             </td>             <td>                 @String.Format("{0:F}", item.Amount)             </td>         </tr>          }       </table>     <p>         @Html.ActionLink("Create New Expense", "Create") |         @Html.ActionLink("Create New Category", "Create","Category")     </p> Index.cshtml @using MyFinance.Helpers; @model IEnumerable<MyFinance.Domain.Expense> @{     ViewBag.Title = "Index"; }    <h2>Expense List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery-ui.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.ui.datepicker.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/jquery-ui-1.8.6.custom.css")" rel="stylesheet" type="text/css" />      @using (Ajax.BeginForm(new AjaxOptions{ UpdateTargetId="divExpenseList", HttpMethod="Get"})) {     <table>         <tr>         <td>         <div>           Start Date: @Html.TextBox("StartDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["StartDate"].ToString())), new { @class = "ui-datepicker" })         </div>         </td>         <td><div>            End Date: @Html.TextBox("EndDate", Html.Encode(String.Format("{0:mm/dd/yyyy}", ViewData["EndDate"].ToString())), new { @class = "ui-datepicker" })          </div></td>          <td> 
<input type="submit" value="Search By TransactionDate" /></td>         </tr>     </table>         }   <div id="divExpenseList">             @Html.Partial("ExpenseList", Model)     </div> <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script> Ajax search functionality using Ajax.BeginForm The search functionality of Index view is providing Ajax functionality using Ajax.BeginForm. The Ajax.BeginForm() method writes an opening <form> tag to the response. You can use this method in a using block. In that case, the method renders the closing </form> tag at the end of the using block and the form is submitted asynchronously by using JavaScript. The search functionality will call the Index Action method and this will return partial view ExpenseList for updating the search result. We want to update the response UI for the Ajax request onto divExpenseList element. So we have specified the UpdateTargetId as "divExpenseList" in the Ajax.BeginForm method. Add jQuery DatePicker Our search functionality is using a date range so we are providing two date pickers using jQuery datepicker. You need to add reference to the following JavaScript files to working with jQuery datepicker. jquery-ui.js jquery.ui.datepicker.js For theme support for datepicker, we can use a customized CSS class. In our example we have used a CSS file “jquery-ui-1.8.6.custom.css”. For more details about the datepicker component, visit jquery UI website at http://jqueryui.com/demos/datepicker . In the jQuery ready event, we have used following JavaScript function to initialize the UI element to show date picker. <script type="text/javascript">     $().ready(function () {         $('.ui-datepicker').datepicker({             dateFormat: 'mm/dd/yy',             buttonImage: '@Url.Content("~/Content/calendar.gif")',             buttonImageOnly: true,             showOn: "button"         });     }); </script>   Source Code You can download the source code from http://efmvc.codeplex.com/ . Summary In this two-part series, we have created a simple web application using ASP.NET MVC 3 RTM, Razor and EF Code First CTP 5. I have demonstrated patterns and practices  such as Dependency Injection, Repository pattern, Unit of Work, ViewModel and Service Layer. My primary objective was to demonstrate different practices and options for developing web apps using ASP.NET MVC 3 and EF Code First. You can implement these approaches in your own way for building web apps using ASP.NET MVC 3. I will refactor this demo app on later time.
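    One piece that is easy to miss when reproducing this sample is the Unity 2.0 wiring mentioned at the top of the post; the registration code is not shown in this excerpt. A rough sketch of what that bootstrapping might look like for the interfaces named above follows - the concrete class names and lifetime managers are my assumptions, and the full efmvc source on CodePlex has the author's actual configuration. In MVC 3 the container would then typically be exposed through a custom IDependencyResolver so controllers receive their constructor arguments automatically.

      using Microsoft.Practices.Unity;

      public static class Bootstrapper
      {
          public static IUnityContainer ConfigureContainer()
          {
              var container = new UnityContainer();

              // One database factory per resolution scope (assumption).
              container.RegisterType<IDatabaseFactory, DatabaseFactory>(new HierarchicalLifetimeManager());
              container.RegisterType<IUnitOfWork, UnitOfWork>();

              // Repositories and services used by the controllers in this post.
              container.RegisterType<ICategoryRepository, CategoryRepository>();
              container.RegisterType<IExpenseRepository, ExpenseRepository>();
              container.RegisterType<ICategoryService, CategoryService>();
              container.RegisterType<IExpenseService, ExpenseService>();

              return container;
          }
      }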

    Read the article

  • Entity Framework 4.3.1 Code based Migrations and Connector/Net 6.6

    - by GABMARTINEZ
     Code-based migrations is a new feature as part of the Connector/Net support for Entity Framework 4.3.1. In this tutorial we'll see how we can use it so we can keep track of the changes done to our database creating a new application using the code first approach. If you don't have a clear idea about how code first works we highly recommend you to check this subject before going further with this tutorial. Creating our Model and Database with Code First  From VS 2010  1. Create a new console application 2.  Add the latest Entity Framework official package using Package Manager Console (Tools Menu, then Library Package Manager -> Package Manager Console). In the Package Manager Console we have to type  Install-Package EntityFramework This will add the latest version of this library.  We will also need to make some changes to your config file. A <configSections> was added which contains the version you have from EntityFramework.  An <entityFramework> section was also added where you can set up some initialization. This section is optional and by default is generated to use SQL Express. Since we don't need it for now (we'll see more about it below) let's leave this section empty as shown below. 3. Create a new Model with a simple entity. 4. Enable Migrations to generate the our Configuration class. In the Package Manager Console we have to type  Enable-Migrations; This will make some changes in our application. It will create a new folder called Migrations where all the migrations representing the changes we do to our model.  It will also create a Configuration class that we'll be using to initialize our SQL Generator and some other values like if we want to enable Automatic Migrations.  You can see that it already has the name of our DbContext. You can also create you Configuration class manually. 5. Specify our Model Provider. We need to specify in our Class Configuration that we'll be using MySQLClient since this is not part of the generated code. Also please make sure you have added the MySql.Data and the MySql.Data.Entity references to your project. using MySql.Data.Entity;   // Add the MySQL.Data.Entity namespace public Configuration() { this.AutomaticMigrationsEnabled = false; SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This will add our MySQLClient as SQL Generator } 6. Add our Data Provider and set up our connection string <connectionStrings> <add name="PersonalContext" connectionString="server=localhost;User Id=root;database=Personal;" providerName="MySql.Data.MySqlClient" /> </connectionStrings> <system.data> <DbProviderFactories> <remove invariant="MySql.Data.MySqlClient" /> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.6.2.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" /> </DbProviderFactories> </system.data> * The version recommended to use of Connector/Net is 6.6.2 or earlier. At this point we can create our database and then start working with Migrations. So let's do some data access so our database get's created. You can run your application and you'll get your database Personal as specified in our config file. Add our first migration Migrations are a great resource as we can have a record for all the changes done and will generate the MySQL statements required to apply these changes to the database. 
Let's add a new property to our Person class public string Email { get; set; } If you try to run your application it will throw an exception saying  The model backing the 'PersonelContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269). So as suggested let's add our first migration for this change. In the Package Manager Console let's type Add-Migration AddEmailColumn Now we have the corresponding class which generate the necessary operations to update our database. namespace MigrationsFromScratch.Migrations { using System.Data.Entity.Migrations; public partial class AddEmailColumn : DbMigration { public override void Up(){ AddColumn("People", "Email", c => c.String(unicode: false)); } public override void Down() { DropColumn("People", "Email"); } } } In the Package Manager Console let's type Update-Database Now you can check your database to see all changes were succesfully applied. Now let's add a second change and generate our second migration public class Person   {       [Key]       public int PersonId { get; set;}       public string Name { get; set; }       public string Address {get; set;}       public string Email { get; set; }       public List<Skill> Skills { get; set; }   }   public class Skill   {     [Key]     public int SkillId { get; set; }     public string Description { get; set; }   }   public class PersonelContext : DbContext   {     public DbSet<Person> Persons { get; set; }     public DbSet<Skill> Skills { get; set; }   } If you would like to customize any part of this code you can do that at this step. You can see there is the up method which can update your database and the down that can revert the changes done. If you customize any code you should make sure to customize in both methods. Now let's apply this change. Update-database -verbose I added the verbose flag so you can see all the SQL generated statements to be run. Downgrading changes So far we have always upgraded to the latest migration, but there may be times when you want downgrade to a specific migration. Let's say we want to return to the status we have before our last migration. We can use the -TargetMigration option to specify the migration we'd like to return. Also you can use the -verbose flag. If you like to go  back to the Initial state you can do: Update-Database -TargetMigration:$InitialDatabase  or equivalent: Update-Database -TargetMigration:0  Migrations doesn't allow by default a migration that would ocurr in a data loss. One case when you can got this message is for example in a DropColumn operation. You can override this configuration by setting AutomaticMigrationDataLossAllowed to true in the configuration class. Also you can set your Database Initializer in case you want that these Migrations can be applied automatically and you don't have to go all the way through creating a migration and updating later the changes. Let's see how. Database Initialization by Code We can specify an initialization strategy by using Database.SetInitializer (http://msdn.microsoft.com/en-us/library/gg679461(v=vs.103)). One of the strategies that I found very useful when you are at a development stage (I mean not for production) is the MigrateDatabaseToLatestVersion. This strategy will make all the necessary migrations each time there is a change in our model that needs a database replication, this also implies that we have to enable AutomaticMigrationsEnabled flag in our Configuration class. 
         public Configuration()
         {
             AutomaticMigrationsEnabled = true;
             AutomaticMigrationDataLossAllowed = true;
             SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());    // This adds our MySQL client as the SQL generator
         }

     In the new entityFramework section of the config file we can also set this on a per-context basis. The syntax is as follows:

         <contexts>
           <context type="Custom DbContext name, Assembly name">
             <databaseInitializer type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[Custom DbContext name, Assembly name], [Configuration class name, Assembly name]], EntityFramework" />
           </context>
         </contexts>

     In our example, the custom DbContext is our PersonelContext and the configuration class is the Configuration class generated in the Migrations folder. The syntax is kind of odd but very convenient. This way all changes will always be applied when we do any data access in our application.

     There are a lot of new things to explore in EF 4.3.1 and Migrations, so we'll continue writing some more posts about it. Please let us know if you have any questions or comments, and also please check our forums, where we keep answering questions for the community in general. Hope you found this information useful. Happy MySQL/.Net Coding!

    Read the article

  • How to get git-completion.bash to work on Mac OS X?

    - by n179911
    Hi, I have followed http://blog.bitfluent.com/page/3 to add git-completion.bash to my /opt/local/etc/bash_completion.d/git-completion and I put PS1='\h:\W$(__git_ps1 "(%s)") \u\$ ' in my .bashrc_profile. But now I am getting this: -bash: __git_ps1: command not found every time I do a cd. Can you please tell me what I am missing?

    Read the article

  • December release of Microsoft All-In-One Code Framework is available now.

    - by Jialiang
    The code samples in the Microsoft All-In-One Code Framework were updated on 2010-12-13. Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, read the introduction on our homepage http://1code.codeplex.com/, or see this Port25 article http://port25.technet.com/archive/2010/01/18/the-all-in-one-code-framework.aspx.

    --------------
    New ASP.NET Code Samples

    VBASPNETAJAXWebChat and CSASPNETAJAXWebChat
    Most of you have some experience chatting with friends on the web, so you may want to know how to make a web chat application; it seems quite complicated, but ASP.NET gives you the power to build a chat room easily. In this code sample, we construct our own web chat room with AJAX. The principle is relatively simple. A basic chat application needs 4 basic controls: one list control to show the chat room members, one list control to show the message list, one TextBox control to input messages, and one button to send the message. The user types a message in the textbox and then presses the Send button, which sends the message to the server. The message list updates every 2 seconds to get the newest messages in the chat room from the server. Keep in mind that it is hard to make an AJAX web chat application behave like a Windows Forms application, because we cannot keep the connection open after a web request has ended, so many events that communicate between the client side and the server side cannot be realized. The common workaround is to make web requests every few seconds to check whether the server side has been updated. Another technique called COMET makes this possible, but it is different from AJAX and will not be discussed in detail in this article. For more details about COMET, see the Reference.

    CSASPNETCurrentOnlineUserList and VBASPNETCurrentOnlineUserList
    This sample demonstrates a system that needs to display a list of current online users' information. As a matter of fact, the Membership.GetNumberOfUsersOnline method can get the number of online users, and there is a convenient way to check whether a user is online by using the Membership.GetUser(string userName).IsOnline property; however, many ASP.NET projects do not use membership. So in this case, the sample shows how to display a list of current online users' information without using a membership provider. It is not difficult to check whether a user is online by using session state. Many projects tend to use the "Session_End" event to mark a user as "Offline"; however, this may not be a good idea, because it can't detect the user status accurately. In addition, the "Session_End" event is only available in the "InProc" session mode. If you are storing session state in the State Server or SQL Server, the "Session_End" event will never fire. To handle this issue, we need to save the user's online status to a global DataTable or database.
    In the sample application, a global DataTable is defined to store the information of online users (a minimal sketch of this tracking table appears at the end of this excerpt). XmlHttpRequest is used in the pages to update and check each user's last active time at intervals and also to retrieve how many users are still online. The sample project can automatically delete offline users' information from the global DataTable by checking users' last active time. A step-by-step guide illustrating how to display a list of current online users' information without using a membership provider: 1. Login page: let the user sign in and add the current user's information to the global DataTable, initializing the DataTable used to store information about current online users. 2. Current online user list page: use XmlHttpRequest in this page to update and check the user's last active time at intervals and also retrieve how many users are still online. 3. If the user closes the page without clicking the sign-out link button, the sample project can automatically mark the user as offline and delete the offline user's information from the global DataTable by checking users' last active time.

    CSASPNETIPtoLocation
    This sample demonstrates how to find the geographical location from an IP address. As we know, it is not hard to get the IP address of visitors via the Request.ServerVariables property, but it is really difficult to know where they come from. To achieve this, the sample uses a free third-party web service from http://freegeoip.appspot.com/, which returns information about an IP address we send to the server in XML, JSON or CSV format. It makes everything easier.

    CSASPNETBackgroundWorker
    Sometimes we perform an operation which needs a long time to complete. It blocks the response and the page stays blank until the operation finishes. In this case, we want the operation to run in the background, and in the page we want to display the progress of the running operation, so the user can know the operation is running and see its progress.

    CSASPNETInheritingFromTreeNode
    In the Windows Forms TreeView, each tree node has a property called "Tag" which can be used to store a custom object. Many customers want to implement the same tag feature in the ASP.NET TreeView. This project creates a custom TreeView control named "CustomTreeView" to achieve this goal.

    CSASPNETRemoteUploadAndDownload and VBASPNETRemoteUploadAndDownload
    This code sample was created in response to a request in our new code sample request function for customers. The code samples demonstrate uploading files to and downloading files from a remote HTTP or FTP server. In .NET Framework 2.0 and higher versions, there are lightweight class libraries which support HTTP and FTP transmission. By using these classes, we can meet this programming requirement.

    CSASPNETImageEditUpload and VBASPNETImageEditUpload
    This demo shows how to insert, edit and update a common image of type "jpg", "png", "gif" or "bmp". We mainly use two different SqlDataSources over the same database to bind to a GridView and a FormView in order to establish the "cascading" effect. Besides, we apply our self-made ImageHandler to encode or decode images of different types, and use the context to output the image stream.
    We explicitly assign the binary streams of the images in the "FormView_ItemInserting" or "Form_ItemUpdating" events to keep the stream in sync between what we see on the aspx page and what is really stored in the database.

    WebBrowser Control, Network and other Windows General New Code Samples

    CSWebBrowserSuppressError and VBWebBrowserSuppressError
    The sample demonstrates how to make the WebBrowser control suppress errors, such as script errors, navigation errors and so on.

    CSWebBrowserWithProxy and VBWebBrowserWithProxy
    The sample demonstrates how to make the WebBrowser control use a proxy server.

    CSWebDownloadProgress and VBWebDownloadProgress
    The sample demonstrates how to show progress during a download. It also supplies features to Start, Pause, Resume and Cancel a download.

    CppSetDesktopWallpaper, CSSetDesktopWallpaper and VBSetDesktopWallpaper
    This code sample application allows you to select an image, view a preview (resized smaller to fit if necessary), select a display style among Tile, Center, Stretch, Fit (Windows 7 and later) and Fill (Windows 7 and later), and set the image as the Desktop wallpaper.

    CSWindowsServiceRecoveryProperty and VBWindowsServiceRecoveryProperty
    The CSWindowsServiceRecoveryProperty example demonstrates how to use ChangeServiceConfig2 to configure the service "Recovery" properties in C#. This example covers all the options you can see on the service "Recovery" tab, including setting the "Enable actions for stops with errors" option in Windows Vista and later operating systems. This example also shows how to grant the shutdown privilege to the process, so that we can configure a special option in the "Recovery" tab - "Restart Computer Options...".

    New Office Development Code Samples

    CSOneNoteRibbonAddIn and VBOneNoteRibbonAddIn
    The code sample demonstrates a OneNote 2010 COM add-in that implements IDTExtensibility2. The add-in also supports customizing the Ribbon by implementing the IRibbonExtensibility interface. It is a skeleton OneNote add-in that developers can extend to implement more functions. The code sample was requested by a customer in our code sample request service. We expect that this could help developers in the community.

    New Windows Shell Code Samples

    CppShellExtPreviewHandler, CSShellExtPreviewHandler and VBShellExtPreviewHandler
    In the past two months, we released code samples for the Windows Context Menu Handler, Infotip Handler, and Thumbnail Handler. This is the fourth part of the shell extension series: the Preview Handler. The code samples demonstrate the C++, C# and VB.NET implementation of a preview handler for a new file type registered with the .recipe extension. Preview handlers are called when an item is selected to show a lightweight, rich, read-only preview of the file's contents in the view's reading pane. This is done without launching the file's associated application. Windows Vista and later operating systems support preview handlers. To be a valid preview handler, several interfaces must be implemented. These include IPreviewHandler (shobjidl.h); IInitializeWithFile, IInitializeWithStream, or IInitializeWithItem (propsys.h); IObjectWithSite (ocidl.h); and IOleWindow (oleidl.h). There are also optional interfaces, such as IPreviewHandlerVisuals (shobjidl.h), that a preview handler can implement to provide extended support. The Windows API Code Pack for Microsoft .NET Framework makes the implementation of these interfaces very easy in .NET. The example preview handler provides previews for .recipe files.
    The .recipe file type is simply an XML file registered as a unique file name extension. It includes the title of the recipe, its author, difficulty, preparation time, cook time, nutrition information, comments, an embedded preview image, and so on. The preview handler extracts the title, comments, and the embedded image, and displays them in a preview window. In response to many customers' requests, we added setup projects to every shell extension sample in this release. Those setup projects allow you to deploy the shell extensions to your end users' machines.

    ----------
    Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534 Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.) If you have any feedback for us, please email: [email protected]. We look forward to your comments.
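    As referenced in the CSASPNETCurrentOnlineUserList description above, here is a minimal sketch of the global tracking-table idea (the column names, the timeout value and the helper class are illustrative assumptions, not the actual sample code):

        using System;
        using System.Data;

        public static class OnlineUserTracker
        {
            // Global DataTable holding one row per signed-in user.
            private static readonly DataTable Users = CreateTable();
            private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(2);   // assumed threshold

            private static DataTable CreateTable()
            {
                var table = new DataTable("OnlineUsers");
                table.Columns.Add("UserName", typeof(string));
                table.Columns.Add("LastActiveTime", typeof(DateTime));
                table.PrimaryKey = new[] { table.Columns["UserName"] };
                return table;
            }

            // Called from the login page and from each XmlHttpRequest "ping".
            public static void Touch(string userName)
            {
                lock (Users)
                {
                    var row = Users.Rows.Find(userName);
                    if (row == null)
                    {
                        Users.Rows.Add(userName, DateTime.UtcNow);
                    }
                    else
                    {
                        row["LastActiveTime"] = DateTime.UtcNow;
                    }
                    Purge();
                }
            }

            // Remove users whose last active time is older than the timeout.
            private static void Purge()
            {
                for (int i = Users.Rows.Count - 1; i >= 0; i--)
                {
                    var lastActive = (DateTime)Users.Rows[i]["LastActiveTime"];
                    if (DateTime.UtcNow - lastActive > Timeout)
                    {
                        Users.Rows[i].Delete();
                    }
                }
                Users.AcceptChanges();
            }

            public static int OnlineCount
            {
                get { lock (Users) { return Users.Rows.Count; } }
            }
        }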

    Read the article

  • SQL SERVER – CTRL+SHIFT+] Shortcut to Select Code Between Two Parenthesis

    - by pinaldave
    Every weekend brings creative ideas, and accidents bring the best unknown secrets in front of us. Just the other day, while working with complex SQL Server code in SSMS, I came across a very interesting shortcut which I had never used before and instantly fell in love with. It is entirely possible that you are familiar with this, but for me it was the first time and I was surprised that I did not know this shortcut until now. The shortcut key is CTRL+SHIFT+]. This key can be very useful when dealing with multiple subqueries, CTEs or queries with multiple parentheses. When exercised, this shortcut key selects the T-SQL code between two parentheses. Let us see the examples to understand the same. In each of the examples I have put the cursor at the position displayed and pressed CTRL+SHIFT+], and it has selected the code between the two corresponding parentheses. Cursor position 1 Cursor position 2 Cursor position 3 If you are a developer and have to code with complex queries, you will totally appreciate that this feature can save so much development time. I often remember my experience as a developer when I lost many hours just balancing parentheses. As I said, yesterday I found this shortcut accidentally. How many of you were aware of this feature? Is there any other useful feature you would like to share with us? Please leave a comment, and if I have not covered it earlier, I will share it with due credit on this blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Shortcut

    Read the article

  • ASP.NET Connections Fall 2011 Slides and Code

    - by Stephen Walther
    Thanks to everyone who came to my talks at ASP.NET Connections in Las Vegas!  There was a definite theme to my talks this year…taking advantage of JavaScript to build a rich presentation layer. I gave the following three talks: JsRender Templates – Originally, I was scheduled to give a talk on jQuery Templates.  However, jQuery Templates has been deprecated and JsRender is the new technology which replaces jQuery Templates. In the talk, I give plenty of code samples of using JsRender.  You can download the slides and code samples RIGHT HERE   HTML5 – In this talk, I focused on the features of HTML5 which are the most interesting to developers building database-driven Web applications. In particular, I discussed Web Sockets, Web Workers, Web Storage, IndexedDB, and the Offline Application Cache. All of these features are supported by Safari, Chrome, and Firefox today and they will be supported by Internet Explorer 10. You can download the slides and code samples RIGHT HERE   Ajax Control Toolkit – My company, Superexpert, is responsible for developing and maintaining the Ajax Control Toolkit. In this talk, I discuss all of the bug fixes and new features which the developers on the Superexpert team have added to the Ajax Control Toolkit over the previous six months. We also had a good discussion of the features which people want in future releases of the Ajax Control Toolkit. The slides and code samples for this talk can be downloaded RIGHT HERE   I had a great time in Las Vegas!  Good questions, friendly audience, and lots of opportunities for me to learn new things!      -- Stephen

    Read the article

  • Is functional intellisense and code browsing more beneficial than the use of dependency injection containers

    - by Gavin Howden
    This question is really based on PHP, but could be valid for other dynamically typed, interpreted languages, and specifically for the methods of generating code insight and object browsing in development environments. We use PHPStorm, and find intellisense invaluable, but it is provided by some limited static analysis and parsing of doc comments. Obviously this does not lend itself well to obtaining dependencies through a container, as the IDE has no idea of the type returned, so the developer loses out on the plethora of rich documentation (in the case of our framework, anyway) provided through the doc comments. So we start to see stuff like this: $widget = $dic->YieldInstance('WidgetA', $arg1, $arg2, $arg3, $arg4...)); /** * @var $widget WidgetA */ so that code insight works. In effect the comments are tightly bound; worse, they get out of sync when the code is modified but the comments are not: $widget = $dic->YieldInstance('WidgetB', $arg1, $arg2, $arg3, $arg4...)); /** * @var $widget WidgetA */ Obviously the comment could be improved by referencing a Widget interface, but then we might as well use a factory and avoid the requirement for the extra type hints in the comments, and the DIC complexity / boilerplate. Which is more important to the average developer, code insight / intellisense or 'nirvana' decoupling?

    Read the article

  • How to write PowerShell code part 4 (using loop)

    - by ybbest
    In this post, I'd like to show you how to loop through XML elements. I will use the list data deletion script as an example. You can download the script here.

    1. To perform the loop, I use foreach in PowerShell. Here is what my xml looks like:

        <?xml version="1.0" encoding="utf-8"?>
        <Site Url="http://workflowuat/npdmoc">
          <Lists>
            <List Name="YBBEST Collaboration Areas" Type="Document Library"/>
            <List Name="YBBEST Project" />
            <List Name="YBBEST Document"/>
          </Lists>
        </Site>

    2. Here is the PowerShell to manipulate the xml. Note that you need to get to the $configurationXml.Site.Lists.List variable rather than $configurationXml.Site.Lists:

        foreach ($list in $configurationXml.Site.Lists.List){
          AppendLog "Clearing data for $($list.Name) at site $weburl" Yellow
          if($list.Type -eq "Document Library"){
            deleteItemsFromDocumentLibrary -Url $weburl -ListName $list.Name
          }else{
            deleteItemsFromList -Url $weburl -ListName $list.Name
          }
          AppendLog "Data in $($list.Name) at $weburl is cleared" Green
        }

    How to write PowerShell code part 1 How to write PowerShell code part 2 How to write PowerShell code part 3 How to write PowerShell code part 4

    Read the article

  • How to write PowerShell code part 1 (Using external xml configuration file)

    - by ybbest
    In this post, I will show you how to use an external xml file with PowerShell. The advantage of doing so is that other people do not need to open up your PowerShell code to make configuration changes; instead, all they need to do is change the xml file. I will refactor my site creation script as an example; you can download the script here and the refactored code here.

    1. As you can see below, I hard-code all the variables in the script itself.

        $url = "http://ybbest"
        $WebsiteName = "Ybbest"
        $WebsiteDesc = "Ybbest test site"
        $Template = "STS#0"
        $PrimaryLogin = "contoso\administrator"
        $PrimaryDisplay = "administrator"
        $PrimaryEmail = "[email protected]"
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"

    2. Next, I will show you how to manipulate the xml file using PowerShell. You can use get-content to grab the content of the file.

        [xml] $xmlconfigurations=get-content .\SiteCollection.xml

    3. Then you can assign it to a variable (the variable has to be typed [xml]); after that you can read the xml content, and PowerShell also gives you nice IntelliSense when you press the Tab key.

        [xml] $xmlconfigurations=get-content .\SiteCollection.xml
        $xmlconfigurations.SiteCollection
        $xmlconfigurations.SiteCollection.SiteName

    4. After refactoring my code, I can set the variables using the xml file as below.

        #Set the parameters
        $siteInformation=$xmlinput.SiteCollection
        $url = $siteInformation.URL
        $siteName = $siteInformation.SiteName
        $siteDesc = $siteInformation.SiteDescription
        $Template = $siteInformation.SiteTemplate
        $PrimaryLogin = $siteInformation.PrimaryLogin
        $PrimaryDisplay = $siteInformation.PrimaryDisplayName
        $PrimaryEmail = $siteInformation.PrimaryLoginEmail
        $MembersGroup = "$WebsiteName Members"
        $ViewersGroup = "$WebsiteName Viewers"

    Read the article

  • How should I design a correct OO design in case of a Business-logic wide operation

    - by Mithir
    EDIT: Maybe I should ask the question in a different way. In light of ammoQ's comment, I realize that I've done something like what was suggested, which is kind of a fix and is fine by me. But I still want to learn for the future, so that if I develop new code for operations similar to this, I can design it correctly from the start. So, I have the following characteristics:

    - The relevant input is composed of data which is connected to several different business objects.
    - All the input data is validated and cross-checked.
    - Attempts are made to insert the data into the DB.
    - All this is just a single operation from a business-side perspective, meaning all of the cross-checking and validations are just side effects.

    I can't think of any other way but some sort of Operator/Coordinator kind of object which activates the entire procedure, but then I fall into functional-decomposition kind of code. So is there a better way of doing this?

    Original Question: In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work for the scenario for which it was first designed, and now there are more scenarios (like requests from an API or from other devices). So I had to redesign. I found myself moving all the DB code to objects which act like business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which will hold all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which will take the InformationObject as a parameter and act on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information in the InformationObject. So my question is - is this kind of implementation correct? PS, this Operator only works on a single business-wise operation.
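    To make the described arrangement concrete, here is a minimal C# sketch of the two classes the question names (the property names, the validation steps and the repository abstraction are illustrative assumptions, not the actual system's code):

        using System;

        // Holds everything the single business operation needs as input.
        public class InformationObject
        {
            public int CustomerId { get; set; }       // assumed example fields
            public decimal Amount { get; set; }
            public string Reference { get; set; }
        }

        // Assumed abstraction over the business-to-DB objects mentioned in the question.
        public interface IOrderRepository
        {
            bool CustomerExists(int customerId);
            void InsertOrder(int customerId, decimal amount, string reference);
        }

        // Coordinates the operation: validates, cross-checks, then writes to the DB.
        public class OperatorObject
        {
            private readonly IOrderRepository _repository;

            public OperatorObject(IOrderRepository repository)
            {
                _repository = repository;
            }

            public void Execute(InformationObject info)
            {
                if (info == null) throw new ArgumentNullException("info");

                // Validation and cross-checking are side effects of the single operation.
                if (info.Amount <= 0)
                    throw new InvalidOperationException("Amount must be positive.");
                if (!_repository.CustomerExists(info.CustomerId))
                    throw new InvalidOperationException("Unknown customer.");

                // The actual business operation.
                _repository.InsertOrder(info.CustomerId, info.Amount, info.Reference);
            }
        }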

    Read the article

  • DDD East Anglia, 29th June 2013 - Async Patterns presentation and source code

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/07/01/ddd-east-anglia-29th-june-2013---async-patterns-presentation.aspx Many thanks to the team in Cambridge for an awesome first DDD East Anglia conference.  I definitely appreciate how each of the different areas has its own distinctive atmosphere and feel.  Thanks to some great sponsors we enjoyed a great venue and some excellent nibbles. For those who attended my Async session, my source code and presentation are available on GitHub: https://github.com/westleyl/DDDEastAnglia2013-Async.git If you are new to Git then the easiest client to install is GitHub for Windows, a graphical UI for accessing GitHub. Personally, I also have Git Extensions and Tortoise Git installed. Tortoise Git is the file explorer add-in that works in a familiar manner to TortoiseSVN. As I mentioned during the presentation, I have not included the sample data (the music files) in the source code placed on GitHub, but I have included instructions on how to download them from http://silents.bandcamp.com and place them in the correct folders. Also, Windows Media Player, by default, does not play Ogg Vorbis and Flac music files; however, you can download the codec installer for these, for free, from http://xiph.org/dshow. I have included the .Net 4.0 version of the source code that uses the Microsoft.Bcl.Async NuGet package - once you have got the project from GitHub you will need to install this NuGet package for the code to compile: load the project into Visual Studio 2012, access the NuGet package manager (Tools -> Library Package Manager -> Manage NuGet Packages For Solution), highlight Online, then search online for microsoft.bcl.async and click the Install button. Resources: You can download the Task-based Asynchronous Pattern white paper by Stephen Toub, which was the inspiration for this presentation, from here - http://www.microsoft.com/en-us/download/details.aspx?id=19957 Presentation: If you just want the presentation and don't want to bother with a GitHub login, you can download the PowerPoint presentation from here.

    Read the article

  • Carolina Code Camp 2010

    - by Mark A. Wilson
    "Grow your skills in 2010" The Enterprise Developers Guild in Charlotte, the Greenville-Spartanburg Enterprise Developers Guild and the Triad Developers Guild have joined with Microsoft and Central Piedmont Community College (CPCC) Association for Computing Machinery (ACM) to present the 10th MSDN Code Camp to be held in Charlotte. Please join me and fellow developers and code enthusiasts on Saturday, May 15, 2010, at the CPCC Levine Campus in Matthews, NC. The focus this year is Microsoft Visual Studio 2010 and Windows Phone 7. Everyone is invited to attend and/or speak! Get in-depth exposure to Visual Studio 2010 and other exciting new Microsoft technologies. Sessions will range from presentations, to hands on labs, to informal "chalk talks". We will have a mix of speakers including Microsoft MVPs, authors, and most importantly, local developers just like you! And thanks to the generosity of our contributors, we will be able to provide breakfast, lunch, snacks, and lots of swag. Registration is open and there are a limited number of seats left. For more information or to register, visit the Carolina Code Camp 2010 event website. I encourage you to "give back" by registering as a volunteer or a proctor. This will be the only Carolina Code Camp held this year – no event is schedule for the fall – so register today before it’s too late! Thanks for visiting and till next time, Mark A. Wilson      Mark's Geekswithblogs Blog Enterprise Developers Guild Technorati Tags: Community

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >