Search Results

Search found 21160 results on 847 pages for 'vs 2010'.


  • How to migrate an ASP.NET MVC3/MVC4 project to ASP.NET MVC5?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/16/how-to-migrate-asp.net-mvc-3--mvc4-project-to.aspx

    Visual Studio 2013 ships with MVC5 built in, and MVC3 is no longer supported in VS2013 (I confirmed this on Channel 9). People who have only VS2013 installed will therefore run into trouble with projects that are still on MVC3. This error happens because the MVC4 and MVC5 installations don't include the DLLs that version 3 of ASP.NET MVC uses.

    Don't panic. Here is a trick for upgrading such a project.

    When you open the project, you will see that some DLLs under References carry a yellow icon. This means those DLLs are missing or could not be found on your system. Note their names, remove them from References, and re-add the newer versions via Add Reference. (Removing them first matters, because otherwise VS will prevent you from adding a different version of the same assembly.) The assemblies involved are System.Web.Mvc and the Razor and WebPages DLLs; remember that MVC3 used older versions of all of them.

    Once the references are updated, open web.config. There are two web.config files in an MVC project: one in the root folder and one in the Views folder. You need to update the assembly version numbers in both, which is straightforward once you know the assembly names: if the file shows an assembly version of 3.0.0.0, replace the 3 with 4 or 5 according to the version you are targeting. Apply the same change for every MVC assembly, in both web.config files.

    Note: the default VS template puts views in the ~/Views folder, but if you keep views in any other folder and those folders have their own web.config (for example Areas/Views or Themes/Views), remember to update those files with the newer assembly version numbers as well. Otherwise your project will compile with no warnings or errors but will certainly not work at runtime.

    After these steps your project should compile and work as it should. Thanks for reading my post. Follow me on FB and Twitter to stay updated.
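    As an illustration of the web.config edit described above (a hedged sketch, not taken from the original post): in the root web.config, a binding redirect maps any older System.Web.Mvc reference onto the new version. One caveat, stated as an assumption: the paired Razor/WebPages assemblies use their own version numbers (3.0.0.0 alongside MVC5), so the "replace the 3 with 5" rule applies per assembly, not globally.

        <!-- Root web.config (sketch): redirect old System.Web.Mvc references to MVC5 -->
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Mvc"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <bindingRedirect oldVersion="1.0.0.0-5.0.0.0" newVersion="5.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>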


  • Should you ever re-estimate user stories?

    - by f1dave
    My current project is having a 'discussion' that is split down the middle: "this story is more complex than we originally thought, we should re-estimate" vs. "you should never re-estimate, as you only ever estimate up and never down". Can anyone shed some light on whether you should ever re-estimate? IMHO I'd imagine you could bring up an entirely new card for a new requirement or story, but going back and re-estimating backlog items seems to skew the concept of relative sizing and will only ever 'inflate' your backlog.


  • Difference between Windows and Linux development environments?

    - by Ryan
    I have an interview coming up soon for a Business Analyst position and the recruiter mentioned some feedback from a prior candidate that was interviewed who said the interviewers asked him what the difference between a Windows and Linux development environment was. Are there some high level things I need to be aware of from a business point of view when working with a development team or designing an application on Windows vs Linux?


  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations. Instance-Level Configuration: Your database server probably has more than one logical processor…
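    The excerpt above is truncated, but for context, here is a sketch of the instance-level knobs such a discussion typically covers (the values are illustrative only, not recommendations, and dbo.BigTable is a hypothetical table):

        -- Instance-wide settings (both are advanced options)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 8;        -- cap on parallel workers per query
        EXEC sp_configure 'cost threshold for parallelism', 25;  -- minimum plan cost before parallelism is considered
        RECONFIGURE;

        -- Per-query override of the instance-wide MAXDOP setting
        SELECT COUNT(*) FROM dbo.BigTable OPTION (MAXDOP 4);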


  • Has Little Endian won?

    - by espertus
    When teaching recently about the Big vs. Little Endian battle, a student asked whether it had been settled, and I realized I didn't know. Looking at the Wikipedia article, it seems that the most popular current OS/architecture pairs use Little Endian but that Internet Protocol specifies Big Endian for transferring numeric values in packet headers. Would that be a good summary of the current status? Do current network cards or CPUs provide hardware support for switching byte order?
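    For what it's worth, here is a small C# sketch (my example) of the status quo the question describes: the CPU exposes its native order, while the socket APIs convert to the big-endian "network order" that IP mandates. On little-endian hardware that conversion is typically hardware-assisted (a single byte-swap instruction such as BSWAP on x86), which is one answer to the hardware-support question.

        using System;
        using System.Net;

        class Endianness
        {
            static void Main()
            {
                // Native byte order of the machine running this code
                Console.WriteLine(BitConverter.IsLittleEndian
                    ? "CPU is little endian"
                    : "CPU is big endian");

                // IP packet headers carry values in big-endian ("network") order
                int host = 0x0A0B0C0D;
                int network = IPAddress.HostToNetworkOrder(host); // byte-swaps on little-endian hosts
                Console.WriteLine("host: 0x{0:X8}, network: 0x{1:X8}", host, network);
            }
        }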


  • Cross-platform independent development

    - by Joe Wreschnig
    Some years ago, if you wrote in C and some subset of C++ and used a sufficient number of platform abstractions (via SDL or whatever), you could run on every platform an indie could get on - Linux, Windows, Mac OS of various versions, obscure stuff like BeOS, and the open consoles like the GP2X and post-death Dreamcast. If you got a contract for a closed platform at some point, you could port your game to that platform with "minimal" code changes as well. Today, indie developers must use XNA to get on the Xbox 360 (and upcoming Windows phone); must not use XNA to work anywhere else but Windows; until recently had to use Java on Android; Flash doesn't run on phones, HTML5 doesn't work on IE. Unlike e.g. DirectX vs. OpenGL or Windows vs. Unix, these are changes to the core language you write your code in and can't be papered over without, basically, writing a compiler. You can move some game logic into scripts and include an interpreter - except when you can't, because the iPhone SDK doesn't allow it, and performance suffers because no one allows JIT. So what can you do if you want a really cross-platform portable game, or even just a significant body of engine and logic code? Is this not a problem because the platforms have fundamentally diverged - it's just plain not worthwhile to try to target both an iPhone and the Xbox 360 with any shared code because such a game would be bad? (I find this very unlikely. I can easily see wanting to share a game between a Windows Mobile phone and an Android, or an Xbox 360 and an iPad.) Are interfaces so high-level now that porting time is negligible? (I might believe this for business applications, but not for games with strict performance requirements.) Is this going to become more pronounced in the future? Is the split going to be, somewhat scarily, still down vendor lines? Will we all rely on high-level middleware like Flash or Unity to get anything cross-platform done? tl;dr - Is porting a problem, is it going to be a bigger problem in the future, and if so how do we solve it?


  • Newbie worried about CASE tool.

    - by Jason Evans
    Hi there. I'm looking for some guidance on CASE tools and whether my concerns are valid. Recently I was in a meeting between my employer and an external software company which has a CASE tool currently in beta. They demonstrated this tool to us, showing how you build a UML model in Enterprise Architect (or something like it) and then, through their tool, that UML model is transformed into a Visual Studio project, with C# files, stored procedures for SQL Server, code for the data layer, WCF stuff, logging code, and all sorts.

    Now, admittedly, I don't see the point in this, as in I'm not convinced it will save that much time (plus it feels like overkill). The tool authors said that a trial of the tool at another company had saved a team there 5 weeks of development time (from 6 weeks down to about 1 week). I find the accuracy of that estimate hard to believe.

    My main concern is whether using this tool is going to slow down my productivity. For example, say I have a UML model from which I built a VS solution. Now I want to rename a class method to something else; will this mean having to update the UML model first and then rebuilding the code? Is this how CASE tools normally work?

    Something I will need to check with the authors is the structure of the generated VS solution. I like the Domain Driven Design way of structuring a project - Infrastructure, Services, Model, etc. I doubt very much this tool will do that.

    Also, I've been playing around with Entity Framework Code First and think it's a great way to build the data model. I have nice repositories, unit of work classes, and other design patterns that work well with EF, and I have data annotations and the like working great. Because the CASE tool uses its own data layer code instead of EF, I'm concerned that its generated code might not integrate nicely with the unit of work pattern, repositories, etc. This I will need to verify when I get a closer look at the generated code.

    What are other people's experiences with CASE tools? Am I being paranoid about nothing? Am I being unfair - are my negativities unfounded?

    EDIT: I like to use TDD/BDD for building my code, and using a CASE tool looks like it will make this difficult. Again, any feedback on this would be great. Cheers. Jas.
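    For reference, the kind of hand-rolled data layer being compared against the generated code might look like this minimal EF Code First sketch (the Customer entity and all names are hypothetical, my assumption rather than anything from the post; the DbContext itself acts as the unit of work):

        using System.Data.Entity;   // Entity Framework 4.1+ (Code First)
        using System.Linq;

        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // The context tracks changes and commits them together in SaveChanges(),
        // which is what lets it serve as a unit of work.
        public class ShopContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

        public class CustomerRepository
        {
            private readonly ShopContext context;

            public CustomerRepository(ShopContext context)
            {
                this.context = context;
            }

            public Customer FindByName(string name)
            {
                return context.Customers.FirstOrDefault(c => c.Name == name);
            }

            public void Add(Customer customer)
            {
                context.Customers.Add(customer);  // persisted when the context saves
            }
        }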


  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data, split between about 20 files, from my laptop to an external hard drive. Since this operation takes quite a while (at ~560 kb/s), I was wondering if there is any way to pause the transfer and resume it later (in case I need to interrupt it). As a side question, is there any performance difference between copying from the terminal vs. copying from Nautilus?


  • What is the difference between "contracting" and "consulting"? [closed]

    - by Rice Flour Cookies
    Possible Duplicate: Contractor vs Consultant

    I read a discussion thread on Slashdot the other day where some developers were discussing taking on "consulting" and "contracting" jobs. My understanding is that both of these terms imply an individual selling his services as a programmer on his own terms as opposed to working for an employer. What is the difference between these two: "consulting" and "contracting"?


  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than with the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup - well, that's just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work... sort of.

    This will be the first in a three-part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

    Part 1: Good Enough Is Not Great
    Part 2: A Model That Works: Branching and Multiple Deployment Environments
    Part 3: Other Considerations: SQL, Custom Tasks, Etc.

    Good Enough Is Not Great

    There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real-world CD as-is.

    How it works (note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure): The first step is to establish some oAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.) From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build.

    Here's what the bulk of the build definition looks like. All I've had to do is add in the solution to build (notice that mine is from a specific branch - Release - more on that later) and I've changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first - by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll also notice that this build definition automatically put my code in the Staging slot of my Azure deployment - more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

    That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that.

    The Problems

    Nothing is perfect (except my code - it's always perfect), and neither is continuous deployment without a bit of work to help it fit your dev team's process. So what are the issues?

    Issue 1: Staging vs. QA vs. Prod vs. whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: if I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap); and if I truly want to promote between environments (i.e. Nightly Build -> Stable QA -> Production), I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch - Release - of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk from which you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy...). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment target.

    Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep, because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers - not just DBAs - also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern that I hear, but we need to account for any custom tasks as well in the build process.

    The Solutions... ?

    We've taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app - any version, any time, to any environment.
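    To make the Issue 1 configuration problem concrete, one common workaround (my sketch, not from the post; all names are hypothetical) is a web.config transform per build configuration, so that a QA build and a Production build bake in different connection strings:

        <!-- Web.QA.config: applied when building the QA configuration -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="AppDb"
                 connectionString="Server=qa-sql;Database=App;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>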


  • A talk about the observer pattern

    - by Martin
    As part of a university course I'm taking, I have to give a 10-minute talk about the observer pattern. So far these are my points:

    - What is it? (definition)
    - Polling is an alternative
    - Polling vs. Observer
    - When Observer is better
    - When Polling is better (taken from here)
    - A statement that the Mediator pattern is worth checking out (I won't have time to cover it in 10 minutes)

    I would be happy to hear about: any suggestions to add something? Another alternative (if one exists)?
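    To anchor the talk, a minimal C# sketch of the pattern (my example, with hypothetical names): the subject pushes notifications to subscribers, where a polling design would have them read Price in a loop.

        using System;

        class StockTicker
        {
            // Observers subscribe here instead of polling Price in a loop
            public event Action<decimal> PriceChanged;

            private decimal price;
            public decimal Price
            {
                get { return price; }
                set
                {
                    price = value;
                    var handler = PriceChanged;
                    if (handler != null) handler(price);  // push the change to every observer
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var ticker = new StockTicker();
                ticker.PriceChanged += p => Console.WriteLine("Price is now " + p);
                ticker.Price = 42.50m;  // observers are notified immediately
            }
        }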


  • Why are exceptions considered better than explicit error testing?

    - by Richard Keller
    I often come across heated blog posts where the author uses the argument of "exceptions vs explicit error checking" to advocate their preferred language over some other language. The general consensus seems to be that languages which make use of exceptions are inherently better / cleaner than languages which rely heavily on error checking through explicit function calls. Is the use of exceptions considered better programming practice than explicit error checking, and if so, why?
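    To make the two styles concrete, a hedged C# sketch (hypothetical file-reading task, my example): with explicit checking every caller must remember to test a sentinel value, while with exceptions the happy path reads straight through and failures propagate to whichever layer can actually handle them.

        using System.IO;

        static class Reading
        {
            // Explicit error checking: the caller must test the sentinel value,
            // and forgetting to do so silently propagates a bogus result.
            public static int FirstByteChecked(string path)
            {
                if (!File.Exists(path))
                    return -1;  // error code; every call site must check for it
                using (var stream = File.OpenRead(path))
                    return stream.ReadByte();  // also -1 for an empty file
            }

            // Exception style: no sentinel to check; a missing file throws
            // FileNotFoundException, which propagates until something catches it.
            public static int FirstByte(string path)
            {
                using (var stream = File.OpenRead(path))
                    return stream.ReadByte();
            }
        }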


  • What is the starting point in game development? Where to start?

    - by Dragon
    I understand I'm not unique in asking this; there are a lot of questions like this one. But I hope you'll take a minute and maybe give me some advice. I have an idea to develop games, but I don't know where the starting point in game development is. The learning curve isn't as straight as with learning a programming language, but I want to give it a try.

    I have some experience with OOP and programming in general, and I know (not too deeply) the C# and Java programming languages. I searched for info on where to start and read a lot of blogs, forums, and so on. Once I decided, "stop wandering around, just start developing a game", and I started. At the moment I have a console version of a very simple game (RPS - rock-paper-scissors) developed in C#. It has different modes: "player vs CPU" and "player vs player". Some time later I looked at the code and decided that it should be refactored or even redeveloped from scratch, and I thought at the time, "a GUI is what I need; I can add logic later." And now I'm here.

    I've already decided to make RPS with a GUI, then add multiplayer, and so on. I'm not thinking about 3D now; 2D is enough. It doesn't matter whether I use C# or Java: I found frameworks for both - XNA (C#) and Slick (Java) - and both are good for 2D game development. But I know nothing about sprites, how to bind objects on the screen, and so on. You might say "you don't need that for such a simple game as RPS", but RPS is just the beginning; I have ideas like a "Tower Defense" game... you know, everybody has ideas and wishes... and this knowledge is useful and in some ways obligatory.

    So what is the starting point for achieving my plans, ideas, and wishes? Where do I start? Is it possible to make the game development learning curve a little straighter? Or are there paths that amateurs and game development beginners have used for years? Thank you in advance for your answers and advice.

    P.S. Sorry that this post turned into an essay, but I tried to express my wish to start acting. I hope I managed to do it.


  • What is your favourite/ideal development environment?

    - by Nico Huysamen
    If you could describe your ideal development environment, what would it be? There are numerous things to take into consideration, including but not limited to:

    - Hardware
    - Software (Operating System of Choice, Paid vs. Free Software, ...)
    - Physical Environment (lighting, open-plan, location, ...)
    - An endless supply of coffee...
    - ...

    In other words, if you could tell your company what they could do to make your development experience there better, what would it be?


  • Pros n Cons of ADPs

    The Access MDB format is evil and ADPs (Access Data Projects) should be used exclusively vs. there are "no advantages to an ADP." There are strong feelings on both sides of this argument, with the truth falling somewhere in-between. Read on to learn more...


  • Interesting links week #7

    - by erwin21
    Below is a list of interesting links that I found this week:

    Frontend:
    - HTML5 Peeks, Pokes and Pointers
    - HTML 5 Markup that Gracefully Degrades
    - Mobile Sites vs. Media Queries

    Development:
    - Register your HTTP modules at runtime without config
    - mobl - Open Source Language For Mobile Development
    - PageMethod - an easier and faster approach for Asp.Net AJAX

    Interested in more links? Follow me on Twitter: http://twitter.com/erwingriekspoor


  • Outsourcing a private project: can it be done?

    - by Stafford Williams
    I'm an employed software designer/developer/analyst/monkey and I'm pondering the possibility of outsourcing the coding component(s) of some private(ly funded) projects. I have never used outsourcing before and am hesitant due to the contractors I've seen in the workplace who seem to have an inverse relationship between remuneration and results/quality. Has anyone had any luck with outsourcing private coding jobs, and can you offer any pointers?


  • AnkhSVN

    - by csharp-source.net
    AnkhSVN is a Visual Studio .NET addin for the Subversion version control system. It allows you to perform the most common version control operations directly from inside the VS.NET IDE. Not all the functionality provided by SVN is (yet) supported, but the majority of operations that support the daily workflow are implemented.


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows - DW, reporting, aggregations, grouping, scans, etc. - SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small - only 22 million rows & 33GB - the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive.

    Scenario         Conventional Structures          Columnstore    Δ
    SSRS via SSAS    10-12 seconds                    1 second       >10x
    Ad Hoc           5-7 minutes (300-420 seconds)    1-2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1-2 seconds aren't visible.
    The Wins

    Performance: Even prior to the columnstore implementation, at 10-12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data - no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4-7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second in a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing the time to results for ad hoc queries from 5-7 minutes to 1-2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
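    For reference, the kind of index described above might be created like this (a sketch with hypothetical table and column names, not the customer's actual schema). Note that in SQL Server 2012 a nonclustered columnstore index makes the table read-only, which is consistent with the nightly partition-switching refresh described earlier:

        -- Nonclustered columnstore index on a wide DW fact table (SQL Server 2012)
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSecurityEvents
        ON dbo.FactSecurityEvents
        (
            EventDate, CustomerId, SiteId, EventTypeId, ResponseSeconds
            -- ...in practice, list all 137 columns so any ad hoc query is covered
        );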

