Search Results

Search found 15779 results on 632 pages for 'vs sdk'.

  • Ubuntu is not detecting my android device

    - by user3514160
    I am new to Android. I have just downloaded and installed the Android SDK. Now when I run the application from Eclipse, my device is not detected. I googled and came up with this as my solution, but that also didn't work. Here's the 51-android.rules:
    SUBSYSTEMS=="usb", ATTR{idProduct}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugindev", OWNER="<username>"
    After that I rebooted my laptop and ran this command:
    username@laptopname:~/Android/adt-bundle/sdk/platform-tools$ adb devices
    The output I get is:
    * daemon not running. starting it now on port 5037 *
    * daemon started successfully *
    List of devices attached
    ????????????    no permissions
    EDIT:
    crazydeveloper@crazydeveloper:~$ lsusb
    Bus 002 Device 004: ID 0bb4:0c03 HTC (High Tech Computer Corp.)
    Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 04f2:b337 Chicony Electronics Co., Ltd
    Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/004/
    ls: cannot access /dev/bus/usb/004/: No such file or directory
    Edit 2: After the answer submitted here, this is the output I got:
    crazydeveloper@crazydeveloper:~$ ls -l /dev/bus/usb/002
    total 0
    crw-rw-r--  1 root root    189, 128 May  7 09:45 001
    crw-rw-r--+ 1 root root    189, 129 May  7 09:45 002
    crw-rw-rw-  1 root plugdev 189, 130 May  7 09:48 003
    I am using a Micromax Canvas 2.2 A114 (Android 4.2.2). Please help me. Thanks.
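    A possible fix, offered as a hedged sketch rather than a confirmed answer: the quoted rule sets ATTR{idProduct} twice (0bb4 is the vendor ID, per the lsusb line above) and names the group "plugindev" rather than the usual plugdev. A corrected 51-android.rules entry might look like this:
      SUBSYSTEMS=="usb", ATTR{idVendor}=="0bb4", ATTR{idProduct}=="0c03", MODE="0666", GROUP="plugdev", OWNER="<username>"
    After saving the rule, reloading udev (sudo udevadm control --reload-rules), replugging the device, and restarting adb (adb kill-server, then adb devices) would normally be needed for the change to take effect.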

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
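    One concrete shape the "change it in one place" requirement could take, shown as a sketch with invented table and column names (not the company's actual schema), using MySQL since that is the stack mentioned above:
      -- Central master database: one row per product, shared by every site.
      CREATE TABLE products (
        product_id INT PRIMARY KEY,
        name       VARCHAR(100),
        image_url  VARCHAR(255),
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
      );
      -- Which products appear on which site.
      CREATE TABLE site_products (
        site_id    INT,
        product_id INT,
        PRIMARY KEY (site_id, product_id)
      );
      -- Changing the image once changes it for every site that lists the shirt.
      UPDATE products SET image_url = '/img/shirt-42-v2.jpg' WHERE product_id = 42;
      -- If each site keeps its own smaller database instead, a periodic sync job
      -- could pull only the rows changed since its last run:
      SELECT p.* FROM products p
      JOIN site_products sp ON sp.product_id = p.product_id
      WHERE sp.site_id = 7 AND p.updated_at > '2014-01-01 00:00:00';
    Whether the sites query the central database live or sync copies of it is exactly the trade-off the question is about; the schema itself works for either setup.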

    Read the article

  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that there are compute libraries such as OpenCL, AMD APP SDK, as well as C++ AMP and DirectCompute (both from MS; NOT interested in CUDA) available on the market. The game, written from scratch, will include the following engines: a physics engine, an AI engine, and the main game engine (and anything else I've missed). I'm aware that there are some free physics engine libraries on the market; I'm not sure about free AI engine libraries. I'm a bit confused about choosing between the OpenCL, AMD APP SDK, and C++ AMP libraries (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OS X, meaning it should be a cross-platform game: one source code that I'll compile for the various platforms (Windows/Android/Mac OS X, and any others I've missed). Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For graphics, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I'm not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow, or is there another graphics API I should start with? I want to make use of the parallelism of the GPU as well as the CPU.

    Read the article

  • C++ Succinctly now available!

    - by Michael B. McLaughlin
    Over the summer I worked with SyncFusion to create an eBook, based on my C# to C++ guide, for their free Succinctly Series of eBooks. Today the result, C++ Succinctly, was published for download. It is free (registration required; they make tools and libraries for .NET development so you might get an occasional email from them – I’ve been signed up for a few months and have had maybe 3 emails total, so it’s not horrible super spam or anything), and you can download it as a PDF or a Kindle .MOBI file (or both). I’m excited about how it turned out and enjoyed working with the people at SyncFusion. The book contains a total of 20 code samples, which you can download from BitBucket (there’s a link very early in the book). Almost all of the code is also inline in the book itself, so you don’t need to worry about flipping back and forth between your dev machine and your eReader (but if you want to understand a concept better, you can easily download the code, open it up in VS 2012, and play around with it to see what happens when you tinker with things). The code does require Visual Studio 2012 because of its expanded support for C++11 features, and since I wrote all of the samples as console programs for clarity and compactness, you will need a version that supports C++ desktop development (currently VS 2012 Pro, Premium, or Ultimate). Sometime this fall, Microsoft will be releasing Visual Studio 2012 Express for Windows Desktop, which should provide a free way to use the samples. That said, I tested all of the samples with MinGW and only the StorageDurationSample will not compile with it, due to the thread-local storage code. If you comment that out, you can compile and run all the samples with MinGW (or using a recent version of GCC in a GNU/Linux environment, or any other C++ compiler that provides the same level of C++11 support that Visual Studio 2012 does). I hope it proves helpful to those of you who choose to check it out!

    Read the article

  • How to avoid jumping to a solution when under pressure? [closed]

    - by GlenPeterson
    When under a particularly strict programming deadline (like an hour), if I panic at all, my tendency is to jump into coding without a real plan and hope I figure it out as I go along. Given enough time this can work, but in an interview it's been pretty unsuccessful, if not downright counter-productive. I'm not always comfortable sitting there thinking while the clock ticks away. Is there a checklist, or are there techniques to recognize when you understand the problem well enough to start coding? Maybe don't touch the keyboard for the first 5-10 minutes of the problem? At what point do you give up and code a brute-force solution with the hope of reasoning out a better solution later? When is it most productive to think and design more vs. code some experiments and figure out the design later? Here is a list of techniques for taking a math test and another for taking an oral exam. Is there a similar list of techniques for handling a programming problem under pressure? ANSWERS: I think this is a valid answer: How To Solve It. I found the link as an answer to Steps to solve or approach towards a solution. There were also some really good tips at Is thinking out loud during an interview really the best strategy?. A great and concise argument for TDD is the first answer to TDD Writing code vs Figuring out the answer to a problem?. My question may be a near-duplicate of that one.

    Read the article

  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database, and I'm happy with how the units are balanced against one another. I am, however, a little worried about how I calculate unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. In this system a battle doesn't just trigger, calculate a winner, and end. Instead it runs for several rounds (say one round every 15 minutes) until one side passes a threshold of being too strong for the other player, and players can send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage (AP = Attack Points, HP = Hit Points):
    damage = unit AP * quantity * random factors * other factors (such as attrition)
    I take that and divide it by the defending unit's HP to find the number of defending casualties. So, for example (simplified to leave out some factors): 500 attackers with 50 AP vs. 1000 defenders with 100 HP = 25,000 damage / 100 HP = 250 deaths. I wonder if that last step could be handled better to keep the deaths from piling up. Some ideas:
    - I just give all the units more HP?
    - I cap the attacking unit's AP at the defender's HP so each attacker kills at most one unit. (Is that fair if I have a few huge units vs. many small units?)
    - I spread the damage around more by factoring in the defending unit's quantity, i.e. in that scenario some units are dead and some are at 50% damage. (How would I track this every round?)
    - Other, better mathematical approaches?
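    To make the third idea concrete: the game is PHP, but here is a minimal sketch of the arithmetic in C# (the names and numbers are mine, not the poster's code) that spreads a round's damage across the defending stack, kills whole units, and carries the remainder forward as partial damage:
      using System;

      class CombatRound
      {
          // Distribute one round's damage across a defender stack, killing whole
          // units and carrying the remainder as partial damage on one survivor.
          static void ApplyDamage(int totalDamage, int unitHp, ref int quantity, ref int partialDamage)
          {
              int pool = totalDamage + partialDamage;            // include damage carried from last round
              int deaths = Math.Min(quantity, pool / unitHp);
              quantity -= deaths;
              partialDamage = quantity > 0 ? pool % unitHp : 0;  // remainder "wounds" one surviving unit
          }

          static void Main()
          {
              int defenders = 1000, carried = 0;
              ApplyDamage(25000, 100, ref defenders, ref carried);   // the 500 x 50 AP example above
              Console.WriteLine("{0} defenders left, {1} damage carried", defenders, carried);
              // Prints: 750 defenders left, 0 damage carried
          }
      }
    Tracking the carry-over between rounds would then be one extra integer column per defending stack in the battle table.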

    Read the article

  • Is it worth replacing mouse by standalone trackpad for heavy code-editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard, and then after some research I fell in love with the "mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown, if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy and feeling much more powerful and professional :) Well, but while I re-read all the ancient debates about VIM vs. whatever-else, the following excerpt from a blog post got me thinking again about mouse vs. keyboard, and "moving around from the very home row" (in VIM) versus gesturing away with the tiny and unstable mouse cursor: "Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse." Well, although there is some bitterness in this statement, it makes a lot of sense, and EVEN MORE if you consider your direct input to be a TRACKPAD conveniently placed in front of your spacebar (which oddly is where I like to put my mouse, rotated 90° ccw, due to a serious tendonitis in my right shoulder, already healed, but you know...). So, the question is: has anyone replaced the mouse with a standalone trackpad to do code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • Windows Azure Error: Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found

    - by DigiMortal
    When running some of your Windows Azure applications with the storage emulator not yet configured, you may get the following error: "Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Storage Emulator using the ‘DSInit’ utility in the Windows Azure SDK." Here’s how to solve this problem. You need to run the DSInit utility to create the database. For Windows Azure SDK 1.6 the location of the DSInit utility is C:\Program Files\Windows Azure Emulator\emulator\devstore. By default DSInit expects your database server to be (local)\SQLEXPRESS, but you can change that easily. If you have a SQL Server instance called SQLEXPRESS, it is enough to just run DSInit. If you need some other instance, run the following command: DSInit /sqlinstance:<instance name>. For the default instance use “.” as the instance name: DSInit /sqlinstance:. You can find more information about /sqlinstance and the other switches in the DSInit documentation. If the database was created correctly you should see a confirmation dialog from DSInit. When the storage database is ready you can run your application.
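    For completeness, a small sketch (my addition, using the StorageClient types that ship with that era's SDK) of pointing code at the emulator once DSInit has created its database; the development storage account is equivalent to the "UseDevelopmentStorage=true" connection string:
      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.StorageClient;

      class EmulatorCheck
      {
          static void Main()
          {
              // Targets the local storage emulator rather than a real Azure storage account.
              CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
              CloudBlobClient blobs = account.CreateCloudBlobClient();
              System.Console.WriteLine(blobs.BaseUri);   // http://127.0.0.1:10000/devstoreaccount1
          }
      }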

    Read the article

  • Best setup/workflow for a distributed team to integrate DVCS with a fragmented, huge .NET site?

    - by lazfish
    So we have a team with two developers and one manager. The dev server sits in a home office and the live server sits in a rack somewhere, handled by the larger part of my company. We have freedom to do as we please, but I want to incorporate Kiln DVCS and FogBugz for us, with some standard procedures to make sense of our decisions/designs/goals. Our main product is web-based training through our .NET site with many videos etc., and we also do mobile apps for multiple platforms. Our code-base is a 15-year-old fragmented mess. The approach has been rogue .asp/.aspx pages, with some class management implemented in the last 6 years. We still mix our HTML/VB/JS all in the same file when we add a feature/page to our site. We do not separate the business logic from the rest of the code. Wiring anything up in VS for IntelliSense or testing or any other benefit is more frustrating than it is worth, because of having to manually rejigger everything back into one file. How do other teams approach this? I noticed that when I did wire everything up for VS, it wants to make a class for all functions. Do people normally compile DLLs for page-specific functions that won't be reusable? What approaches make sense for getting our practices under control while still being able to fix old anti-patterns and outdated code, and still move towards a logical structure for future devs to build on?
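    To make "separate the business logic" concrete, here is a hypothetical sketch (names invented, not this team's code) of pulling an inline calculation out of an .aspx page into a plain class that can live in its own project and be unit-tested:
      // Hypothetical: logic that used to sit inline in training-page markup,
      // moved into a reusable class library so pages (and tests) can call it.
      namespace Training.Business
      {
          public class CourseAccess
          {
              // Returns whether a trainee may open the next video in a course.
              public bool CanWatchNext(int videosCompleted, int totalVideos, bool subscriptionActive)
              {
                  return subscriptionActive && videosCompleted < totalVideos;
              }
          }
      }
    Compiling such classes into a shared DLL, even when a method serves only one page, at least gives the code a home where IntelliSense, references, and tests work.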

    Read the article

  • What is the most concise, unambiguous syntax for operator associated methods (for overloading etc.) that doesn't pollute the namespace?

    - by Doug Treadwell
    Python adds double underscores around its built-in or overloadable operator methods, like __add__(), whereas C++ requires declaring overloaded operators as, for example, operator + (Thing& thing) { /* code */ }. Personally I like the operator syntax because it seems more explicit and keeps these operator-overloading methods separate from other methods without introducing weird prefix notation. What are your thoughts? Also, what about the case of built-in methods that are needed for the programming language to work properly? Is name mangling (like adding a __ prefix, or sys, or something) the best solution here? What do you think about having another type of method declaration, like... "system method", for lack of creativity at the moment. So there would be two kinds of declarations: int method_name() { ... } and system int method_name() { ... } ... and the call would need to be different to distinguish between them: obj.method_name(); vs obj:method_name(); perhaps, assuming a language where : can be unambiguously used in this situation. Or obj.method_name() vs obj.(system method_name)(). Sure, the latter is ugly, but the idea is to make the common case simple, and system stuff should be kept out of the way. Maybe the Objective-C notation of method calls, [obj method_name]? Are there more alternatives? Please make suggestions.
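    For comparison (my addition, not part of the original question): C# takes the same explicit-keyword route as C++, which keeps operator methods visually distinct without reserved-name prefixes. A minimal sketch:
      public struct Money
      {
          public decimal Amount;
          public Money(decimal amount) { Amount = amount; }

          // The 'operator' keyword marks this as an operator method, so no
          // mangled or double-underscore name is needed to set it apart.
          public static Money operator +(Money a, Money b)
          {
              return new Money(a.Amount + b.Amount);
          }
      }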

    Read the article

  • Why using Fragments?

    - by ahmed_khan_89
    I have read the documentation and some other questions' threads about this topic and I don't really feel convinced; I don't clearly see the limits of use of this technique. Fragments are now seen as a best practice: every Activity should basically be a support for one or more Fragments and not call a layout directly. Fragments are created in order to:
    - Allow the Activity to use many fragments, to switch between them, to reuse these units... == But the Fragment is totally dependent on the Context of an Activity, so if I need something generic that I can reuse and handle in many Activities, I can create my own custom layouts or Views... I will not care about the additional complexity layer that fragments would add.
    - Handle different resolutions better == OK for tablets/phones, for the case of a long process where we can show two (or more) fragments in the same Activity on tablets, and one by one on phones. But why would I always use fragments?
    - Handle callbacks to navigate between Fragments (i.e. if the user is logged in I show one fragment, else I show another fragment). === Just look at how many bugs the Facebook SDK log-in has because of this to understand that it is really (?)...
    - Considering that an Android application is based on Activities... adding another life cycle to the Activity would make it better to design an application... I mean the modules, the scenarios, the data management and the connectivity would be better designed that way. === This is an answer from someone who is used to seeing the Android SDK and Android Framework with a Fragments vision. I don't think it's wrong, but I am not sure it will give good results... And it is really abstract...
    ==== So why would I complicate my life, coding more, by always using them? Or else, why is it a best practice if it's just a tool for some cases? What are these cases?

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
    A simple and straight forward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4 Drop the Soap: WCF, REST, and Pretty URIs in .NET 4 Years ago I was working in libraries when the Web 2.0 revolution began.  One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications.  Those were my first impressions of REST; pretty URIs.  Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web.  REST evolved as an “anti-SOAP” movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is al lot when you don’t have frameworks hiding it all).  One of the biggest benefits to REST is that browsers can talk to rest services directly because REST works using URIs, QueryStrings, Cookies, SSL, and all those HTTP verbs that we don’t have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to rest at some level.  MVC is relies heavily on routing to generate consistent and clean URIs.  REST for WCF gives you the same type of feel for your services.  Let’s dive in. WCF REST in .NET 3.5 SP1 and .NET 4 This post will cover WCF REST in .NET 4 which drew heavily from the REST Starter Kit and community feedback.  There is basic REST support in .NET 3.5 SP1 and you can also grab the REST Starter Kit to enable some of the features you’ll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010. Getting Started To get started we’ll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010: When you first install a template you are prompted with this dialog: Dude Where’s my .Svc File? The WCF REST template shows us the new way we can simply build services.  Before we talk about what’s there, let’s look at what is not there: The .Svc File An Interface Contract Dozens of lines of configuration that you have to change to make your service work REST in .NET 4 is greatly simplified and leverages the Web Routing capabilities used in ASP.NET MVC and other parts of the web frameworks.  With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class.  From there, the WCF runtime handles dispatching service calls to the methods based on the Uri Templates. global.asax using System; using System.ServiceModel.Activation; using System.Web; using System.Web.Routing; namespace Blog.WcfRest.TimeService {     public class Global : HttpApplication     {         void Application_Start(object sender, EventArgs e)         {             RegisterRoutes();         }         private static void RegisterRoutes()         {             RouteTable.Routes.Add(new ServiceRoute("TimeService",                 new WebServiceHostFactory(), typeof(TimeService)));         }     } } The web.config contains some new structures to support a configuration free deployment.  Note that this is the default config generated with the template.  I did not make any changes to web.config. 
web.config <?xml version="1.0"?> <configuration>   <system.web>     <compilation debug="true" targetFramework="4.0" />   </system.web>   <system.webServer>     <modules runAllManagedModulesForAllRequests="true">       <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,            System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     </modules>   </system.webServer>   <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>     <standardEndpoints>       <webHttpEndpoint>         <!--             Configure the WCF REST service base address via the global.asax.cs file and the default endpoint             via the attributes on the <standardEndpoint> element below         -->         <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>       </webHttpEndpoint>     </standardEndpoints>   </system.serviceModel> </configuration> Building the Time Service We’ll create a simple “TimeService” that will return the current time.  Let’s start with the following code: using System; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; namespace Blog.WcfRest.TimeService {     [ServiceContract]     [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]     public class TimeService     {         [WebGet(UriTemplate = "CurrentTime")]         public string CurrentTime()         {             return DateTime.Now.ToString();         }     } } The endpoint for this service will be http://[machinename]:[port]/TimeService.  To get the current time http://[machinename]:[port]/TimeService/CurrentTime will do the trick. The Results Are In Remember That Route In global.asax? Turns out it is pretty important.  When you set the route name, that defines the resource name starting after the host portion of the Uri. Help Pages in WCF 4 Another feature that came from the starter kit are the help pages.  To access the help pages simply append Help to the end of the service’s base Uri. Dropping the Soap Having dabbled with REST in the past and after using Soap for the last few years, the WCF 4 REST support is certainly refreshing.  I’m currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
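    A quick way to exercise the finished service (my addition; the host and port are placeholders for wherever the service is running): since the endpoint is a plain HTTP GET, any HTTP client will do, for example:
      using System;
      using System.Net;

      class TimeServiceClient
      {
          static void Main()
          {
              using (var client = new WebClient())
              {
                  // Placeholder address -- substitute the host/port your service runs on.
                  string response = client.DownloadString("http://localhost:1234/TimeService/CurrentTime");
                  Console.WriteLine(response);   // the current time, XML-wrapped by the default endpoint
              }
          }
      }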

    Read the article

  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series for another on-going blog series I’m working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] ASP.NET ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources.  Lots of useful pointers. Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here. 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET Websites.  Highly recommended reading. Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article).  You can use this to significantly improve the load-time of your pages on the client. Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET.  A very useful link to bookmark.  Also check out James Michael’s DateTime is Packed with Goodies blog post for other DateTime tips. Examining ASP.NET’s Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to known about ASP.NET’s built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership. ASP.NET with jQuery An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project. Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them. Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated “news ticker” style UI with ASP.NET and jQuery. Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView’s header to automatically check/uncheck all checkboxes contained within rows of it. Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx). ASP.NET MVC ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps. ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3.  This makes it easy to post JSON payloads to MVC action methods. 
Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications. Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate credit card numbers are in the correct format with ASP.NET MVC. Silverlight Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter.  You can watch all of the talks online.  More details on my keynote and Silverlight 5 announcements can be found here. 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).  Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit. Visual Studio JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript intellisense support within Visual Studio. HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010. Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Visual blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.  Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010. Hope this helps, Scott

    Read the article

  • “Being Agile” Means No Documentation, Right?

    - by jesschadwick
    Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point! Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more. My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation, in fact it says so right at the end of the manifesto! So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”). The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project. Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  
If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop! Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!”  After all, Agile is a Zen journey.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.

    Read the article

  • Programmatically reuse Dynamics CRM 4 icons

    - by gperera
    The team that wrote the dynamics crm sdk help rocks! I wanted to display the same crm icons on our time tracking application for consistency, so I opened up the sdk help file, searched for 'icon', ignored all the sitemap/isv config entries since I know I want to get these icons programatically, about half way down the search results I see 'organizationui', sure enough that contains the 16x16 (gridicon), 32x32 (outlookshortcuticon) and 66x48 (largeentityicon) icons!To get all the entities, execute a retrieve multiple request. RetrieveMultipleRequest request = new RetrieveMultipleRequest{    Query = new QueryExpression    {        EntityName = "organizationui",        ColumnSet = new ColumnSet(new[] { "objecttypecode", "formxml", "gridicon" }),    }}; var response = sdk.Execute(request) as RetrieveMultipleResponse;Now you have all the entities and icons, here's the tricky part, all the custom entities in crm store the icons inside gridicon, outlookshortcuticon and largeentityicon attributes, the built-in entity icons are stored inside the /_imgs/ folder with the format of /_imgs/ico_16_xxxx.gif (gridicon), with xxxx being the entity type code. The entity type code is not stored inside an attribute of organizationui, however you can get it by looking at the formxml attribute objecttypecode xml attribute. response.BusinessEntityCollection.BusinessEntities.ToList()    .Cast<organizationui>().ToList()    .ForEach(a =>    {        try        {            // easy way to check if it's a custom entity            if (!string.IsNullOrEmpty(a.gridicon))            {                byte[] gif = Convert.FromBase64String(a.gridicon);            }            else            {                // built-in entity                if (!string.IsNullOrEmpty(a.formxml))                {                    int start = a.formxml.IndexOf("objecttypecode=\"") + 16;                    int end = a.formxml.IndexOf("\"", start);                     // found the entity type code                    string code = a.formxml.Substring(start, end - start);                    string url = string.Format("/_imgs/ico_16_{0}.gif", code);Enjoy!
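    Rounding the idea out (my addition, not part of the original post): continuing inside the loop above, where a is the current organizationui record and code is the extracted entity type code, persisting or serving the icon is straightforward, for example:
      // Custom entity: the icon bytes come straight from the base64 gridicon attribute.
      byte[] gif = Convert.FromBase64String(a.gridicon);
      System.IO.File.WriteAllBytes(@"C:\icons\custom-entity.gif", gif);   // invented output path

      // Built-in entity: the icon is a relative URL on the CRM web server,
      // e.g. /_imgs/ico_16_1.gif for the account entity (type code 1).
      string gridIconUrl = string.Format("/_imgs/ico_16_{0}.gif", code);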

    Read the article

  • Running a Silverlight application in the Google App Engine platform

    - by rajbk
    This post shows you how to host a Silverlight application in the Google App Engine (GAE) platform. You deploy and host your Silverlight application on Google’s infrastructure by creating a configuration file and uploading it along with your application files. I tested this by uploading an old demo of mine - the four stroke engine silverlight demo. It is currently being served by the GAE over here: http://fourstrokeengine.appspot.com/ The steps to run your Silverlight application in GAE are as follows: Account Creation Create an account at http://appengine.google.com/. You are allocated a free quota at signup. Select “Create an Application”   Verify your account by SMS   Create your application by clicking on “Create an Application”   Pick an application identifier on the next screen. The identifier has to be unique. You will use this identifier when uploading your application. The application you create will by default be accessible at [applicationidentifier].appspot.com. You can also use custom domains if needed (refer to the docs).   Save your application. Download SDK  We will use the  Windows Launcher for Google App Engine tool to upload our apps (it is possible to do the same through command line). This is a GUI for creating, running and deploying applications. The launcher lets you test the app locally before deploying it to the GAE. This tool is available in the Google App Engine SDK. The GUI is written in Python and therefore needs an installation of Python to run. Download and install the Python Binaries from here: http://www.python.org/download/ Download and install the Google App Engine SDK from here: http://code.google.com/appengine/downloads.html Run the GAE Launcher. Select Create New Application.   On the next dialog, give your application a name (this must match the identifier we created earlier) For Parent Directory, point to the directory containing your Silverlight files. Change the port if you want to. The port is used by the GAE local web server. The server is started if you choose to run the application locally for testing purposes. Hit Save. Configure, Test and Upload As shown below, the files I am interested in uploading for my Silverlight demo app are The html page used to host the Silverlight control The xap file containing the compiled Silverlight application A favicon.ico file.   We now create a configuration file for our application called app.yaml. The app.yaml file specifies how URL paths correspond to request handlers and static files.  We edit the file by selecting our app in the GUI and clicking “Edit” The contents of file after editing is shown below (note that the contents of the file should be in plain text): application: fourstrokeengine version: 1 runtime: python api_version: 1 handlers: - url: /   static_files: Default.html   upload: Default.html - url: /favicon.ico   static_files: favicon.ico   upload: favicon.ico - url: /FourStrokeEngine.xap   static_files: FourStrokeEngine.xap   upload: FourStrokeEngine.xap   mime_type: application/x-silverlight-app - url: /.*   static_files: Default.html   upload: Default.html We have listed URL patterns for our files, specified them as static files and specified a mime type for our xap file. The wild card URL at the end will match all URLs that are not found to our default page (you would normally include a html file that displays a 404 message).  To understand more about app.yaml, refer to this page. Save the file. Run the application locally by selecting “Browse” in the GUI. 
A web server listening on the port you specified is started (8080 in my case). The app is loaded in your default web browser pointing to http://localhost:8080/. Make sure the application works as expected. We are now ready to deploy. Click the “Deploy” icon. You will be prompted for your username and password. Hit OK. The files will get uploaded and you should get a dialog telling you to “close the window”. We are done uploading our Silverlight application. Go to http://appengine.google.com/ and launch the application by clicking on the link in the “Current Version” column.   You should be taken to a URL which points to your application running in Google’s infrastructure : http://fourstrokeengine.appspot.com/. We are done deploying our application! Clicking on the link in the Application column will take you to the Admin console where you can see stats related to system usage.  To learn more about the Google Application Engine, go here: http://code.google.com/appengine/docs/whatisgoogleappengine.html

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the DataControls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter, so only a defined set of input keys is allowed, you need to change the default filter field with your own af:inputText field to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column which filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open theComponent Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This demarks the first part in the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field Point the client filter method property to a JavaScript function that you reference or add through using the af:resource tag and set the type property value to keyDown. <af:document id="d1">     <af:resource type="javascript" source="/js/filterHandler.js"/> … The filter definition looks as shown below <af:inputText label="Label 1" id="it1"                         value="#{vs.filterCriteria.Employe        <af:clientListener method="suppressCharacterInput"                                     type="keyDown"/> </af:inputText> The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
//Allow numbers, cursor control keys and delete keys function suppressCharacterInput(evt) {     var _keyCode = evt.getKeyCode();     var _filterField = evt.getCurrentTarget();     var _oldValue = _filterField.getValue();     if (!((_keyCode < 57) ||(_keyCode > 96 && _keyCode < 105))) {         _filterField.setValue(_oldValue);         evt.cancel();     } } //Allow characters, cursor control keys and delete keys function suppressNumericInput(evt) {  var _keyCode = evt.getKeyCode();  var _filterField = evt.getCurrentTarget();  var _oldValue = _filterField.getValue();  //check for numbers  if ((_keyCode < 57 && _keyCode > 47) ||      (_keyCode > 96 && _keyCode < 105)){     _filterField.setValue(_oldValue);     evt.cancel();   } } But what if browsers don't allow JavaScript ? Don't worry about this. If browsers would not support JavaScript then ADF Faces as a whole would not work and you had a different problem.

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive in other words "Do this" not "Don't do that". Sometimes its clearer to focus on what *not* to do. Popular development processes often start with screen mockups, or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start. Start with the Data    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or International?) the security requirements, whether it needs ACID compliance, how it will be searched, indexed and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements. Next Is Data Management  With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And breaking another pattern its OK to store in more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data. Move to Data Transport How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing. Finally, Data Processing Most RDBMS engines, NoSQL, and certainly Big Data engines not only store data, but can process and manipulate it as well. Its doubtful that you'll calculate that phone number right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used. Sure, We Need A Front-End At Some Point Not all data is entered by human hands in fact most data isn't. We don't really need a Graphical User Interface (GUI) we need some way for a GUI to get data into and out of the systems listed earlier.   But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? Its usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out.   Instead, focus on the data, its transport and processing. 
Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best. References Microsoft Architecture Journal:   http://msdn.microsoft.com/en-us/architecture/bb410935.aspx Patterns and Practices:   http://msdn.microsoft.com/en-us/library/ff921345.aspx Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • Driver error when using multiple shaders

    - by Jinxi
    I'm using 3 different shaders: a tessellation shader to use the tessellation feature of DirectX11 :) a regular shader to show how it would look without tessellation and a text shader to display debug-info such as FPS, model count etc. All of these shaders are initialized at the beginning. Using the keyboard, I can switch between the tessellation shader and regular shader to render the scene. Additionally, I also want to be able toggle the display of debug-info using the text shader. Since implementing the tessellation shader the text shader doesn't work anymore. When I activate the DebugText (rendered using the text-shader) my screens go black for a while, and Windows displays the following message: Display Driver stopped responding and has recovered This happens with either of the two shaders used to render the scene. Additionally: I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader. What am I doing wrong when switching between shaders? What am I doing wrong when displaying text at the same time? What file can I post to help you help me? :) thx P.S. I already checked if my keyinputs interrupt at the wrong time (during render or so..), but that seems to be ok Testing Procedure Regular Shader without text shader Add text shader to Regular Shader by keyinput (works now, I built the text shader back to only vertex and pixel shader) (somthing with the z buffer is stil wrong...) Remove text shader, then change shader to Tessellation Shader by key input Then if I add the Text Shader or switch back to the Regular Shader Switching/Render Shader Here the code snipet from the Renderer.cpp where I choose the Shader according to the boolean "m_useTessellationShader": if(m_useTessellationShader) { // Render the model using the tesselation shader ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), worldMatrix, viewMatrix, projectionMatrix, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT); } else { // todo: loaded model depends on distance to camera // Render the model using the light shader. ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount, m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(), worldMatrix, viewMatrix, projectionMatrix); } And here the code snipet from the Mesh.cpp where I choose the Typology according to the boolean "useTessellationShader": // RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer. void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader) { unsigned int stride; unsigned int offset; // Set vertex buffer stride and offset. stride = sizeof(VertexType); offset = 0; // Set the vertex buffer to active in the input assembler so it can be rendered. deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset); // Set the index buffer to active in the input assembler so it can be rendered. 
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0); // Check which Shader is used to set the appropriate Topology // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles. if(useTessellationShader) { deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST); }else{ deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST); } return; } RenderShader Could there be a problem using sometimes only vertex and pixel shader and after switching using vertex, hull, domain and pixel shader? Here a little overview of my architecture: TextClass: uses font.vs and font.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); RegularShader: uses vertex.vs and pixel.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps deviceContext-VSSetShader(m_vertexShader, NULL, 0); deviceContext-HSSetShader(m_hullShader, NULL, 0); deviceContext-DSSetShader(m_domainShader, NULL, 0); deviceContext-PSSetShader(m_pixelShader, NULL, 0); deviceContext-PSSetSamplers(0, 1, &m_sampleState); ClearState I'd like to switch between 2 shaders and it seems they have different context parameters, right? In clearstate methode it says it resets following params to NULL: I found following in my Direct3D Class: depth-stencil state - m_deviceContext-OMSetDepthStencilState rasterizer state - m_deviceContext-RSSetState(m_rasterState); blend state - m_device-CreateBlendState viewports - m_deviceContext-RSSetViewports(1, &viewport); I found following in every Shader Class: input/output resource slots - deviceContext-PSSetShaderResources shaders - deviceContext-VSSetShader to - deviceContext-PSSetShader input layouts - device-CreateInputLayout sampler state - device-CreateSamplerState These two I didn't understand, where can I find them? predications - ? scissor rectangles - ? Do I need to store them all localy so I can switch between them, because it doesn't feel right to reinitialize the Direct3d and the Shaders by every switch (key input)?!
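    One hedged guess at the missing piece (an assumption, not a confirmed fix, written against the same D3D11 calls the post already uses): shader stages stay bound on the context until something replaces them, so when switching from the tessellation shader back to the regular or text shader, the hull and domain stages should be explicitly unbound and the topology reset, for example:
      // Sketch: when activating a shader that has no hull/domain stage,
      // clear those stages so stale tessellation state isn't left bound.
      deviceContext->VSSetShader(m_vertexShader, NULL, 0);
      deviceContext->HSSetShader(NULL, NULL, 0);   // unbind hull shader
      deviceContext->DSSetShader(NULL, NULL, 0);   // unbind domain shader
      deviceContext->PSSetShader(m_pixelShader, NULL, 0);
      deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);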

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).   Live Debug is a Nightmare for Cloud Applications When we are developing against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to rely on logging and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to a remote process. This is less secure compared to authenticated remote debug, but much easier and simpler to use. Now with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.   How to Use the Remote Debugger First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right click on the cloud project and select "Publish". In the publish dialog we need to make sure the application will be built in debug mode, since .NET assemblies cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it is NOT necessary for remote debug. Then select the "Advanced Settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles and each role can have one or more instances. If checked, the remote debugger will be enabled for all roles and all instances; currently there is no way to specify which role(s) and which instance(s) to enable. Finally click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role and instance we just published and want to debug, right click on the instance and select "Attach Debugger". Then after a while (depending on how fast our Internet connection to the Windows Azure data center is) Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint. The call stack and watch features are all available to use. Now let's hit F5 to continue; back in the browser we will find the page rendered successfully.   What's Under the Hood The remote debugger is a WACS plugin. When we checked "Enable Remote Debugger" in the publish dialog, Visual Studio added two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them, as below.
At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger. As you can see below, there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component based on which version of Visual Studio we are using and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.   Summary In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the Remote Debugger. It allows us to attach to our application on a Windows Azure virtual machine from our local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it is only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing a beta version to the Azure staging slot), rather than for production.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Great event : Microsoft Visual Studio 2010 Launch @ Microsoft TechEd Blore

    - by sathya
    I was really excited to attend day 1 of Microsoft TechEd 2010 in Bangalore. This is the first TechEd I am attending. The event was really fun-filled, with lots of knowledge-sharing sessions and lots of goodies and gifts from the partners. The event started with Murthy's session. He related developers to the 5 elements of nature (Pancha Boothaas): 1. Fire - Passion. 2. Water (Wave) - Catch the right wave, the one we need to apply. 3. Earth - Connections and lots of opportunities around the world. 4. Air - It's whatever we breathe; without developers nothing is possible, they are like air. 5. Sky - Cloud based applications.   Next came the keynote and the announcement of Visual Studio by SomaSegar. The list of things he covered in his speech: 1. Announcement of Visual Studio 2010. 2. Announcement of .NET 4.0. 3. Announcement of Silverlight later this week. 4. What is the current trend? Microsoft did research with many developers across the globe and got the following feedback. Getting Lost (interrupted) - when we are working and somebody calls or interrupts us in some other way, we lose track of what we were doing and have to start from the beginning. Falling Behind - technology gets updated phenomenally over time, and developers always feel they are not on the state-of-the-art technology and doubt whether they are staying updated. Lack of Collaboration - when a manager asks what the team members have done, some work might be done and some might not, and finally everyone ends up in a state of "we don't know where we are". Microsoft addressed these 3 points in VS 2010 with the following features. Getting Lost - some cool features to overcome this: a graphical interface which shows what we have done and where we are, and zoom features at the code level. Falling Behind - everything is based on the .NET language base; 2010 has been built in such a way that if developers know the native language, that's enough for building good applications. Lack of Collaboration - dashboard features which show exactly where the project is, with a graphical user interface that, on clicking, drills down even to the code level. 5. An overview of all the new features in VS 2010. 6. Some good demos of new features in VS 2010 by Polita and another presenter. Some of the new features included: 1. Team Explorer 2. Zoom in code 3. Ribbon development 4. Development in a single platform for Windows Phone, Xbox, Zune, Azure, web based and Windows based applications 5. Sequence diagram generation directly from code 6. Dashboards to show project status 7. JavaScript and jQuery IntelliSense 8. Native support for jQuery 9. Packaging feature while deploying 10. Generation of different versions of web.config, like Web.Config.Production, Web.Config.Staging, etc. 11. IntelliTrace - eliminating the "not reproducible" statement 12. Automated user interface testing. At the close of the day we had a great event called Demo Extravaganza, where lots of cool projects launched by Microsoft, as well as projects that are still under research, were shown. I got a lot of info about Bing today. BING really rocks!!! It has the following: 1. Visual search. 2. Product based search, where for each product different menu filters are provided to make an advanced search. 3. BING Maps was awesome!!
It zoomed in to street level, as if we were the persons walking or running on the road, with real objects like buildings moving past us. 4. PhotoSynth was used in BING to show all the images taken around the globe in a 3D format. 5. Formula - if we give some formula it automatically gives the value for the variable or the derivation of the expression. There was also some info about some cool touch apps which authenticate TechEd attendees and compute the points they have scored and the sessions they attended. One attendee won an XBOX in a lucky draw. There were lots of partner stalls like Accenture, Intel, Citrix, MicroFocus, Telerik, Infragistics, Sapient etc. Some offers were provided for us, like 50% off on certifications, 1 free e-learning course, etc. Stay tuned!! Will update you on the other events too..

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams. Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history. Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects they need to create to that solution in the format "[Client].[Product].[ProductArea].[Assembly]" and they will be picked up and built automatically when you set up Automated Builds using Team Foundation Build. All test projects need to be written using MSTest to get proper IDE and Team Foundation Build integration out of the box, and be named for the assembly they are testing, with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests". Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it. Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor". Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level. Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release, and feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned. Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product. Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version. Figure: All bugs found in a release are fixed on the release branch and a new deployment is created. After the deployment is created the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate our development from our production-ready code. Figure: SAFE or RTM is a read-only record of what you actually released.
Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release. Figure: This allows us to reference any particular version of our application that was ever shipped.   In addition, I am an advocate of having a single solution with all the project folders directly under the "Trunk"/"Main" folder and using the full name for the project folders. Figure: The ideal solution. If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes it easier for Automated Builds and improves the discoverability of your code and its dependencies. Send me your feedback!   Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS
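To make the conventions above concrete, here is a rough sketch of the resulting version control tree, assembled from the naming rules in this post; the bracketed names are placeholders, not real folders:

$/[Client]
  /[Product]
    /DEV
      /Main
        [Client].[Product].sln
        /[Client].[Product].[ProductArea].[Assembly]
        /[Client].[Product].[ProductArea].[Assembly].Tests
      /[FeatureBranch]     (optional; Forward Integrate from Main, Reverse Integrate back)
    /RELEASE
      /[ReleaseVersion]    (stabilised prior to deployment; bug fixes merged back to Main)
    /SAFE
      /[ReleaseVersion]    (read-only copy of exactly what shipped)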

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore; documenting them is a compelling reason to upgrade to SQL Server 2014. The Customer DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.) The App: DevCon Security Reporting: Optimized & Ad Hoc Queries DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching. SSRS, SSAS, & MDX Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements. Ad Hoc Queries Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner). The Details Classic vs. columnstore before-&-after metrics are impressive.

Scenario        Conventional Structures             Columnstore     Improvement
SSRS via SSAS   10 - 12 seconds                     1 second        >10x
Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds   >100x

Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes.  As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics.  Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
The Wins Performance: Even prior to the columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways." Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing the time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey which starts way before the customer makes a purchase and lasts long after that. The journey may start with the customer researching a product, which may lead to the eventual purchase, and may continue with support or service needs for the product. A typical customer journey can be represented as shown below: As you may notice, customers tend to use multiple channels to interact with a company throughout their journey.  They also expect a consistent experience, no matter what interaction channel they may choose. Customers do not like to repeat information they have already provided and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers not only will abandon the purchase and go to a competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that face B2C companies today include: Delivering consistent experience: Delivering a consistent experience is challenging because of fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use web vs. phone vs. store. They get different responses from different service agents or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers. Increasing Revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur. Ensuring Compliance: Companies must comply with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs that they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time. With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer is expecting and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, Service. You may hold prospect data in SFA, order information in ERP, customer issues in CRM. However, the experience delivered to the customer must not be constrained by your legacy systems. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time.     Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that's what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides the ability for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. Working with a real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce from them customer situations (credit card stolen, baggage lost, change of address) and do a triage before the situation goes out of control. If such a situation arises you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM's key values is to drive continuous improvement. Learning about customers' past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You may take these insights as an input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer facing processes.

    Read the article

< Previous Page | 381 382 383 384 385 386 387 388 389 390 391 392  | Next Page >